As employers increasingly rely on AI tools to assist with hiring and promotion, they must be vigilant about the legal risks. Several states and municipalities, including New York City (via Local Law 144) and Illinois, have passed AI-related employment laws.
Employers may face liability when their AI tools produce discriminatory outcomes that violate Title VII or the ADA, even if the tools are designed or administered by third parties. At the same time, AI-driven insights are a powerful tool for employment law practice.
How can AI be useful in employment law?
Here are a few ways that AI can help in this field:
Legal Research and Analysis
AI-powered tools can sift through vast collections of documents, including statutes, regulations, and case law, to identify precedents and key legal principles. Natural Language Processing (NLP) algorithms can summarize and explain complex legal texts concisely, making it easier for lawyers to apply the law.
Compliance Monitoring
AI can help businesses keep up with constantly changing employment laws by monitoring legal developments and alerting HR and legal departments to relevant changes. These tools can also analyze company policies, contracts, and procedures to confirm they meet current legal requirements.
Contract Analysis
AI can analyze employment contracts and flag clauses that pose legal risks or conflict with current labor law. It can also be used to generate standard employment contracts that comply with the relevant regulations.
Predictive Analysis
Machine learning algorithms can analyze previous employment law cases to predict the outcome of new cases or disputes. These predictions can guide legal strategy, including settlement negotiations and litigation decisions. AI can also help organizations identify areas for improvement and legal risks associated with discrimination and bias.
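As a rough illustration of the idea, outcome prediction can start from something as simple as historical rates grouped by a case feature. Everything below (the claim types, outcomes, and numbers) is hypothetical, and a real system would use far richer features and models:

```python
# Minimal sketch: estimate settlement likelihood from historical case data.
# All case records here are invented for illustration.
from collections import defaultdict

def outcome_rates(cases):
    """Group past cases by claim type and return the settlement rate for each."""
    totals = defaultdict(lambda: [0, 0])  # claim_type -> [settled_count, total_count]
    for case in cases:
        settled, total = totals[case["claim_type"]]
        totals[case["claim_type"]] = [settled + (case["outcome"] == "settled"), total + 1]
    return {ct: settled / total for ct, (settled, total) in totals.items()}

past_cases = [
    {"claim_type": "retaliation", "outcome": "settled"},
    {"claim_type": "retaliation", "outcome": "settled"},
    {"claim_type": "retaliation", "outcome": "tried"},
    {"claim_type": "disparate_impact", "outcome": "tried"},
]

rates = outcome_rates(past_cases)
print(rates)  # historical settlement rate per claim type
```

A production tool would replace this frequency table with a trained model over many case attributes, but the principle is the same: past outcomes inform the predicted disposition of a new dispute.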
HR Guidance and Support
AI-driven systems provide HR professionals with guidance on hiring, firing, and employee management to help ensure compliance with employment laws. These systems can alert you to potential problems and recommend best practices.
AI-powered chatbots and virtual assistants can handle employee questions and reports of employment law violations while preserving anonymity and supporting adherence to legal requirements.
Document Review and E-Discovery
AI can reduce the time and cost of electronic discovery by streamlining document review in legal proceedings. It can identify relevant documents and extract important information for litigation. AI can also help ensure that employee data is handled in compliance with privacy regulations such as the GDPR or CCPA, for example by redacting sensitive data from HR documents to protect employee privacy.
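The redaction step can be sketched with simple pattern matching. The two patterns below (U.S. Social Security numbers and email addresses) are illustrative assumptions, not a complete inventory of the personal data a real HR redaction pipeline would need to cover:

```python
# Minimal redaction sketch using regular expressions.
# Patterns shown are examples only; real PII coverage is much broader.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace each matched pattern with a [REDACTED-<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

doc = "Contact jane.doe@example.com, SSN 123-45-6789, re: accommodation request."
print(redact(doc))
```

Commercial e-discovery tools use trained entity-recognition models rather than fixed regexes, but the goal is the same: sensitive identifiers are removed before documents are produced or shared.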
Education and Training
AI-driven platforms offer resources and training modules to HR professionals and their employees on employment laws and regulations.
AI-driven insights in employment law can be very powerful, but they shouldn’t replace the expertise of legal professionals. AI can assist with data analysis and process optimization but human judgment and experience are often needed to interpret legal nuances and make strategic decisions. AI implementation in employment law must also take into account ethical issues, including bias mitigation and data security.
Legal Risks
Unconscious Bias
Unconscious bias occurs when employees make decisions based on subconscious stereotypes and preconceived notions. Often these biases are unintentional, but they can still have a significant impact on hiring and promotion decisions. Unconscious biases can contribute to the pay gap, preventing women and minorities from getting promoted into leadership roles. They can also limit equal opportunities for different races and genders and affect employee satisfaction, confidence, and overall well-being.
It’s important to address unconscious bias in the workplace by educating employees and conducting training sessions on how to recognize and avoid biased behaviors. The training should focus on how even seemingly harmless words and actions can be harmful. This is especially true when interacting with individuals who are a part of underrepresented groups in the company.
In the context of AI, it’s important to understand how these biases can affect employment law. For example, an employer using an AI decision-making tool to make hiring or promotional decisions may be subject to Title VII discrimination lawsuits. The EEOC’s Updated Guidance discusses how the use of such software could result in disparate impact on protected groups if it’s used as part of the “selection procedure” for a job.
One way to prevent this is by ensuring that the AI tool is developed and administered by a team of diverse individuals. It is also important to disclose to employees and job applicants that the AI tool will be used in the evaluation process and provide a clear explanation of how it works.
Just as an employer may be liable under the ADA when it relies on third-party software developers, it can be liable under Title VII when it uses a vendor-developed AI tool to evaluate or screen employees or job applicants. Reliance on the vendor's assurances that the tool will not negatively impact a particular group does not shield the employer from liability for disparate impact violations.
In addition, employers must carefully review the data used to train AI tools and ensure it is representative of their workforce. Many AI systems have been shown to exhibit learned biases, including biases related to race and gender. In one widely reported example, a resume-screening program was abandoned after it was found to screen out female candidates, rating resumes containing masculine-coded verbs more favorably than those with language associated with women.
Disparate Impact
Disparate impact claims are less straightforward to spot than disparate treatment claims because, unlike intentional discrimination under Title VII or the ADA, they require no showing of discriminatory intent. For example, an employer's facially neutral policy that gives supervisors subjective discretion to hire and promote may violate federal and state anti-discrimination laws if it results in the exclusion of female applicants or applicants of certain races from promotions. This can occur even when the employer has no discriminatory purpose in mind but is simply subject to implicit biases that are revealed by statistical evidence.
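The statistical evidence behind a disparate impact claim is often screened with the EEOC's "four-fifths" (80%) rule of thumb: a group's selection rate below four-fifths of the highest group's rate is treated as evidence of adverse impact. A minimal sketch, with hypothetical applicant numbers:

```python
# Minimal sketch of the four-fifths (80%) adverse impact screen.
# Applicant counts below are hypothetical.
def selection_rates(applicants):
    """applicants: dict of group -> (selected, total). Returns group -> rate."""
    return {g: selected / total for g, (selected, total) in applicants.items()}

def adverse_impact(applicants, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate -- the conventional four-fifths screening test."""
    rates = selection_rates(applicants)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical numbers: 60 of 100 men selected vs. 40 of 100 women.
flags = adverse_impact({"men": (60, 100), "women": (40, 100)})
print(flags)
```

Here women's selection rate (0.40) is about 67% of the highest rate (0.60), below the 80% line, so the tool's output would warrant closer review. The four-fifths rule is a screening heuristic, not a legal conclusion; courts and the EEOC also consider statistical significance and practical context.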
In addition, a growing number of local and state laws regulate how companies use AI-powered tools in the workplace. NYC Local Law 144, for example, prohibits the use of automated employment decision tools (AEDTs) in hiring and promotion decisions unless the tool has undergone an independent bias audit and candidates receive the required notices. Covered uses include screening and selecting candidates for interviews or scoring them for a promotion; the law does not, however, cover compensation, termination, labor deployment, benefits, workforce monitoring, or performance evaluations.
ADA Violations
Employers using AI-powered tools in the workplace should consider whether those tools may trigger the Americans with Disabilities Act (ADA), as well as other laws that prohibit discrimination based on race, national origin, sex, religion, age, and other categories. This is especially important when implementing an automated screening system or other AI decision-making tool, because the underlying technology may also fall within the scope of NYC Local Law 144, which applies to predictive tools used in hiring and promotion decisions.
The New York City law requires employers to subject covered AI-driven decision-making tools to an independent bias audit and to notify candidates and employees before the tools are used, including a description of how the tool works and its key features, such as the data it uses to make its predictions or classifications. Companies that use third-party predictive tools may find it more difficult to comply with these requirements because the tool's developers will likely keep its algorithm secret and protect it under trade secrets or confidentiality agreements.
Even if an employer’s AI-based decision making tool is marketed as “bias free,” it could still violate the ADA if the tool unintentionally screens out applicants with disabilities who would otherwise be able to perform the job after reasonable accommodation. This is particularly true if the tool relies on an applicant’s voluntary response to an unlawful disability-related question or to a request for medical information as part of a conditional offer of employment.
When the underlying data for an AI-powered predictive tool includes bias against certain social groups or genders, the result can be a false negative that leads to a denial of a job application and potentially exposes the employer to civil penalties. For this reason, it is critical to incorporate diversity among the people who design and review AI tools and to test them for bias as part of their development process. This can help ensure that the tools do not disproportionately screen out people with protected characteristics, and that they are not reinforcing the existing biases of their creators.
Title VII Violations
Title VII of the Civil Rights Act prohibits employers from discriminating on the basis of race, color, religion, sex, or national origin; age and disability are covered by separate statutes such as the ADEA and the ADA. It is also illegal to make employment decisions based on stereotypes about these characteristics. For example, if an employer declines to hire a woman because it assumes she would be too tired to perform well in the role, that decision could violate Title VII.
Title VII violations can involve many types of misconduct in the workplace, including sexual harassment, retaliation for filing a discrimination claim, and other illegal actions. The Equal Employment Opportunity Commission (EEOC) investigates these claims and may sue an employer if necessary. To prevail, the EEOC must prove that an adverse employment action occurred, such as a demotion, termination, reassignment, or pay reduction.
In addition, retaliation is illegal for many reasons that do not affect on-the-job performance. This includes retaliation for exercising an appeal or complaint right, testifying or assisting someone in exercising this right, participating in an EEOC investigation or hearing of another person’s discrimination claim, cooperating with an inspector general or special counsel, or refusing to obey an order that would break the law.
Depending on the circumstances of your discrimination claim, your attorney may file a complaint with the EEOC and/or a lawsuit in federal court. Often, the EEOC will first try to resolve your claim through voluntary conciliation. If that fails, the agency may file a lawsuit on your behalf or issue a Notice of Right to Sue, which allows you to bring your own suit in federal court.
A successful discrimination claim can lead to remedies such as back pay, compensatory damages, and reinstatement. It can also result in changes to company policies and practices. This can be particularly important for federal government employees, who are protected under Title VII of the Civil Rights Act and by similar state and local statutes.
Discrimination in the workplace can be devastating. It can leave you humiliated, unable to work and feeling hopeless. It is vital to know your rights and speak with an experienced lawyer about how to protect yourself and take action if you believe you are a victim.