Navigating the Legal Landscape of AI Hiring Tools: What Employers Need to Know



Friday, May 12, 2023

New York City’s Department of Consumer and Worker Protection (“DCWP”) is set to enforce a first-in-the-nation ordinance, Local Law 144 of 2021, regulating the use of hiring tools driven by Artificial Intelligence (AI). On April 5, 2023, DCWP published final regulations implementing the ordinance; enforcement begins on July 5, 2023. The ordinance prohibits employers and employment agencies from using an automated employment decision tool (AEDT) to make an employment decision unless the tool has been audited for bias annually, a public summary of the audit is published, and certain notices are provided to job applicants and employees who are subject to screening by the tool.[1]

While the law's reach was initially thought to be wide-ranging, the final regulations clarify that it applies only to AEDTs derived from "machine learning, statistical modeling, data analytics, or artificial intelligence." In other words, the law covers machine-learning-based predictive tools in which the computer itself identifies the inputs and the optimal weight to place on each feature. Conversely, a human-designed statistical model, whose specifications are all set by a human being rather than a computer, is not covered by the law. Moreover, NYC 144 likely applies only to New York City residents who are either candidates for hire in, or employees up for promotion to, positions in New York City, although neither the statute nor any regulation clearly states this limitation.

Three-Step Analysis to Determine if a Tool Is Subject to the Law

The threshold question is whether an employer uses an AEDT to make a covered employment decision; if it does, the employer is subject to the law's audit and notice requirements. Answering that question requires analyzing both the technology underlying the tool and the way the tool is used. Although the statutory terms overlap, a simple three-step analysis can help employers determine whether a tool falls within the definition of an AEDT. Under the local law, an AEDT is any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output used to substantially assist or replace discretionary decision-making for employment decisions that impact natural persons. Employers should ask themselves three questions to determine if a tool fits within the definition (sketched schematically after the list):

  1. Does the tool's technology fall within the AEDT definition? 
  2. Is the tool used in a way that brings it under the AEDT definition?  
  3. Is the tool used to make a covered employment decision within the AEDT definition?  
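
Note that the test is conjunctive: a tool is an AEDT only if the answer to all three questions is yes. The sketch below is purely illustrative shorthand (the function name and its boolean inputs are our own, not statutory terms), and each input still stands in for a fact-specific legal judgment, which the following sections unpack.

```python
# Illustrative shorthand for the three-question test above -- not legal advice.
# Each boolean input represents a fact-specific legal judgment.

def appears_to_be_covered_aedt(
    technology_within_definition: bool,          # step 1: ML, statistical modeling,
                                                 #         data analytics, or AI
    output_substantially_drives_decision: bool,  # step 2: sole criterion, most heavily
                                                 #         weighted, or overrides humans
    covered_employment_decision: bool,           # step 3: screening for hiring or
                                                 #         promotion in NYC
) -> bool:
    """A 'no' on any one question likely takes the tool outside NYC 144."""
    return (technology_within_definition
            and output_substantially_drives_decision
            and covered_employment_decision)

# A learned resume ranker whose score alone advances candidates in NYC hiring:
print(appears_to_be_covered_aedt(True, True, True))    # True -> duties apply
# A human-designed formula used as one factor among many:
print(appears_to_be_covered_aedt(False, False, True))  # False -> likely not covered
```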
1. Is the Underlying Technology Within the Definition of an AEDT?

Under NYC Admin. Code § 20-870, only tools "derived from machine learning, statistical modeling, data analytics, or artificial intelligence" are covered by NYC 144. The final regulations define this phrase as a set of mathematical, computer-based techniques that generate a prediction or classification and in which a computer identifies the inputs and their relative importance, among other parameters, in order to improve accuracy. The law's scope is therefore limited to predictive tools that employ the specific algorithmic techniques constituting machine learning, such as feature extraction and training. Human-designed statistical models, in contrast, fall outside the scope of NYC 144.
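
A toy contrast may make the line concrete. In the first function below, a person fixes every weight in advance (a human-designed model, likely outside the law); in the second, the computer derives the weight from historical data to improve accuracy, the hallmark of a covered technique. All names, numbers, and data here are invented for illustration.

```python
# Hypothetical contrast between a human-designed model and a machine-learned one.

# (a) Human-designed statistical model: a person sets every specification,
#     including the weights -- likely outside NYC 144's definition.
def human_designed_score(years_experience: float, test_score: float) -> float:
    return 0.4 * years_experience + 0.6 * test_score  # weights chosen by a human

# (b) Machine-learned model: the computer identifies the weight from training
#     data to improve accuracy -- the kind of technique the regulations cover.
#     (A toy one-feature least-squares fit; real tools are far more elaborate.)
def learned_weight(inputs: list[float], outcomes: list[float]) -> float:
    return sum(x * y for x, y in zip(inputs, outcomes)) / sum(x * x for x in inputs)

past_assessment_scores = [70.0, 85.0, 90.0]   # hypothetical training data
past_job_performance = [2.1, 2.6, 2.7]
w = learned_weight(past_assessment_scores, past_job_performance)
print(f"weight identified by the computer: {w:.4f}")
```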

2. Is the Tool Used in a Manner That Triggers the AEDT Definition?

If the tool is intended to be used in hiring or promotion decisions, the next step is to consider how the tool's output will be used in making those decisions. To fall within the scope of NYC 144, an AI tool must produce a "simplified output"[2] and be used to "substantially assist or replace discretionary decision making." A covered tool's output, which may be a score, classification, or recommendation, must be used in one of three ways:

  • As the sole criterion for the employment decision; 
  • As the most heavily weighted criterion; or 
  • To override conclusions made from other factors, including human decision-making. 

Any other use of the tool's output, such as where the ranking or classification is just one of many factors considered in the decision-making process, may take the tool outside the statutory definition of an AEDT. For example, if an employer uses a tool to rank candidates but does not rely solely on that ranking to make the decision, does not give the ranking more weight than other criteria, and does not allow the ranking to overrule a human decision-maker, then the use may not be covered by NYC 144. However, if the tool is used to rank 100 resumes and only the top 20 candidates are advanced, while the other 80 are rejected solely on the basis of the ranking, this use is likely to "substantially assist or replace" human decision-making.
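
The 100-resume scenario can be sketched in a few lines. Everything below is hypothetical (the names, types, and function are our own illustration), but it shows why the cut is covered: the score is the sole criterion determining who advances.

```python
# Hypothetical sketch of the 100-resume scenario: the tool's ranking alone
# decides who advances, with no human review of the cut.

from typing import NamedTuple

class Candidate(NamedTuple):
    name: str
    model_score: float  # the tool's "simplified output"

def advance_top_k(candidates: list[Candidate], k: int) -> list[Candidate]:
    """Advance the k highest-scoring candidates and reject everyone else.
    Because the score is the sole criterion for the cut, this use likely
    'substantially assists or replaces' discretionary decision-making."""
    ranked = sorted(candidates, key=lambda c: c.model_score, reverse=True)
    return ranked[:k]  # the remaining candidates are rejected on the score alone

# If instead the score were passed to a recruiter as one input among many
# (resume review, interviews, references), the use may fall outside the
# three covered uses listed above.
```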

Determining the weight given to the tool's output will depend on specific facts and circumstances, and employers who claim their tool is exempt from coverage under NYC 144 because it does not "substantially" assist or replace human decision-making may face challenges. Employers who use scores or rankings in their selection process will need to demonstrate through sufficient documentation that their model is not being used for any of the covered purposes. 

3. Is the Tool Used to Make a Covered Employment Decision?

Lastly, if the tool meets the first two criteria, then the employer must determine whether it is being used for a covered "employment decision." The law only applies to hiring and promotion decisions and does not extend to compensation, termination, workforce planning, benefits, workforce monitoring, or performance evaluations.[3] Additionally, the regulations limit the scope of the law to candidates who have applied for a specific job, so tools used to identify potential candidates who have not yet applied for a position are not covered. 

It's also worth noting that the law applies only to hiring or promotion decisions that "screen" candidates by determining whether they "should be selected or advanced" in the process. For instance, if an employer uses a tool to sort applicants into three tiers (such as "extremely qualified," "moderately qualified," and "less qualified") and, in practice, does not advance any candidate who ranks "less qualified," that use likely falls within the scope of the law. Conversely, if an employer uses a tool to evaluate a candidate's creativity, and that assessment is only one of many factors considered in deciding whether the candidate advances, then that use may fall outside the definition of AEDT and, consequently, outside the scope of the law. 
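
The tiering example can likewise be sketched. The thresholds and tier names below are invented; the point is that a tool that nominally only "categorizes" can still operate as a screen if, in practice, one category never advances.

```python
# Hypothetical sketch of the three-tier scenario: the categorization is
# nominally just a label, but in practice functions as a screen.

def tier_for(score: float) -> str:
    # Thresholds invented for illustration.
    if score >= 80:
        return "extremely qualified"
    if score >= 50:
        return "moderately qualified"
    return "less qualified"

def advances(score: float) -> bool:
    # De facto practice: no "less qualified" candidate ever moves forward.
    # Used this way, the tool likely "screens" candidates within the meaning
    # of the law, even though humans nominally make the final call.
    return tier_for(score) != "less qualified"
```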

How to Proceed if the Tool Is an AEDT

If the AI tool meets all three criteria mentioned earlier, then it falls under NYC 144's requirements. In such cases, employers must follow four steps: 

  • have a bias audit of the tool conducted by an independent auditor; 
  • publish a summary of the audit's findings on the employment section of their website in a clear and conspicuous manner; 
  • provide notification to applicants and employees regarding the tool's use and operation no less than ten business days before such use[4]; and 
  • inform the affected individuals that they can request accommodation or an alternative selection process and also provide instructions for how an individual can make this request.[5] 

Under the law and initial regulations, an employer that violates the law is subject to a civil penalty of $375 for a first violation, with each additional violation occurring on the same day as the first incurring the same penalty. Each subsequent violation, or violation left uncorrected after the first day, carries a penalty of not less than $500 and not more than $1,500. Each failure to comply with the applicant notice requirements is treated as a distinct violation, and failure to satisfy the "bias audit" requirements likewise gives rise to separate violations that accrue daily.[6]
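
For a rough sense of how quickly exposure can compound under the figures above, consider the following back-of-the-envelope calculation. It assumes, purely for illustration, that each omitted applicant notice is a separate violation and that each uncorrected day is penalized at the top of the range; how penalties actually accrue in a given case is a legal question.

```python
# Back-of-the-envelope exposure estimate using the figures cited above.
# Assumes each omitted notice is a separate violation and each uncorrected
# day after the first is penalized at the top of the $500-$1,500 range.

FIRST_DAY_PENALTY = 375     # per violation on the day of the first violation
SUBSEQUENT_DAY_MAX = 1_500  # per violation, per uncorrected subsequent day

def worst_case_exposure(violations: int, uncorrected_days: int) -> int:
    first_day = violations * FIRST_DAY_PENALTY
    later_days = violations * uncorrected_days * SUBSEQUENT_DAY_MAX
    return first_day + later_days

# Example: 50 applicants screened without notice, uncorrected for 10 more days:
# 50 * $375 + 50 * 10 * $1,500 = $18,750 + $750,000 = $768,750
print(worst_case_exposure(50, 10))
```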

Best Practices for Employers 

In addition to complying with existing laws and regulations, employers can also take proactive measures to ensure that their use of AI employment-assessment tools is fair and transparent. Here are some best practices that employers can keep in mind: 

First, it's important for employers to be aware of whether and how they are already using AI to make hiring decisions. In other words, an employer must take the time to learn which processes are dependent on AI and which judgments are being made by AI rather than humans.[7] By having a clear understanding of the use of AI employment-assessment tools, employers can develop policies that ensure compliance with relevant laws and regulations. 

Second, employers should ensure that any AI employment assessment algorithms they use do not result in unlawful discrimination. If the employer creates the AI employment-assessment tools that it will use, then the employer should consider seeking input from people of diverse backgrounds when designing the algorithm. Doing so may demonstrate the employer’s commitment to equal opportunities and create a more transparent and objective selection process, reducing the likelihood of discriminatory biases and regulatory challenges.

Third, employers must alert job applicants and employees when they are being evaluated using AI decision-making tools and notify those individuals that reasonable accommodations are available to them if they have a disability. Employers should also ensure that their staff members are trained to recognize requests for reasonable accommodation, such as a request for an alternative test format. If another company administers the AI decision-making tool the employer uses, the employer should ensure that the outside company is forwarding requests for accommodation to the employer for review and processing.

By following these best practices, employers can mitigate risks associated with their use of AI employment-assessment tools. As the laws regulating the use of AI in employment continue to evolve, employers should remain vigilant and adapt their policies and practices accordingly. 

Contact

The law sets out detailed requirements for the bias audit that exceed the scope of this article. To discuss those requirements, the law's applicability to your company, or your company's compliance status, please contact Scott Matthews, Marky Suazo, or your relationship partner at Windels Marx.

Disclaimer

In some jurisdictions, this material may be deemed attorney advertising. Past results do not guarantee future outcomes. Possession of this material does not create an attorney-client relationship.


[1] NYC Admin. Code § 20-871.

[2] The final regulations define a “simplified output” as a prediction or classification that may take the form of a score (e.g., rating a candidate’s estimated technical skills), tag or categorization (e.g., categorizing a candidate’s resume based on key words, assigning a skill or trait to a candidate), recommendation (e.g., whether a candidate should be given an interview), or ranking (e.g., arranging a list of candidates based on how well their cover letters match the job description). § 5-300.

[3] Pursuant to § 20-870, the term “employment decision” means to screen candidates for employment or employees for promotion within the city.

[4] § 5-304(b)-(c); § 20-871.

[5] § 5-303(a).

[6] § 20-872(a).

[7] Equal Employment Opportunity Commission, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022).