Effective January 2, 2023, a new law prohibits employers from using artificial intelligence (“AI”) tools to make hiring decisions about New York City residents unless the tools have undergone an independent bias audit.

The law aims to prevent employers from using algorithms to discriminate against job applicants before a recruiter or hiring manager ever reviews their applications. Fortunately, some tech companies are already ahead of the curve, offering solutions that improve diversity by using AI to source, rather than screen, candidates, which can help employers comply with the new law.

New York City’s Law

The New York City law makes it unlawful for an employer in New York City to use “automated employment decision tools” to screen applicants for employment or promotion unless two prerequisites are met: (1) the tool has been the subject of an independent, third-party, bias audit no more than one year prior to its use; and (2) the employer makes a summary of the results of the audit and the distribution date of the tool publicly available on its website.

An automated employment decision tool is defined by the new law as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

The New York City law also imposes notice requirements on employers using automated employment decision tools. Specifically, the employer must notify each applicant or employee who resides in New York City, at least ten business days before the use of the tool, of: (1) the fact that an automated employment decision tool will be used to evaluate and/or screen the individual, and allow the individual to request an accommodation or alternative screening process; and (2) the job qualifications and characteristics that the tool will use in its assessment.

Additionally, if it is not already stated on the employer’s website, the employer must provide information about the type of data collected for the tool, the source of the data and the employer’s data retention policy.

An employer that fails to comply with the law is subject to monetary penalties of $500 per violation on the first day of violation and up to $1,500 per violation for each subsequent violation. Each day an automated employment decision tool is used in violation of the law counts as a separate violation, as does each failure to provide a required notice to an applicant or employee.

AI Regulation is on the Rise

New York City’s law is the latest in a growing trend to investigate and regulate the use of AI by employers and other businesses. In 2019, Illinois passed a law requiring employers to disclose the use of AI in video interviews for job applicants. In 2020, Maryland passed a law prohibiting employers from using facial recognition technology during the job interview process without the applicant’s consent.

The Attorney General of Washington, D.C., recently proposed broad legislation aimed at requiring businesses to prevent biases in their automated decision-making algorithms, and to report and correct any biases that are detected.

Federal legislation aimed at addressing bias and discrimination in AI is on the horizon as well. The U.S. Equal Employment Opportunity Commission (“EEOC”) has announced an initiative to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws.

AI Tool Improves Diversity

Although the New York City law imposes a new burden on employers, there are already companies that specialize in AI recruiting tools aimed at increasing diversity. 

Rather than using AI to screen candidates, the practice regulated by New York City’s law, and risking screening out diverse candidates, companies can use AI to source potential candidates and invite more diverse candidates who fit the job requirements to apply.

These tools use a variety of strategies to boost diversity in the pipeline. They use algorithms to predict and add skills that may be missing from diverse candidates’ profiles, suggest changes to job requirements that are likely to increase participation by diverse talent, and remove photographs and names from profiles reviewed by recruiters so that the focus is on a candidate’s merits rather than perceived gender or ethnicity.

AI can automatically select and engage with potential candidates, reducing the opportunity for recruiter bias. If designed properly, AI can exclude diversity-related parameters from its job/candidate matching algorithms.

Diverse candidates are simply given a more equitable opportunity to be considered by recruiters; they are not given any preferential treatment. This strategy allows employers to build a larger pipeline of diverse applicants while reducing the opportunity for bias.

Conclusion

Although the New York City law imposes new requirements on employers, it also provides an opportunity for employers to consider new AI tools and to ensure that their hiring practices reduce bias and produce a broader, more diverse applicant pool.


Authors
Allison Hollows

Allison is an Associate at Fox Rothschild LLP.

Richard I. Scharlat

Richard Scharlat is a partner and member of Fox Rothschild LLP's national Labor & Employment Group defending management in many industries and in complex commercial litigation and class actions. Richard is a seasoned trial lawyer, having tried bench and jury cases in federal and state courts in numerous jurisdictions in the United States, as well as before FINRA and American Arbitration Association panels. He also regularly handles matters before the New York Division of Human Rights and the New Jersey Department of Labor.