Artificial intelligence (AI) continues to transform the world and the
workplace. From facial-analytic tools that assess interviewees to bots that
screen applicants, AI technology is gaining traction in many human
resources departments. But due to its risk of bias and discrimination,
regulators are enacting laws — and even filing lawsuits — to restrain the
use of AI in employment decisions.
Whether companies intentionally integrate AI or their employees
merely experiment with chatbots, employers should assess their
current policies, practices and training to keep pace with the law.
Accordingly, this article summarizes recent AI regulations that affect
employers and offers guidance for evaluating compliance.
I. The Rise of AI
AI and automated technology have been around for decades, but only in the
past year has AI become so widely adopted by employees to conduct business.
Through machine learning and generative AI, chatbots can “learn” patterns
from incredibly large datasets and generate text that mimics human language
and creativity. As a result, the use of generative AI has exploded.
ChatGPT, for instance, reached 100 million users in two months
after its release.
Employers and HR departments may use AI to automate routine tasks, analyze
large amounts of data and resumes, draft emails and policies, and help with
hiring and firing decisions. While AI may bring benefits in efficiency,
cost savings and growth, AI also brings important risks and concerns. One
concern is bias and discrimination. AI algorithms can amplify biases present
in the job market and in their training data, leading to unfair
outcomes in hiring or firing decisions. An AI tool may also penalize an
applicant’s age or a gap in employment when analyzing a resume, inadvertently
discriminating against older workers or women who took maternity leave. Another
concern is data privacy. Employees can expose companies to liability and
other risks by using AI tools to analyze personal and confidential
information.
II. The Rise of AI Regulation
Now more than ever, regulators are trying to balance the benefits of new AI
technology with its risks, particularly in the employment context. As a
result, federal, state and foreign regulation is on the rise.
- Equal Employment Opportunity Commission (EEOC):
On Jan. 10, 2023, the EEOC issued a
draft strategic enforcement plan that placed AI-related employment discrimination at the top of its
priorities. But the EEOC’s guidance on AI began as early as May 2022,
when it issued
guidance on the Americans with Disabilities Act’s application to the use of AI
technology in recruiting and employment decisions. That same month, the
EEOC also filed its first lawsuit against an employer for allegedly
discriminating in its use of AI technology during the hiring process. See EEOC v. iTutorGroup, Inc., et al., No.
1:22-cv-02565 (E.D.N.Y. May 5, 2022).
- New York City:
On July 5, 2023, New York City’s Department of Consumer and Worker
Protection will begin enforcement of
Local Law 144, which regulates the use of AI in “employment decisions.” Before
employers or HR departments use automated employment decision tools to
assess New York City residents, they must generally: (1) conduct a bias
audit; (2) notify candidates or employees residing in the city about
the use of such tools; and (3) notify affected persons that they may
request an accommodation or alternative process. Violations of the law
are subject to civil penalties, which may accrue daily and separately
for each violation.
- Illinois:
In 2020, Illinois enacted the Artificial Intelligence Video Interview
Act (820 ILCS 42) to govern the use of AI to assess video interviewees for jobs in
Illinois. Employers recruiting in Illinois should take special care to:
(1) obtain consent from applicants before using AI, after explaining
how the AI works and its evaluation standards; and (2) ensure proper
control of video recordings and deletion upon request. Unlike New York
City’s law, however, the Illinois law does not include explicit civil
penalties.
- Maryland:
In 2020, Maryland passed its AI-employment law, called
H.B. 1202. H.B. 1202 prohibits employers from using facial recognition
technology during an interview for employment to create a facial
template without consent. Consent requires a signed waiver that states:
(1) the applicant’s name; (2) the date of the interview; (3) that the
applicant consents to the use of facial recognition; and (4) whether
the applicant read the consent waiver. Like the Illinois law, the
Maryland law does not include a specific penalty or fine.
- Other Legislation: Several bills introduced across the United States show active efforts
to regulate the use of AI. For example, in Washington, D.C., the Stop
Discrimination by Algorithms Act (B24-0558)
sought to restrict the use of algorithms that make decisions based on
protected personal traits. In Massachusetts, MA H.B. 136
sought to require certain “data aggregators” using automated technology
to perform: “(i) continuous and automated testing for bias on the basis
of a protected class; and (ii) continuous and automated testing for
disparate impact on the basis of a protected class.” While both bills
appear to have died in chambers, similar bills are likely to resurface
in the future. Finally, many states are creating councils to oversee AI
and new regulations. In Texas, for instance,
H.B. 2060 would establish the Artificial Intelligence Advisory Council to monitor
Texas state agencies’ use of AI systems. In 2020, the Texas Workforce
Commission allegedly was “able to clear its backlog of unemployment
claims with a chat bot.”
- European Union:
AI regulation is not limited to the United States. In April 2021, the
European Commission proposed the
Artificial Intelligence Act, which could transform AI regulation in much the same way that the
General Data Protection Regulation transformed data privacy. The EU’s
proposed AI Act focuses on accountability, transparency, user rights
and risk assessment, with regulations adapting to the AI technology’s
risk tier: unacceptable, high, limited and minimal. Some countries,
including Italy, have outright banned certain forms of AI.
As AI tools rapidly advance, lawmakers will continue to implement new
federal, state and foreign regulation that will affect a wide variety of
industries.
III. Employers Should Assess Intentional and Inadvertent Use of AI
Whether they intentionally use AI tools in operations or inadvertently use
AI tools through agents, employers should assess their practices, policies
and training to keep pace with new regulations. Since regulatory compliance
and risk may be unique to each employer’s operations and jurisdiction,
these practices should be evaluated in consultation with multiple
stakeholders, including the employer’s legal counsel, HR department and IT
department.
- Assess which regulations currently apply to company operations, staff
and contractors.
Employers should evaluate which laws will influence their business,
including federal and state anti-discrimination laws, privacy laws, data
security laws and intellectual property laws. It is vital to understand the
specific requirements and standards for using AI (intentionally or
inadvertently) in hiring, promotions, performance evaluations, contracts
with third parties and other employment decisions. Employers should strive
to ensure that all use of AI complies with applicable regulations.
- Evaluate whether the company’s employment handbook, IT policy and code
of ethics cover AI use and consider an AI policy.
One size does not fit all, and some employers may not need a new AI policy,
for now. After a diligent review, a company may determine its employee
handbook, IT policy or code of ethics reasonably addresses the proper use
of emerging technologies like AI and provides steps to avoid
discrimination. But given the breadth of active and pending AI regulation,
employers should consider proactively creating a policy that outlines the
organization’s approach to responsible use of AI, including how AI can be
used, monitored and improved.
- Train employees on risks and benefits of AI tools.
Since tools like ChatGPT are already in use, employers should provide
training to their employees on the risks and benefits of AI use. This
training should include an overview of how AI is used in the organization;
the potential for bias or discrimination; the importance of protecting
private, confidential or trade secret information; and how to recognize and
report issues. Employers also should consider informing their employees
about specific AI technology adopted by the organization.
- Conduct a bias audit to evaluate whether an AI tool negatively affects
a protected trait or class.
When planning to use AI tools, employers should assess the tools’ potential
for discrimination or bias. This may require a bias audit to identify
issues and results negatively impacting protected traits or classes. This
can be difficult for AI tools with “black box” problems, where the AI
provides an output or decision without explaining how it was reached. Thus,
employers should consider conducting an audit with an independent auditor
using an established methodology and be prepared to disclose the results to
employees, applicants and regulators when necessary.
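To make the audit concept concrete, the sketch below computes selection rates and impact ratios across demographic groups, a metric similar in spirit to the classic four-fifths rule and to the impact ratios reported in many bias audits. The group names and counts are hypothetical examples only; a real audit must follow the applicable law and an established, independently applied methodology.

```python
# Illustrative sketch of a selection-rate "impact ratio" check.
# All group labels and counts below are hypothetical; this is not a
# legally compliant bias audit.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group that the AI tool selected."""
    return selected / total if total else 0.0

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical outcomes per group: (selected, total applicants)
outcomes = {"Group A": (48, 100), "Group B": (30, 100)}
for group, ratio in impact_ratios(outcomes).items():
    # Ratios well below 1.0 (e.g., under 0.8) may signal disparate impact
    # warranting further review.
    print(f"{group}: impact ratio {ratio:.2f}")
```

In this hypothetical, Group B's impact ratio is 0.62 (a 30% selection rate against Group A's 48%), the kind of disparity an auditor would flag for closer scrutiny.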
- Designate an AI leader or partner to monitor new regulations, trends
and technology.
As AI tools rapidly advance, regulations and employers are trying to keep
up. Lawmakers will continue to implement federal, state and foreign
regulation that may affect the company’s industry. And employee use of AI
on company machines — whether for work or personal reasons — can expose
employers to serious risk. Designating a reliable AI leader or partner to
monitor regulations, trends and technology can ensure that company
practices are proactively addressing AI developments — both domestic and
foreign.
AI technology will continue to evolve, and employment law will evolve with
it. While AI comes with many benefits, lawyers and HR departments should
stay informed about the latest regulations — and implement safeguards when
necessary — to protect their companies, employees, customers and
stakeholders from increased risk and liability.