President Joe Biden issued an executive order Oct. 30 requiring new safety assessments, equity and civil rights guidance, and research on AI’s impact on the labor market.
“AI is all around us,” Biden said. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
The following day, Kelly Dobbs Bunting, an attorney with Greenberg Traurig LLP in Philadelphia, outlined what the U.S. Equal Employment Opportunity Commission (EEOC) has said about AI in the workplace and discussed how to leverage AI in a legal, effective manner at the SHRM INCLUSION 2023 conference in Savannah, Ga.
She began the session by referring to the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative of 2021—intended to guide applicants, employees, employers and technology vendors in ensuring that AI technologies are used fairly and consistent with federal equal employment opportunity laws.
Through the initiative, the EEOC will:
- Establish an internal working group to coordinate the agency’s work on the initiative.
- Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications.
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.
“The purpose of this initiative was to ensure AI in the workplace complies with federal discrimination laws,” Bunting said. “It also talks about the impact AI has on hiring.”
Bunting listed several EEOC documents that allude to AI use at work:
- The new strategic enforcement plan for fiscal years 2024-2028. The document emphasizes the agency’s efforts to protect workers from discrimination involving artificial intelligence, pregnancy, long COVID and other protected categories.
- Guidance issued in 2022 that reinforced the illegality of intentionally or unintentionally “screening out” applicants with disabilities and prohibited disability-related inquiries and medical exams.
- Guidance issued in 2023 stating that employers may be responsible for any discriminatory bias in the AI software they use for HR functions and encouraging regular audits to monitor AI use.
“Google was sued because the algorithm they were using [to sort job applicants] favored young men,” Bunting said. “Many applicants [of other races and ages] didn’t even get a chance to make their case. You shouldn’t automatically exclude anybody from a job—and the EEOC is paying close attention to this.”
AI Discrimination Risks
Bunting outlined how AI can discriminate in the workplace:
- Scanners that select resumes based on certain key words.
- Employee-monitoring software that notes when employees are at their computers or rates employees based on keystrokes.
- Software for video interviews that scans facial expressions or speech patterns to determine trustworthiness or honesty.
- Personality, aptitude or cognitive skills tests.
- Chatbots that automatically reject applicants who do not meet specific requirements.
“Americans love stuff that is easy and that makes them money—and AI is easy and saves money,” Bunting said. “It’s the perfect worker because it doesn’t call in sick. But there needs to be a human element to [oversee] AI.”
She referred to several lawsuits involving AI in the workplace:
- EEOC v. iTutor Group. A tutoring company agreed to pay $365,000 to resolve charges that its AI-powered hiring selection tool automatically rejected women applicants over 55 and men over 60.
- Real Women in Trucking v. Meta Platforms, Inc. A nonprofit group that advocates for women truck drivers filed a class action charge with the EEOC against Meta Platforms, alleging that Meta routinely discriminates against women and older people when deciding which users receive employers’ job ads on Facebook.
- Derek Mobley v. Workday. Derek Mobley, a Black man over the age of 40 with a disability, filed a complaint against Workday alleging racial, age and disability discrimination after he applied to 80-100 positions from 2018 to 2023 at different employers that use Workday as a screening tool and was denied each time.
Best Practices to Consider
There are no federal regulations on using AI in the workplace. However, Bunting said several states have grown “tired of waiting for the federal government to do something,” so they’ve taken it upon themselves to pass laws, including:
- In 2020, the Artificial Intelligence Video Interview Act went into effect in Illinois. The law regulates several factors related to employers’ use of AI in video interviewing, including obtaining applicants’ informed consent, how videos should be distributed and destroyed, and reporting requirements.
- In 2023, Maryland passed the Facial Recognition Technology Law prohibiting employers from using certain facial recognition services—such as those that might cross-check applicants’ faces against external databases—during an applicant’s interview process unless the applicant consents.
- In 2023, New York City began enforcing a new law requiring employers to audit their HR technology systems for bias and publish the results.
- Massachusetts, New Jersey, New York and Vermont are considering bills that would regulate AI in hiring decisions.
Bunting offered recommendations to help companies avoid lawsuits associated with AI use:
- Provide notice before using AI software in HR functions.
- Obtain employee consent.
- Conduct bias audits on a regular basis.
- Maintain awareness of different laws in different jurisdictions.
- Create a “Use of AI” policy.
- Educate HR and IT about the dangers of misusing AI.
- Have human oversight of HR decisions that involve AI.
Make sure applicants consent to your use of AI, Bunting urged.
“Remember, the EEOC says you have to provide an alternative if somebody does not consent to AI [in the interview process],” she said. “That could be an in-person interview, where you can actually read body language and [the applicant] can explain a gap in their resume.”