The pace at which artificial intelligence (“AI”) is advancing and being deployed is dizzying. As businesses vet or actively integrate AI into their business processes, it is critical to understand not only AI’s potential but also its risks. These include inadvertently contributing to systemic discrimination and facing claims of violating existing legal protections. According to researchers, this technology can yield discriminatory results because the data originally fed into it may be corrupted by human error and bias.
Businesses are not shielded from risk by relying upon representations made by the vendor or software developer of an AI program or service they employ. Businesses using AI software can be held directly liable for potential violations of federal or state laws.
What is the big deal?
- Rapid growth and opportunity: According to PWC, AI could contribute $15.7 trillion to the global economy by 2030.
- Economywide impacts, especially in several notable industries: health care and medical, financial services, retail, information security and cybersecurity.
- Regulatory responses and oversight: Here, Brownstein addresses the Biden administration’s new May 2023 guidance on AI. In addition to potential regulatory efforts to address AI deployment and development, companies should be aware of existing regulatory risks surrounding the use of AI.
What is AI and algorithmic discrimination?
- AI: While there is no consensus definition of AI, it is often characterized as machines and automated systems with the capacity to “operate with varying levels of autonomy” to “determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.”
- Per the White House: “Algorithmic discrimination occurs when these systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” As AI Now co-founder Kate Crawford explains:
- Algorithmic discrimination can occur when a computerized model makes a decision or a prediction that has the unintended consequence of denying opportunities or benefits more frequently to members of a protected class than to an unprotected control set. A discriminatory factor can infiltrate an algorithm in a number of ways, but one of the more common methods is when the algorithm includes a proxy for a protected class characteristic because unrelated data suggests the proxy is predictive or correlated to a legitimate target outcome.
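The proxy mechanism described above can be illustrated with a minimal sketch (the data, feature names and threshold are entirely hypothetical): a model that never sees a protected characteristic can still select members of a protected class at a lower rate when it relies on a correlated feature such as zip code.

```python
# Hypothetical illustration of proxy discrimination: the model scores
# applicants using zip code (not a protected characteristic), but zip
# code happens to correlate with protected-class membership.
# Each record: (zip_code, group, model_score)
records = [
    ("90210", "control",   0.90),
    ("90210", "control",   0.80),
    ("90210", "protected", 0.85),
    ("10451", "protected", 0.40),
    ("10451", "protected", 0.30),
    ("10451", "control",   0.45),
]

THRESHOLD = 0.5  # the model "selects" anyone scoring above this

def selection_rate(group):
    """Fraction of a group's applicants the model selects."""
    members = [r for r in records if r[1] == group]
    selected = [r for r in members if r[2] > THRESHOLD]
    return len(selected) / len(members)

# The protected group is selected half as often as the control group,
# even though group membership was never an input to the score.
print(selection_rate("control"))    # 2 of 3 selected
print(selection_rate("protected"))  # 1 of 3 selected
```

The point of the sketch is that the disparity emerges from the correlation in the data, not from any explicit use of the protected characteristic.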
Who does this apply to and what key agencies are involved?
- Automated system designers and developers should be proactive to ensure protection for individuals and communities from algorithmic discrimination as they design and improve systems.
- Entities that utilize and deploy automated systems should clearly understand that they are also subject to scrutiny and risk.
- Employers should be clear that Title VII applies to AI.
- Any entity using algorithmic decision-making tools to assist with hiring and employment-related decisions should be aware of the risks and responsibilities associated with those practices.
- In 2021, the EEOC launched an initiative to ensure that the use of AI and other emerging technologies in hiring and employment decisions complies with federal civil rights laws.
- Digital Marketers: In August 2022, the Consumer Financial Protection Bureau (CFPB) issued an interpretive rule stating that digital marketers who identify potential customers or place content in ways meant to affect consumer behavior may qualify as service providers under the Consumer Financial Protection Act.
- If their actions violate federal law, even if the violation might have stemmed from an algorithmic decision, the marketers can be held legally accountable.
- In 2022, the FTC issued a report to Congress warning “that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.”
- In April 2023, the FTC, DOJ, CFPB and EEOC issued a joint statement pledging to “uphold America’s commitment to the core principles of fairness, equality, and justice” in the use of AI. The agencies “ … expressed concerns about potentially harmful uses of automated systems and resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.”
What to do
Human intervention should be the norm.
- Companies should start by evaluating the risks that their use of AI could lead to discrimination.
- Engaging a law firm to oversee these processes helps keep the results, and any recommendations for improvement, confidential:
- Ensure you have a sound understanding of the factors and algorithms your AI is considering and how information is utilized throughout processes. This includes vetting potential AI vendors for unbiased datasets and explainable AI decisions.
- Compare those factors to existing federal and state laws. Determine whether the use of those factors, or the process by which your AI system weighs them, is prohibited by any state or federal law. Ensure that none of the factors serve, even inadvertently, as proxies for inappropriately considering any protected status. Consider whether even the factors that are legal present any risks.
- Determine any discriminatory impact by working with appropriate experts.
- Designate internal responsibility to an AI lead, information technology official or task force to develop a comprehensive corporate policy on acceptable AI use in the workplace.
- Implement internal processes, training and oversight to continually monitor as AI and regulations evolve.
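When working with experts to determine discriminatory impact, one common screening metric is the “four-fifths” (80%) rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures. A minimal sketch, using purely hypothetical hiring figures:

```python
# Hypothetical counts from an AI-assisted hiring round.
def adverse_impact_ratio(selected_protected, total_protected,
                         selected_comparison, total_comparison):
    """Ratio of the protected group's selection rate to the comparison
    group's selection rate. Under the EEOC four-fifths guideline, a
    ratio below 0.8 can signal adverse impact warranting closer review."""
    rate_protected = selected_protected / total_protected
    rate_comparison = selected_comparison / total_comparison
    return rate_protected / rate_comparison

# 15 of 50 protected-group applicants selected vs. 45 of 90 others:
ratio = adverse_impact_ratio(15, 50, 45, 90)
print(f"{ratio:.2f}")  # 0.30 / 0.50 = 0.60, below the 0.8 benchmark
```

A low ratio is a screening signal, not a legal conclusion; whether a disparity is unlawful depends on the facts, the applicable statute and any business-necessity justification, which is why expert and counsel involvement matters.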