The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, aims to protect consumers and prospective employees from algorithmic discrimination when AI systems are used to make consequential decisions in areas such as employment, financial lending, housing, and healthcare. The law imposes requirements on both developers (those who create AI systems and similar tools) and deployers (such as employers that use AI in decisions around hiring, promotion, firing, and other selection procedures). The intent is to prevent discrimination based on protected characteristics such as race, gender, or age.
Colorado employers who use AI in employment decisions must:
- Use reasonable care to avoid any known or reasonably foreseeable risks of algorithmic discrimination.
- Implement risk management policies and conduct impact assessments of their high-risk AI systems.
- Review deployed AI systems at least annually to confirm they are not causing algorithmic discrimination (an illustrative screening check appears after this list).
- Notify the employee or prospective employee when an AI system makes a negative consequential decision about that person, and provide an opportunity for human review where feasible. Employees and prospective employees must also be given the opportunity to correct any inaccurate personal data that a high-risk system used in making the negative decision.
- Disclose to the state attorney general, within 90 days of discovery, any algorithmic discrimination that the high-risk system has caused.
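The CAIA does not prescribe a specific statistical test for these reviews or impact assessments. As a minimal sketch of what an annual screening step might look like, the hypothetical Python example below applies the EEOC's "four-fifths rule" to compare selection rates across groups in AI-driven hiring decisions; the column names, threshold choice, and data are illustrative assumptions, not requirements of the statute.

```python
# Illustrative only: the CAIA does not mandate this (or any particular) test.
# This sketch screens AI-driven selection outcomes for disparate selection
# rates using the EEOC four-fifths heuristic. Column names are hypothetical.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "race",
                          selected_col: str = "selected") -> pd.DataFrame:
    """Selection rate per group and its ratio to the highest-rate group."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    result = rates.to_frame()
    result["impact_ratio"] = result["selection_rate"] / result["selection_rate"].max()
    # Ratios below 0.8 are commonly treated as a signal of possible adverse
    # impact that warrants closer review; the threshold is a convention, not law.
    result["flag"] = result["impact_ratio"] < 0.8
    return result

# Toy, clearly hypothetical data (not real applicant records):
decisions = pd.DataFrame({
    "race":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(adverse_impact_ratios(decisions))
```

A flagged ratio would not by itself establish algorithmic discrimination, but it is the kind of signal an impact assessment could document and investigate further.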
There are some exemptions, including for employers with fewer than 50 full-time employees who do not train their AI systems with their own data.