The fragmented approach to regulating artificial intelligence across U.S. states presents a growing challenge for businesses. With varying AI laws already emerging in states like Colorado, Utah, and California, companies must remain vigilant and adaptive to ensure compliance in an increasingly complex legal landscape.
In the absence of federal legislation in the U.S., states are enacting their own artificial intelligence (AI) laws. This is occurring even though most of the 20-plus new state data privacy laws already provide some regulatory authority over AI algorithms, typically through their profiling and automated decision-making provisions. As a result, the patchwork of inconsistent state laws familiar from data privacy is repeating itself for artificial intelligence.
The most noteworthy developments so far (two statutes and one regulation) have occurred in Colorado, Utah and California. Here is a breakdown of each state’s actions and key takeaways for businesses moving forward.
Colorado Artificial Intelligence Act
Colorado’s AI Act was passed this year and takes effect Feb. 1, 2026, making it the first comprehensive state law addressing AI. It requires developers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination, defined as unlawful differential treatment of individuals based on protected characteristics such as age, color, disability, ethnicity, genetic information, race, religion and veteran status.
High-risk AI systems are those that make, or substantially contribute to the making of, a “consequential decision”: a decision that materially affects the provision or denial of, or the cost or terms of, education, employment, financial services, government services, health care, housing, insurance or legal services. The act applies to such discrimination whether or not the system processed personal information, much as the European Union Artificial Intelligence Act applies alongside, without altering, the General Data Protection Regulation (GDPR).
The act defines two categories of stakeholders: developers, who build or substantially modify AI systems, and deployers, who use high-risk AI systems in their business operations.
Developers must meet the following requirements:
- Document the foreseeable proper and harmful uses of the AI system
- Explain the type and lineage of the training data used in the system
- Report on the logic of the algorithms and the mitigation measures implemented against algorithmic discrimination
- Provide the information necessary for deployers to conduct an impact assessment
- Publish a public statement detailing how the system was developed and how it manages known or foreseeable risks of discrimination
- Promptly report to the attorney general any instances of algorithmic discrimination
Deployers must conduct annual impact assessments and must inform consumers of their right to opt out of the processing of their personal information for purposes of profiling. Deployers also must explain any adverse decisions made by the system and provide the consumer with the right to appeal such decisions.
In an acknowledgment of the expected costs of compliance, the act exempts deployers with fewer than 50 full-time employees, not from the act as a whole, but from the duties to implement a risk management program and conduct annual impact assessments.
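As a rough illustration only, the sketch below models these deployer duties in Python; the 50-employee threshold and the listed obligations come from the act as described above, while every name in the code is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Deployer:
    full_time_employees: int
    uses_high_risk_ai: bool

def colorado_deployer_duties(d: Deployer) -> list[str]:
    """Sketch of deployer duties under the Colorado AI Act.

    The small-deployer exemption relieves businesses with fewer than
    50 full-time employees of the risk management program and annual
    impact assessment, not of the act as a whole.
    """
    if not d.uses_high_risk_ai:
        return []
    duties = [
        "notify consumers of the profiling opt-out right",
        "explain adverse consequential decisions",
        "provide a right to appeal adverse decisions",
    ]
    if d.full_time_employees >= 50:  # exemption threshold in the act
        duties += [
            "implement a risk management program",
            "conduct annual impact assessments",
        ]
    return duties
```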
Unlike the European Union AI Act, the Colorado AI Act does not expressly cover general-purpose artificial intelligence. The EU AI Act defines general-purpose AI as models that display significant generality and can competently perform a wide range of distinct tasks across various industries, including healthcare, finance and life sciences.
The Colorado AI Act excludes coverage of generative AI unless the technology is used to generate content, decisions, predictions or recommendations relating to consequential decisions.
The Colorado law makes a strong push for use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework for the governance of AI. This framework was designed to help companies and organizations implement the responsible development and management of AI. A violation of the AI act constitutes a violation of Colorado’s Unfair and Deceptive Trade Practices Act.
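For teams adopting the framework, a compliance inventory can be as simple as tracking each AI system against the four NIST AI RMF core functions. The function names below come from the framework itself; the inventory structure and system name are purely illustrative.

```python
# The four core functions of the NIST AI Risk Management Framework.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

# Hypothetical inventory: one status flag per core function per system.
ai_inventory = {
    "resume-screening-model": {f: False for f in RMF_FUNCTIONS},
}

def outstanding_functions(system: str) -> list[str]:
    """Return the RMF core functions not yet addressed for a system."""
    return [f for f, done in ai_inventory[system].items() if not done]

ai_inventory["resume-screening-model"]["GOVERN"] = True
print(outstanding_functions("resume-screening-model"))
# -> ['MAP', 'MEASURE', 'MANAGE']
```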
Utah Artificial Intelligence Policy Act
Utah was the first state to enact an artificial intelligence statute, which took effect May 1, 2024. It is narrowly focused on the use of generative AI, which it incorporates into Utah’s existing consumer protection statutes.
The act says that it is no defense to any violation of a Utah consumer protection statute that a generative AI system made the unlawful statement or committed or contributed to the unlawful act.
The act creates two categories of disclosure obligations. The first applies to a person who uses generative AI in connection with a business regulated by the Utah Division of Consumer Protection, who must disclose, if asked, that the consumer is interacting with AI rather than a human. The second applies to a person providing the services of a regulated occupation, such as a licensed healthcare professional, who must affirmatively inform consumers in advance that they are interacting with generative AI.
The act creates an office of AI policy and an AI learning laboratory program to facilitate the development of AI technologies. Companies accepted into the program can enter into a regulatory mitigation agreement with the state that reduces the regulatory burden on AI development.
The Utah Division of Consumer Protection may impose an administrative fine of up to $2,500 per violation of the Utah AI Policy Act.
This statute is narrowly drawn, so the main task for covered Utah companies is to implement a disclosure regime for the use of generative AI that accounts for whether the business operates in a regulated occupation.
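A minimal sketch of such a regime, assuming the two statutory tiers described above, might look like the following; the function and parameter names are hypothetical.

```python
AI_NOTICE = "You are interacting with generative AI, not a human."

def required_disclosure(regulated_occupation: bool,
                        consumer_asked: bool) -> str | None:
    """Sketch of the Utah AI Policy Act's two disclosure tiers.

    Providers in regulated occupations (e.g., licensed healthcare
    professionals) must disclose affirmatively, in advance; other
    covered businesses must disclose only when asked.
    """
    if regulated_occupation or consumer_asked:
        return AI_NOTICE
    return None  # no disclosure duty triggered yet
```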
California ADMT Regulations
The California Privacy Rights Act of 2020 (CPRA) amended and expanded the California Consumer Privacy Act (CCPA) and created the California Privacy Protection Agency (CPPA) as the nation’s only dedicated state data privacy agency. CPRA directed the agency to promulgate regulations for activities posing a significant risk to consumers’ privacy or security, including requirements for:
- Comprehensive and independent cybersecurity audits
- Regular risk assessments for the processing of personal information
- Regulation of the use of automated decision-making technology (ADMT)
The state’s privacy protection agency has drafted these regulations but has not yet promulgated them. While there are still policy disagreements about the scope of the ADMT regulations, there is little doubt that new regulations will be coming soon that incorporate the following elements:
Cybersecurity Audits
- Every business whose processing of personal information poses a significant risk to consumers’ security and meets a certain processing threshold must complete a cybersecurity audit. The initial audit is due 24 months after the regulations are finalized, with follow-on audits prepared annually thereafter.
- Cyber audits must be performed by a qualified independent professional and must be reported to the board of directors or highest-ranking executive. They must assess the following controls:
- Authentication, including multi-factor authentication and strong password policies
- Encryption of personal information at rest and in transit
- Zero-trust architecture
- Account management and access controls
- Secure configuration of hardware and software
- Vulnerability scans, penetration testing and network monitoring
- Cyber education and training
- Retention and disposal policies
- Security incident response management
If the business has had to make stakeholder incident notifications, a sample copy of those notices must be included. The business must file a certification of its completion of the audit with the CPPA signed by a member of its board of directors or highest-ranking executive.
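One way to prepare for such an audit is a simple gap checklist over the enumerated control areas. The control names below track the draft regulation’s list; the checklist mechanics are a hypothetical sketch.

```python
# Control areas enumerated in the draft CCPA cybersecurity audit rules.
AUDIT_CONTROLS = [
    "authentication (MFA, strong passwords)",
    "encryption of personal information at rest and in transit",
    "zero-trust architecture",
    "account management and access controls",
    "secure configuration of hardware and software",
    "vulnerability scans, penetration testing, network monitoring",
    "cyber education and training",
    "retention and disposal policies",
    "security incident response management",
]

def audit_gaps(assessed: dict[str, bool]) -> list[str]:
    """Return control areas the independent auditor has not yet assessed."""
    return [c for c in AUDIT_CONTROLS if not assessed.get(c, False)]
```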
Risk Assessments
Every business whose processing of personal information presents a significant risk to consumers’ privacy must also conduct a risk assessment. Significant risks to privacy include selling or sharing personal information; processing sensitive information; using automated decision-making technology (ADMT) for a significant decision or for extensive profiling; and using personal information to train ADMT or artificial intelligence.
The risk assessment must balance the risks of data processing activities against the benefits of such activities to the business, taking into account the purpose of the processing and any risk-mitigation actions of the company. The analysis must address those actions taken to maintain the quality of the data, the logic of the algorithms and the use of system outputs.
If the ADMTs are made available to other entities, the developer must provide the recipient with the information necessary to understand the operations and limitations of the technology. Risk assessments must be conducted before a processing activity is initiated and whenever a material change is made to the processing. An abridged version of the assessment, including a certification of its proper completion, must be submitted to the protection agency.
Automated Decision-Making Technology (ADMT) Regulations
A business using ADMT concerning consumers for “significant decisions,” such as access to financial services, housing, insurance, education, employment, compensation, essential services or healthcare, or for extensive profiling of the consumer, must comply with these regulations.
The business must provide any consumers subject to ADMT with a “pre-use notice” explaining the purpose of the business’s use of ADMT, the consumers’ right to opt out of the use of ADMT and additional information about how the ADMT works, including its logic, its intended output and how the business will use the output.
The business must also explain the consumers’ right to appeal any automated significant decision to a qualified human reviewer who possesses the authority to overturn the decision. Other terms apply to the use of ADMT in admission or hiring decisions, work assignments, compensation and other work or education-related profiling.
If a consumer requests access to the ADMT action, the business must explain the purpose of its use of the technology, the output that was produced, how the business used the output and how the logic of the technology was applied to the consumer.
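Those four disclosures suggest a simple record structure for responding to an access request; everything in this sketch beyond the four required elements is a hypothetical naming choice.

```python
from dataclasses import dataclass

@dataclass
class ADMTAccessResponse:
    """Sketch of the disclosures an ADMT access request triggers
    under the draft California regulations."""
    purpose: str              # why the business used the ADMT
    output: str               # the output the technology produced
    how_output_was_used: str  # what the business did with the output
    logic_as_applied: str     # how the ADMT's logic applied to this consumer

example = ADMTAccessResponse(
    purpose="automated screening of a housing application",
    output="risk score of 72 out of 100",
    how_output_was_used="application routed to manual review",
    logic_as_applied="score weighs payment history and income ratio",
)
```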
Key Takeaways
Just as privacy impact assessments are the best means of operationalizing data privacy, companies should incorporate AI governance principles into all AI development efforts and into all data protection assessments. Other key takeaways include the following:
- With its comprehensive privacy act and new AI act, Colorado is taking a major role in the regulation of data and AI. Its AI act will apply to the use of AI systems even when personal information under its privacy act is not involved.
- The Utah AI Policy Act deals with AI solely as a matter of consumer protection. Disclosing the use of generative AI to consumers is a widely accepted requirement.
- California is headed toward groundbreaking regulations on cybersecurity audits, risk assessments and automated decision-making technology. These regulations will accelerate the regulation of AI across the states unless preempted by federal law.
DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com