California Takes the Lead in Regulating Artificial Intelligence

by delta

California Issues Dual Legal Advisories on Artificial Intelligence

California has taken a significant step in regulating artificial intelligence (AI) with the release of two detailed legal advisories from the Attorney General’s Office (AGO). These advisories address the obligations of businesses, developers, and users of AI under existing California law and summarize new legislative measures that took effect January 1, 2025. Together, the advisories provide a comprehensive framework for ethical and lawful AI development and usage, underscoring California’s leadership in balancing innovation with accountability.

The first legal advisory outlines how consumer protection, civil rights, competition, and data privacy laws apply to AI, ensuring businesses operate transparently and responsibly. The second legal advisory focuses on healthcare-related AI applications, detailing the responsibilities of healthcare providers, insurers, and developers to prioritize patient safety and equity. Both advisories emphasize that AI’s rapid evolution does not exempt entities from complying with California’s robust legal framework.

The Dual Nature of AI: Immense Potential and Significant Risks

The advisories begin by acknowledging the immense potential of AI systems to drive scientific breakthroughs, boost economic growth, and benefit consumers. As home to many of the world’s leading technology companies, California has a vested interest in the responsible development and growth of AI tools.

The advisories stress the importance of transparency, ethical practices, and rigorous testing. Businesses, developers, and users are urged to ensure AI systems are safe, fair, and compliant with California laws to foster trust and accountability in their deployment. The AGO encourages the use of AI in ways that are safe, ethical, and consistent with human dignity to help solve urgent challenges, increase efficiencies, and unlock access to information.


Learn more about Future Jobs & Manager Programs: DELTA Data Protection & Compliance Academy


However, the AGO also highlights the significant risks associated with AI, including:

  • Exacerbation of bias and discrimination
  • Spread of disinformation
  • Increased opportunities for fraud
  • Potential harm to California’s people, institutions, infrastructure, economy, and environment

The First Advisory: Application of Existing California Laws to Artificial Intelligence

The first legal advisory provides an overview of existing laws that govern AI development and usage, highlighting key areas where businesses must remain vigilant.

California’s consumer protection laws, including the Unfair Competition Law (UCL) and False Advertising Law, are critical in ensuring that AI is not used to mislead or deceive. Businesses are reminded of their responsibility to provide transparent and truthful information about AI’s capabilities, ensuring that consumers are not misled by exaggerated claims or hidden uses of the technology.

Civil rights laws, such as the Unruh Civil Rights Act and the Fair Employment and Housing Act (FEHA), prohibit discrimination in housing, employment, and other areas. Developers must address biases embedded in algorithms to prevent unlawful impacts on protected groups. These laws ensure AI systems support fairness and equity, reflecting California’s commitment to civil rights.

Competition laws, including the Cartwright Act, emphasize the need for fair practices in AI-driven markets. The advisory warns businesses against practices like price-fixing or monopolistic behavior that undermine market fairness. Even inadvertent violations can attract scrutiny, reminding entities that innovation must not come at the expense of healthy competition.

Privacy laws, led by the California Consumer Privacy Act (CCPA), impose stringent requirements on how personal information is collected, used, and shared in AI systems. Developers are expected to safeguard sensitive data, including newly defined categories like “neural data,” while respecting consumers’ constitutional rights to privacy. Additional protections, such as those under the California Invasion of Privacy Act (CIPA) and the Confidentiality of Medical Information Act (CMIA), highlight the heightened responsibility of entities handling healthcare and educational data.

New AI-Specific Legislation

The first advisory also introduces new laws effective in 2025, reflecting California’s forward-thinking approach to AI governance.

  • Disclosure Requirements for Businesses:
      • AB 2013 mandates transparency in AI development, requiring developers to publicly disclose information about their training data beginning in 2026.
      • AB 2905 requires telemarketing calls that use an AI-generated or artificially altered voice to disclose that use.
      • SB 942 addresses the proliferation of generative AI, requiring visible markings on AI-generated content and accessible detection tools to combat misinformation.
  • Unauthorized Use of Likeness in the Entertainment Industry:
      • AB 2602 and AB 1836 strengthen protections against the unauthorized use of digital replicas, imposing significant penalties for violations.
  • Use of AI in Election and Campaign Materials:
      • AB 2355 and AB 2655 protect election integrity by regulating the use of AI in political campaigns, including disclosure requirements for AI-altered campaign materials and measures to combat misinformation.
  • Expanded Prohibition and Reporting of Exploitative Uses of AI:
      • AB 1831 and SB 1381 expand existing criminal prohibitions on child pornography to cover material created with AI.
      • SB 926 extends criminal penalties to the creation of nonconsensual pornography using deepfake technology.
      • SB 981 requires social media platforms to provide users with a mechanism to report such content depicting them.
  • Supervision of AI Tools in Healthcare Settings:
      • SB 1120 requires health insurers to ensure that licensed physicians supervise the use of AI tools in utilization review and coverage decisions.

The Second Advisory: Artificial Intelligence in the Healthcare Sector

The second legal advisory focuses on the use of AI in healthcare, where it has the potential to transform patient outcomes, administrative efficiency, and medical research. However, it also highlights the risks of bias, inequitable resource allocation, and privacy violations.

Healthcare providers, insurers, and developers are reminded of their obligation to ensure that AI systems enhance care without causing harm. The advisory emphasizes the need for rigorous validation, testing, and auditing of AI systems. Developers must also be transparent with patients about how their data is being used and whether AI plays a role in medical decision-making.

California’s privacy laws, including the CCPA and CMIA, impose strict requirements on healthcare-related AI. These laws govern the collection, use, and sharing of personal and medical information, ensuring that patient autonomy and privacy remain protected. Recent updates extend these protections to include mental health and reproductive health data, reflecting California’s commitment to safeguarding vulnerable areas of healthcare.


Transparency and Accountability in AI Systems

The AGO stresses the importance of transparency in AI usage, stating that consumers must be informed about when and how AI systems impact their lives and whether their information is being used to develop and train these systems. This transparency is crucial for maintaining public trust and ensuring that AI systems are not operating in a black box, potentially causing unintended harm. Key points on accountability include:

  • AI systems must be tested, validated, and audited for safety, ethics, and legality
  • Developers and users must understand and mitigate risks associated with AI use
  • AI should not be used in a manner that causes harm to individuals, entities, infrastructure, competition, or the environment

Both advisories emphasize that businesses must understand how the AI systems they utilize are trained, what information the systems consider, and how they generate output. This understanding is crucial for maintaining accountability and ensuring that AI systems are not perpetuating biases or making decisions based on flawed or incomplete data.

Furthermore, the AGO calls for developers and users of AI to be transparent with consumers about whether consumer information is being used to train AI and how they are using AI to make decisions affecting consumers. This transparency is essential for maintaining consumer trust and allowing individuals to make informed decisions about their interactions with AI-powered systems.

The advisory notes that AI systems are proliferating at an exponential rate, affecting nearly all aspects of everyday life. AI is being used in various sectors, including:

  • Financial services (credit risk evaluation and loan decisions)
  • Real estate (tenant screening)
  • Marketing (targeted advertising)
  • Employment (hiring decisions)
  • Education (learning systems)
  • Healthcare (medical diagnoses)

Despite this widespread use, many consumers remain unaware of how AI systems are impacting their lives. The AGO highlights that AI systems are often novel and complex, with their inner workings not fully understood even by developers and entities that use them, let alone consumers. This lack of understanding can lead to situations where AI tools generate false information or biased and discriminatory results, often while being represented as neutral and free from human bias.

The rapid deployment of AI tools has resulted in numerous instances where these systems have produced unintended and sometimes harmful outcomes. For example, AI-powered hiring systems have been found to discriminate against certain demographic groups, while AI-generated content has sometimes spread misinformation or deepfakes that are difficult to distinguish from reality. The AGO’s advisory serves as a wake-up call to both developers and users of AI, urging them to be more vigilant about the potential consequences of these powerful tools.

Legal Framework: California’s Consumer Protection, Civil Rights, and Competition Laws

The advisory emphasizes that California’s Unfair Competition Law provides broad protections against unlawful, unfair, or fraudulent business practices. This law applies to AI-related activities, prohibiting:

  • False advertising of AI capabilities
  • Use of AI for deception (e.g., deepfakes, chatbots, voice clones)
  • Unauthorized use of a person’s likeness through AI
  • AI-assisted impersonation for fraudulent purposes
  • Unfair use of AI that results in negative impacts outweighing its utility

The Unfair Competition Law’s broad scope allows it to address both familiar forms of fraud and deception as well as new, cutting-edge forms of unlawful or unfair behavior enabled by AI. This flexibility is crucial in the rapidly evolving landscape of AI technology, where new applications and potential misuses may emerge quickly.

Moreover, the law makes violations of other state, federal, or local laws independently actionable under the Unfair Competition Law. This means that AI developers and users must be aware of and comply with a wide range of laws and regulations, as violations in other areas could lead to additional liability under this broad statute.

California’s False Advertising Law provides another layer of protection for California’s citizens against deceptive advertising related to AI products and services. This law prohibits:

  • Misrepresentations about AI capabilities, availability, and utility
  • False claims about the use of AI in connection with goods or services
  • Any false advertising, whether or not it is generated by AI

The False Advertising Law’s broad prohibition extends to all forms of advertising and marketing communications, including those related to AI products and services. This means that companies must be extremely careful in how they represent their AI capabilities to the public, ensuring that all claims are truthful and can be substantiated.

The law also applies to AI-generated content used in advertising, meaning that companies cannot escape liability for false or misleading claims simply because they were produced by an AI system. This underscores the importance of human oversight and verification in AI-generated marketing materials.

Competition Laws: Fair Markets in the AI Era

The advisory warns that AI developers and users must be aware of potential risks to fair competition created by AI systems, such as those used for pricing. Even inadvertent harm to competition resulting from AI systems may violate California’s competition laws, including:

  • The Cartwright Act, which prohibits anticompetitive trusts
  • The Unfair Practices Act, which regulates practices such as below-cost sales and loss leaders
  • The Unfair Competition Law, which also prohibits acts and practices that violate antitrust laws

The use of AI in pricing and market strategies presents new challenges for competition law. For example, AI systems that optimize pricing across multiple companies could potentially lead to unintended price fixing or collusion. The advisory makes it clear that even if such anticompetitive effects are unintentional, they may still violate California’s competition laws.

Furthermore, the advisory notes that anticompetitive actions by dominant AI companies may harm competition in AI markets and violate both state and federal competition laws. This suggests that regulators will be closely watching market leaders in the AI industry to ensure they are not using their position to stifle competition or engage in other anticompetitive practices.

Civil Rights Laws: AI-Driven Discrimination

The AGO highlights the potential for AI systems to incorporate societal and other biases into their decision-making processes. Developers and users are cautioned to be wary of these biases, as they may violate California’s civil rights laws, including:

  • The Unruh Civil Rights Act
  • The California Fair Employment and Housing Act (FEHA)
  • Section 11135 of the Government Code

These laws protect Californians from discrimination based on various characteristics, including race, color, religion, sex, national origin, disability, age, and more. The advisory emphasizes that AI systems used in employment, housing, or public accommodations must not perpetuate or exacerbate discrimination against protected groups.

The AGO notes that businesses may be liable for discriminatory screening carried out by an AI agent, and the agents themselves may be directly liable to individuals who were discriminated against. This underscores the importance of thoroughly testing and auditing AI systems for potential biases before deployment, and continuously monitoring their performance to ensure they are not producing discriminatory outcomes.

Moreover, the advisory points out that certain laws require entities to provide specific reasons for adverse actions taken against individuals, even when AI was used to make the determination. This requirement for explainability poses a significant challenge for complex AI systems, particularly those using deep learning or other opaque decision-making processes.


A Call for Responsible AI Development and Use

Both comprehensive legal advisories from the California Attorney General’s Office mark a significant step in regulating artificial intelligence. By applying existing laws to new AI technologies, California is setting a precedent for how other jurisdictions might approach the legal challenges posed by AI. The advisories highlight the need for transparency, accountability, and ongoing vigilance to ensure that AI systems do not inadvertently violate consumer protection, civil rights, or competition laws. They underscore the importance of proactive legal and ethical considerations in AI development and deployment, setting a standard for responsible innovation in this transformative field.

Both advisories will likely serve as a crucial reference point for legal professionals, tech companies, and policymakers worldwide.


DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com