California Enacts Landmark AI Transparency Law
On Monday, September 29, California Governor Newsom signed a new AI law mandating notable new disclosure, reporting, and transparency obligations for developers of large AI models. The measure, known as the Transparency in Frontier Artificial Intelligence Act (SB 53), requires certain developers of large AI “frontier” models to: (i) proactively disclose AI governance and risk mitigation practices in an “AI framework” report; (ii) adopt practices to ensure greater transparency in defining and assessing catastrophic risk thresholds arising from potential uses of large AI frontier models; and (iii) report certain AI safety incidents. The bill will become effective January 1, 2026.
This measure adopts key elements of a California expert working group report (the California Report on Frontier AI Policy) issued earlier this year on the advancement of AI safety guardrails, and mirrors a similar measure approved by the New York legislature, which is now pending before New York Governor Hochul.
The author of SB 53, California State Senator Scott Wiener, previously proposed a more thorough AI regulatory proposal that was ultimately vetoed by Governor Newsom. This new measure reflects Senator Wiener’s attempt to achieve similar goals through increased transparency and reporting duties.
Disclosure of AI Governance Practices in “AI Framework”
The measure directs “large frontier developers,” meaning entities that (1) develop AI models trained using a computing power greater than 10^26 integer or floating-point operations (definitions to be updated annually), and (2) have gross annual revenues in excess of $500 million, to disclose on their website a “frontier AI framework” that describes the company’s AI governance, safety and security practices. Such disclosure must describe, among other things, how the company:
* Incorporates national and international governance standards, and industry-consensus best practices into the frontier AI framework’s governance procedures;
* Defines thresholds to identify and assess whether a frontier AI model is capable of causing “catastrophic risks”;
* Uses mitigation strategies to address the potential for catastrophic risks;
* Implements cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties; and
* Responds to critical safety incidents.
Frontier AI framework reports mandated by this measure must be updated at least once a year, or whenever the frontier model developer undertakes a “material modification” to the framework.
Transparency and Disclosure Obligations
All frontier model developers (i.e., not only those that have over $500 million in annual revenue) are also required to disclose on their website certain facts about the frontier model in a so-called “transparency report.”
Key Takeaways:
* Latest California AI law mandates heightened transparency, disclosure, and reporting obligations for developers of large AI “frontier” models. Such developers must publicly disclose AI governance practices and guardrails in “AI frameworks,” publish “transparency reports” concerning key features of new models, and report certain “critical safety incidents” to state agencies.
* Measure codifies elements of a recent California expert working group report on AI safety which recommends heightened transparency and disclosure obligations to regulate developers of AI models.
* Legislature is concerned that large models have “capabilities that pose catastrophic risks from both malicious uses and malfunctions, including artificial intelligence-enabled hacking, biological attacks, and loss of control.”
* Transparency and disclosure duties are buttressed by provisions of the new law that protect whistleblowers who disclose to regulators or employers dangers to public health or safety resulting from a catastrophic risk, or violations of the law regarding large AI model behavior.
* Measure establishes a public cloud compute cluster, “CalCompute,” at the University of California, that will provide AI infrastructure for startups and researchers.
Summary of California SB 53: AI Safety and Transparency
California’s SB 53 establishes a comprehensive framework for regulating “frontier AI” models, focusing on safety, transparency, and accountability. Here’s a breakdown of the key provisions:
1. Definition & Scope: The law targets “large frontier developers” – those creating AI models with capabilities exceeding the most advanced existing models.
2. Transparency & Reporting on Frontier AI Frameworks:
* Regular Reports: Large frontier developers must submit reports detailing assessments conducted under their “frontier AI framework” (their internal safety protocols).
* Disclosure Requirements: These reports must include assessment results, involvement of third-party evaluators, and steps taken to fulfill their framework. Existing system/model cards fulfilling these requirements are considered compliant.
* Catastrophic Risk Assessments: Developers must disclose assessments of “catastrophic risk” from internal model use to a state agency every three months (or a reasonable timeframe). False or misleading statements about these risks are prohibited.
3. Reporting of “Critical Safety Incidents”:
* Definition: Critical safety incidents include:
  * Unauthorized access/modification/exfiltration of model weights leading to death or injury.
  * Harm resulting from a “catastrophic risk” materializing.
  * Loss of control of a model causing death or injury.
* Reporting Timeline: Developers must report incidents within 15 days of discovery, with faster reporting required for imminent threats.
* Report Content: Reports must detail the incident’s nature and date, and whether it involved a frontier model.
* Information Sharing: The Attorney General or Office of Emergency Services can share reports with the legislature, Governor, federal government, and other state agencies.
* Confidentiality: Reports are exempt from public records requests, with safeguards for trade secrets, public safety, and cybersecurity.
4. Safe Harbor:
* The Office of Emergency Services can designate comparable or more rigorous federal laws/regulations/guidance as a “safe harbor” for compliance.
5. Whistleblower Protections:
* Prohibition of Retaliation: Developers are prohibited from preventing or retaliating against employees (“covered employees”) responsible for assessing/managing critical safety risks from disclosing information to authorities or internally.
* Internal Disclosure Processes: Developers must establish anonymous internal reporting channels for covered employees to report potential dangers to public health/safety or violations of the law.
6. Enforcement & Penalties:
* Civil Penalties: Violations of transparency/disclosure rules can result in penalties up to $1 million per violation.
* Legal Recourse: Successful plaintiffs can recover attorneys’ fees.
7. CalCompute – Public Cloud Computing Cluster:
* The law authorizes the development of a public cloud computing cluster (“CalCompute”) to expand access to resources for AI development, training, and research benefiting the public.
In essence, SB 53 aims to proactively address the potential risks of advanced AI by mandating transparency, establishing reporting mechanisms for safety incidents, protecting whistleblowers, and fostering responsible innovation.