California’s First-in-the-Nation Frontier AI Law Sets Reporting and Compliance Requirements

October 3, 2025

As California continues to lead on AI regulation, Gov. Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, on Sept. 29, 2025. The Act is the first state law to set safety, transparency and incident reporting rules for the most advanced AI systems, while leaving room to align with potential future federal standards.

The Act covers frontier models (a foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations) and large frontier developers (a frontier developer that, together with its affiliates, had annual gross revenues exceeding $500 million in the preceding calendar year).

For these developers, SB 53 establishes reporting and compliance requirements aimed at strengthening AI transparency and oversight while preserving room for innovation.

  • 22757.12 — Requires developers to publish a frontier AI safety framework on their websites. The framework must address risk thresholds for catastrophic harms, mitigation strategies, deployment reviews, internal governance, use of third-party evaluations, cybersecurity efforts, and processes to identify and respond to safety incidents. Developers must review the framework at least annually, and any material changes to the framework must be published and explained within 30 days.
  • 22757.12 (c)-(d) — Requires transparency reports at deployment and ongoing internal-use risk summaries. Frontier developers must publish a public transparency report with key details and use restrictions prior to or concurrent with the deployment of a new or substantially modified frontier model. Developers must also periodically submit confidential summaries of any catastrophic risk assessments arising from internal use of their frontier models to the California Office of Emergency Services (OES).
  • 11546.8 — Establishes CalCompute. This new consortium within the Government Operations Agency is charged with developing a framework for a public computing cluster to advance the safe, ethical, equitable and sustainable development and deployment of AI.
  • 22757.13 — Requires critical safety incidents to be reported to the OES. Developers must submit reports within 15 days of discovery or within 24 hours if there is an imminent risk of death or serious injury. Beginning in 2027, the OES will annually provide anonymized, aggregated information about reported critical safety incidents.
  • 1107.1 — Provides whistleblower protections for employees of frontier developers. Developers must also maintain internal reporting processes.
  • 22757.15 — Limits enforcement to the attorney general. The attorney general may seek civil penalties of up to $1 million per violation for noncompliance, including for false or misleading statements.
  • 22757.14 — Directs the California Department of Technology to annually assess and recommend appropriate updates to the law. Updates are based on various factors, including international and federal standards and stakeholder feedback.

Although the Act is primarily directed at large AI developers, downstream businesses that deploy AI for underwriting, credit decisions, cybersecurity, fraud prevention or other automated decisions should evaluate vendor obligations, contractual flow-downs and operational impacts. Downstream users should expect stronger safety and security terms, shorter incident-notice timeframes and increased transparency as AI continues to proliferate. SB 53 is intended to provide “trust but verify” oversight of catastrophic AI-related risks, but its ultimate impact on AI innovation remains to be seen.