
California’s New AI Laws: What SB 243 and AB 489 Mean for AI Safety in 2026

Michal Brand Gold
Oct 19, 2025


TL;DR

California has enacted two major AI safety laws: SB 243, regulating AI companion chatbots, and AB 489, restricting how AI systems present themselves in healthcare and wellness contexts. Together, they establish new requirements for disclosures, harm-prevention safeguards, reporting, and accountability. Their influence will extend far beyond California and signal a growing global push for responsible AI development.

On January 1, 2026, two new California laws will come into force, and with them, a quiet but meaningful shift in how AI systems are expected to behave in the real world.

The state-level SB 243 and AB 489 will become enforceable. These are not policy statements or future frameworks; they impose concrete expectations on product behavior that apply de facto to any AI system accessible to users in California, regardless of where the company is headquartered.

For years, AI governance has lived mostly in whitepapers, voluntary standards, and internal risk frameworks. Now, California law moves AI safety out of the abstract and into production. What your system says, how it presents itself, and how it responds to vulnerable users are no longer just design choices. They are legal obligations.

First, let’s review the main concepts in these two bills. Then, I’ll share my perspective on what they mean for our industry.

SB 243: Regulating “AI Companions”

SB 243 responds to a growing category of AI systems designed for emotional or social interaction. These are not customer service bots or productivity tools. They are systems that present themselves as companions, confidants, or sources of support.

The law reflects concerns raised in recent public cases involving minors and AI chatbots, including litigation over a minor's interactions with a companion chatbot, where users formed emotional reliance on systems that were never designed to manage crisis-level situations.

SB 243 focuses specifically on AI companion chatbots, meaning systems built to meet emotional or social needs. Its key requirements include:

  • Clear AI disclosure: Chatbots must explicitly state that they are not human. For known minors, this disclosure must be repeated every three hours.
  • Harm prevention: Operators must implement protocols to prevent the chatbot from encouraging self-harm or suicide and must route users to appropriate crisis services when necessary.
  • Protection of minors: Companies must take reasonable measures to prevent the generation of sexually explicit content for minors.
  • Transparency and reporting: Beginning July 1, 2027, operators must file annual reports with California’s Office of Suicide Prevention detailing crisis-detection and intervention practices.
  • Enforcement: The law includes a private right of action, allowing harmed users to seek injunctive relief and damages of $1,000 per violation or actual damages, whichever is greater.

SB 243 makes one thing clear: if an AI system is designed to feel human, it must also be designed to protect humans.
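
To make the disclosure requirement concrete, here is a minimal sketch of what the cadence logic might look like in a chat backend. Everything below is an illustrative assumption (the session shape, the constant names, the 988 referral text), not language from the statute, and a real system would pair it with a dedicated crisis classifier and legal review:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

# Illustrative values, not statutory text. SB 243 requires the AI
# disclosure to be repeated every three hours for known minors.
MINOR_REDISCLOSURE_INTERVAL = timedelta(hours=3)
AI_DISCLOSURE = "Just a reminder: I'm an AI chatbot, not a human."
CRISIS_REFERRAL = "If you are in crisis, you can call or text 988 (US)."

@dataclass
class Session:
    user_is_known_minor: bool
    last_disclosure_at: Optional[datetime] = None

def required_notices(session: Session, self_harm_flagged: bool, now: datetime) -> List[str]:
    """Return the notices that must accompany the chatbot's next reply."""
    notices = []
    disclosure_due = session.last_disclosure_at is None or (
        session.user_is_known_minor
        and now - session.last_disclosure_at >= MINOR_REDISCLOSURE_INTERVAL
    )
    if disclosure_due:
        notices.append(AI_DISCLOSURE)
        session.last_disclosure_at = now
    if self_harm_flagged:
        # Detection itself needs a dedicated classifier; this only handles routing.
        notices.append(CRISIS_REFERRAL)
    return notices
```

The point is not this particular implementation. It is that disclosure timing and crisis routing become testable product behavior rather than policy language.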

AB 489: Guardrails for AI in Healthcare and Wellness

AB 489 addresses a subtle but increasingly common problem in AI-powered health and wellness tools: systems that don’t claim to be medical professionals, yet communicate in ways that feel clinically authoritative.

As AI becomes more embedded in wellness apps, symptom checkers, and health-adjacent chatbots, many systems use confident language, medical terminology, and reassuring design cues to appear helpful. In practice, users often interpret this as expertise, regardless of disclaimers buried in the interface.

Beginning January 1, 2026, AB 489 restricts how AI systems operating in healthcare or wellness contexts can present themselves. These systems may not:

  • Use language, titles, or interface elements that imply licensed medical authority
  • Describe outputs as equivalent to professional medical judgment unless that is factually accurate
  • Rely on indirect cues or framing that could reasonably lead users to assume clinical expertise

Importantly, enforcement is not limited to consumer protection authorities. Professional licensing boards are empowered to act, and each misleading interaction may be treated as a separate violation.

For teams building health-related AI products, AB 489 turns a long-standing design tension into a compliance issue. Helpful guidance must now be clearly distinguishable from medical advice, and product teams will need to be deliberate about how tone, terminology, and interface choices shape user perception.
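
As one sketch of what "deliberate about tone and terminology" could look like in practice, a team might add a pre-send review step over generated replies. The patterns and notice text below are illustrative assumptions, a starting point for a much broader review driven by counsel, not a compliant implementation:

```python
import re
from typing import Tuple

# Illustrative patterns only; real coverage would be far broader and
# maintained with legal review, not a static keyword list.
IMPLIED_AUTHORITY_PATTERNS = [
    r"\bas your (doctor|physician|nurse|therapist)\b",
    r"\bI am a (licensed|board-certified) \w+\b",
    r"\bmy medical (opinion|judgment)\b",
    r"\bI (can )?diagnose\b",
]

WELLNESS_NOTICE = "This is general wellness information, not professional medical advice."

def review_reply(text: str) -> Tuple[bool, str]:
    """Flag phrasing that could imply clinical authority; append a plain-language notice."""
    flagged = any(re.search(p, text, re.IGNORECASE) for p in IMPLIED_AUTHORITY_PATTERNS)
    return flagged, f"{text}\n\n{WELLNESS_NOTICE}"
```

Note that flagged replies would typically be blocked or rewritten rather than shipped with a disclaimer attached, since AB 489's concern is the impression the output creates, not the fine print around it.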

From Policy to Production: 2026

What makes SB 243 and AB 489 especially significant is not just what they regulate, but when.

These laws take effect on January 1, 2026. That leaves no room for long-term roadmaps or phased interpretations. AI governance is no longer a future-state discussion mapped through frameworks like the NIST AI Risk Management Framework. It is a production deadline.

For AI builders, whether engineers, product managers, or founders, this is not merely a legal concern. It is a fundamental shift in product requirements:

  • How your system introduces itself
  • How it handles emotionally vulnerable users
  • How it avoids implying expertise it does not have

These are now compliance issues.

A Political Collision: Federal Acceleration vs. State Accountability

All of this is unfolding against a growing political tension at the federal level.

On December 11, 2025, President Trump signed an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order frames state-level AI regulation as a potential obstacle to American AI competitiveness and directs federal agencies to scrutinize and, where appropriate, challenge what it characterizes as “onerous” state requirements, particularly those that could constrain model outputs.

The result is a form of product limbo. At the federal level, the signal is to move fast and minimize friction. At the state level, California is making clear that if AI systems cause harm (especially to minors or patients), there will be consequences.

For product teams, the operational reality is straightforward: executive orders do not override state law. Unless and until a federal court invalidates California’s statutes, January 1, 2026 remains the compliance deadline for any AI system accessible to users in California.

Some Thoughts on the New Regulation

California may have written these laws, but their reach is far broader. Like the GDPR and the EU AI Act, they apply to any company offering AI services to users in California, regardless of geography. In practice, they set expectations that will influence product design far beyond state borders.

As both a lawyer and a parent, I welcome legislation that puts user protection, especially for minors, at the center. At the same time, these laws are not perfect. Their scope is limited, and many enforcement details will only become clear through practice. But within those constraints, they represent a meaningful shift: safety and accountability are no longer optional principles. They are becoming enforceable design standards.

The message across both laws is consistent. If an AI system mimics human interaction, it must clearly disclose what it is. If it operates in emotionally sensitive contexts, it must intervene to prevent harm. If it touches health or wellness, it must avoid implying expertise it does not have.

In other words, trust is no longer an outcome to hope for. It is a feature that must be built.

Companies that act early by reviewing safety frameworks, documenting risk mitigation, strengthening internal escalation paths, and stress-testing user-facing behaviors will be far better positioned, not just to comply, but to earn and sustain user trust. These laws are not the ceiling for responsible AI. They are the floor.

--

Want to stay up-to-date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI.

Need help preparing for the next era of AI and internet safety regulation?

Talk to an Expert