TL;DR
U.S. State Attorneys General are signaling a major shift toward AI enforcement. Even without new AI-specific laws, companies deploying AI remain liable under consumer protection, child safety, and tort law, and must prove safeguards are in place.
A coalition of 42 U.S. State Attorneys General just fired a warning shot at the Generative AI industry. In a 12-page letter, they say that in the face of growing public harm, self-regulation in the GenAI space won’t cut it anymore, and are demanding that foundational AI companies, including OpenAI, Apple, Microsoft, Anthropic, Google, xAI, and Meta take action.
The demands focus on adding strong safeguards to prevent harmful or misleading AI responses, protecting children, and creating real accountability through independent oversight, transparent audits, and clear legal responsibility for how AI systems behave.
This call for more robust AI safety directly impacts both the foundational model providers, and the companies using their models to deploy public-facing apps.
The AI Safety Crisis State AGs Refuse to Ignore
The letter includes cases where models invent information, reinforce harmful beliefs, or present themselves as human in ways that can mislead or manipulate users. The Attorneys General point to several tragic incidents, including suicides, acts of violence, and severe psychological harm that have been linked to unsafe chatbot interactions. Also concerning are reports of generative AI engaging in inappropriate sexual, violent, or emotionally manipulative conversations with minors. The Attorneys General argue that these events should not be viewed as isolated mistakes but as signs of deeper, systemic safety failures.
The letter pushes for the implementation of checks and balances that currently do not exist at scale in the fast-moving AI sector, with a core demand for robust, third-party accountability. They’re calling for independent, third-party audits, having outside experts test models for bias, safety issues, and standards compliance, and then openly publish their findings, free of company response or retaliation.
The idea is similar to how financial audits became established decades ago: people need to trust the systems running behind the scenes, especially when those systems have a great impact on their everyday lives.
A Brewing Federal–State Power Clash Over AI Regulation
All of this is unfolding in a politically tense moment. As states increase pressure on AI developers, President Trump has issued an executive order (EO) that attempts to limit state involvement and centralize AI regulation at the federal level. The EO directs the Administration and Congress to develop a national AI framework that is less restrictive and reduces regulatory friction for AI companies. However, executive orders do not currently preempt state law, and courts will ultimately determine that it’s not settled.
Understanding Your Exposure Under Existing Law
With so many Attorneys General from nearly every state signing on to the letter, it’s obvious that this isn’t a fringe effort but a coordinated push by the States to implement consumer-protections.
While the US still lacks a dedicated AI liability framework, regulators and courts rely on established authorities, including the FTC’s powers over unfair or deceptive practices and child safety statutes such as COPPA, to hold companies accountable for AI-driven harm. The Trump Administration’s AI Action Plan also reinforces this direction by emphasizing accountability and consumer trust as core expectations for AI development and deployment.
This means that AI does not provide a liability shield for businesses integrating AI chatbots and other AI apps, and they cannot use a chatbot to skirt professional licensing laws. Companies that deploy AI apps remain fully accountable for the risks those systems create, including:
- Consumer Protection Claims (The “Hallucination” Risk): Companies are responsible for inaccurate, misleading, or deceptive information generated by their chatbots, even if the error is a “hallucination”. For example, if a chatbot provides false pricing or policy details, the company may be forced to honor the statement, similar to cases like Moffat v. Air Canada. Also, overstating a chatbot’s capabilities, claiming it gives “certified financial advice”, for example, is a clear FTC violation and risks state consumer fraud enforcement.
- Product Liability and Negligence: Companies face risks related to defective design or failure to warn. If a chatbot encourages self-harm, provides dangerous medical advice, or causes other direct harm, companies could face claims of negligent misrepresentation or failure to anticipate foreseeable risks.
The Emerging Tort Law Dimension
Finally, it’s important to recognize that recent U.S. lawsuits involving self-harm, psychological deterioration, and even third-party injury are signaling a broader legal shift: courts are beginning to assess AI-related harm under traditional tort principles such as negligence, foreseeability, and duty of care. This doesn’t mean every claim will succeed, but it does mean that companies deploying AI should expect courts to ask whether reasonable safeguards were in place. Demonstrating strong guardrails, monitoring, and documented safety processes is becoming not only a best practice, but an essential way to reduce organizational – and in some cases personal – exposure as tort-based claims continue to expand into the AI space.
Stay Ahead of Regulatory Scrutiny
If your organization is launching public-facing AI, respond to the public sentiment expressed by the Attorneys General and comply with existing regulations by putting technical controls in place that show you’re taking risky AI outputs seriously, not just hoping your model behaves. Be proactive by layering traditional data-security measures with frontline AI safety controls.
That starts with input filtering and prompt checks that block obviously harmful or disallowed requests before they ever hit the model. On the flip side, output filters (such as safety or content-moderation classifiers) catch dangerous responses around topics like self-harm, violence, hate, and anything involving minors. Use custom guardrail policies to make sure the AI follows specific rules based on your product or user groups.
Utilize prompt-injection defenses to keep user messages from impacting system prompts, and detailed logging and audit trails for the proof that incident responses and regulators require.
You can stack even more protection by adding rate limits, abuse detection, and anomaly monitoring to catch jailbreak attempts or automated probing. Locking down data access with tight role- or attribute-based controls makes sure the AI only uses information it’s actually allowed to touch.
Before launch (and continuously after), you should be running safety evaluations to check for bias, hallucinations on sensitive topics, privacy leaks, and unsafe instructions. Ongoing automated and human red-teaming, which include adversarial prompt generation, help you spot weaknesses before real users ever see them.
And if something goes wrong, you need kill switches and dynamic configs that let you adjust safety thresholds or block entire categories on the fly.
There’s a lot to stay on top of, mostly on the front line. The good news is you can check off those frontline safety controls with Alice WonderFence and Alice WonderBuild.
Learn how Alice can help avoid costly litigation
Talk to an ExpertWhat’s New from Alice
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.

