A fabricated rape allegation against a sitting U.S. senator has forced Google to remove its lightweight AI model from public-facing tools, marking the latest flashpoint in an emerging legal battle over whether Section 230 protections shield AI companies from defamation liability.

Google removed Gemma from its AI Studio after U.S. Senator Marsha Blackburn accused the model of fabricating sexual misconduct allegations against her. When asked whether Blackburn had been accused of rape, Gemma falsely claimed that during a 1987 state senate campaign, a state trooper alleged she had pressured him to obtain prescription drugs and that the relationship involved non-consensual acts.

Blackburn wrote that none of this was true, not even the campaign year, which was actually 1998, and noted that the links the model provided led to error pages.

The Tennessee Republican’s October 30 letter escalated beyond technical complaints into accusations of systematic defamation. She demanded Google explain how Gemma generated these false accusations and what measures would be implemented to remove defamatory material.

During a recent Senate Commerce hearing, Blackburn raised conservative activist Robby Starbuck’s lawsuit against Google, which claims the company’s AI models generated defamatory statements describing him as a child rapist and serial sexual abuser. Starbuck filed the suit in Delaware state court on October 22, seeking at least $15 million in damages.

Google responded Friday night on X, stating that it had never intended Gemma to be a consumer tool and that it was removing the model from AI Studio while continuing to make the models available via API.

The Section 230 Question 

Beyond Google’s removal decision, the critical unresolved issue is whether Section 230 of the Communications Decency Act protects AI companies when their models generate defamatory content, a question courts have not yet decided and Congress has not clarified.

Section 230 has historically protected tech companies from liability for third-party content, but legal experts say its applicability to AI-generated content is unclear. The statute protects platforms from being treated as publishers of information provided by others, but doesn’t shield platforms when they are responsible for creating content themselves.

Some courts have said that content-neutral algorithms organizing user inputs might qualify for protection. However, legal experts note that Section 230 was built to protect platforms from liability for what users say, not for what platforms themselves generate.

A plain reading of Section 230 seems to offer no protection when AI creates or develops false information in response to user prompts.

Former Representative Chris Cox and Senator Ron Wyden, Section 230’s co-authors, have said the law was never intended to shield companies from the consequences of products they create. OpenAI CEO Sam Altman has testified that Section 230 does not provide an adequate regulatory framework for generative AI tools.

Senator Josh Hawley’s No Section 230 Immunity for AI Act sought to exclude generative AI from liability protections in 2023 but faced resistance from critics who argued it would harm AI innovation. Until Congress clarifies the statute, AI defamation cases will occupy a legal gray area between immunity and liability.

What This Means

Google’s swift removal of Gemma, an open model family first released in February 2024, suggests the company recognizes the difference in legal exposure between controlled enterprise API access and open-ended public use.

President Donald Trump signed an executive order banning “woke AI” earlier this year, intensifying political scrutiny of perceived bias.

So far, no AI defamation case has gone to trial, though several lawsuits are pending. In May 2025, a Georgia state court granted summary judgment in favor of OpenAI in Walters v. OpenAI, determining that no reasonable reader would interpret ChatGPT’s output as stating actual facts.

However, a federal defamation lawsuit in Minnesota brought by Wolf River Electric alleges that false statements generated by Google’s AI Overviews caused the company financial damage.

The challenge facing courts is whether existing tort law can hold companies accountable for defamatory hallucinations, since defamation standards turn on what a publisher knew or intended at the time of publication, a state of mind that cannot meaningfully be attributed to a large language model.

Whether disclaimers can shield companies when their models fabricate criminal allegations against real people remains an urgent question as worldwide AI spending is projected to approach $1.5 trillion in 2025.