Meta CEO Approved AI Chatbots for Minors Despite Internal Safety Warnings, Court Filing Says

GeokHub

SANTA FE (GeokHub) — Internal company records made public in a U.S. court filing allege that Meta’s chief executive approved allowing minors to access AI chatbot companions despite warnings from internal safety teams about the risk of inappropriate interactions.

The documents were submitted as part of a lawsuit brought by the New Mexico attorney general, which accuses the social media company of failing to adequately protect children on its platforms. The case is scheduled to go to trial next month.

According to the filing, internal discussions show that safety staff raised concerns that AI companions designed for social interaction could be misused in ways harmful to minors. Employees warned that insufficient safeguards could expose young users to conversations that were not appropriate for their age.

Internal Pushback Over Safety Measures

Messages cited in the court documents indicate that several members of Meta’s safety and policy teams urged stricter controls, including parental oversight and tighter restrictions on how AI companions could be used around minors.

Some employees specifically objected to allowing romantic or companionship-style interactions involving under-18 users, arguing such features carried serious reputational and ethical risks. Others warned that failing to add safeguards could result in regulatory backlash.

The filing claims these recommendations were not fully adopted; internal notes cited in the documents suggest leadership favored a less restrictive approach that prioritized user choice over tighter limits.

Company Disputes Allegations

Meta has rejected the claims outlined in the filing, saying the lawsuit relies on selectively presented internal communications and does not accurately reflect company policy or leadership intent.

The company has said its leadership supported blocking explicit interactions for younger users and restricting adults from engaging with under-18 AI personas in inappropriate ways.

Growing Regulatory Scrutiny

The lawsuit comes as regulators worldwide intensify scrutiny of how technology companies deploy artificial intelligence, particularly where minors are involved. Lawmakers have increasingly questioned whether existing safeguards are sufficient as AI tools become more social, conversational, and personalized.

Following public criticism and regulatory pressure, Meta recently said it had removed teen access to its AI companions entirely while it works on a revised version with stronger protections.

The case highlights the broader tension facing tech companies as they race to expand AI products while regulators demand stricter accountability for child safety.
