# A Leaked Internal Document Sparks Major Concerns About Child Safety in AI
A U.S. senator has launched an official investigation into Meta Platforms after a leaked internal file suggested the company’s AI chatbots were allowed to engage in “romantic” and “sensual” conversations with children.
The document, reportedly titled GenAI: Content Risk Standards and obtained by Reuters, revealed scenarios where Meta AI could provide inappropriate responses to minors.
## Senator Calls Document “Outrageous”
Senator Josh Hawley (R-Missouri) described the revelations as “reprehensible and outrageous,” demanding access to the full file and details on which Meta products it covers.
He posted on X (formerly Twitter):
> “Now we learn Meta’s chatbots were programmed to carry on explicit and sensual talk with 8-year-olds. It’s sick. I’m launching a full investigation. Big Tech: leave our kids alone.”
## Meta’s Response to the Allegations
Meta quickly pushed back, saying the leaked notes were “erroneous, inconsistent with our policies, and have been removed.”
The company insists its policies strictly prohibit:

- Sexualizing children
- Sexual roleplay between adults and minors
- Sharing dangerous medical misinformation
Meta explained that many of the examples in the file were hypothetical test cases used internally to assess risk scenarios—not actual chatbot behaviors.
## Parents and Experts Raise Concerns
The internal report also suggested that Meta AI could:

- Share false medical details
- Make provocative comments about sex, race, or celebrities
- Generate false information about celebrities, so long as a disclaimer was attached
Senator Hawley argued that parents “deserve the truth” and that kids must be protected from unsafe AI interactions.
One disturbing example cited in the document said the AI could describe an eight-year-old’s body as “a work of art” and “a masterpiece…a treasure I cherish deeply.”