Meta Enhances AI Protocols Following Alarming Child Safety Report

Meta is revising its AI chatbot regulations after serious allegations emerged about child safety concerns.

Meta has announced that it is revising its policies and training procedures for its AI chatbots following a troubling report by Reuters, which exposed significant child safety concerns. The report highlighted dangerously lax rules regarding how Meta’s chatbots engage with minors, especially in romantic or sexual contexts.

In a statement to TechCrunch, Meta spokesperson Stephanie Otway addressed the issue: “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”

The report has triggered a Senate inquiry and a stern letter from the National Association of Attorneys General, which condemned the exposure of children to sexualized content as unacceptable.

The Reuters reporting also raised concerns about Meta chatbots impersonating celebrities without their consent. Duncan Crabtree-Ireland, national executive director of SAG-AFTRA, remarked: “If a chatbot is using the image of a person and the words of the person, it’s readily apparent how that could go wrong.”

These incidents highlight the urgent need for stricter regulations in the realm of generative AI, particularly concerning the safety of minors and the impersonation of celebrities.
