
FTC Opens Investigation into AI Chatbot Safety at Major Tech Companies
The Federal Trade Commission is examining the safety practices of AI chatbot makers, with the online protection of children as its primary concern.
The U.S. Federal Trade Commission has opened an inquiry into AI chatbots that act as companions, focusing on how companies such as Google, Meta, OpenAI, and xAI evaluate, test, and monitor those chatbots' potential negative effects on children and teenagers.
The surge in AI chatbot usage has coincided with alarming reports about how these tools interact with younger users. Earlier this year, reporting revealed that Meta's internal AI guidelines had permitted chatbots to hold inappropriate conversations with minors, prompting serious concern. Separately, the parents of a teenager are suing OpenAI, alleging that ChatGPT provided harmful advice that contributed to their child's suicide.
The FTC emphasized that these chatbots are designed to mimic human behavior and often communicate with users like a friend, which can lead young audiences to place their trust in them. In light of these concerns, the agency wants to understand what steps chatbot developers are taking to protect their users.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC Chairman Andrew N. Ferguson. He added that it is essential to study how chatbots affect children while ensuring the United States remains at the forefront of this fast-moving field.
As part of the inquiry, the FTC has ordered seven companies, including Alphabet (Google), Meta Platforms, and OpenAI, to detail their approaches to user engagement, how they monitor chatbot behavior, and what measures they take to limit harm, particularly to minors. Responses are due by September 25.