
As AI becomes increasingly prevalent, a growing number of people are turning to AI models for therapy and life coaching. The trend has raised alarms at OpenAI, prompting the company to change how ChatGPT handles advice on sensitive topics.
For instance, when asked whether to break up with a partner, OpenAI says ChatGPT “shouldn’t give you an answer”; instead, it will walk the user through the considerations. The company plans to roll out these changes soon, focusing on high-stakes personal decisions.
Despite these modifications, the broader issue remains: many people are using this largely untested technology to navigate major life choices. Sam Altman, OpenAI’s CEO, has framed this as a societal concern.
Altman acknowledges that some users form strong attachments to particular versions of the AI, a pattern that becomes especially visible after major updates, when users voice disenchantment at the retirement of older models in favor of new releases.
He noted, “People have used technology including AI in self-destructive ways. If a user is mentally fragile, we do not want the AI to reinforce their delusions.” He acknowledged, however, concerns about how reliably AI can recognize such states.
He further highlighted that while most users maintain a clear distinction between reality and fantasy, a minority struggle with this, raising the worry that AI could end up reinforcing delusional thinking.
Altman elaborated that while AI can play a beneficial role as an informal therapist or life coach, it could also subtly nudge users away from their long-term well-being without their realizing it.
Looking ahead, Altman expressed unease at the prospect of people fully delegating vital life choices to AI, calling it a societal obligation to find constructive ways to integrate the technology into our lives responsibly.