OpenAI's Testing Reveals Increasing Hallucinations in ChatGPT Models

OpenAI's latest reasoning models, o3 and o4-mini, show a worrying rise in hallucination rates compared to their predecessors.

In recent evaluations, OpenAI uncovered a troubling trend: its latest reasoning models, o3 and o4-mini, hallucinate, confidently producing fabricated or incorrect outputs, at significantly higher rates than the earlier model o1.

  • According to OpenAI's own testing, o3 hallucinated on 33% of questions in PersonQA, the company's benchmark of questions about public figures, roughly twice the rate of its predecessor, o1.
  • o4-mini fared worse still, hallucinating on 48% of the same benchmark; a sketch of how such a rate is computed follows this list.
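To make those figures concrete, here is a minimal, illustrative sketch of how a hallucination rate like the ones above is computed from graded evaluation records. This is not OpenAI's actual PersonQA harness: the record schema, the grade labels, and the choice to exclude abstentions from the denominator are all assumptions made for illustration.

```python
# Illustrative only: computing a hallucination rate from graded eval records.
# Each record is assumed (hypothetically) to carry a pre-assigned grade of
# "correct", "hallucination", or "abstained". In real benchmarks the grading
# step is the hard part; whether abstentions count in the denominator also
# varies by benchmark -- this sketch excludes them.

from collections import Counter


def hallucination_rate(records: list[dict]) -> float:
    """Fraction of attempted answers graded as hallucinations."""
    grades = Counter(r["grade"] for r in records)
    attempted = grades["correct"] + grades["hallucination"]
    if attempted == 0:
        return 0.0
    return grades["hallucination"] / attempted


# Example: 2 hallucinations out of 4 attempted answers -> 50%
sample = [
    {"question": "Where was Ada Lovelace born?", "grade": "correct"},
    {"question": "Who founded Acme Corp?", "grade": "hallucination"},
    {"question": "When did the senator retire?", "grade": "abstained"},
    {"question": "What team does the coach lead?", "grade": "hallucination"},
    {"question": "Where did the author study?", "grade": "correct"},
]
print(f"{hallucination_rate(sample):.0%}")  # -> 50%
```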

OpenAI's own technical report acknowledges that more research is needed to understand why hallucination rates have risen in these models. Some industry analysts suggest that the shift toward reasoning-focused training may itself be contributing to the increased error rates. Either way, the reliability question is unavoidable: outputs from these models need careful verification before anyone relies on them. One simple verification pattern is sketched below.
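As an illustration of what such verification can look like, the sketch below cross-checks a model's answer against a trusted reference and flags mismatches instead of passing them through. The `ask_model` stub and the `TRUSTED_FACTS` table are hypothetical stand-ins, not a real API; in practice the reference would be a retrieval system or a curated database.

```python
# Illustrative only: never accept a model's factual claim unchecked.
# Both ask_model and TRUSTED_FACTS are hypothetical stand-ins.

TRUSTED_FACTS = {
    "capital of Australia": "Canberra",
}


def ask_model(question: str) -> str:
    # Stand-in for a real LLM call; a hallucinating model might say "Sydney".
    return "Sydney"


def verified_answer(question: str) -> str:
    answer = ask_model(question)
    expected = TRUSTED_FACTS.get(question)
    if expected is not None and answer.strip().lower() != expected.lower():
        # Flag the mismatch rather than passing the hallucination through.
        return f"UNVERIFIED (model said {answer!r}, reference says {expected!r})"
    return answer


print(verified_answer("capital of Australia"))
```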
