
AI ‘hallucinations’ are a well-known issue, and when they produce misleading information about real people, the consequences can escalate into legal trouble. Recently, a Norwegian man named Arve Hjalmar Holmen asked ChatGPT about himself by entering his own name. To his shock, the AI fabricated a story claiming he had been imprisoned for murdering his sons. What made the fabrication especially disturbing was that it was woven around correct personal details, including references to Holmen’s children and his hometown.
Noyb, a European privacy advocacy organization, took up the case and conducted its own investigation. It found no evidence that the story ChatGPT produced had any basis in fact.
Noyb has since filed a complaint with the Norwegian Data Protection Authority, arguing that OpenAI violated the GDPR by disseminating false personal information. It also asserts that the model may still hold the erroneous data, which could resurface in future outputs.
Under the GDPR’s accuracy principle (Article 5(1)(d)), personal data must be accurate; where it is not, it must be corrected or deleted. Noyb insists that OpenAI take steps to ensure such incidents do not recur.