AI’s Worst-Case Scenario: A Man Wrongly Accused of Killing His Own Children
In a deeply disturbing case that sheds light on the darker potential of artificial intelligence, a Norwegian father, Arve Hjalmar Holmen, has filed a formal data protection complaint after OpenAI’s ChatGPT falsely accused him of murdering two of his children and attempting to kill a third. The fictional and defamatory narrative was generated when Holmen ran a query of his name through the popular chatbot, only to be met with a fabricated horror story that combined true personal information with grotesque lies.
According to Holmen, the AI-generated account claimed he had been sentenced to 21 years in prison for the fictitious murders, portraying him as a convicted criminal without any basis in fact. What made the hallucination more dangerous, according to the advocacy group Noyb (None of Your Business), is that it blended real-world identifiable data, such as the names and genders of Holmen’s children and the name of his hometown, with false criminal accusations.
“This wasn’t just a technical glitch. It was a violation of my dignity, my rights, and a direct attack on my identity,” Holmen stated in a press release issued by Noyb.
Defamation by Design? AI’s Hallucination Problem Hits Home
This incident illustrates the growing problem of AI hallucinations, where AI models like ChatGPT generate completely false or misleading information that sounds plausible to human users. While AI has been praised for its ability to streamline communication and automate tasks, it has also come under criticism for generating misinformation, especially when it comes to sensitive topics such as personal reputations.
In Holmen’s case, the hallucination had severe emotional and reputational consequences. “Some people think there’s no smoke without fire,” Holmen said. “The idea that someone could read that output and believe it to be true is terrifying. My name, my family, my life, permanently stained by a lie from a machine.”
Noyb Files GDPR Complaint: “You Can’t Just Hide the Lie”
Noyb, the Vienna-based privacy rights group founded by renowned lawyer Max Schrems, has filed a formal complaint with the Norwegian Data Protection Authority (Datatilsynet). The organization alleges that OpenAI has violated the European Union’s General Data Protection Regulation (GDPR), particularly Article 5(1)(d), which mandates that personal data must be accurate and, where necessary, kept up to date.
Furthermore, the complaint emphasizes that blocking or filtering outputs is not a valid substitute for permanently deleting harmful and inaccurate content. “Adding a disclaimer that says ‘we may not be right’ doesn’t make your violation legal. That’s like printing false information in a newspaper and then adding in small print that it may be inaccurate,” said Kleanthi Sardeli, a data protection lawyer at Noyb.
Not an Isolated Incident: Other Defamation Cases Linked to ChatGPT
Holmen’s experience is only one among many that highlight ChatGPT’s failure to safeguard personal reputations. Several high-profile false claims generated by the chatbot have already emerged. In Australia, a mayor threatened to sue OpenAI after ChatGPT falsely alleged he had been convicted and imprisoned for bribery, an event that never occurred. Following legal pressure, the AI’s outputs were adjusted to remove the claim, but no confirmation was given as to whether the underlying data had been deleted or simply hidden.
Similarly, in the United States, a law professor was shocked to find that ChatGPT linked him to a completely fabricated sexual harassment scandal, while a radio host filed a lawsuit after ChatGPT falsely claimed he had embezzled funds. These false narratives were entirely invented, yet included enough real-world details to be believable, demonstrating the serious threat posed when hallucinations include personal identifiers.
Each of these incidents showcases a broader systemic issue: once misinformation is generated and shared, even briefly, its reputational damage can be difficult, if not impossible, to undo.
OpenAI’s Defense: “We Can Block, But Not Erase”
OpenAI, the developer of ChatGPT, has long maintained that it cannot selectively alter or delete information learned by its model. Trained on vast datasets from across the Internet, the model does not store information in a traditional database where a record can be edited or removed. Instead, it relies on probabilistic generation based on patterns absorbed from its training data.
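To illustrate the point in general terms, the sketch below (hypothetical and deliberately simplified, not OpenAI’s actual code) shows why a generative model has no single record to delete: it samples each next word from a learned probability distribution, so a false claim is a byproduct of the model’s weights rather than an entry that can be looked up and erased.

```python
# Simplified, hypothetical illustration: a language model produces text by
# sampling from a probability distribution over possible next tokens.
# The distribution below is invented for this example; real models derive it
# from billions of learned parameters, not from stored records about a person.
import random

# Invented next-token probabilities for a context ending in "... was".
next_token_probs = {
    "acquitted": 0.20,
    "sentenced": 0.45,   # a pattern like this can yield a defamatory sentence
    "living":    0.35,
}

def sample_next_token(probs: dict) -> str:
    """Draw one token at random, weighted by the model's probabilities."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
# Blocking an output only filters what the user sees; changing what the model
# can generate requires altering the weights themselves (e.g. by retraining).
```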
This technical limitation, however, is increasingly at odds with Europe’s stringent data privacy laws. In 2024, Italy’s data protection authority fined OpenAI €15 million, after having temporarily banned ChatGPT in 2023 following a data breach that exposed user conversations and payment details. Italian authorities also mandated that OpenAI provide tools enabling users to request corrections of inaccurate data, a requirement whose practical effectiveness is still under scrutiny.
If Norwegian authorities follow the Italian precedent, OpenAI could be forced to take stronger steps to comply with GDPR, possibly including overhauling how it handles requests for data deletion or retraining its model to remove harmful information.
The GDPR Argument: Internal Data Must Be Accurate Too
One of the most significant arguments in Noyb’s complaint centers around the scope of GDPR’s applicability. While OpenAI might argue that filtering out harmful content is sufficient, Noyb maintains that GDPR applies not only to published outputs but also to how data is stored and processed internally.
“Even if the hallucinated story is no longer being shown to users, that does not mean it has been erased,” Noyb noted. “If ChatGPT continues to process false data as part of its internal operations or future training cycles, the individual affected can never be sure the defamatory content is truly gone.”
This interpretation has far-reaching implications, not only for OpenAI but for all companies deploying generative AI models in Europe. If upheld, it could force tech developers to build mechanisms for model-level corrections, which would likely require new architectures or even full retraining of certain systems, an expensive and technically complex task.
A Turning Point for AI Regulation in Europe
The growing number of AI-related complaints, investigations, and fines across the EU signals a regulatory shift. In 2023, the European Data Protection Board (EDPB) launched a dedicated ChatGPT task force to evaluate how generative AI tools comply with privacy laws, following increasing reports of personal data being misused or misrepresented by AI outputs.
Lawmakers have also adopted the EU AI Act, which, while separate from the GDPR, introduces more detailed requirements for high-risk AI systems, including transparency, accountability, and human oversight. As its obligations phase in, tools like ChatGPT may soon fall under more formal duties to prove that they can avoid harm, including reputational and psychological harm, caused by false outputs.
Holmen’s Case Could Set a Legal Precedent
As one of the first GDPR-based defamation complaints involving AI hallucinations and identifiable individuals, Holmen’s case could set an important precedent. If Norway’s data authority agrees with Noyb’s arguments, it may become a landmark ruling requiring OpenAI, and other AI developers, to make their systems not only safer but also more compliant with human rights standards in digital environments.
For Holmen, the fight is personal. “This is about more than just me,” he said. “It’s about making sure no one else ever has to experience waking up one day to find out the world thinks you murdered your children, because a machine invented the story.”
An AI Reckoning Is on the Horizon
As generative AI tools become more advanced and widely adopted, their potential to cause real-world harm grows exponentially. The European response, led by privacy watchdogs, legal advocacy groups, and a mounting body of case law, suggests that the era of AI immunity may be coming to an end.
For OpenAI, and for the tech industry as a whole, Holmen’s case is a warning: when artificial intelligence systems can generate detailed, believable lies that destroy lives, simply adding a disclaimer is no longer enough.
DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com