
Child Psychiatrist Jailed After Using AI to Make Pornographic Deep-Fakes of Kids

by delta

A child psychiatrist was jailed Wednesday for the production, possession, and transportation of child sexual abuse material (CSAM), including the use of web-based artificial intelligence software to create pornographic images of minors.

Prosecutors in North Carolina said David Tatum, 41, who was found guilty by a jury in May, has been sentenced to 40 years in prison followed by 30 years of supervised release, and ordered to pay $99,000 in restitution.

“As a child psychiatrist, Tatum knew the damaging, long-lasting impact sexual exploitation has on the wellbeing of victimized children,” said US Attorney Dena J. King in a statement. “Regardless, he engaged in the depraved practice of using secret recordings of his victims to create illicit images and videos of them.”


“Tatum also misused artificial intelligence in the worst possible way: to victimize children,” said King, adding that her office is committed to prosecuting those who exploit technology to harm children.

His indictment [PDF] provides no details about the AI software used; another court document [PDF] indicates that Tatum, in addition to possessing, producing, and transporting sexually explicit material of minors, viewed generated images of kids on a deep-fake website.

The trial evidence cited by the government includes a secretly made recording of a minor (a cousin) undressing and showering, as well as other videos of children participating in sex acts.

“Additionally, trial evidence also established that Tatum used AI to digitally alter clothed images of minors making them sexually explicit,” prosecutors said. “Specifically, trial evidence showed that Tatum used a web-based artificial intelligence application to alter images of clothed minors into child pornography.”

Two months ago, according to CNN, a South Korean man was sentenced to two and a half years in prison for generating sexual images of children.

The use of AI models to generate CSAM, among other things, has become a matter of serious concern among lawmakers, civil society groups, and companies selling AI services.

In prepared remarks [PDF] delivered at a US Senate subcommittee hearing earlier this year, OpenAI CEO Sam Altman said, “GPT-4 is 82 percent less likely to respond to requests for disallowed content compared to GPT-3.5, and we use a robust combination of human and automated review processes to monitor for misuse. Although these systems are not perfect, we have made significant progress, and are regularly exploring new ways to make our systems safer and more reliable.” Altman said OpenAI also relies on Thorn’s Safer service to spot, block, and report CSAM.

Yet efforts to detect CSAM after it has been created could diminish online security if they impose network surveillance requirements.






Copyright © 2023 Delta Compliance. All Rights Reserved.