
Snap AI Chatbot Investigation Set in UK Over Teen-Privacy Concerns

by delta

(The Snapchat application on a smartphone arranged in Saint Thomas, Virgin Islands, Jan. 29, 2021. Gabby Jones | Bloomberg | Getty Images)

Snap is under investigation in the U.K. over potential privacy risks associated with the company’s generative artificial intelligence chatbot.

The Information Commissioner’s Office (ICO), the country’s data protection regulator, issued a preliminary enforcement notice Friday, alleging risks the chatbot, My AI, may pose to Snapchat users, particularly 13- to 17-year-olds.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” Information Commissioner John Edwards said in the release.

The findings are not yet conclusive, and Snap will have an opportunity to address the provisional concerns before a final decision is made. If the ICO’s provisional findings result in an enforcement notice, Snap may have to stop offering the AI chatbot to U.K. users until it resolves the privacy concerns.

“We are closely reviewing the ICO’s provisional decision. Like the ICO, we are committed to protecting the privacy of our users,” a Snap spokesperson told CNBC in an email. “In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available.”

The tech company said it will continue working with the ICO to ensure the organization is comfortable with Snap’s risk-assessment procedures. The AI chatbot, which runs on OpenAI’s ChatGPT, has features that alert parents if their children have been using it. Snap says it also has general guidelines its bots must follow to refrain from offensive comments.

The ICO did not provide additional comment, citing the provisional nature of the findings.

The agency previously issued a “Guidance on AI and data protection” and followed up with a general notice in April listing questions developers and users should ask about AI.

Snap’s AI chatbot has faced scrutiny since its debut earlier this year over inappropriate conversations, such as advising a 15-year-old how to hide the smell of alcohol and marijuana, according to The Washington Post.

Snap said in its most recent earnings that more than 150 million people have used the AI bot.

Other forms of generative AI have also faced criticism as recently as this week. Bing’s image-creating generative AI, for instance, has been used by extremist messaging board 4chan to create racist images.


Copyright ©️ 2023  Delta Compliance. All Rights Reserved
