
China sets stricter rules for training generative AI models

by delta

China has released draft security regulations for companies providing generative artificial intelligence (AI) services, encompassing restrictions on data sources used for AI model training.

On Wednesday, Oct. 11, the National Information Security Standardization Committee — which comprises representatives from the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and law enforcement agencies — released the proposed regulations.

Generative AI, exemplified by OpenAI’s ChatGPT, learns to perform tasks by analyzing historical data and generates new content, such as text and images, based on that training.

Screenshot of the National Information Security Standardization Committee (NISSC) publication. Source: NISSC

The committee recommends a security evaluation of the content used to train publicly accessible generative AI models. Sources containing more than “5% in the form of unlawful and detrimental information” will be blacklisted. This category includes content advocating terrorism or violence, subversion of the socialist system, harm to the country’s reputation, and actions undermining national cohesion and societal stability.

The draft regulations also emphasize that data subject to censorship on the Chinese internet should not serve as training material for these models. This development comes slightly over a month after regulatory authorities granted permission to various Chinese tech companies, including the prominent search engine Baidu, to introduce their generative AI-driven chatbots to the general public.

Since April, the CAC has consistently required companies to submit security evaluations to regulatory bodies before introducing generative AI-powered services to the public. In July, the cyberspace regulator released a set of guidelines governing these services, which industry analysts noted were considerably less burdensome than the measures proposed in the initial April draft.

The newly unveiled draft security stipulations require organizations training these AI models to obtain explicit consent from individuals whose personal data, including biometric information, is used for training. The guidelines also include comprehensive instructions on preventing intellectual property infringements.

Nations worldwide are wrestling with the establishment of regulatory frameworks for this technology. China regards AI as a domain in which it aspires to compete with the United States and has set its ambitions on becoming a global leader in this field by 2030.
