China: Draft administrative measures for generative AI - key takeaways

On April 11, 2023, the Cyberspace Administration of China (CAC) released the draft Administrative Measures for Generative Artificial Intelligence (Draft Measures). The Draft Measures, which comprise 21 articles, aim to promote the healthy development and standardized application of generative artificial intelligence (AI) technology, while allowing room for research and development in this area.

Kevin Duan, Partner at Han Kun Law Offices, analyzes the regulatory issues and potential challenges that the Draft Measures may pose in practice.

Background

In recent times, there has been a surge in the popularity of generative AI products across the world. Large language models have demonstrated exceptional capabilities in understanding human language, human-computer interaction, text and code writing, logical reasoning, and generating results that can rival or even surpass human performance.

However, the widespread use of generative AI has also given rise to potential risks, such as privacy breaches, trade secret disclosures, dissemination of false information, creation of information bubbles, and cybercrime abuse. This has led to growing concerns among regulators in various countries.

Scope of application

According to Article 2 of the Draft Measures, they apply to the development and use of generative AI products that provide services to the public within the territory of the People's Republic of China (PRC). The Draft Measures define generative AI as the technology that employs algorithms, models, and rules to generate various content, including texts, pictures, sounds, videos, and codes.

However, the interpretation of providing 'services for the public within the territory of the People's Republic of China' may prove controversial. Considering the context of the statutes and the legislative intent, we believe that a service provider must comply with the Draft Measures regardless of whether it is based in China or abroad, and regardless of whether its generative AI serves end users directly or indirectly through integration into other services.

Onerous obligations on organizations producing generative AI, and little attention to organizations using generative AI services

Content security

Ensuring content and ideological security is a top priority for the competent authorities, and the Draft Measures reflect this by devoting considerable attention to these issues. The following aspects highlight this emphasis:

Service providers are responsible for content security

The Draft Measures emphasize that organizations and individuals (i.e., service providers) using generative AI products to provide services, such as chat and text, image, and sound generation, are responsible for the content they produce. However, in practice, users may deliberately seek illegal and harmful content from generative AI services, which raises questions about the responsibility of service providers. This issue remains open for discussion.

Generated content must be true and accurate

According to the Draft Measures, 'the content generated by generative AI should be true and accurate, and measures should be taken to prevent the generation of false information.' However, this clause has sparked controversy, as large language models often produce nonsensical output. This could be due to discrepancies in the source content or decoding errors in the transformer, which are still technically challenging to avoid. Thus, placing too much emphasis on the authenticity and accuracy of generated content could impose an unreasonable burden on service providers at the current state of the art.

Handling illegal content

Article 15 of the Draft Measures requires service providers to handle illegal content by optimizing their models. If generated content, whether discovered in operations or reported by users, does not meet the requirements of the Draft Measures, service providers must take measures such as content filtering and must, within three months, prevent the illegal content from being generated again through model optimization training. In practice, however, there may be technical obstacles to identifying the causes of illegal content and eliminating it through training, which makes this requirement significantly challenging to implement.
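
By way of a hedged illustration, the sketch below shows how an ex-post filter could be wired to a queue of negative examples for later optimization training. It is a minimal Python sketch under our own assumptions; names such as ModerationHook, is_violative, and retraining_queue are hypothetical, not terms from the Draft Measures.

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class ModerationHook:
        # Policy classifier: in practice, a dedicated moderation model or rule set.
        is_violative: Callable[[str], bool]
        # Flagged outputs collected as negative examples for the optimization
        # training that Article 15 expects within three months.
        retraining_queue: List[str] = field(default_factory=list)

        def release(self, generated: str) -> Optional[str]:
            """Filter a generated response before it reaches the user."""
            if self.is_violative(generated):
                self.retraining_queue.append(generated)  # feed back into training
                return None  # suppress the output instead of serving it
            return generated

    # Example with a trivially simple keyword-based policy.
    hook = ModerationHook(is_violative=lambda text: "forbidden" in text)
    assert hook.release("a forbidden reply") is None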

In addition to model optimization, the Draft Measures impose more conventional, ex-post obligations on service providers to curb violative content. These include:

  • if service providers discover or learn that the generated texts, pictures, sounds, and videos infringe on the image rights, reputation rights, personal privacy, and trade secrets of others, or fail to meet the requirements of the Draft Measures, they must take measures to stop the generation and prevent harm from continuing; and
  • service providers must suspend or terminate the service if they find that a user is violating laws and regulations, business ethics, and social morality in using generative AI products, such as engaging in internet hype, malicious posting and commenting, spamming, writing malicious software, or carrying out improper commercial marketing.

Marking of generated content

Article 16 of the Draft Measures stipulates that providers shall mark the generated pictures, videos, and other content in accordance with the Provisions on Administration of Deep Synthesis in Internet Information Services (Deep Synthesis Regulations). However, in contrast to the provisions of the Deep Synthesis Regulations, the Draft Measures do not explicitly require marking the generated texts.
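
As a purely illustrative sketch, a provider might attach a machine-readable provenance mark to a generated PNG as follows. The metadata key and value are our own assumptions, since neither the Draft Measures nor the Deep Synthesis Regulations prescribe a specific marking format, and a conspicuous visible label may additionally be required for content capable of misleading the public.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def mark_generated(in_path: str, out_path: str) -> None:
        """Embed an 'AI-generated' tag in a PNG's metadata."""
        img = Image.open(in_path)
        meta = PngInfo()
        meta.add_text("ai-generated", "true")  # illustrative key, not mandated
        img.save(out_path, pnginfo=meta)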

Training data compliance

The Draft Measures require service providers to take responsibility for the legality of pre-training data and to optimize the sources of training data for generative AI products. They also set out detailed provisions on training data compliance to ensure that the data meets relevant legal and ethical standards.

Compliance of personal information in training data

The Draft Measures require service providers to comply with personal information regulations when using pre-training and optimization training data for generative AI products. This includes obtaining users' consent for the use of their personal information, not illegally retaining input information that can infer user identity, not profiling users based on input information and usage, and not providing user input information to others.
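
As one hedged illustration of the obligation not to retain input that can identify a user, a provider might scrub obvious identifiers from prompts before any logging. The patterns below are our own examples and are far from exhaustive.

    import re

    # Illustrative identifier patterns only; a production system would need
    # much broader coverage (names, addresses, account numbers, etc.).
    PATTERNS = [
        re.compile(r"\b1[3-9]\d{9}\b"),           # mainland mobile numbers
        re.compile(r"\b\d{17}[\dXx]\b"),          # 18-digit resident ID numbers
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
    ]

    def scrub(prompt: str) -> str:
        """Replace identity-revealing substrings before a prompt is retained."""
        for pattern in PATTERNS:
            prompt = pattern.sub("[REDACTED]", prompt)
        return prompt

    print(scrub("Call me on 13912345678"))  # -> "Call me on [REDACTED]"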

Respect for intellectual property rights

The Draft Measures also stipulate that training data must not contain content that infringes on intellectual property rights. However, this requirement may lead to controversy, as generative AI products often scrape publicly available data on the internet for model training, including works protected by copyright law. The issue of whether such usage constitutes fair use or infringes on IP rights is currently highly debated and requires further discussion in theory and policy.

Ensuring the authenticity, accuracy, objectivity, and diversity of data

The Draft Measures also demand that service providers ensure the authenticity, accuracy, objectivity, and diversity of training data. This requirement is quite demanding, and service providers must take on more stringent responsibilities when screening training data.

Draft Measures bridging other AI regulations, such as those governing algorithm recommendation services and deep synthesis services

The Deep Synthesis Regulations define 'deep synthesis technology' as technology that uses generative and synthetic algorithms, such as deep learning and virtual reality, to produce network information such as text, images, audio, video, and virtual scenes. This covers, among other things, text generation, text-to-speech, music generation, face and image generation, three-dimensional reconstruction, and digital simulation. The Provisions on the Administration of Algorithm-Generated Recommendations for Internet Information Services (Provisions on Algorithm-Generated Recommendation) also explicitly include generation and synthesis algorithms within their scope of regulation.

Given this definition, generative AI falls within deep synthesis technology and is also considered an 'algorithm-generated recommendation service.' Therefore, in addition to the Draft Measures, generative AI must comply with the requirements of the existing AI regulations governing algorithm recommendation services and deep synthesis services. The Draft Measures converge with, and refine, these existing laws and regulations to ensure that generative AI is regulated appropriately.

Algorithm ethics and fairness

Algorithm ethics and fairness are crucial in the development and deployment of generative AI products. The Draft Measures reaffirm and refine the provisions on algorithm ethics and fairness, as well as the prohibition of algorithmic discrimination, under the Provisions on Algorithm-Generated Recommendation, the Deep Synthesis Regulations, and other laws and regulations. They emphasize that measures should be taken to prevent discrimination based on race, ethnicity, faith, country, region, gender, age, and occupation during algorithm design, training data selection, model generation and optimization, and service provision. Providers are also prohibited from generating content that discriminates on the basis of a user's race, nationality, gender, or similar attributes.

Algorithm security evaluation and algorithm filing

Article 6 of the Draft Measures stipulates that, before providing services to the public using generative AI products, service providers must conduct a security assessment and report it to the competent cyberspace administration in accordance with the Provisions on the Security Assessment for Internet-Based Information Services with Public Opinion Attributes and Social Mobilization Capability. They must also complete the procedures for the registration, change of registered particulars, and deregistration (as applicable) of services under the Provisions on Algorithm-Generated Recommendation.

Based on the above provision, the Draft Measures appear to extend the security assessment and filing obligations under the Provisions on the Security Assessment for Internet-Based Information Services with Public Opinion Attributes and Social Mobilization Capability to all types of generative AI services: a service would be subject to the security assessment and registration requirements irrespective of whether it actually has 'public opinion attributes and social mobilization capacity.'

Algorithm transparency

According to the Draft Measures, service providers must, as required by the CAC and relevant competent authorities, provide necessary information that may affect users' trust in, and choice of, the relevant services, including a description of the source, scale, type, and quality of pre-training and retraining data, rules for manual labeling, the scale and type of manually labeled data, basic algorithms, and technical systems, among others.
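
These disclosure items read naturally as a structured record. The sketch below shows one hypothetical shape such a record could take; all field names and sample values are our own illustrations rather than terms defined in the Draft Measures.

    import json
    from dataclasses import dataclass, asdict
    from typing import List

    @dataclass
    class TransparencyRecord:
        data_sources: List[str]     # sources of pre-training/retraining data
        data_scale: str             # e.g., token or document counts
        data_types: List[str]       # text, images, code, ...
        labeling_rules: str         # rules given to manual annotators
        labeled_data_scale: str     # scale and type of manually labeled data
        base_algorithms: List[str]  # underlying model families

    record = TransparencyRecord(
        data_sources=["licensed corpora", "public web crawl"],
        data_scale="~10^12 tokens (hypothetical)",
        data_types=["text", "code"],
        labeling_rules="internal annotation guideline (hypothetical)",
        labeled_data_scale="120,000 dialogues (hypothetical)",
        base_algorithms=["decoder-only transformer"],
    )
    print(json.dumps(asdict(record), indent=2))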

User notification and anti-addiction

Article 10 of the Draft Measures states that providers must clarify and make public the target group, scenarios, and uses to which their services apply. They must also take appropriate measures to prevent users from excessively relying on, or indulging in, the generated content. This provision, read in tandem with Article 8 of the Provisions on Algorithm-Generated Recommendation, which prohibits service providers from 'setting up algorithms to induce users toward addiction or excessive consumption,' requires service providers to ensure proper use of relevant products on various fronts, from public disclosure to algorithm management.

Impact and outlook

The Draft Measures set comprehensive penalties for violations. According to Article 20 of the Draft Measures, violation of the Draft Measures will be punishable under the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and other laws and administrative regulations. Where a violation is not covered by the abovementioned laws and regulations, the service provider concerned may be given a warning, subject to public criticism, or be ordered to make rectifications within a time limit; the service provider may even be ordered to suspend or terminate its use of generative AI for service provision and be subject to a fine of up to CNY 100,000 (approx. €12,725). Behaviors in violation of administrative rules for public security will be subject to punishment in accordance with the law, and behaviors that constitute criminal offenses will be subject to criminal liability.

On the whole, by issuing the Draft Measures, Chinese regulators have directly responded to new issues posed by the recent generative AI breakthroughs under the current regulatory framework, which also conveys China's overarching AI regulatory principle of providing guidance and rules for the purpose of promoting the growth of the industry.

However, some of the compliance requirements may be overly stringent given the current technical level. Enforcing these requirements in practice presents many challenges, and organizations will need to combine technical and legal expertise to propose creative solutions that address safety concerns, while also allowing for institutional flexibility and industrial development.

To achieve this, organizations could explore new approaches to compliance that leverage emerging technologies to ensure transparency and accountability in the development and deployment of generative AI products. They could also work closely with regulatory authorities to develop more flexible and adaptive compliance frameworks that take into account the unique characteristics of generative AI products.

Kevin Duan Partner
[email protected]
Han Kun Law Offices, Beijing
