Russia: Current status and development of AI regulations

As in many other countries around the world, the regulation of artificial intelligence (AI) is a hot topic in Russia as AI becomes increasingly popular and widely used in various areas of daily life and business. One of the main issues is determining the legal status of AI; other key issues include the allocation of responsibility for the actions of AI, the ethical aspects of AI use, and the regulation of autonomous systems. Vyacheslav Khayryuzov, Partner at Arno Legal, discusses the salient regulations currently in place in these areas and the developments expected in the near future.

Aspects of legal regulation

At the moment, defining the legal status of AI and allocating responsibility between developers, operators, and users of AI systems are largely at an initial stage, and there are numerous discussions within expert communities. Nevertheless, some normative regulation has already appeared or is in the early stages of development, with work underway at both the federal and regional levels. The tasks of such regulation include creating a legal framework for the development and use of AI.

For the time being, such regulation is mainly of an incentive nature, encouraging businesses to develop the relevant technologies. In addition, at this stage, it is important to identify the existing legal norms that hinder the development and use of AI systems. Another task involves forming a national system of standardization and conformity assessment in the field of AI and robotics technologies.

Currently, AI in Russia is regulated under experimental legal regimes introduced on the basis of federal laws and governmental resolutions of the Russian Federation, including:

  • Federal Law of 24 April 2020 No. 123-FZ on the Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Implementation of Artificial Intelligence Technologies in the Region of the Russian Federation - Federal City of Moscow and Amending Articles 6 and 10 of the Federal Law of 27 July 2006 No. 152-FZ on Personal Data (Law 123);
  • Federal Law of 31 July 2020 No. 258-FZ on Experimental Legal Regimes in the Sphere of Digital Innovations in the Russian Federation;
  • Federal Law of 2 July 2021 No. 331-FZ on Amending Certain Legislative Acts of the Russian Federation in Connection with the Adoption of the Federal Law On Experimental Legal Regimes in the Sphere of Digital Innovations in the Russian Federation; and
  • 13 resolutions of the Government of the Russian Federation directly establishing various experimental legal regimes, including for the operation of unmanned aerial systems, an unmanned highway corridor on the M-11 Neva highway, and digital innovations in medical activities using technologies for collecting and processing information on citizens' state of health and diagnoses.

Furthermore, the President of the Russian Federation has issued instructions to the Government of Russia, including:

  • increasing funding for the creation of breakthrough solutions in the field of AI;
  • analyzing the needs of economic sectors for specialists over a five-year period and, based on the results of this analysis, amending professional standards and federal state educational standards;
  • providing support for the development and implementation of large-scale generative models and technological solutions in the field of AI, as well as creating an infrastructure for their widespread use;
  • ensuring the development of a mechanism for using the archives of state and municipal bodies and library collections to create datasets;
  • developing and implementing measures aimed at increasing the computing power of supercomputers; and
  • including issues related to the formation of ethical standards in the field of AI, balanced regulation, and scientific and technical cooperation in this area on the agenda of BRICS meetings within the framework of the Russian Federation's chairmanship in 2024.

Law 123

Law 123 established an experimental legal regime in Moscow from July 1, 2020, to December 31, 2023. Law 123 defines AI as a set of technological solutions that imitate human cognitive functions and produce results comparable to the results of human intellectual activity. AI technologies are defined as technologies based on the use of AI, such as computer vision, natural language processing, speech recognition and synthesis, intelligent decision support, and advanced AI methods.

The goals and objectives of the experiment were to:

  • improve the quality of life of the population;
  • improve the efficiency of state and municipal administration;
  • improve the efficiency of economic entities;
  • form a comprehensive system for regulating public relations related to the development and use of AI technologies;
  • create favorable legal conditions for the development of AI technologies;
  • test AI technologies and the results of their application in Moscow; and
  • evaluate the effectiveness and efficiency of the establishment of the special regulation.

Experimental regimes in the field of unmanned transportation

Special mention should be made of the experimental legal regimes for unmanned transportation. In addition to Moscow, such regimes are in effect in many constituent entities of the Russian Federation.

An experimental legal regime for autonomous vehicles was introduced on the M-11 Neva highway, which involves six companies in the transport and logistics industry, including manufacturers of highly automated vehicles and major carriers. 

It should be noted that automated driving systems are not yet permitted for regular customer operation under Russian law. The Government of the Russian Federation adopted Regulation No. 1415 dated 26 November 2018 on Conducting the Test Operation of Highly Automated Vehicles on Public Roads (the Regulation), which serves as the legal basis for an experiment in Moscow, the Republic of Tatarstan, Vladimir Oblast, Leningrad Oblast, Moscow Oblast, Nizhny Novgorod Oblast, Novgorod Oblast, Samara Oblast, the Republic of Chuvashia, the Khanty-Mansi and Yamalo-Nenets Districts, Krasnodar Region, and Saint Petersburg. The experiment ran from December 1, 2018, until March 1, 2022, in Moscow and Tatarstan, and from March 1, 2020, until March 1, 2022, in the other regions mentioned. One of the main participants in the experiment is testing its vehicles in Moscow.

The Regulation defines automated driving systems as hardware and software that can drive a vehicle without intervention by the driver, with the possibility of automatically switching off the automated mode if the driver decides to take control of the vehicle for manual driving (if necessary) or in order to avoid a road accident. Automated vehicles participating in the experiment under the Regulation must be equipped with devices performing uninterrupted, non-correctable registration, collection, and storage of data from the sensors of the automated driving system, and the data must be stored in a read-only format. The owner of a vehicle participating in the experiment must record video of the driver's actions and of the outside road situation, store such data for 10 years, and grant access to the data to the Russian Ministry of Internal Affairs (Police) and the Ministry of Industry and Trade.
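To make the Regulation's recording requirements more concrete, the minimal sketch below restates them as a simple data structure. It is purely illustrative: the class and field names are hypothetical and are not taken from the Regulation, which does not prescribe any particular implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration only: the names below are not defined in Regulation
# No. 1415; they simply restate its recording requirements in code form.

RETENTION_PERIOD = timedelta(days=365 * 10)  # the Regulation requires 10-year storage
AUTHORIZED_BODIES = {
    "Ministry of Internal Affairs (Police)",
    "Ministry of Industry and Trade",
}


@dataclass(frozen=True)  # frozen=True approximates the non-correctable, read-only requirement
class VehicleLogRecord:
    recorded_at: datetime   # timestamp of the recording
    sensor_data: bytes      # uninterrupted readings from the automated driving system's sensors
    driver_video_ref: str   # reference to the video of the driver's actions
    road_video_ref: str     # reference to the video of the outside road situation

    def retention_deadline(self) -> datetime:
        """Earliest date on which the vehicle owner could dispose of this record."""
        return self.recorded_at + RETENTION_PERIOD

    def may_access(self, body: str) -> bool:
        """Access is limited to the ministries named in the Regulation."""
        return body in AUTHORIZED_BODIES
```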

Currently, there are plans to adopt a new law on highly automated vehicles, which would set out the framework for the operation of such vehicles in Russia.

Ethical aspects of AI use

As part of the discussions on AI regulation in Russia, the expert community has identified the following important ethical aspects of AI use:

  • data security and privacy: ensuring safe storage and use of user data;
  • transparency and explainability of AI systems: developing methods and standards to control and explain decisions made by AI;
  • regulating the use of AI: developing legislation and regulations for different applications of AI;
  • bias and discrimination: ensuring that AI algorithms are unbiased and fair and are trained on diverse, fair datasets;
  • autonomous systems and robots: developing legal and ethical standards for interactions between humans and autonomous systems; and
  • identifying and detecting fake content: ensuring the reliability and validity of information to prevent negative consequences for society and individuals.

The work of the expert community resulted in the development of the Code of Ethics in the field of AI (the Code). The Code establishes general ethical principles and standards of behavior to guide participants in AI relations and is intended to create an environment of trusted development of AI technologies in Russia. The main provisions of the Code include that:

  • the main priority of AI technology development is to protect the interests and rights of people and individual human beings;
  • it is necessary to take responsibility when creating and using AI;
  • responsibility for the consequences of the use of AI is always borne by a human being;
  • AI technologies should be used for their intended purpose and implemented where it will benefit people;
  • the interests of the development of AI technologies are above the interests of competition; and
  • it is important to maximize transparency and truthfulness in informing about the level of development of AI technologies, their opportunities, and risks.

The Code was adopted at the first international forum 'Ethics of Artificial Intelligence: The Beginning of Trust' in 2021. Currently, 361 organizations, including foreign organizations, have joined the Code.

AI and personal data

Currently, none of the adopted normative acts sufficiently address the protection of personal data from AI abuse. Nevertheless, it is worth highlighting certain risks that AI systems pose to personal data.

AI systems use datasets to assess certain personal aspects, such as a person's health status, preferences, level of productivity, and the like. Such systems can be used for profiling by intelligence agencies, banks, marketing agencies, and other institutions. As we know, such profiling is not always used for good purposes - for example, unknown developers reportedly created a modified version of ChatGPT capable of helping hackers conduct cyberattacks and steal data from victims' computers.

Machine learning algorithms and profiles can be used to make important decisions, such as loan approvals and employment decisions. Possible stereotyping in AI algorithms could lead to discrimination based on human traits such as gender, race, or age.

A common criticism of AI systems concerns their opacity: the lack of transparency in their algorithms makes them difficult to control. Unlike conventional programs, neural networks effectively write their own algorithms during training, so the logic behind the decisions of AI systems is often a mystery. Accordingly, the lack of proper control over the operation of AI algorithms creates significant risks.

Nor should one forget the classic risk of data leaks, where an AI system may 'share' personal data with third parties without users' consent.

Conclusion

The legal regulation of AI in Russia is still at a stage of development and improvement. We are at the beginning of this path, but the area is clearly moving very quickly, and very soon we can expect not only new laws regulating AI but perhaps also a whole new branch of law, with lawyers specializing in AI law.

Vyacheslav Khayryuzov, Partner
[email protected]
Arno Legal, Moscow