International: Navigating the AI frontier - understanding regulatory approaches in the EU, USA, and India - part one

In the past few years, the digital market has witnessed an outpouring of artificial intelligence (AI) systems, with the AI market expected to reach a valuation of nearly $2 trillion by 2030. However, the surge in the use of AI has given rise to several pertinent issues, ranging from concerns about data privacy and intellectual property rights infringement to questions of transparency and ethics, among others. In the first part of this series on navigating the AI frontier, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, aim to analyze and assess the regulatory position on AI in three key jurisdictions, namely the EU, the USA, and India. Part two of this series will evaluate the diverse approaches of these jurisdictions, the lessons India can draw from the EU and the USA while framing its own AI regulations, and what lies ahead for India in the AI regulatory space.

The absence of governance has triggered a regulatory arms race, with regulators across jurisdictions trying to catch up with the fast-evolving AI landscape. Stanford's AI Index Report 2024 showed that mentions of AI in regulatory proceedings across the globe rose from 1,247 in 2022 to 2,175 in 2023. That said, it was not until the entry into force of the EU Artificial Intelligence Act (the EU AI Act) this year that the world got its first standalone AI regulation. Further, even leading markets like the US have recently witnessed several regulatory developments, particularly in the form of guiding principles and self-regulation in relation to the use of AI.

In India, several sectoral regulators have published guidelines and recommendations for the regulation of AI. However, the Government has yet to formulate a central regulatory framework for AI. While the Minister of State of the Ministry of Electronics and Information Technology (MeitY) announced in 2023 that the proposed Digital India Act will contain provisions for the regulation of AI, there remains considerable uncertainty about the approach MeitY is likely to take, given the aggressive stance it has adopted in the recent past.

The EU

On May 21, 2024, the EU AI Act was approved. It aims to regulate AI systems based on their risk level and accordingly classifies the risk posed by AI systems into three categories: 'Unacceptable Risk,' 'High Risk,' and 'Low Risk.' The EU AI Act places obligations on the producer, distributor, or importer of an AI system (the Provider) according to this risk-based classification. A Provider of low-risk AI systems is subject to minimal obligations, such as ensuring that a person interacting with an AI system is made aware that it is an AI system, and a transparency obligation to disclose whether any content has been generated or manipulated by an AI system. A Provider of high-risk AI systems must comply with key obligations such as threat identification, implementation of a risk management and mitigation system, and disclosure of the specifications of the input data used for training, validation, and testing, as applicable. The third category, unacceptable-risk AI systems, cannot be developed or used in the EU, subject to certain exceptions, which include targeted searches for specific victims of abduction or trafficking and the identification of alleged criminals. Barring these critical exceptions, AI systems posing unacceptable risk cannot be imported into or exported from the EU.

The EU AI Act also provides for another category of AI systems, namely general-purpose AI (GPAI) systems, which are trained on large volumes of data and can perform a diverse range of tasks. If a GPAI model is integrated into an AI system, that system will also be classified as a GPAI system, except when it is used for research, development, or prototyping. A GPAI operator is required to comply with transparency obligations, including maintaining technical documentation of the testing process and evaluation results, and having a policy in place for compliance with EU copyright law.

Through this risk-based classification, the EU AI Act attempts to encompass all use cases of AI systems. It regulates every aspect of AI systems through a single consolidated Act that will be implemented by all EU Member States. Accordingly, it adopts a centralized yet flexible approach, as it seeks to regulate the impact of an AI system rather than a particular set of AI systems. That said, with new use cases of AI emerging constantly, regulating them through a centralized Act will be a cumbersome process: it will be difficult to identify evolving use cases across sectors and to amend the central Act time and again to bring them all under a single statute.

USA

At the federal level in the US, the White House has taken various steps to regulate AI systems. On October 30, 2023, President Biden issued an executive order directing federal agencies to frame guidance for the use of AI by private and government entities. The order also invokes the Defense Production Act to direct private entities developing potential dual-use foundation AI models that pose a 'serious' risk to national security, public health, or safety to disclose the results of their safety testing of AI products. Even though the executive order is a significant step by the Government, it does not in itself create any law; it merely directs federal agencies to develop guidance and rules for the regulation of AI.

Other significant steps taken by the White House include securing voluntary commitments from private organizations and issuing the AI Bill of Rights. Both the voluntary commitments and the AI Bill of Rights provide guidance for the design, manufacture, and deployment of AI systems based on certain non-binding principles, including safety, security, data privacy, and transparency. While they are a step in the right direction, given the disclosure requirements toward customers among other measures, they neither provide an enforcement mechanism nor set out objective standards for reporting or testing.

Another non-binding framework introduced in the US in connection with AI systems is the risk management framework laid down by the National Institute of Standards and Technology (NIST). The framework enumerates principles of trust, safety, diversity, accountability, and transparency. Entities are to uphold these principles by adopting policies and practices for governing, mapping, measuring, and managing the potential risks stemming from the use of AI.

Apart from regulatory developments at the federal level, government agencies have issued guidelines or rulings to bring AI systems within their regulatory ambit via existing regulations, addressing issues specific to particular sectors. For instance, the Federal Communications Commission (FCC) issued a declaratory ruling bringing calls using artificial or prerecorded voices generated through AI under the ambit of the Telephone Consumer Protection Act (TCPA). Accordingly, telephone calls made using AI-generated or prerecorded voices may be initiated only: (i) upon obtaining the express consent of the called party; (ii) in case of emergency; or (iii) if such calls fall under any exemption issued by the FCC. More recently, on May 17, 2024, Colorado's Governor signed the Consumer Protections in Interactions with Artificial Intelligence Systems bill, which requires developers of high-risk AI systems to take reasonable care to protect consumers from foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of such systems. Apart from Colorado, other states have also formulated legislation imposing restrictions on AI systems1.

Overall, although states have adopted legislation and agencies have issued rulings imposing obligations on those dealing in AI systems, at the federal level the US has so far taken a self-regulatory approach to AI. While voluntary commitments are a form of obligation, they do not provide objective standards for adherence; for the most part, companies have been left to determine on their own how to demonstrate compliance with the commitments. Similarly, the AI Bill of Rights merely identifies guiding principles for the development and functioning of AI systems without imposing any compliance obligations on the producers or users of AI systems. In doing so, it facilitates self-regulation by entities dealing in AI systems, as reflected in the voluntary commitments, rather than directly governing AI businesses.

India

In the absence of any central regulatory framework in India, several bodies have released guidelines for the regulation of AI. In 2018, MeitY set up the Committee on Cyber Security, Safety, Legal and Ethical Issues. Thereafter, the National Institution for Transforming India (NITI Aayog) and the Telecom Regulatory Authority of India (TRAI) released, respectively, the Approach Document for India: Part 2 – Operationalizing Principles for Responsible AI and Leveraging Artificial Intelligence and Big Data in Telecommunication Sector, recommending mechanisms for the regulation of AI in India.

Interestingly, while all the above reports align in endorsing a risk-based approach to regulating AI use cases in India, MeitY and NITI Aayog go a step further, recommending a decentralized approach with sector-specific regulation of AI use cases, minimal compliance requirements, and autonomous bodies to regulate AI generally. TRAI has recommended setting up the Artificial Intelligence and Data Authority of India2 (AIDAI), while NITI Aayog has recommended an expert advisory body. These bodies, while appearing similar, have distinct functions and roles. The advisory body recommended by NITI Aayog would remain in the background of AI regulation, merely providing advice and assistance for the development of such laws and leaving the Central Government to draft the guiding principles and make policy decisions. The AIDAI, on the other hand, would be at the forefront of legislative development, as it would be entrusted with drafting regulations and maintaining oversight.

There are several other significant differences between the frameworks proposed by these bodies. For instance, while both MeitY and TRAI provide for the constitution of a stakeholder body, NITI Aayog does not mandate consultation with stakeholders. Another difference concerns self-regulation: while NITI Aayog and MeitY advocate incorporating self-regulatory practices into the regulatory framework, TRAI recommends against it. These reports also lay down principles for the development of responsible AI. Notably, they employ several vague and undefined terms, such as 'negative harm,' 'fairness,' and 'transparency,' without providing definitions or objective standards against which to test them, thus giving regulators ample room to introduce arbitrariness into the regulation of AI systems.

Some additional steps taken or proposed towards the regulation and promotion of AI in India, or that may indirectly impact or regulate AI, include the following:

MeitY AI advisories

On March 1, 2024, MeitY released an advisory requiring AI platforms to seek the Government's permission before releasing their products online in India (the Original Advisory). Under the advisory, AI platforms were required to obtain government approval before launching 'under-tested' or 'unreliable' AI models, and to label such models clearly to indicate their potential unreliability. Union IT Minister Ashwini Vaishnaw subsequently clarified that the advisory is not legally binding but serves as a recommendation to ensure AI models are tested before deployment, and that it is directed only toward large platforms and not start-ups. Following the Original Advisory, MeitY released a second advisory on March 15, 2024 (the Second Advisory), which removed the prior-approval requirement and focused more on transparency. The Second Advisory also highlighted the need to include labels, metadata, or identifiers indicating to the user that an output has been generated using AI.

MeitY advisory on deepfakes

On December 26, 2023, MeitY issued an advisory to all intermediaries3 to adopt measures to curb misinformation spread by AI-generated deepfakes. The advisory highlighted the obligations of intermediaries under Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which requires all intermediaries to make reasonable efforts to prevent users from hosting, displaying, publishing, or transmitting any prohibited content or misinformation. Intermediaries were accordingly advised to exercise due diligence by promptly removing deepfakes, failing which they risk legal consequences.

Proposed Digital India Act, 2023

MeitY released a presentation on the proposed Digital India Act (DIA) on March 9, 2023, indicating that the DIA will regulate high-risk AI systems through an accountability and assessment mechanism, with provisions for content moderation, threat assessment, and the ethical use of AI tools. The Act is also expected to prescribe penalties to deter unethical uses of AI systems that violate the constitutional or fundamental rights of citizens. As per the presentation, if the DIA is enacted, intermediaries, including e-commerce companies and gaming platforms that use AI for onboarding, data processing, or other actions, will be subject to basic due diligence and further detailed diligence obligations if they wish to seek immunity under the safe harbor principle. Currently, the Information Technology Act, 2000, protects intermediaries from legal liability for third-party content hosted by them if they satisfy certain prescribed conditions4.

India AI 2023 report

MeitY released the India AI 2023 report, dated October 14, 2023, which highlights measures proposed by different working groups for the promotion of AI in India. These include the establishment of an AI Centre of Excellence and an Indian Datasets Platform, and the institutionalization of the National Data Management Office to act as a regulator for the classification, collection, and storage of non-personal data by governmental institutions, among other measures. The report also recommends governmental schemes to fund start-ups developing AI systems and products and to ensure skill development in this space.

Initiatives taken by the Securities and Exchange Board of India

The Securities and Exchange Board of India (SEBI) released a circular, Reporting for AI and Machine Learning (ML) applications and systems offered and used by Mutual Funds, dated May 9, 2019, requiring mutual funds to report any use of AI or ML for purposes including investment, compliance, or trading.

Initiatives taken by the Reserve Bank of India

The Reserve Bank of India (RBI) proposed, in its Statement on Developmental and Regulatory Policies, the use of AI to facilitate conversational payments through the Unified Payments Interface (UPI), whereby a customer can complete the payment and authorization process through voice commands to an AI system.

Concluding remarks

It can be seen from the above that the EU is the only jurisdiction with overarching legislation regulating all aspects of AI. That said, India and the US have also taken significant steps towards regulating AI in their respective jurisdictions. In the absence of consolidated AI legislation, several sectoral regulators in the US and India have released guidelines, approach papers, and other directions concerning AI systems specific to their sectors.

Avimukt Dar Partner
[email protected]
Raghav Muthanna Partner
[email protected]
Himangini Mishra Associate
[email protected]
INDUSLAW, Bangalore

1. For instance, Connecticut's An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (S1103) and New York City Local Law 144 (the NYC Bias Audit Law).
2. It has been recommended that AIDAI should be included under the TRAI framework through an amendment to the TRAI Act, 1997.
3. 'Intermediaries' are defined under section 2(w) of the Information Technology Act, 2000, with respect to any particular electronic records to mean 'any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes.'
4. Section 79, Information Technology Act, 2000.