
India: EAC-PM publishes complex adaptive system framework to regulate AI

On January 30, 2024, the Economic Advisory Council to the Prime Minister of India (EAC-PM) published 'A Complex Adaptive System Framework to Regulate Artificial Intelligence.' The framework proposes a complex adaptive system (CAS) approach to regulating artificial intelligence (AI), covering algorithms, training datasets, models, and applications, built on five principles.

More specifically, the five principles suggest:

  • establishing guardrails and partitions - implement clear boundary conditions to limit undesirable AI behaviors, including creating 'partition walls' between distinct systems and within deep learning AI models to prevent systemic failures;
  • mandating manual 'overrides' and 'authorization chokepoints' - critical infrastructure should include human control mechanisms at key stages so that humans can intervene when necessary, emphasizing the need for specialized skills and dedicated attention without limiting system automation. Manual overrides empower humans to step in when AI systems behave erratically or create pathways that cross-pollinate partitions, while multi-factor authentication authorization protocols provide robust checks before high-risk actions are executed, requiring consensus from multiple credentialed humans;
  • ensuring transparency and explainability - open licensing of core algorithms for external audits, AI factsheets, and continuous monitoring of AI systems is crucial for accountability. There should be periodic mandatory audits for transparency and explainability;
  • defining clear lines of AI accountability - includes, among other things, establishing predefined liability protocols to ensure that entities or individuals are held accountable for AI-related malfunctions or unintended outcomes; and
  • creating a specialist regulator - this would also entail maintaining a national registry of algorithms for compliance purposes, serving as a repository of national algorithms to support AI innovation.

You can read the regulatory framework here.
