Singapore: IMDA publishes Model AI Governance Framework for Generative AI

On May 30, 2024, the Infocomm Media Development Authority (IMDA) and AI Verify Foundation announced the publication of the Model AI Governance Framework for Generative AI (the Framework).

The original Model AI Governance Framework was first published in 2019 and updated in 2020. The current publication, released for public consultation in January 2024, seeks to address specific artificial intelligence (AI) risks stemming from generative AI, including hallucination and copyright infringement. The Framework outlines nine dimensions.

Contents of the Framework

  • Accountability - the Framework considers how responsibility can be allocated during the development process, with allocation based on the level of control each stakeholder has in generative AI development.
  • Data - the Framework recommends referring to existing personal data protection legislation as a starting point, the use of Privacy Enhancing Technologies (PETs), and measures to ensure data quality.
  • Trusted development and deployment - the Framework recommends evaluation during the training of AI models, including techniques such as Reinforcement Learning from Human Feedback and Retrieval Augmented Generation, alongside benchmarking tests and red teaming.
  • Incident reporting - the Framework suggests the establishment of structures and processes to enable incident reporting for timely notification and remediation.
  • Testing and assurance - notably, the Framework outlines the role of external audits as a mechanism to provide greater transparency, detailing the need for such audits to be done according to a standardized method.
  • Security - the Framework, in recognizing novel security threats from generative AI, recommends a 'security-by-design' approach, noting new security safeguards such as input filters and digital forensic tools for generative AI.
  • Content provenance - owing to the creation of realistic synthetic content at scale, the Framework stipulates the need for digital solutions including digital watermarking and cryptographic provenance.
  • Safety and alignment R&D - the Framework notes that safety techniques and evaluation tools at present do not fully address all potential risks, and that there is a need to ensure humans retain the capacity to align and control generative AI. At the design stage, the Framework recommends Reinforcement Learning from AI Feedback, followed by evaluation of a model after it is trained to validate its alignment.
  • AI for public good - the Framework also recognizes the need for generative AI to empower the public, outlining the need to democratize access to the technology, improve public service delivery, support the workforce, and promote sustainability.

You can read the press release here and the Framework here.