
Singapore: Overview of AI governance and regulation

As different countries have issued their own frameworks and guidelines around how they approach artificial intelligence ('AI'), Singapore has contributed to the global discourse with its own publications, including the Model Artificial Intelligence Governance Framework ('the Model AI Framework')1 and the Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector ('the FEAT Principles')2. Adrian Fisher and Jia-Yi Tay, Partner and Associate at Linklaters Singapore respectively, survey recent developments in AI governance in Singapore, considering the practical approach taken by the country and future opportunities for AI.


Emphasis on AI in Singapore 

The Government of Singapore has articulated its vision for Singapore to be a leader in AI by 2030. It has outlined a National AI Strategy and put in place a raft of measures to encourage the development of a vibrant and sustainable AI ecosystem3. For example, the National AI Office was set up to establish the national agenda for AI, and a separate cross-agency government programme called AI Singapore was also set up to catalyse, synergise, and boost Singapore's AI capabilities4.

Recognising the need for a discourse around AI, the Advisory Council on the Ethical Use of AI and Data brings together industry leaders to advise the Government on issues arising from the responsible development and deployment of AI5. The Council gathers feedback and ground sentiments to shape the Government's response to AI regulation. In addition, it develops ethics standards and issues advisory guidelines, practical guidance, and codes of practice for voluntary adoption by industry players.

The Model AI Framework 

Organisations deploying AI technology in Singapore are already required to comply with existing laws around safety, data protection, and fair competition. This article focuses on more targeted, AI-specific measures, and the Model AI Framework is the key cross-sector framework on AI governance in Singapore.

The Model AI Framework was a trailblazing document when it was unveiled at the World Economic Forum in Davos in January 2019. At that time, it was one of the first few publications by a government in relation to AI governance and regulation.

The Model AI Framework starts off by articulating two key principles around responsible AI:

  • firstly, decisions made by AI should be explainable, transparent, and fair; and
  • secondly, AI solutions should be human-centric.

It also goes on to set out practical guidance in four key areas:

Internal governance structures and measures

Organisations should have sufficient oversight into how AI is used and deployed. For example, they should delineate clear roles and responsibilities within the organisation, possibly setting up a coordinating body or a separate entity to focus on potential ethical considerations in using AI.

In addition, organisations should also conduct risk management and mitigation measures to assess, implement, manage, and monitor the use of AI models. Personnel working on AI systems should be trained to be sensitive to the risks, benefits, and limitations of using AI, with appropriate escalation mechanisms in place.

Determining the level of human involvement in AI-augmented decision-making

Organisations should weigh up the commercial objectives of using AI against the risks that its use poses to individuals or groups. The Model AI Framework identifies three approaches to balancing risk and human oversight.

  • Human in the loop: where humans are active, involved, and in full control, with the AI systems providing recommendations or input only. These are for situations with the highest risks, such as providing medical diagnoses.
  • Human out of the loop: where the AI system has full control without the option for human override. For example, product recommendation engines or machine learning models for demand forecasting in airline scheduling.
  • Human over the loop/human on the loop: where humans play a monitoring role, stepping in only when the AI model encounters unexpected or undesirable events. For example, in GPS navigation systems, where humans can take an alternative route when there is a roadblock.

The Model AI Framework also sets out a matrix for balancing the severity and probability of harm with the degree of human involvement in the AI solution to minimise the risk of adverse impact on individuals or groups. The matrix only considers two main factors, but the Model AI Framework acknowledges that other factors could be relevant, including the nature of harm and the reversibility of harm.
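The matrix described above can be sketched as a simple decision rule. The sketch below is a hypothetical illustration only: the `suggest_oversight` function, its labels, and the mapping from severity and probability of harm to an oversight model are assumptions made for exposition, not taken from the Model AI Framework itself.

```python
# Illustrative sketch of a harm-severity/probability matrix mapped to
# a degree of human involvement. The thresholds and labels here are
# assumptions, not prescribed by the Model AI Framework.

def suggest_oversight(severity: str, probability: str) -> str:
    """Suggest an oversight model for a (severity, probability) pair."""
    if severity == "high":
        # e.g. medical diagnosis: humans stay in full control
        return "human-in-the-loop"
    if probability == "high":
        # frequent but lower-impact decisions: monitor and intervene
        return "human-over-the-loop"
    # low severity and low probability: e.g. product recommendations
    return "human-out-of-the-loop"

print(suggest_oversight("high", "low"))   # human-in-the-loop
print(suggest_oversight("low", "low"))    # human-out-of-the-loop
```

In practice, as the framework notes, further factors such as the nature and reversibility of harm would also feed into such an assessment.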

Operations management

The Model AI Framework outlines a proposed AI adoption process which encourages organisations to increase accuracy and quality and minimise bias in data and AI models. The focus is on the interaction between data and the algorithm/model. Some guidelines include tracing data lineage to understand the data better, taking active steps to de-bias AI models (e.g. conducting model training and validation testing), and continuously reviewing the models to ensure that they remain accurate and up to date.

The Model AI Framework also highlights seven principles which could be implemented in the AI models (i.e. explainability, repeatability, robustness, regular tuning, reproducibility, traceability, and auditability). However, it also recognises that not all models need to include all of these measures. The recommended approach is pragmatic and risk-based, and if organisations wish to implement any of the principles, the Model AI Framework contains a list of suggested practical measures for each principle.
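Several of these principles, such as reproducibility, traceability, and auditability, lend themselves to concrete engineering practices. The toy sketch below shows one way a training run might record the artefacts those principles call for; the function, record fields, and version tag are hypothetical illustrations, not measures prescribed by the Model AI Framework.

```python
import hashlib
import json
import random

def train_with_audit_trail(data: list, seed: int) -> dict:
    """Toy training run that records artefacts supporting
    reproducibility (a fixed seed), traceability (a data
    fingerprint), and auditability (a structured run record)."""
    random.seed(seed)  # fix randomness so the run can be repeated exactly
    # Fingerprint the input data so its lineage can be traced later
    data_hash = hashlib.sha256(
        json.dumps(data, sort_keys=True).encode()
    ).hexdigest()
    # Stand-in for real model training
    model_output = sum(data) / len(data) + random.random() * 1e-9
    return {
        "seed": seed,
        "data_sha256": data_hash,          # data lineage fingerprint
        "model_output": model_output,
        "pipeline_version": "sketch-0.1",  # hypothetical version tag
    }

run1 = train_with_audit_trail([1, 2, 3], seed=42)
run2 = train_with_audit_trail([1, 2, 3], seed=42)
assert run1 == run2  # same seed and data give a reproducible record
```

Keeping such records is one pragmatic way an organisation could evidence alignment with the framework's traceability and auditability suggestions.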

Stakeholder interaction and communication

Organisations are encouraged to build trust with stakeholders when deploying AI by being transparent and providing general disclosure on AI systems and policies to their users or consumers. Facilitating open stakeholder communication also includes providing for feedback and decision review channels.

The Model AI Framework applies to the design, application, and use of AI generally and is technology-agnostic and sector-agnostic. Given the Government's desire to grow the AI ecosystem, it is unsurprising that the focus of the Model AI Framework is rooted in practicality so that organisations can easily translate the suggested framework into practice.

In the second edition of the Model AI Framework, the document was updated with industry examples illustrating how organisations have implemented AI governance practices, with a compendium of use cases published in two separate volumes6. Each volume highlights use cases from various organisations, including experiments conducted by the AI Singapore team, and showcases how the Model AI Framework can be adopted in practice.

As a complementary measure, the Government has released a self-assessment tool, the Implementation and Self-Assessment Guide for Organisations ('ISAGO')7. This tool is intended for organisations to self-assess the alignment of their AI governance practices with the Model AI Framework, so that they can identify any potential gaps in their existing processes and address them accordingly.

Approaches to AI regulation by other countries

The Model AI Framework contrasts with the approach in Europe. In April 2021, the European Commission published its legislative proposal for an AI regulation8. The European model is a risk-based regulatory framework for AI where certain uses of AI are classified as 'unacceptable' and would be prohibited under the regulation. There are other use cases placed within a 'high-risk' category, and these are subject to a number of 'mandatory requirements,' including putting in place risk management systems, checking the quality of datasets, and adding in appropriate human oversight measures. Failure to comply with these requirements could lead to enforcement action, with fines for the most serious infringements going up to €30 million or 6% of global revenue, whichever is higher.

The Singapore approach eschews the more prescriptive model that Europe is proposing to adopt. Not only is the Model AI Framework voluntary, but it also consists of best practice guidance which organisations can consider adopting, with no enforcement or liability regime if they fail to do so. The Singapore approach has arguably struck a fine balance between fostering innovation and growth on the one hand, and ensuring that AI develops in a responsible manner on the other. It also seeks to maintain Singapore's position as a global and regional tech hub.

Interestingly, the Model AI Framework includes in Annex A a compilation of AI ethical principles that organisations may also wish to adopt. Some of these principles have been surfaced as part of the global discourse on AI governance (such as the OECD Recommendation of the Council on Artificial Intelligence) and others were raised through industry feedback. These additional principles include auditability, fairness, human centricity and wellbeing, human rights alignment, inclusivity, and progressiveness. Even though these principles were not articulated within the Model AI Framework, the list illustrates the complexity around AI governance regulation and the range of principles and ethical considerations that are also relevant for future developments in this area.

The recommendations from the Singapore Academy of Law's Law Reform Committee, in its report series on robotics and AI9, are another possible source of future developments in AI regulation. The Committee's Subcommittee on Robotics and Artificial Intelligence has considered issues including whether criminal liability can be imposed for the use of robotic and AI systems, and whether civil liability can be attributed for accidents involving autonomous cars.

Regulation of AI in financial services

The Model AI Framework also notes that sector-specific laws, regulations, or guidelines may apply to certain sectors, including the finance, healthcare, and legal sectors. The Monetary Authority of Singapore, Singapore's financial services regulator, led the way by publishing the FEAT Principles in 2018 to promote fairness, ethics, accountability, and transparency in the financial sector.

The FEAT Principles apply to the use of AI and data analytics in the financial sector. They aim to provide guidance to firms offering financial products and services (including banks and insurers) on the responsible use of AI and data analytics ('AIDA'), so as to strengthen internal governance around data management and use, and to promote public confidence in the use of AIDA. The focus is on AI and data analytics because these are the technologies identified as assisting or replacing human decision-making.

The relevant principles set out in the FEAT Principles include ensuring that the use of personal attributes as input factors for AIDA-driven decisions is justified and that the use of AIDA is proactively disclosed to data subjects as part of general communication in order to increase public confidence.

It is important to note that the FEAT Principles are principles-based rather than prescriptive: they are for firms to consider when assessing existing internal frameworks, or developing new ones, to govern the use of AI. They therefore do not extend as far as the draft EU AI regulation mentioned above, which is sector-agnostic and would also apply to the financial sector.

Looking forward

In AI regulation, there is a tension between fostering innovation and openness to accelerate growth in the sector, and the need to put safeguards in place to protect users from the potential dangers that AI systems may pose. It appears that in Singapore, there is no appetite to introduce broad-ranging AI regulation in line with the European approach.

Although a more prescriptive approach is unlikely, the Model AI Framework and the FEAT Principles contain detailed guidance that the Singapore Government is encouraging organisations to adopt.

As these frameworks are voluntary, the Government has demonstrated its trust in organisations to be accountable for their own adoption and development of responsible AI solutions. In addition, the Government has expressed its openness to engage with organisations and stakeholders to share how they have used the Model AI Framework and to encourage widespread adoption of the framework.

Another benefit of the Model AI Framework and the FEAT Principles sitting outside the legislative processes is that the published guides are forward-looking and can quickly adapt to any fast-changing developments in the AI ecosystem.

The global discourse on AI regulation is still evolving. With the publication of the AI regulation in Europe, other governments may also review their existing approaches to AI and possibly move towards more prescriptive models. The Government of Singapore has expressed that it will monitor global developments, and it remains to be seen whether any of these international developments on AI governance and regulation will trickle into the Singapore regulatory landscape.

Adrian Fisher Partner, Asia Head of TMT
[email protected]
Jia-Yi Tay Associate
[email protected]
Linklaters Singapore Pte. Ltd, Singapore

1. Available at:
2. Available at:
3. In November 2019, Singapore (through its Smart Nation and Digital Government Office) outlined a National AI Strategy which is part of its broader journey of transforming into a 'Smart Nation'. In it, there is a vision for Singapore to be a “leader in developing and deploying scalable, impactful AI solutions” by 2030, with an initial tranche of five strategic national AI projects including border clearance operations, freight planning and chronic disease prediction & management. One of the key elements set out in the National AI Strategy is adopting a 'human-centric approach' to artificial intelligence – this means focusing on human needs, rather than "developing … technology for its own sake".
4. AI SG is driven by a partnership between the National Research Foundation (NRF), the Smart Nation and Digital Government Office (SNDGO), the Economic Development Board (EDB), the Infocomm Media Development Authority (IMDA), SGInnovate and the Integrated Health Information Systems (IHiS) with up to S$150 million being invested over five years by the NRF.
5. Composition of the Advisory Council on the Ethical Use of Artificial Intelligence ('AI') and Data, Infocomm Media Development Authority.
6. Volume 1 available at:
Volume 2 available at:
7. Available at:
8. Available at:
9. Available at: