
International: Establishment of GPAI and status of AI development in EU and USA

The Organisation for Economic Co-operation and Development ('OECD') announced, on 15 June 2020, that it will host the Secretariat of the newly founded Global Partnership on Artificial Intelligence ('GPAI'). The GPAI is a global coalition for AI policy development whose founding members include Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the UK, the US, and the EU.


GPAI Mission

The initiative was initially advanced by French President Emmanuel Macron and Canadian Prime Minister Justin Trudeau and began as a forum to monitor the policy implications of AI globally. The idea was to create binding reports; however, consensus between countries in this regard was never reached. Notably, the US had, until May 2020, stated that it would not participate.

More specifically, the GPAI will focus on four key Working Group themes: responsible AI, data governance, the future of work, and innovation and commercialisation. AI has been a key feature of OECD policy work, with its Principles on Artificial Intelligence1 ('the OECD AI Principles') forming the basis of the G20 AI Principles in 2019. The OECD AI Principles have been supported by 40 countries and will be drawn on significantly in the GPAI's policy development.

The GPAI plans to hold its inaugural Multi-stakeholder Experts Group Plenary in December 2020, which will be hosted by Canada. The OECD has stated that the GPAI's governance bodies will consist of a Council and a Steering Committee, supported by a Secretariat housed at the OECD. The GPAI has also designated two cities, Montreal and Paris, as hubs for its AI research. Moreover, the OECD will be a Permanent Observer to the GPAI's governing bodies, and its experts will participate in both Working Groups and plenary meetings.

International harmonisation

Notably, all founding members of the GPAI have, in recent years, issued national AI strategies, many of which focus first on the use of AI in the public sector in a way that balances the collective interest in innovation against the civil liberties of citizens. Interestingly, national AI strategies differ significantly from one country to another in their approach. For example, the US focuses on investing in innovation, as set out in its AI strategy2, compared with the EU's more heavily regulated approach. The US AI strategy warns against "[holding] AI systems to such an impossibly high standard that society cannot enjoy their benefit." Conversely, a recent AI white paper from the European Commission ('the Commission')3 sets out "key elements of a future regulatory framework for AI in Europe that will create a unique 'ecosystem of trust'." With regard to the APAC perspective, India's AI policy4 takes a similarly human-centric approach, noting, "… AI has to be guided by optimisation of social goods, rather than maximisation of top-line growth."

The US perspective

Until recently, US AI policy objectives had focused significantly on innovation in the public sector, in particular the use of AI in autonomous vehicles and aviation. In addition, there have been high-level government reports on US policy objectives for AI which summarise each administration's regulatory approach. AI policy development advanced significantly on 11 February 2019, when US President Donald Trump issued an Executive Order launching the American AI Initiative. The Executive Order is guided by five principles:

  • driving technological breakthroughs;
  • driving the development of appropriate technical standards;
  • training workers with the skills to develop and apply AI technologies;
  • protecting American values including civil liberties and privacy and fostering public trust and confidence in AI technologies; and
  • protecting US technological advantage in AI, while promoting an international environment that supports innovation.

The US followed this up by joining other countries in adopting the OECD AI Recommendation and subsequently joined the G20 countries in supporting the G20 AI Principles. This has also led a number of federal agencies across all sectors to release AI principles, standards, and guidance. In its first annual report, the American AI Initiative noted the importance of promoting an international environment supportive of American AI innovation, particularly stressing the need to partner with like-minded allies and non-federal entities.

Federal AI regulation has been piecemeal; however, a number of states have put forward legislation to join Texas, Washington, and California in heavily restricting the collection and storage of biometric data, a key application of AI. A federal bill regulating facial recognition is currently being discussed in the US Senate.

The EU perspective

The EU has highlighted that its focus is not on 'winning or losing a race' but on taking an approach that is human-centred. The Commission has put forward an AI approach based on three pillars:

  • being ahead of technological developments and encouraging uptake by the public and private sectors;
  • preparing for socio-economic changes brought about by AI; and
  • ensuring an appropriate ethical and legal framework.

The Commission has undertaken work to advance AI in a number of sectors and industries, including agriculture, manufacturing, transport, data, and health. The work also forms part of the Commission's European Digital Strategy, with a focus on creating a digital single market that will improve access to digital goods and services, create an environment where digital networks and services can prosper, and maximise the growth potential of the European Digital Economy.

As yet, the Commission has not reached consensus on how AI will be regulated, with the two options currently under consideration being requirements applicable to AI applications generally or sector-specific regulation.

The EU is also currently considering legislation that will have an impact on AI regulation, namely the Digital Services Act, which is regarded as the first update to Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market ('the e-Commerce Directive').

Conclusion

Although GPAI members are generally united in outlining the potential risks of AI development in areas such as privacy, security, and the ethical use of AI, it remains to be seen whether harmonisation will be possible given such differences in policy objectives. In particular, the contrast between the US push for innovation and the EU's focus on privacy and data protection principles still seems quite stark.

Edidiong Udoh
Privacy Analyst
[email protected]


1. OECD AI Principles available at https://www.oecd.org/going-digital/ai/principles/
2. Artificial Intelligence for the American People available at https://www.whitehouse.gov/ai/#:~:text=On%20February%2011%2C%202019%2C%20President,national%20AI%20technology%20and%20innovation
3. White Paper on Artificial Intelligence: A European approach to excellence and trust available at https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
4. National Strategy for Artificial Intelligence #AIForAll available at https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf