
International: Navigating the future - considering the role of AI Officers

In this Insight article, Roger Vilanova Jou, Senior Associate at PwC, delves into the growing impact of generative artificial intelligence (AI), which is sparking debate and regulatory consideration worldwide. As organizations grapple with AI's transformative potential, the question arises: Do we need a new role to navigate its governance effectively?


The advent of generative AI in 2023 has had undeniable consequences. It has sparked debates of all sorts, debates that have shaped and continue to influence the way AI is regulated around the world. It has pushed organizations to consider the potential of AI, in its broader sense, as a driver of business transformation. It has attracted the attention of citizens as creators, employees, or consumers who may be affected by AI in their daily lives. And that is just the beginning.

In this context, many companies feel a sense of urgency to adopt an early AI strategy and to address the governance of this technology. The motivations are diverse: regulatory compliance, a drive to innovate, the fear of falling behind competitors, the pursuit of market significance, or the need to monitor the AI-based tools used by employees, to name a few. Yet all these organizations, as they look to one another, will face the same burning question: Do we need to introduce a new role?

As the use and development of AI-based solutions grow, it makes sense for organizations to start considering who should be in charge of promoting innovation while ensuring that AI is used safely and responsibly, in compliance with applicable regulations and in alignment with the business strategy and ethical values. It is worth remembering, however, that the consolidated version of the EU Artificial Intelligence Act (AI Act) does not oblige organizations to designate or appoint a new figure, even when dealing with high-risk AI systems.

Unlike the General Data Protection Regulation (GDPR), which obliges certain companies to appoint a data protection officer (DPO) with tasks such as advising, monitoring compliance, awareness-raising, and training, as well as acting as a contact point for the supervisory authority on issues relating to data processing, the AI Act leaves companies free to choose whatever approach they deem appropriate to ensure compliance and the responsible use of AI, with a focus on risk management and accountability.

What difficulties can we encounter? The complexity of AI and the range of dimensions it affects can set this technology apart from others that traditionally fell under the supervision of C-level roles such as the Chief Technology Officer (CTO), the Chief Information Officer (CIO), or similar profiles with a deep understanding of information technology and computer systems. Governance and risk management for AI may therefore call for a working group, committee, or collegiate body that brings in additional actors from different backgrounds. But even then, we would stumble upon the same question twice: Who should lead this group? Do we need a new role?

In the absence of clear examples from the private sector of the figures who should guide the management of AI in the organization, academic literature has already analyzed the possible rise and functions of roles such as the Chief AI Officer (CAIO) and the AI Risk Officer (AIRO)1. While the former would aim to ensure AI-business alignment and lead the company's strategy on AI, the latter would focus mainly on the identification, reduction, and prevention of AI risks.

Nonetheless, it was not until the last quarter of 2023 that figures related to the governance of AI gained real traction, coinciding with the obligation for certain US agencies to designate a CAIO, as anticipated by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued by the White House on October 30, 2023, and subsequently implemented by the Office of Management and Budget (OMB) through the Memorandum for the Heads of Executive Departments and Agencies (the Memorandum) of March 28, 2024.

The Memorandum details the main responsibilities entrusted to CAIOs, divided among coordinating the use of AI, promoting AI innovation, and managing risks from the use of AI, and it serves as a good example of what organizations might consider when looking to appoint a similar figure. It is also worth emphasizing that the Memorandum foresees CAIOs as professionals with sufficient experience to serve as senior advisors at the highest hierarchical level, engaging with other leaders and ensuring that the use of AI complies with applicable regulations.

Does any of this ring a bell for privacy professionals? Some of the functions provided for CAIOs map readily onto those assigned to DPOs. Moreover, as with the DPO, there is no need to hire a new professional to serve as CAIO: the Memorandum establishes that US agencies may designate an existing official, such as a CIO, Chief Data Officer (CDO), or CTO, provided they have significant expertise in AI and meet the remaining requirements.

All of the above offers organizations some first clues about who should lead the AI strategy while ensuring the responsible and safe use of AI-based tools, who can sit on a dedicated AI working group or committee, and who should lead that group of professionals. However, companies should be aware that there is no one-size-fits-all answer, just as being an AI provider is not the same as being a deployer under the definitions set forth by the AI Act. Who takes on the role, and how, will depend on, among other factors, how the organization is internally structured, what its culture is like, and how it approaches innovative technologies such as AI, as well as on the availability of resources, the business volume, and the company's size.

Some companies may opt for a division of tasks between professionals - as we have seen with the CAIO and AIRO example - while others may prefer working groups or committees that offer a multidisciplinary view spanning business strategy, data governance, privacy and cybersecurity, regulatory compliance, and the ethics involved in developing or deploying AI systems. Still others may have a single professional assume these tasks in collaboration with colleagues.

For instance, some will see the DPO as a candidate sufficiently prepared to serve as an AI Officer, drawing on experience acquired conducting Data Protection Impact Assessments (DPIAs), managing the allocation of resources, monitoring compliance with a particular sensitivity to fundamental rights and freedoms, engaging with internal and external stakeholders, and reporting directly to the highest management level of the company.

Nevertheless, companies should also consider, on a case-by-case basis, whether the responsibilities of a new role could create a conflict of interest for the DPO, for example if the DPO is involved in AI strategy and empowered to make decisions on processes that affect the processing of personal data. Along these lines, the lack of a settled definition of the CAIO and similar figures in the private sector means companies must make their own decisions about the position's standing within the organization, its responsibilities, and the tasks entrusted to it.

Although the emergence of these figures is still at an early stage, we can be sure that the difference between regulating and not regulating a role such as the CAIO - or, as proposed in the UK with the Artificial Intelligence (Regulation) Bill (the AI Bill) introduced in the House of Lords in November 2023, the AI Officer - will lead to an interesting discussion about the effects of leaving it up to organizations to decide who should internally manage compliance with AI regulation and its risks.

The AI Bill proposes that any business that develops, deploys, or uses AI must have a designated AI Officer responsible for ensuring that the organization's use of AI is safe, ethical, non-discriminatory, and free of bias. Can a single person achieve all this, given the complexity that accompanies AI and the impact it can have from so many perspectives? Will a CAIO or AI Officer be enough? At the very least, we can expect that this role, if created, will need the right amount of support, resources, and serenity to face the times ahead.

Roger Vilanova Jou, Senior Associate
[email protected]
PwC, Barcelona


1. Schäfer, M., Schneider, J., Drechsler, K., & vom Brocke, J. (2022). AI Governance: Are Chief AI Officers and AI Risk Officers needed? In Proceedings of the 30th European Conference on Information Systems (ECIS). Association for Information Systems (AIS).