Governing the use and risk of AI in organisations today

Stephen Mann and James Finister – authoring team, ITIL AI Governance White Paper


How much risk does the current level of AI adoption pose?

The OECD’s register of AI incidents is eye-wateringly long, including examples of teenage suicide involving AI and of people wrongly flagged by AI systems for benefit fraud.

Aside from these clear social concerns, AI use in organisations might not be delivering value for money, and robust mechanisms to assess that value are often lacking – which contributes to the high failure rate of corporate AI projects.

Recent research found that 82% of organisational users access free AI tools. This use of “shadow IT” is not automatically a bad thing, and employees who do it can be more likely to adopt corporate-sanctioned AI tools too.

However, non-centralised AI tools in the workplace often bring risk through a lack of education about how sensitive corporate information can end up in the “melting pot” of AI tools available to anyone, anywhere.

This is why we – along with our co-authors – have created the ITIL AI Governance White Paper to help professionals and their organisations benefit from AI’s potential while achieving compliance, resilience, and stakeholder confidence.

The myth of general AI governance

There is a presumption in the traditional IT world that centralised AI is governed, when most of it isn’t effectively governed at all.

This is reflected in the gulf between how much IT organisations trust AI and how much those outside them do. Caution is needed here: for example, although many ITSM tools state that they have AI guardrails, these are only a small part of what’s needed. Another danger is assuming that the people doing IT governance are, by extension, also managing AI governance.

One example of the corporate risk of AI is in human resources. HR professionals deploying AI to increase productivity, such as in the recruitment process, may put their organisation at risk of contravening employment legislation through discrimination. This is an area where we are likely to see legal cases emerging.

Governance is about responsibility, accountability and finding an appropriate governance model that allows an organisation to use AI safely.

Towards effective AI governance

There is a major difference between traditional IT governance and what is needed for AI.

With the former, a compliant IT system will probably remain so when audited the following year. Conversely, an AI system might drift out of compliance within minutes of going live.

The non-static nature of AI means tools can “go rogue” very quickly. In one fictional test scenario run by AI developer Anthropic, its Claude tool – when handling email information and being told it would be taken offline – threatened to blackmail an executive by revealing a secret love affair.

In this context, AI governance is not a “one and done” activity. The ITIL AI Governance White Paper therefore begins with the four AI impact perspectives – a structured way of understanding how AI affects technology governance:

• Decision authority and risk management

This is about questioning who (or what) has decision-making power in AI-enabled environments and how to govern the associated risk.

If people’s decision-making is influenced or directed by AI, are they making the right decisions? Even when they aren’t, a human can’t blame AI for a poor decision based on its advice: accountability remains with the person. Another issue to address is the level of transparency in decisions made using AI, particularly the lack of an audit trail to justify a decision made with AI’s assistance (a simple sketch of such a trail is shown below).

Governance is needed to ensure people recognise that AI makes mistakes, and to understand and prioritise the most significant risks that might harm an organisation.

Taking a “head in the sand” approach to AI – often not knowing where AI is used in the organisation, or omitting it from the risk register – is not good enough. However, the guidance in the ITIL AI Governance White Paper will help those organisations that are only beginning to understand the full extent of the AI risk landscape.
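To make the audit-trail point concrete, here is a minimal sketch in Python of how each AI-assisted decision could be logged against a named human approver. The field names, tooling and log format are our illustrative assumptions, not something the white paper prescribes:

    # Illustrative only: one auditable record per AI-assisted decision.
    # Field names and the log format are hypothetical examples.
    import json
    from datetime import datetime, timezone

    def log_ai_assisted_decision(decision: str, ai_tool: str,
                                 ai_recommendation: str, human_approver: str,
                                 rationale: str,
                                 log_path: str = "ai_decision_log.jsonl") -> None:
        """Append one auditable record of a decision made with AI assistance."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "ai_tool": ai_tool,                # which system gave the advice
            "ai_recommendation": ai_recommendation,
            "human_approver": human_approver,  # accountability stays with a person
            "rationale": rationale,            # why the advice was (not) accepted
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_ai_assisted_decision(
        decision="Escalate incident INC-1234 to the major-incident process",
        ai_tool="ITSM triage assistant",
        ai_recommendation="Escalate: outage pattern matches previous major incidents",
        human_approver="j.smith",
        rationale="Agreed with the AI assessment after checking monitoring dashboards",
    )

Even something this simple gives an auditor a timestamped record of what the AI recommended, who accepted it, and why.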

• Ethical principles and responsible AI

AI systems have moral and societal implications that require concrete practices, policies and oversight.

Humans often make ethical decisions without realising it, but it’s not possible to guarantee that AI will emulate the right ethics, or any ethics at all. Organisations need to be clear about the ethical standards that are compatible with their culture. Considering ethical issues is also a good way to think about compliance.

An ethical risk arises when organisations use AI to make life easier but fail to consider the potential impacts. For example, using AI to create marketing materials may harm the organisation if the intended audience doesn’t trust AI-generated content.

• Data governance and performance management

AI systems create persistent data governance problems – especially in the way they surface issues around data appropriateness and consent.

The way AI models are trained on data can cause disclosure of sensitive information or create the need for multiple, segregated AI models at increased cost.

In addition, poor data can introduce bias and poor generalisation, which increases the risk of AI errors.

• Regulatory compliance and operational standards

AI systems need to comply with data protection laws, respect IP rights and meet emerging AI-specific legislation. This means having clear audit trails, tackling liability and handling evolving regulations.

From an operational perspective, AI models can degrade over time – so they need updates, retraining, version control and, eventually, retirement (a toy example of spotting such degradation is sketched below). Sharing data with external suppliers also introduces additional risk of data misuse and non-compliance.
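As a purely illustrative sketch of what “degrade over time” can mean in practice, the toy check below flags a model for retraining when its live accuracy drifts too far below its go-live baseline. The metric, threshold and function name are hypothetical assumptions, not guidance from the white paper:

    # Illustrative only: a toy drift check. The metric and threshold are
    # hypothetical; real monitoring would track several indicators.
    def needs_retraining(baseline_accuracy: float, current_accuracy: float,
                         tolerated_drop: float = 0.05) -> bool:
        """Flag a model for retraining when live accuracy drifts below baseline."""
        return (baseline_accuracy - current_accuracy) > tolerated_drop

    # A model that went live at 92% accuracy but now measures 84% gets flagged.
    print(needs_retraining(baseline_accuracy=0.92, current_accuracy=0.84))  # True

The point is that a system which was compliant and performant at go-live must be re-checked continuously, not just at the next annual audit.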

The ITIL 6C model

In the context of the four AI impact perspectives already outlined, the ITIL AI Governance guidance also offers the 6C model.

This has a dual purpose: helping organisations understand the six AI capabilities – creation, curation, clarification, cognition, communication and coordination – and tailor risk profiles, controls and countermeasures to each.

A mapping approach can assist organisations in plotting the six capabilities against different risks, while also acknowledging the positives and value that AI can deliver. However, when mapping, it’s important to be organisation- and topic-specific (e.g., in ITSM, the mappings for incident management and problem management will differ), to ensure someone takes responsibility for the mapping, and to seek buy-in from the people it will affect rather than imposing it from above.
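As an equally illustrative sketch, a first-pass capability-to-risk mapping could be captured as simply as the Python structure below. The capability names come from the 6C model, but the risks, controls and owners are hypothetical examples for ITSM incident management, not content from the white paper:

    # Illustrative only: risks, controls, and owners are invented examples.
    from dataclasses import dataclass

    @dataclass
    class CapabilityRisk:
        capability: str   # one of the six Cs
        risk: str         # organisation- and topic-specific risk
        control: str      # countermeasure or guardrail
        owner: str        # the person taking responsibility

    # Hypothetical first-pass mapping for ITSM incident management.
    incident_management_map = [
        CapabilityRisk("creation", "AI-drafted incident comms leak sensitive details",
                       "Human review before anything customer-facing is sent",
                       "Service desk manager"),
        CapabilityRisk("clarification", "AI misclassifies a major incident as low priority",
                       "Confidence threshold with mandatory human escalation",
                       "Incident manager"),
        CapabilityRisk("coordination", "AI auto-assigns work with no audit trail",
                       "Log every AI-driven assignment decision",
                       "Problem manager"),
    ]

    for entry in incident_management_map:
        print(f"{entry.capability}: {entry.risk} -> {entry.control} ({entry.owner})")

The point is not the tooling but the discipline: every capability in use gets a specific risk, a control and a named owner – and the same structure can record the value AI delivers, not just the risk.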

Deciding which governance model suits your organisation is not a tick-box exercise. It should be a process that generates value in its own right – a call to action to make things better, while learning and changing as a result. As our guidance in the white paper recommends, by treating AI as a “partner to be stewarded, not as a tool to be restrained”, organisations can “harness AI’s transformative potential responsibly, ethically and effectively”.

It’s important to recognise that context, consciousness and conscience are human traits that AI doesn’t possess – which is why it’s so important to have humans in the loop. Having appropriate governance means maximising the benefits and minimising the risks of AI, while understanding its limitations.

The ITIL AI Governance White Paper is free to download from ITIL.com.

Also, explore AI Unlocked, the complete online course for thriving in the intelligent era.