Time to take responsibility: Why AI governance is critical

by Kevin Macnish - Head of Ethics and Sustainability Consulting, UK

AI is changing how we work, but it brings with it an urgent need for governance to ensure its use is compliant, fair and ethical. Summing up the key takeaways of a session on AI governance at the GRC #Risk Conference, Dr Kevin Macnish, Head of Ethics and Sustainability Consulting at Sopra Steria Next UK, reveals why it's vital that businesses take action.

Imagine your company's new AI tool discriminates against people with darker skin tones, or worse, leaks users' personal data. Scenarios like these are happening right now, as you're reading this.   

AI is quickly weaving its way into our lives, at home and at work, making its governance more urgent than ever. We’ve known about AI’s ethical challenges for decades, but developments over the last five years have turned these concerns into real threats to everyone’s rights and wellbeing. 

Some bodies, like the European Parliament and the State of New York, have introduced legislation to put guardrails around AI development and use. Others, like the UK government, are taking a more cautious approach. Meanwhile, companies are adopting AI at pace and are not always adopting or maintaining governance at the same rate.  

These issues were discussed at a recent panel at the GRC #Risk conference at London's Excel Centre. The panel, which I chaired, included Teodora Pimpireva Tapping, global head of privacy at Bumble; Eleonor Duhs, head of data privacy at Bates Wells LLP and the UK's chief negotiator for the GDPR; Ivan Djordjevic, principal architect for security, privacy and identity at Salesforce; and Marc Rubbinaccio, head of compliance at Secureframe.

The conference, which was held in October 2024, brought together governance, risk and compliance experts from around the globe to discuss these and related issues.  

The panel covered three core areas: the current challenges, how to move beyond lists of principles, and the motivation to put robust governance in place, especially where there is no overarching legislation, as in the UK.

Current challenges  

A core challenge raised repeatedly on the panel was the need for cross-functionality. AI governance isn't just for lawyers or tech specialists; it's like assembling a football team. You need everyone on board - lawyers, tech experts, ethicists and more - working together towards the same goal.

At Sopra Steria, for example, the AI governance board consists of our chief technical officer, chief information security officer, head of legal, head of procurement, data protection officer and head of ethics consulting.

Governance is also harder in some jurisdictions, such as the UK, precisely because there is no overarching legislation. The UK currently has a patchwork of laws and regulations that collectively govern AI use (such as the Equality Act 2010, the UK GDPR and others), which makes compliance complex and uncertain, especially for small and medium-sized businesses without the resources for specialised AI governance oversight.

Principles vs Practice  

While principles are important as a starting point, they cannot be the last word on the matter. On their own they only create confusion when different principles clash and there is no clear guidance as to which should take precedence.

Consider a case where profitability clashes with explainability. It's easy to say explainability should always come first, but in reality businesses have to balance explainability against profitability and their risk tolerance, while remaining ethical and within the law. Should we stop using (and should OpenAI and Anthropic stop offering) tools such as ChatGPT and Claude because their output is not fully explainable?

Again, cross-functionality came up as an essential prerequisite for moving effectively from principles to policy to the implementation of standards. Which standards should be employed (ISO 27001, ISO 42001, the NIST Risk Management Framework, and others) is another decision to be made.

Motivation  

While organisations may recognise the need for governance, they may struggle to justify the budget when no legislation demands it. Even so, in those contexts good governance can be a differentiator, and certifications such as ISO 42001 will become increasingly valuable in helping suppliers stand out in a crowded market. Good governance can also help organisations bring some order to the chaos many of us are experiencing with AI.

Lastly, we’ve all heard of the Universal Declaration of Human Rights. Even though some organisations may not be subject to, for instance, the fundamental rights requirements of the EU’s AI Act, the call to respect human rights such as non-discrimination, privacy and freedom of expression is universal.   

Key takeaways  

To wrap up, the panellists left us with some key takeaways: audit your AI systems so you know where they're being used; don't get swept up in the hype of new tech; make sure everyone knows their responsibility for the models across your organisation; and hold your suppliers to account for how they implement AI governance.

Conclusion  

For all the excitement and pace of development in AI, there are some core risk management principles that should underlie implementation. Know what your organisation has and is using; review what is coming into your organisation (and what is going out); and ensure that good governance sits within the organisational culture and does not reside in one function alone. Given the urgency around governance, if no one is taking clear responsibility for AI in your organisation, maybe it's time to ask yourself: what's your role in making sure AI is compliant, fair and ethical in your workplace?
