As public and regulatory scrutiny of artificial intelligence grows, AI providers and adopters face a critical challenge: making AI both ethically sound and legally compliant.
In recent years, algorithmic bias has moved from academic study into mainstream debate, driven by high-profile cases in areas like hiring and facial recognition. These cases have exposed algorithms as fallible tools, capable of creating new societal problems of their own.
AI is often painted as either a Pandora’s box of dangers or a cure-all for humanity’s problems. Yet building AI that respects human rights and ensures fairness is neither a simple technical fix nor an idealistic vision. It’s a social and business responsibility.
Bias, What Bias?
Beyond these polarised views lies a core question: can fairness be mathematically modelled? Pursuing algorithmic fairness means putting fairness into practice, not just theory. But achieving this isn't straightforward. Engineers grapple with multiple fairness metrics, each with trade-offs. The challenge is deciding which one is 'right', and, just as importantly, who gets to make that decision.
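To make the trade-off concrete, here is a minimal sketch in Python, using entirely made-up data and hypothetical group labels, comparing two widely used metrics: demographic parity, which asks whether positive decisions are issued at equal rates across groups, and equal opportunity, which asks whether qualified individuals are caught at equal rates. The same predictions can satisfy one and violate the other.

```python
# Illustrative only: two common fairness metrics on the same hypothetical
# predictions for two groups, "A" and "B".
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])  # model decisions
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

mask_a, mask_b = group == "A", group == "B"

# Demographic parity: positive decisions issued at equal rates per group.
rate_a, rate_b = y_pred[mask_a].mean(), y_pred[mask_b].mean()
dp_gap = abs(rate_a - rate_b)

# Equal opportunity: equal true positive rates among the actually qualified.
tpr_a = y_pred[mask_a & (y_true == 1)].mean()
tpr_b = y_pred[mask_b & (y_true == 1)].mean()
eo_gap = abs(tpr_a - tpr_b)

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.00 -> looks 'fair'
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.17 -> still unequal
```

In this toy example the selection rates match exactly, yet qualified individuals in one group are identified less often. Optimising for one metric offers no guarantee about the other, which is precisely why choosing between them is a governance question, not just an engineering one.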
Consider data diversity: bias often starts with unrepresentative data. For example, facial recognition in diverse cities like London needs a broad dataset to accurately identify people of various backgrounds. Yet, in more homogeneous areas, a highly complex model may not be necessary. While diversity is crucial, ensuring data is suited to its specific use case matters more.
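As a rough illustration of what 'suited to its use case' can mean in practice, the sketch below (all figures hypothetical) compares the demographic make-up of a training set against the population a system will actually serve and flags under-represented groups:

```python
# Illustrative sketch: compare a training set's demographic make-up against
# the population the system will serve. All figures are hypothetical.
from collections import Counter

deployment_population = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}
training_samples = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

counts = Counter(training_samples)
total = sum(counts.values())

for grp, target_share in deployment_population.items():
    actual_share = counts.get(grp, 0) / total
    # Flag any group whose share falls well below its real-world presence.
    status = "UNDER-REPRESENTED" if target_share - actual_share > 0.05 else "ok"
    print(f"{grp}: {actual_share:.0%} of training data vs "
          f"{target_share:.0%} of population -> {status}")
```

The same check run against two different deployment populations can give opposite answers, which is the point: representativeness is relative to where the system will be used, not an absolute property of the dataset.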
Implicit cultural assumptions can also lead to unintended biases. For instance, computer vision models trained to analyse walking patterns might overlook cultural variations, such as differences in walking pace or which side of a path people favour in different countries. A system everyone assumes to be bias-free can, in fact, carry bias. These assumptions, though subtle, can shape an AI's behaviour in unexpected ways.
The Cost of Biased AI
The financial stakes are high: under regulations such as the EU AI Act, companies risk severe penalties for non-compliant AI systems. Ensuring fairness is therefore not only an ethical duty but also a legal requirement, with fines of up to 7% of a company's global annual turnover.
Public trust is also at stake, and having good data and a polished message no longer suffices. Superficial ethics, often called 'ethics-washing', has become easy to spot. Today, it's less about appearing ethical and more about embedding ethics into the very code and algorithms that power AI systems. This shift isn't idealism; it's an economic necessity.
Leading AI companies, from e-commerce platforms to tech giants, are moving beyond abstract debates to concrete actions. They’re not just asking, 'Is this ethical?' but 'How can we make this trustworthy?' and 'How can we measure it?'.
Amazon scrapped its AI hiring tool after it was found to disadvantage female candidates, underscoring that overlooking ethics costs both reputation and revenue. But Amazon is not an outlier. Many major companies are now on a similar path, recognising that commercial success and ethical AI go hand in hand. Rather than simply checking boxes, they are implementing internal and external evaluations to build a fuller picture of how their AI affects society in practical, meaningful ways.
The message is clear: embedding ethics in AI is not only the right choice; it’s essential for business survival. As AI becomes more integrated into daily life, companies that grasp this will lead the future, while those that don’t risk being left behind.
Building an Effective Anti-Bias Framework
An AI that is inherently 'ethical' may sound idealistic, but creating trustworthy AI goes beyond aspiration—it’s a necessity. At Sopra Steria, our approach to reducing bias in AI rests on three key pillars.
Governance: A strong governance structure brings diverse perspectives into AI projects, reducing bias risks. Sopra Steria's AI board, which includes members of the executive committee, oversees all AI initiatives, fostering responsible practices.
Technical Assessment: We rigorously assess models against established resources, such as the OECD's catalogue of tools for evaluating fairness and transparency, and we actively contribute to Confiance.ai, which develops standards for trustworthy AI.
Real-Time Monitoring: AI biases often appear only after deployment, even when engineers rigorously test the systems. Proactive, real-time monitoring helps us catch and address these biases early, preventing small issues from escalating as models scale up.
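The article does not prescribe a mechanism, but one simple form such monitoring could take is a sliding-window check on live decisions. The sketch below, whose thresholds and names are illustrative assumptions rather than Sopra Steria's actual implementation, tracks the approval rate per group and raises an alert when the gap grows too wide:

```python
# Hypothetical sketch of live bias monitoring: keep a sliding window of
# recent decisions per group and alert when approval rates diverge.
from collections import defaultdict, deque

WINDOW = 1000     # recent decisions kept per group
TOLERANCE = 0.10  # maximum acceptable gap in approval rates

recent = defaultdict(lambda: deque(maxlen=WINDOW))

def record_decision(group: str, approved: bool) -> None:
    """Log one live decision, then check the gap across groups."""
    recent[group].append(1 if approved else 0)
    rates = {g: sum(d) / len(d) for g, d in recent.items()}
    if len(rates) >= 2:
        gap = max(rates.values()) - min(rates.values())
        if gap > TOLERANCE:
            # In production this would page a team or open a review ticket.
            print(f"ALERT: approval-rate gap {gap:.2f} across groups {rates}")

# Simulated stream in which group B is approved far less often:
for _ in range(3):
    record_decision("A", approved=True)
    record_decision("B", approved=False)
```

Because the window holds only recent decisions, the check stays sensitive to drift: a model that was fair at launch but degrades as usage patterns change will still trip the alert.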
When Bias Is Unavoidable: Focus on Mitigation
Some bias in AI is inevitable, given the messiness of real-world data, cultural diversity, and current technological limits. When complete elimination isn't feasible, the priority shifts to mitigating its impact so that outcomes are as fair as possible.
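One common family of mitigations is post-processing: adjusting how a model's scores are turned into decisions rather than retraining the model itself. The sketch below is a simplified, hypothetical example of that idea, applying group-specific thresholds to synthetic scores. In practice such adjustments must be tuned on validation data and reviewed for legal compliance, since treating groups differently is itself regulated in many jurisdictions.

```python
# Hypothetical post-processing mitigation: group-specific decision
# thresholds narrow an outcome gap that the model's scores bake in.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scores: the model systematically scores group B lower.
scores = {"A": rng.normal(0.60, 0.15, 5000),
          "B": rng.normal(0.50, 0.15, 5000)}

single_threshold = 0.55
adjusted = {"A": 0.55, "B": 0.45}  # hand-picked here; tuned in practice

for grp, s in scores.items():
    before = (s > single_threshold).mean()  # approval rate, one threshold
    after = (s > adjusted[grp]).mean()      # approval rate, adjusted
    print(f"group {grp}: approval {before:.0%} -> {after:.0%}")
```

The point is not that threshold adjustment is the right mitigation for every system; it is that mitigation is measurable, so its effect on outcomes can be tracked and reported rather than merely asserted.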
We should see AI as a dynamic tool: transparent, actively monitored, and continuously refined to keep bias in check. Creating ethical AI requires more than broad commitments; it calls for concrete, practical metrics at every stage, from data gathering to model updates.