Will the AI Act kill innovation in European banks?

by Marine Lecomte, Offers and Innovations Manager for Financial Services, and Vincent Lefèvre, Regulatory Tribe Director, Sopra Steria Next

Artificial Intelligence (AI) has the potential to transform the banking sector, for example, by revolutionizing credit scoring. But could this wave of innovation be significantly slowed down? With the AI Act, the European Union becomes the first region in the world to regulate the emerging field of AI. This regulation undoubtedly serves a noble purpose: protecting European citizens from AI-related risks such as lack of transparency, algorithmic bias, and more. However, by arriving so early in AI’s development cycle, will this regulation slow the adoption of this technology in Europe, particularly in the banking sector? 

The AI Act: A Generalist Framework with Limited Added Value? 

The AI Act establishes a regulatory framework for the use of artificial intelligence. This framework is the result of long debates. The initial work began in 2017, leading to a draft text in April 2021, and the regulation finally came into effect on August 1, 2024, seven years later. This is an eternity in the AI world, where the technology evolves at lightning speed. The rapid rise of generative AI from late 2022, just as the text was being finalized, illustrates how difficult it is to legislate on a constantly evolving field.

Barely adopted, the AI Act is already a subject of division: some find it too general, while others criticize it for being too strict compared to more flexible approaches, such as those now favored in the United States. It is striking that the first application deadline for the European text, set for February 2, 2025, almost coincided with the revocation of the White House Executive Order on AI. This timing highlights the U.S. priority on fostering innovation above all else.

Setting politics aside, in practice, the AI Act appears to require a level of compliance that banks can easily meet. According to Denis Beau, First Deputy Governor of the Banque de France, the requirements for so-called “high-risk” AI systems align with well-established practices, such as risk management, data governance, and cybersecurity. In essence, the regulation introduces nothing radically new for the financial sector. 

Banking Sector: 4 Key Impacts of the AI Act to Anticipate 

Should the AI Act’s impact be minimized and viewed as merely an addition to existing regulations? That would be a premature conclusion. Even if it is not revolutionary, the AI Act requires rigorous and operational monitoring of AI solutions: 

1. Ban on "Dangerous Uses" 

Starting in February 2025, the AI Act will prohibit AI applications deemed dangerous by legislators. For instance, any processing that could lead to discrimination or disadvantage an individual based on their AI-assigned classification will be banned. Likewise, the regulation prohibits AI from “predicting the likelihood of a person committing a criminal offense solely based on profiling or the evaluation of personality traits or characteristics.” 

In practice, banks are not involved in these prohibited uses. 

2. Regulation of "High-Risk Uses" 

However, by August 2026, banks will need to comply with the AI Act’s requirements for “high-risk” use cases. 

For the financial sector, these use cases specifically involve creditworthiness assessments for granting loans to individuals and pricing in life and health insurance. Both AI system providers, who develop the models, and “deployers,” who use them, will be responsible for ensuring security and transparency: risk management processes, technical documentation, data quality monitoring, and continuous oversight. AI systems must meet strict controls to prevent discrimination or bias, be traceable, and remain under human supervision.

This framework does not constitute a disruption but rather builds on existing regulations, such as the 2022 recommendations on credit granting and monitoring, which already established the right to a reassessment of AI-driven decisions. However, the AI Act goes further by standardizing and strengthening these obligations at the European level. 
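The obligations listed above lend themselves to a simple internal inventory. A minimal sketch follows, assuming a bank tracks each high-risk system in a compliance record; the field and method names are hypothetical, not taken from the regulation's text:

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystemRecord:
    """Minimal compliance record for one high-risk AI system.
    Field names are illustrative, not drawn from the AI Act itself."""
    name: str
    role: str  # "provider" or "deployer", per the AI Act's distinction
    risk_management_process: bool = False
    technical_documentation: bool = False
    data_quality_monitoring: bool = False
    human_oversight: bool = False
    traceability: bool = False

    def gaps(self) -> list[str]:
        """Return the obligations not yet evidenced for this system."""
        checks = {
            "risk_management_process": self.risk_management_process,
            "technical_documentation": self.technical_documentation,
            "data_quality_monitoring": self.data_quality_monitoring,
            "human_oversight": self.human_oversight,
            "traceability": self.traceability,
        }
        return [obligation for obligation, met in checks.items() if not met]
```

Such a record makes the gap between current practice and the August 2026 deadline explicit, system by system, rather than leaving compliance as an abstract checklist.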

3. Supervision and Oversight Rules 

Supervising banks in this evolving context is also a major challenge. In France, the ACPR (Autorité de Contrôle Prudentiel et de Résolution) appears poised to play a central role. Denis Beau recently stated that “the financial supervisor itself is developing expertise and adapting its tools and methods. We will likely need to gradually establish a doctrine on new topics.” In this respect, the AI Act’s framework could facilitate interactions between banks and regulators. 

4. Increased Sanctions 

The enforcement aspect reinforces the importance of proactively addressing the regulation’s requirements. The AI Act’s penalties are particularly severe, reaching up to €35 million or 7% of global annual revenue, whichever is higher.

The AI Act: An Essential Framework for the Banks of Tomorrow 

While the AI Act does not represent a major revolution, it establishes a regulatory foundation that the banking sector cannot ignore. For banks, compliance is not just about meeting requirements but about anticipating them. This means actively following the regulator’s recommendations while taking concrete steps to align with this new framework. Key actions include raising awareness among all stakeholders about new obligations, defining roles according to the concepts of “providers” and “deployers,” and systematically identifying and classifying AI use cases. 
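The inventory step above, identifying and classifying AI use cases, can be sketched as a simple routine. The risk tiers mirror the AI Act's structure, but the keyword-to-tier mapping below is purely illustrative: a real classification requires legal analysis of the regulation's annexes, not lookup tables.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    MINIMAL = "minimal"

# Illustrative mapping only; actual tier assignment must follow
# the AI Act's own lists of prohibited and high-risk practices.
PROHIBITED_USE_CASES = {"social_scoring", "criminality_prediction_by_profiling"}
HIGH_RISK_USE_CASES = {"creditworthiness_assessment", "life_insurance_pricing",
                       "health_insurance_pricing"}

def classify_use_case(use_case: str) -> RiskTier:
    """Assign a provisional risk tier to a named AI use case."""
    if use_case in PROHIBITED_USE_CASES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL
```

Running every existing and planned AI initiative through such a triage, however rough, is what turns the regulation's abstract categories into an actionable compliance backlog.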

Now, banks must shift from pilot initiatives to industrialized deployments, making AI a driver of performance and differentiation. 

Sopra Steria is committed to supporting banks in transforming AI Act challenges into opportunities. Beyond our expertise in governance, technical audits, and regulatory compliance, we place people at the heart of this transition. We offer tailored training programs to educate and guide teams in mastering new obligations. By combining innovation, security, and ethics, we help banks develop responsible AI while strengthening trust among employees and customers. 
