A “large coordinated attempt on social media” to spread pro-Russian, anti-vaccine, and anti-LGBT rhetoric ahead of the European elections in June 2024: this is what Dutch researchers described in a recent study.
It’s just one example among many: social media platforms are constantly flooded with disinformation, exaggerations, lies, conspiracy theories, and absurd or misinformed statements. This polarised noise, amplified by AI-generated fakes, sometimes even manages to infiltrate global media, fuelled by zealous commentators, whether acting intentionally or not.
In an environment of
widespread distrust, confusion, interference, and heightened conflict in the
digital public sphere, the time for building barriers against disinformation is
over. We must now learn to "live with" disinformation in order to rethink
information warfare and collectively develop ways to respond to it.
Disinformation: An Old Tactic, Now Radically Transformed
Disinformation takes many forms: image manipulation, fake social media accounts, counterfeit websites that mimic news outlets, statements cut short or taken out of context, and deepfakes. The list keeps growing; not only are the forms of disinformation multiplying, but its reach is also expanding.
Make no mistake:
disinformation is a genuine weapon of war. It has encroached upon every aspect
of our democracies and lives, constituting a systemic threat designed to
amplify societal fractures, undermine trust, and instil universal scepticism. A
Sopra Steria/Ipsos poll carried out in February this year found 72% of French people were concerned about
the impact of disinformation on the European elections. So, how can we
build a society when all citizens are driven to suspicion, even regarding
verifiable facts?
The temptation might
be to think that this is a new phenomenon, but disinformation is not a product
of modern times: its roots stretch far back into history. Sun Tzu described information manipulation in his celebrated work "The Art of War" in the 4th century BC, and more sophisticated techniques were set out in the early 20th century in the works of Walter Lippmann ("Public Opinion") and Edward Bernays ("Propaganda"). These techniques grew steadily in importance, spreading widely during the World Wars and the Cold War.
The democratisation
of the internet, followed by the rise of social media, has catapulted
disinformation to the forefront of the communications battlefield: 88% of all fake news now spreads through
online platforms, especially X and TikTok (DiNum, 2024). The
phenomenon was evident in Trump's first campaign, the Brexit vote (the Cambridge Analytica affair), the COVID-19 pandemic, and Russian interference operations such as Doppelganger.
And AI is no stranger
to this escalation.
Beyond the headlines: AI's complex role in disinformation
It’s clear that the
rapid spread of disinformation parallels the rise of AI, especially generative
AI. Moving away from misplaced techno-solutionism or technophobia, we can see
that AI's impact is double-edged.
On one hand, the rise of generative AI has led to an
unprecedented increase in the
number and sophistication of information attacks. These attacks are
asymmetrical: easy to initiate and difficult to defend against, due to their
scattered and deceptive tactics. They require minimal resources to launch and
can have disproportionately large effects on public opinion and discourse. This
ease of deployment, combined with their typically low intensity, makes them
insidiously effective without drawing immediate or substantial counteraction.
These attacks originally targeted major media outlets and political figures, but the focus has now shifted to the general public, making every online user a potential target. This expansion in scope means that virtually anyone can unwittingly become a conduit for spreading misinformation.
But on the other hand, AI brings hope: hope because it offers the potential to devise new solutions and to scale existing ones to previously unimaginable levels.
Let’s take fact-checking as an example. Over the last two decades, major media outlets (The Guardian, The New York Times, and Le Monde, to name a few) have invested heavily in journalist-driven fact-checking. The premise was promising: identify falsehoods and correct them. Yet the time and effort invested did not yield the expected results: the tremendous work on verification, sourcing, and correction has not stopped the proliferation of fake news.
Several factors contribute to this: the process is heavily human-centric and costly; media outlets and journalists themselves become targets for disinformation actors and conspiracy theorists; and, most importantly, the attention gap between sensationalist fake news and measured explanations seems impossible to bridge.
Looking back over the
past few years, it is obvious that fact-checking, despite its merits, cannot
single-handedly address the challenges of disinformation. But what if we
leverage AI advancements to revitalise these techniques within a global and
technological framework? What if we strengthen and expand existing measures
while developing new AI-driven strategies to fight disinformation?
Toward a new collective resilience
We believe that
winning the war against disinformation requires
uniting civil society, research, and the private sector around three key priorities.
1. Prioritising education and training
The rationale is
inescapable: if everyone can be both a target and a conduit for disinformation,
then everyone needs the ability to recognise and counter it. 74% of French
people believe they can tell the difference between true and false information
on social media, but over 60% think their neighbours cannot (Sopra Steria/Ipsos
poll, February 2024).
Starting education and awareness-raising at school is essential. Media education, information literacy, and an understanding of opinion manipulation and AI techniques are vital: not to instil fear, but to provide the means to understand, question, and develop critical-thinking skills.
These efforts should
be ongoing in organisations, much like cybersecurity protocols, and should be
systematised and continually improved. Companies
have a pivotal role to play. They must build trust through
transparency, supported by strong management, proactive leadership, and clear
explanations of significant changes, internally and externally.
2. Holding social media platforms accountable
With 60% of French people relying solely on
social media for their news (Sopra Steria/Ipsos
poll, February 2024), fighting disinformation without the cooperation of major
platforms is futile. It is imperative to bring them into discussions and secure
their active involvement in combating disinformation, by any means necessary.
Regulation is gathering pace, spearheaded by the EU's AI Act. Legislators are arming themselves with tools for
oversight and sanctions, and they should not hesitate to employ them.
Platforms must be held responsible for their technological choices and the impact those choices have. This entails addressing moderation challenges and the recommendation algorithms that foster informational bubbles and enable the spread of disinformation.
3. Developing new countermeasures compliant with the DSA (Digital Services Act)
Faced with complex attacks that exploit the full potential of digital technology, we need to shift the balance of power: developing AI-driven tools at scale to monitor and counter disinformation on social media, and establishing environments of trust among all stakeholders.
At Sopra Steria, this
mission fuels our daily efforts. Two examples stand out as particularly
interesting.
At the Eurosatory
International Defence and Security Exhibition, we unveiled the first elements
of an information movement analysis system designed for conflict situations.
Associated with a Command and Control (C2) centre, this system demonstrates how
monitoring and analysing information movements can be integrated into a
multi-domain approach, enhancing the management of military operations and
geopolitical challenges.
Together with
partners, including numerous startups, we are also developing an end-to-end detection and response
solution for businesses to counter
information attacks, especially those generated by artificial intelligence.
The solution rests on three main levers: cohort vigilance systems to detect weak signals; AI-powered subject detection to track, in real time, the topics discussed on social media by cohort members; and influence forecasting to predict "growing" topics based on engagement.
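To give a flavour of the third lever, here is a minimal sketch of engagement-based influence forecasting. The Post structure, the one-hour windows, and the growth threshold are hypothetical simplifications for exposition, not our production system.

```python
# Minimal sketch of influence forecasting: flag topics whose engagement
# is accelerating. Names, windows, and thresholds are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    topic: str          # label assigned by an upstream subject-detection step
    timestamp: float    # seconds since epoch
    engagement: int     # e.g. likes + shares + comments

def forecast_growing_topics(posts, now, window=3600.0, growth_threshold=1.5):
    """Flag topics whose engagement in the last window exceeds the
    previous window by `growth_threshold`, a crude proxy for momentum."""
    recent = defaultdict(int)
    previous = defaultdict(int)
    for p in posts:
        age = now - p.timestamp
        if 0 <= age < window:
            recent[p.topic] += p.engagement
        elif window <= age < 2 * window:
            previous[p.topic] += p.engagement
    growing = []
    for topic, cur in recent.items():
        prev = previous.get(topic, 0)
        if prev == 0 and cur > 0:
            growing.append((topic, float("inf")))   # brand-new weak signal
        elif prev > 0 and cur / prev >= growth_threshold:
            growing.append((topic, cur / prev))
    return sorted(growing, key=lambda item: -item[1])

if __name__ == "__main__":
    demo = [Post("election fraud claim", 100.0, 5),
            Post("election fraud claim", 4000.0, 40),
            Post("sports results", 200.0, 12),
            Post("sports results", 4100.0, 10)]
    print(forecast_growing_topics(demo, now=4200.0))
    # -> [('election fraud claim', 8.0)]
```

A production system would feed on topics labelled by the upstream subject-detection lever and learn its thresholds rather than hard-coding them, but the core idea is the same: compare a topic's recent engagement with its immediate past to spot momentum early.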
Combined with AI
systems specialised in detecting deepfakes and fact-checking services that
blend human analysis with AI, these solutions are intended to empower
organisations and states to stay at the cutting edge of the fight against
disinformation.
However, let’s be clear: despite significant progress and efforts, it would be naive to think that our modern democratic societies can entirely stop the disinformation campaigns mounted by adversaries. We have entered a new era, and we must learn to "live with" disinformation. This isn't about capitulation, but about recognising the enemy for what it is (a sprawling and faceless foe, multifaceted and decentralised) and continuing the fight to understand both the content and the manipulation processes, so that we can respond collectively without ever compromising any aspect of citizens' freedom.