Seeing is no longer believing: the impact of deepfakes on our perception of reality 👀

Published by Adrien,
Source: The Conversation under Creative Commons license

By Stéphanie Gauttier & Sylvie Blanco - Grenoble École de Management (GEM)

The manipulations achieved through artificial intelligence distort our view of reality, affecting our behaviors as consumers and citizens.


Crude deepfake of Donald Trump being arrested by police forces.

In 2022, a hacked Ukrainian television channel broadcast a deepfake of Ukrainian President Zelensky announcing his surrender, a fabrication that could have changed the course of events on the battlefield.

In May 2023, a fake image of the Pentagon shrouded in smoke caused a brief 0.29% drop in the S&P 500 index and required an official denial from the U.S. Secretary of Defense.

In Hong Kong, an employee transferred $25 million believing he was following orders given by his company's directors during a video conference call. In reality, he had never interacted with a real person.

These are deepfakes: ultra-realistic fabricated content that misrepresents, or even reshapes, reality, and can therefore distort our view of the world.

These hyper-manipulations — images, videos, and even interactive content — engage our hearing, sight, and participation. They are created by overlaying real images with fabricated data using machine learning algorithms combined with facial mapping software, and can be further enhanced through generative artificial intelligence.
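The core idea described above, overlaying real footage with fabricated data, can be illustrated in miniature. The sketch below is a deliberately simplified toy (plain NumPy alpha compositing, not an actual deepfake pipeline, which would involve trained neural networks and facial mapping): it blends a synthetic patch into a "real" frame through a soft mask, the same compositing step real tools perform at far higher fidelity. All names here (`overlay_region`, the toy arrays) are illustrative assumptions, not part of any real tool.

```python
import numpy as np

def overlay_region(target, fake_patch, mask, top_left):
    """Composite a fabricated patch onto a real frame through a soft mask.

    target:     H x W x 3 float array in [0, 1] (the 'real' frame)
    fake_patch: h x w x 3 float array (the generated content)
    mask:       h x w float array in [0, 1] (1 = fully fake, 0 = keep real)
    top_left:   (row, col) position of the patch in the frame
    """
    out = target.copy()
    r, c = top_left
    h, w = fake_patch.shape[:2]
    region = out[r:r + h, c:c + w]
    # Per-pixel convex blend: mask-weighted mix of fake and real content.
    # The feathered mask edges are what make the seam hard to spot.
    out[r:r + h, c:c + w] = mask[..., None] * fake_patch + (1 - mask[..., None]) * region
    return out

# Toy data: a grey "frame" and a bright synthetic patch with a feathered mask.
frame = np.full((64, 64, 3), 0.5)
patch = np.full((16, 16, 3), 0.9)
yy, xx = np.mgrid[0:16, 0:16]
mask = np.clip(1 - np.hypot(yy - 7.5, xx - 7.5) / 8, 0, 1)  # soft circular mask

composited = overlay_region(frame, patch, mask, (24, 24))
```

Real deepfake systems replace the hand-made patch with the output of a generative model and the fixed mask with learned facial-landmark alignment, which is why their seams are invisible to the eye.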

Some tools like Speechify, now widely available online, allow anyone to make someone say anything from just a 30-second recording of their voice. The resulting content is so realistic that it can deceive even those who believe they can detect it, making it genuinely difficult to distinguish real from fake.


One of the first popular deepfakes. No, the Pope never wore this white puffer jacket in public.
Image: Wikimedia

Today, seeing is no longer believing: images, videos, and other fabricated content can be indistinguishable from authentic content. This phenomenon poses serious challenges for society.

Manipulating consumers: from beauty standards to cultural hegemony

The images of models used in advertising campaigns by some major brands are generated by artificial intelligence systems. They convey a certain idea of beauty and aesthetic standards that are often unattainable in the real world.

Beyond an aesthetic that might simply characterize content generated and modified by artificial intelligence, the hyper-manipulated models and influencers created by deepfakes seek to integrate into the real world as if they were human. Incognito Influencer, a hyper-manipulated virtual influencer, appeared in images from all the fashion weeks in 2023, even though no one actually met him there. Aitana Lopez, a hyper-manipulated model, has her own social media accounts.

In these cases, it is not about generating a "polished" image to sell a product, but about staging a virtual personality to influence humans in the physical world.

There is no limit to what can be done with these hyper-manipulated agents, since they do not exist. In this sense, deepfakes can project unrealistic models of daily life, which can deepen the discomfort of those aspiring to similar lifestyles.

In this regard, marketing research considers deepfakes distinct from traditional advertising mechanisms. Hyper-manipulated images can be so inspiring that individuals accept them at face value without scrutinizing them. And the richer the deepfake (for example, a video conveying more information than a simple photograph), the more inclined consumers are to buy the associated products.

Brands should nevertheless set limits: consumers who fail to detect the hyper-manipulation but later learn they were deceived become less likely to buy that brand's products.

Finally, the artificial intelligence tools used to generate deepfakes reflect and amplify the biases of our societies. For example, eight years ago, an AI asked to judge a beauty contest discriminated against people with dark skin.

Even today, the Bing search engine shows almost exclusively Caucasian babies with blue eyes when searching for images of "beautiful babies". This suggests that hyper-manipulated virtual influencers may present only certain ethnicities and visions of beauty as "inspiring" aesthetically and socially.

Deepfakes can shake political life

Politicians can be the subject of modern parodies through deepfakes. In 2023, Clad 3815 featured French political leaders on his Twitch channel for an interactive session in the form of a deepfake, intended for entertainment.

However, AI-generated content, if not announced as such, can be manipulative, as it leads one to believe it shows reality when it does not.

For example, for New Year 2024, a member of the French presidential party shared a deepfake on X showing Marine Le Pen offering New Year's wishes in Russian, without indicating it was a montage. Depending on the viewer, the clip could alter perceptions of Marine Le Pen or simply seem entertaining, given the debates about her party's stance on Russia.

Political deepfakes do affect our view of the world: experiments have shown that deepfakes can convince up to 50% of the American public of scandals that never happened.

But we are not all equal in the face of these manipulations. Individuals more interested in politics detect these deepfakes better than others, which may be explained by their general political knowledge, their sharper ability to analyze such information, and their greater exposure to this type of content.

On the other hand, those who do not recognize these deepfakes may see their political opinions shift. Once an individual is confronted with a deepfake depicting a politician, their opinion of that politician tends to deteriorate, while their opinion of the politician's party does not change. If the deepfake is shown to individuals who already hold a negative opinion of that party, their view of both the party and the politician worsens after seeing it. Deepfakes can therefore be used to manipulate opinions through targeted campaigns.

Generative artificial intelligence: towards responsible uses

Optimists will note that generative artificial intelligence also enables the creation of educational content, such as videos of historical figures recounting history in a captivating way, or reconstructions of historical events.

It can also enable new services: some are exploring its use to support grieving people. The difference between a deepfake and an ordinary use of generative artificial intelligence lies in the purpose of the generated content: the deepfake is tied to an intent to deceive and manipulate. Since responsible uses of generative artificial intelligence are possible, it is important to promote them and to regulate abuses.

Indeed, discussion of regulating deepfakes began as early as 2019 in the French National Assembly, and stakeholders are taking responsibility: AI-generated images on Google have been watermarked since 2023, a technique the company extended in 2024 to its video content, notably content created with Gemini and via its chatbots. In 2023, Twitch announced a ban on the dissemination of pornographic deepfakes. In 2024, California enacted a law against the generation of deepfakes in the context of elections.

This article follows the publication of the white paper "Generative AI and Deepfakes" for the Auvergne Rhône Alpes region. The white paper was coordinated by P. Wieczorek and S. Blanco. It received contributions from L. Bisognin, S. Blanco, T. Fournel, D. Gal-Regniez, S. Gauttier, S. Guicherd, A. Habrard, E. Heidsieck, E. Jouseau, S. Miguet, I. Tkachenko, K. Wang, and P. Wieczorek.