Do you remember this image floating around the internet of a young girl clutching her puppy in the aftermath of Hurricane Helene?
Paired with a storyline about the devastating effects of the hurricane, the picture immediately tugged on all our heartstrings as it showed us the “reality of the storm.” There was not enough being done to help the poor victims of this natural disaster, many of us felt.
The only problem was that the picture was entirely AI-generated: a "deepfake."
Deepfakes are media created with artificial intelligence that convincingly manipulate images, audio, or video to fabricate reality. While the technology is impressive, its implications are deeply concerning.
Deepfakes do more than exploit the power of technology. They also leverage our psychological vulnerabilities, influencing how our brains process information and make moral judgments. By understanding how these mechanisms shape our decision-making, we can arm ourselves against manipulation in a media landscape littered with disinformation.
Deepfakes succeed because they exploit the very ways our brains are wired to process information and make decisions. Cognitive biases like confirmation bias, escalation of commitment, and the illusion of control influence how we perceive and respond to media, often leading us to accept false narratives without question. At the same time, our moral reasoning plays a critical role in how we interpret and justify these manipulations. Kohlberg’s theory of moral development explains how individuals at different stages of ethical growth may engage with deepfakes, while Bandura’s mechanisms of moral disengagement reveal how both creators and viewers rationalize harmful behavior.
By unraveling these psychological and moral processes, we can better understand the hold deepfakes have on us—and how to counteract their influence in an era of pervasive disinformation.
Why Deepfakes Work: Cognitive Biases and Psychological Mechanisms
Our minds are incredibly sophisticated but also crave shortcuts and routine. They like the ease of consistency when reviewing information, the reassurance that their energy is spent on the right things, and the comfort of believing they are above average at assessing reality.
These processes are also known as:
- Confirmation Bias
- Escalation of Commitment
- Illusion of Control
But these same tendencies predispose us to believe deepfake material.
Confirmation bias is the tendency to look for and favor information that aligns with our preexisting beliefs. Deepfakes are especially powerful when they align with a description that someone already believes to be true.
Political deepfakes, for example, often show a candidate behaving in ways that align with certain audience members’ unfavorable beliefs about that candidate. Those deepfakes spread quickly within partisan channels, fueled by our eagerness to confirm suspicions.
Confirmation bias blinds us to evidence that contradicts our belief of the content’s authenticity. It’s not just that we want to believe; our brains are wired to find comfort in consistency, even at the expense of accuracy.
Take the image referenced above, the girl clutching her puppy after Hurricane Helene. It aligns perfectly with our understanding that natural disasters can cause heartbreaking suffering for humans and animals. Confirmation bias allows us to believe that the story conveyed in the picture was true without question.
Escalation of commitment is the tendency to keep believing in, or investing time, money, and effort into, something simply because we have already invested significantly in it. Once someone has invested emotionally in a belief, they tend to remain resolute that it is true.
When it comes to deepfakes, people who have endorsed a fake’s validity, perhaps by sharing it online, are more likely to defend its authenticity even after it is proven false. This bias is why retractions and corrections of deepfakes often have little effect.
As with the image of the girl and her puppy, countless social media comments defended the picture as a realistic representation of the hurricane’s devastation, even after it was shown to be AI-generated.
The illusion of control is the inaccurate belief that we can identify deepfakes or detect manipulation simply by paying closer attention. However, research shows that even tech-savvy individuals are frequently deceived by sophisticated deepfakes. This illusion of our own skills leads to overconfidence, making us more vulnerable to deepfakes, especially in high-stakes scenarios like financial fraud or political campaigns.
These three cognitive biases create an environment that allows deepfakes to exploit individual vulnerabilities and chip away at our collective trust in digital media.
Why Moral and Cognitive Development Matter
In addition to cognitive biases, our own moral and cognitive development plays a role in how we interact with potential deepfake material.
Lawrence Kohlberg’s theory of moral development describes three levels of moral reasoning:
- It begins with pre-conventional thinking, where decisions are based on avoiding punishment or pursuing personal gain (e.g., reciprocal arrangements like 'I'll scratch your back if you scratch mine').
- The conventional level involves adhering to societal norms and laws to maintain relationships and social order.
- The final post-conventional level is guided by internal principles and values, emphasizing justice, equality, and universal ethics.
Many deepfake creators operate at the pre-conventional level, driven by personal gain or avoidance of punishment. Most of us, though, respond at the conventional level, relying on societal norms or authority figures to determine the content’s validity.
To combat the influence of confirmation bias, escalation of commitment, and the illusion of control, we should critically assess deepfakes through the lens of Kohlberg’s highest stage of moral development—universal ethical principles.
Kohlberg theorizes that society is the ultimate arbiter of truth for the majority of people, who reason at the conventional level. It’s critical that each of us take on the responsibility to critically scrutinize media and, when needed, challenge its validity to maintain trust in what we believe is reality.
Albert Bandura’s mechanisms of moral disengagement explain how, without conscious awareness, individuals justify unethical behavior, including the creation and spread of deepfakes. These mechanisms allow us to rationalize or justify our behavior and act in ways that are not aligned with our moral standards without guilt or responsibility.
With the image above, several of these mechanisms come into play as people engage with and share the photo, all while deflecting responsibility for their own roles in spreading disinformation:
- Moral Justification: Viewers stop weighing the picture’s validity and instead argue that sharing it is justified because it raises awareness of the disaster and serves a greater good.
- Diffusion of Responsibility: When sharing the image, people feel that their single action is inconsequential in the grand scheme of things.
- Displacement of Responsibility: People argue that it’s the responsibility of the hosting platform to verify the validity of content rather than taking any individual accountability for evaluating authenticity.
- Minimizing Consequences: Viewers may downplay the impact of sharing this deepfake by reasoning that it’s “only a picture,” or that its emotional resonance outweighs the harm of circulating a completely computer-generated image.
By understanding these mechanisms, we can recognize how moral disengagement perpetuates harm and strive to counteract it in ourselves and others.
Awareness Is the Solution
The first step in combating deepfakes is acknowledging our own biases. Confirmation bias, escalation of commitment, and the illusion of control are not flaws to be fixed, but tendencies to be managed. Simple practices, like questioning why a piece of content resonates with us or pausing before sharing, can disrupt these automatic responses. Empowering individuals with critical thinking skills is crucial. Tools like reverse image searches and AI detection software can help verify authenticity. However, technology alone isn’t enough; fostering a mindset of skepticism and inquiry is equally important.
A practical framework for evaluating content might include:
1. Pause: Avoid immediate reactions.
2. Evaluate: Consider the source and context.
3. Verify: Use tools to check authenticity (one small check is sketched below).
4. Decide: Share only if confident in its validity.
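To make the “Verify” step concrete, here is a minimal sketch of one low-effort check: looking at an image file’s metadata before sharing it. This assumes Python with the Pillow library installed, and the file name used is hypothetical. AI-generated images often carry no camera EXIF data, but missing metadata is only a weak signal (real photos are frequently stripped of it when uploaded), so treat the result as a prompt for further verification, not a verdict.

```python
# A minimal sketch of one "Verify" heuristic: does this image carry camera EXIF
# metadata? Absence is a weak warning sign, not proof of AI generation, and
# presence can be faked, so combine this with other checks such as a reverse
# image search. Assumes the Pillow library (pip install Pillow).

from PIL import Image
from PIL.ExifTags import TAGS


def camera_metadata(path: str) -> dict:
    """Return any EXIF tags found in the image, keyed by human-readable name."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    # "hurricane_photo.jpg" is a hypothetical file name used for illustration.
    tags = camera_metadata("hurricane_photo.jpg")
    if not tags:
        print("No EXIF metadata found. Be extra skeptical before sharing.")
    else:
        for name in ("Make", "Model", "DateTime"):
            if name in tags:
                print(f"{name}: {tags[name]}")
```

A check like this pairs naturally with a reverse image search: if a file has no camera metadata and no credible earlier source, confidence in its authenticity should drop before you decide whether to share.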
Remember Kohlberg's post-conventional level, where our decisions are guided by internal principles and values? We can increase our own ethical awareness by:
- Reflecting regularly on our decisions and motivations behind them
- Understanding our own core values and acting consistently in alignment with them
- Questioning assumptions and evaluating credibility of online information
- Staying informed about real world ethical dilemmas and consequences
- Building self-awareness of our natural biases and considering ways to counteract them
Awareness Will Build Trust and Resilience
Awareness helps stop the spread of misinformation and rebuilds trust online. When people are more critical of what they see, it becomes harder for bad actors to manipulate or deceive others.
Deepfakes thrive on the intersection of technological innovation and psychological vulnerability. By exploiting cognitive biases and moral disengagement, they erode trust in what we see and hear. However, the solution lies within our grasp: by cultivating self-awareness, critical thinking, and ethical reasoning, we can protect ourselves and others from manipulation.
In a world where seeing is no longer believing, the most powerful tool we have is understanding how our minds work—and choosing to think critically, even when it’s easier not to.