In an era where seeing is no longer believing, deepfake technology has emerged as a double-edged sword. The term 'deepfake'—a fusion of 'deep learning' and 'fake'—refers to synthetic media where a person's likeness or voice is swapped or created using artificial intelligence. This makes it appear as though the person is saying or doing something they are not.
From online sales and marketing tactics to severe invasions of privacy and disinformation campaigns, the implications of deepfakes are vast. Understanding and dealing with deepfakes is still relatively new, which makes this the ideal moment to put ethical guidelines, transparency measures, and proactive governance in place before the technology gets out of control.
The Majority of Deepfakes Today
One of the most insidious uses of deepfake technology is the creation of non-consensual pornography. Pornography currently represents the vast majority of deepfakes. Images of women, some of them underage, are created without their consent or knowledge. Unfortunately, there is no simple technical fix for this problem right now.
Female celebrities and private individuals alike have fallen victim to this abuse, with their faces superimposed onto explicit content and spread across the internet. A notable case is that of Taylor Swift, whose digital likeness was exploited to create and disseminate pornographic material on the social media platform X, formerly known as Twitter. The case sparked outrage among her fans and in Congress, and furthered the conversation about digital rights and regulation.
In response to the Taylor Swift deepfake, X removed all identified images and said it was taking action against the accounts responsible for posting them. So far, platforms have tried to address deepfakes by asking users to report them, but by the time reports are acted on, millions of users may already have seen the images, as happened in this case. This reactive approach by social media platforms is simply not enough. Stronger action is needed, because this form of abuse is not only a gross violation of privacy but also a tool for emotional and psychological harm.
Deepfakes in Business
In business, there are examples of deepfakes being used for both illegitimate and legitimate purposes.
On the illegitimate side, deepfakes have been weaponized to commit fraud. For example, a finance worker at a multinational firm recently paid out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer and other employees in a video call. Although initially suspicious, the worker went ahead with the money transfer because the people on the video call looked and sounded just like colleagues he recognized from his company.
On the other hand, there are early examples of deepfakes being used for legitimate business purposes, such as extending a seller’s influence and marketing reach. Deepfake technology has been used to create marketing and sales videos featuring human likenesses that promote products on online platforms such as Taobao, China’s most popular e-commerce platform. Creating this type of deepfake is surprisingly inexpensive, costing as little as $1,100, or a few thousand dollars for a more sophisticated version.
Once created, these deepfake videos can be livestreamed 24/7. Using the likenesses of online influencers around the clock in this way can reduce a company’s operating costs: if the deepfake influencers are effective at bringing in sales, the company does not need to hire as many human influencers, who cost far more.
On an e-commerce site, where products are typically displayed in a static, continuous shopping aisle, some believe this type of advertising creates more of an emotional connection between the host company and the viewer. But should we be trying to create an emotional connection between AI-generated deepfakes and human beings? What about when the existence of the deepfake is not transparently disclosed, or when consent has not been given for its creation? This takes us straight into the realm of ethics, transparency, and governance.
Ethical Guidelines for the Creation and Use of Deepfakes
When deepfakes are created without a person’s consent, they bypass the fundamental ethical principle of respect for individual autonomy. By co-opting an individual's likeness, deepfakes can also infringe on privacy rights and lead to damaging misrepresentations. Without ethical guidelines, such as those outlined below, the societal impact of deepfakes could profoundly erode the fabric of trust that underpins our understanding of truth and reality.
- Purpose and Intent: Legitimate purposes for creating and using deepfakes should be clearly defined, such as education or lawful research. Uses intended to deceive, harm, or infringe on privacy rights, such as creating non-consensual explicit content, spreading misinformation, or impersonating individuals for fraud, should be prohibited.
- Consent: Explicit consent from individuals whose likenesses are used to create deepfakes must be obtained. This includes informing them about how their images or voices will be used and the context for the content being created. Moreover, consent should be an ongoing process and not a one-time event. Individuals should have the right to withdraw their consent if they no longer feel comfortable with their likeness being used in a certain way.
Promoting Transparency
A significant challenge with deepfakes is the difficulty an average person has in distinguishing them from authentic content. This lack of transparency is a major hurdle in maintaining the integrity of digital media and preventing the spread of misinformation. To promote transparency, wide-scale adoption of disclosure and verification processes is needed.
- Labeling Deepfakes: Disclosure should be required whenever content has been altered or generated using deepfake technology, especially in contexts where the authenticity of the content could influence public opinion. If platforms and creators provided visible and understandable notices when content is created or significantly altered with deepfake technology, individuals would be better equipped to critically evaluate it and make informed decisions (a minimal sketch of what such a machine-readable label might look like follows this list).
- Verifying the Source: Another way to promote transparency is to use technology that verifies the source and authenticity of digital content. Blockchain technology, for example, could be used to create a verifiable history of digital content, ensuring that alterations are transparently recorded (a sketch of this hash-chaining idea also follows the list). If a standard like this were adopted industry-wide, it would help combat the spread of harmful deepfake content and misinformation.
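As a concrete illustration of the labeling idea above, the short Python sketch below builds a machine-readable disclosure label that a platform could store alongside a piece of AI-altered media and render as a visible notice. The function name and field names are illustrative assumptions, loosely inspired by content-credential efforts such as C2PA rather than an implementation of any actual standard.

```python
import json
from datetime import datetime, timezone


def build_disclosure_label(media_id: str, tool_name: str, alteration: str) -> dict:
    """Build a machine-readable disclosure label for AI-generated or AI-altered media.

    The field names are illustrative only; a real deployment would follow an
    agreed industry standard rather than this ad hoc schema.
    """
    return {
        "media_id": media_id,
        "ai_generated": True,
        "alteration_type": alteration,            # e.g. "face_swap", "voice_clone"
        "generation_tool": tool_name,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
        # Human-readable notice a platform could display next to the content.
        "notice": "This content was created or significantly altered using AI.",
    }


if __name__ == "__main__":
    label = build_disclosure_label("video-001", "example-generator", "face_swap")
    # A platform might store this as a sidecar file or embed it in the media's metadata.
    print(json.dumps(label, indent=2))
```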
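The verification idea can be sketched in a similarly minimal way. The Python example below chains provenance records together with SHA-256 hashes, so that tampering with any earlier record invalidates the chain; a blockchain-based system would record comparable links on a shared, append-only ledger. The record fields and helper functions here are assumptions for illustration, not an existing specification.

```python
import hashlib
import json


def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def append_record(chain: list, content: bytes, action: str) -> list:
    """Append a provenance record that commits to the content and to the previous record."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "action": action,                  # e.g. "captured", "edited", "ai_altered"
        "content_hash": _sha256(content),  # fingerprint of this version of the media
        "prev_hash": prev_hash,            # link back to the previous record
    }
    record["record_hash"] = _sha256(json.dumps(record, sort_keys=True).encode())
    return chain + [record]


def verify_chain(chain: list) -> bool:
    """Check that every record links to its predecessor and has not been modified."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if record["prev_hash"] != prev_hash:
            return False
        if _sha256(json.dumps(body, sort_keys=True).encode()) != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True


if __name__ == "__main__":
    chain = append_record([], b"original frame bytes", "captured")
    chain = append_record(chain, b"edited frame bytes", "ai_altered")
    print("chain valid:", verify_chain(chain))      # True
    chain[1]["action"] = "captured"                 # tamper with the recorded history
    print("after tampering:", verify_chain(chain))  # False
```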
Proactive Governance
As with other emerging technologies, the creation and use of deepfakes have outpaced the government regulation needed to address them. While a regulatory framework may be developed at some point, none currently exists in the United States. This means that industry self-regulation is needed now, along with stakeholder collaboration and public education.
Companies should proactively adopt and enforce ethical guidelines and transparency standards by implementing their own clear internal principles, frameworks, and policies. Moreover, collaborative partnerships must be built across various segments of society, including technology companies, academia, and government agencies, to continuously promote wide-scale adoption of deepfake ethical guidelines and transparency standards. Finally, education is key: public awareness campaigns about deepfakes and how to critically assess digital content can keep individuals from being misled.
As we look to the future, we must remember that the technology we create reflects our values. The story of deepfakes is still being written. It is up to us to ensure that the story becomes one where integrity, authenticity, transparency, and respect for the individual shine through.