
Regulatory Barriers to Solving “Deep Fake” Risks

By: Gonzalo Nuñez


Introduction

On January 22, 2025, President Donald J. Trump threatened to impose “high levels” of taxes, tariffs, and sanctions on Russia if it did not cease its invasion of Ukraine. In a post on Truth Social, President Trump warned Russia that the war was “only going to get worse” if Russia and Ukraine did not make a deal to end the conflict. Following the Truth Social post, a Ukrainian Telegram channel named BAZA, ce H’yuston (Base, this is Houston) circulated a video of Trump making remarks directed at President Vladimir Putin. In the video, President Trump is heard saying “I do think Putin is a strong leader, and I respect that, but he plays bad games. And that always ends badly. We all remember the story of Saddam, Ceausescu, and, of course, Gadhafi . . . terrible death. I tell you, but that's how it ends. So, Vladimir, let's not let it come to that.” The video garnered vast attention throughout Ukraine and Russia, prompting Andrey Isajev, a member of Russia’s parliament, to accuse President Trump of trying to force Russia into peace negotiations with threats. During an interview with 60 Minutes, Isajev fired back that President Trump should remember the fate of President John F. Kennedy. The video that began this firestorm turned out to be a deepfake: a piece of media that has been manipulated with artificial intelligence (AI) technology. Two Ukrainian news networks, New Voice and Antikor, confirmed that the video was a deepfake, and BAZA’s Telegram channel itself admitted that the video was doctored using deepfake technology.

This scandal is a prime example of the power of deepfake technology to undermine national security. The advent of AI technology in recent years has made software for creating deepfakes readily accessible to the public and, in turn, has exacerbated the use of the technology for criminal activity and the spread of misinformation. Public figures are frequent subjects of deepfake propaganda because their likenesses and voices are readily available to be fed to AI programs. Deepfakes pose an increasingly ominous national security risk because bad actors can use this widely available technology to spread misinformation, extort public figures, and distort the policy stances of governments. Further examples of media doctored with deepfake technology include a video of former Speaker Nancy Pelosi appearing inebriated and slurring her words during a public appearance, a video of CNN reporter Jim Acosta appearing to act aggressively toward a White House intern, and a deepfake of South Korean President Yoon Suk Yeol and his wife appearing at a rally calling for President Yoon’s impeachment. While these may seem like one-off instances targeted at politicians, the threat of deepfakes permeates all aspects of society and affects the world at large. Last year, Sumsub, an online full-cycle verification platform and ongoing monitoring provider, released its third annual Identity Fraud Report, which found a ten-fold increase in deepfakes detected globally across all industries from 2022 to 2023, including a 1,740% increase in North America alone.

In response to the surge in deepfake technology and related fraud, government regulators are seeking ways to deter the deceptive use of deepfake technology by adopting rules and legislation that punish this behavior. This article explores the Federal Trade Commission’s (FTC) recent proposal to extend liability to entities that facilitate the impersonation of government and business officials. It also identifies potential barriers the FTC may face in enforcing these policy changes and how entities and individuals can defend themselves against such charges.


What Are Deepfakes and How Are They Made?

As previously mentioned, deepfakes are pieces of media that have been manipulated using AI technology to replace faces, alter facial expressions, or synthesize speech within an existing piece of media. Deepfakes are created by feeding training data—such as images, videos, audio, and text—to “deep learning” AI models that recognize features within the data and impose those features onto another piece of media or generate a new piece of media altogether. These deep learning models are loosely modeled after the pattern-recognizing neural networks of the human brain. To generate deepfake content, a user typically has to feed large quantities of media to a deep learning model to “train” it to recognize a specific person’s facial features, mannerisms, or patterns of speech. Once the model has been fed enough data, it can reconstruct these features into a new image, video, or audio clip with great accuracy. Because deep learning models need massive volumes of data to create accurate deepfakes, public figures such as celebrities and government officials are the most common subjects of deepfake media.
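To make the training process described above more concrete, the short sketch below illustrates, in simplified Python using the open-source PyTorch library, the classic face-swap approach: a single shared encoder learns facial features from images of two subjects, a separate decoder learns to reconstruct each subject’s face, and a “swap” is produced by decoding one subject’s encoded face with the other subject’s decoder. The network sizes, placeholder tensors, and training loop here are illustrative assumptions only, not any particular tool’s implementation; real deepfake software adds face detection and alignment, adversarial losses, and far more data and training time.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Downsampling convolution: halves the spatial resolution.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.1))

def deconv_block(in_ch, out_ch):
    # Upsampling convolution: doubles the spatial resolution.
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.ReLU())

# Shared encoder: compresses a 64x64 face image into a feature map.
encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))

# One decoder per subject: rebuilds a face from the shared features.
def make_decoder():
    return nn.Sequential(deconv_block(128, 64), deconv_block(64, 32),
                         nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
                         nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder tensors stand in for the large sets of aligned face images
# ("training data") that a real deepfake model would require.
faces_a = torch.rand(8, 3, 64, 64)   # images of subject A
faces_b = torch.rand(8, 3, 64, 64)   # images of subject B

for step in range(100):              # real training runs for many thousands of steps
    optimizer.zero_grad()
    recon_a = decoder_a(encoder(faces_a))   # learn to rebuild A's face
    recon_b = decoder_b(encoder(faces_b))   # learn to rebuild B's face
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    loss.backward()
    optimizer.step()

# The "swap": encode subject A's face, then decode it with subject B's
# decoder, yielding an image of B with A's pose and expression.
with torch.no_grad():
    swapped_faces = decoder_b(encoder(faces_a))

Even this toy version shows why large quantities of a subject’s images are needed: the decoders can only reproduce features they have seen many times during training, which is why well-documented public figures are the easiest targets.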

While the creation of a deepfake may sound complex to someone unfamiliar with the underlying technology, the software used to create deepfakes is widely available on the internet and has been simplified to the point that anyone with basic computer skills can use it.


How Can Deepfakes Jeopardize Our National Security?

The Department of Homeland Security (DHS) and the Government Accountability Office (GAO) have recognized the threat that deepfakes may pose to our national security. DHS has stated that the rapid advancement of deepfakes poses a “clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” DHS has posed hypothetical scenarios in which deepfakes could be used to threaten our national security, including: doctoring speeches of public officials to distort their stances on policy issues or their support of certain political parties; generating non-consensual pornography involving government officials; producing false evidence of criminal activity; or staging fake kidnappings of intelligence assets to extort governments into making ransom payments. The GAO has also recognized that deepfakes could be used to influence elections, incite civil unrest, or become a weapon of psychological warfare.

Ultimately, deepfakes can be used to undermine public trust in the media, distort a government’s stances on various policy issues, and sow chaos online that may translate into violence offline. While deepfake technology has certain positive uses—such as enhancing accessible communication, stimulating memory and nostalgia, and recreating evidence in a courtroom—regulators must become aware of the nefarious uses of this rapidly advancing technology. As deepfake technology becomes more advanced, the content it generates becomes more convincing, deepening the threat that manipulated media poses to our government and society at large.


The Proposed FTC Rule

In response to the clear threats that deepfakes pose to our national security, government regulators are attempting to craft and refine policies that deter and punish the nefarious use of deepfakes. On February 15, 2024, the FTC announced that it would seek comments on a supplemental notice of proposed rulemaking that would extend the protections granted by the “Trade Regulation Rule on Impersonation of Government and Businesses” (the Rule) (16 C.F.R. Part 461). Currently, the Rule protects government and business officials against fraudulent impersonation through the use of technology, such as deepfakes. The proposed amendment would extend this protection to all individuals and potentially extend liability to companies that provide the software used for impersonation. As written, the proposed rule would extend liability to (i) parties who provide “goods and services” (or “means and instrumentalities”) (ii) with “knowledge or reason to know” that those goods or services will be used by bad actors to (iii) “materially and falsely pose as” a government or business official with the goal of affecting commerce. Former FTC Chair Lina M. Khan stated that this policy would enhance the FTC’s ability to tackle AI-enabled scams that fraudsters conduct through the use of deepfake technology.


Potential Barriers to Enforcement

While it is commendable that government regulators are acting swiftly to respond to the threat of deepfakes, the question remains whether this amendment will actually deter the criminal use of widely available AI technology. The proposed amendment extending liability to companies that provide the software used to generate fraudulent deepfakes can be challenged on two grounds.

First, the FTC could face serious challenges in meeting the evidentiary burden needed to prove that a company had “knowledge or reason to know” that its services were being used for fraudulent impersonation. The difficulty with this element is that the proposed rule effectively attempts to impose secondary liability on companies that provide services to the public for the criminal actions of bad actors. Litigation under the Anti-Terrorism Act illustrates the difficulties the FTC may face in proving this element: 18 U.S.C. § 2333(d)(2) allows U.S. citizens to bring suits against an entity that “aids and abets” international terrorism by “knowingly providing substantial assistance” to terrorists. In Twitter, Inc. v. Taamneh, the family of a victim of a terrorist attack perpetrated by ISIS sued Facebook, Google, and Twitter under § 2333. The plaintiffs argued that the social media companies knew that their platforms were being used for ISIS’s recruitment, fundraising, and organizing operations but did not take action to prevent ISIS from utilizing their platforms. In 2023, the Supreme Court unanimously ruled that the social media platforms could not be held liable for the actions of bad actors like ISIS, “even if the companies had some general awareness that their platforms were being used for criminal activities.”

Second, entities charged with violating the FTC rule could invoke § 230 of the Communications Decency Act, which provides limited federal immunity to providers of interactive computer services for content created by third parties. Courts have interpreted this statute to shut the door on a wide variety of lawsuits against providers of computer services and to preempt laws that would hold such providers liable for the conduct of third parties. Most recently, in Gonzalez v. Google LLC, the Supreme Court declined to analyze the scope of the immunity provided by § 230 because the plaintiffs’ allegations were “materially identical” to those of the plaintiffs in the previously mentioned Taamneh decision.

Consequently, the FTC could face similar obstacles when trying to hold AI software companies like DeepAI or OpenAI liable for the criminal conduct of bad actors who use their services to create AI-generated deepfakes. Defendants could challenge allegations that they had “knowledge or reason to know” that bad actors used their services for fraudulent impersonation, as described by the amended Rule, by relying on the precedents set by Taamneh and Gonzalez or by claiming immunity under § 230 of the Communications Decency Act.


Conclusion

Deepfakes pose a real and evolving threat to our national security. As AI technology improves, so too does the quality of deepfakes and their usefulness for criminal activity. As government regulators try to catch up with evolving AI technology, statutes and regulations, such as the FTC’s proposed rule extending liability to companies that provide bad actors the means to create deepfakes, must be tailored to fit the legal precedents established by the judiciary. In short, the FTC’s proposed amendment is likely, at best, to face serious legal challenges and, at worst, to prove ineffective at holding tech companies liable for the conduct of bad actors who utilize their services. The Taamneh and Gonzalez decisions, as well as the immunity provided by § 230 of the Communications Decency Act, could pose significant roadblocks to the FTC’s mission.
