Regulating AI Deepfakes and Synthetic Media in the Political Arena
Brennan Center for Justice
The Brennan Center for Justice opens its Expert Brief in the early 2020s, when deepfakes first rose to prominence in elections. The article points to the artificial intelligence underlying deepfakes, which allows the likeness of politicians and public figures to be mimicked in video, audio, and images. The Brennan Center holds that deepfakes have pronounced consequences for the electorate because they increase the amount of inaccurate information reaching voters; this misuse is documented in Slovakia’s 2023 elections and the 2024 US Republican primary. Confident that more advanced deepfakes will be in use in upcoming elections, the Brennan Center summarizes existing deepfake policy and makes recommendations for future policy.
Before providing guidelines for further legislation, the article notes the presence of bipartisan legislation on political deepfakes at the federal and state levels. Additionally, national and state regulatory bodies (e.g., the Federal Election Commission) are deemed relevant stakeholders in deepfake policy. Even while promoting a mandate for action, the Brennan Center acknowledges legitimate uses of deepfakes under the law, especially those protected by the First Amendment. Lastly, the Expert Brief formally defines deepfakes, highlighting deep learning and text-to-image generation as the capabilities that allow private citizens and public figures to be misrepresented in audio, video, and images.
In its ‘Urgent Considerations for Policymakers’, the Brennan Center first details the rationales that should guide deepfake regulation: combating political misinformation, upholding an ‘informed electorate’, and protecting political candidates, election workers, and the electoral system.
Next, the Expert Brief specifies the types of deepfakes that should fall under federal law. The Brennan Center recommends that laws target campaign advertisements containing images, video, or audio that portray words or actions that did not occur. The article also presents a spectrum of strictness among previously enacted laws: the strictest statutes ban misleading political/election communications regardless of AI use, while looser regulations target only synthetic media. It further suggests consideration of cases in which candidates use synthetic material in their own advertisements, and of whether regulation should stop at the predominant use case for political deepfakes (synthetic video) or go further. The article points to ‘electioneering communications’ under federal campaign finance law as a model for deepfake regulations. Lastly, this section proposes exceptions for specific uses of deepfakes (e.g., satire, parody, and publication by the press or private citizens).
Third, the Brennan Center contrasts the possible forms regulation can take: disclosures and bans. Disclosures have the stronger precedent in existing law, owing to previous Supreme Court rulings on campaign finance rules; however, outright prohibition of deepfake material is deemed more effective because it prevents voter misinformation entirely. Previous state legislation has confined deepfake bans to a specified ‘preelection window’ leading up to political contests.
Finally, the Expert Brief recommends the parties that should be regulated under deepfake law: political candidates, political action committees (PACs), and the creators of deepfakes disseminated in elections. The section also treats online media platforms separately, presenting the Honest Ads Act proposal regulating political advertisements and the Federal Communications Commission’s ‘political programming rules’ governing content disclaimers as applicable frameworks for deepfake law.
Here’s What Policymakers Can Do About Deepfakes, Right Now
IBM
International Business Machines Corporation (IBM) published a set of priorities for policymakers in which deepfake laws serve as a safeguard for society. Throughout the article, IBM highlights its previous actions to promote the responsible use of AI. First, IBM provides a formal definition of a deepfake: a realistic video, audio clip, or image that has been manufactured through AI and imitates the likeness of an individual. The article stresses that deepfakes have a broad impact, from the functioning of our political system to our experience as private citizens. Alongside more specific legislation for targeted deepfake uses, IBM highlights its previous support of general policies that mitigate harmful AI use, such as the Munich Tech Accord (Tech Accord to Combat Deceptive Use of AI in 2024 Elections) and regulations on malicious technology use.
IBM expresses three vital interests that justify ‘protection’ through deepfake legislation. First, deepfake policy is crucial to safeguarding elections. The article details how deepfakes threaten voters’ access to an independent and honest election, since constituents can be misled about how to vote and about the candidates on the ballot. IBM proposes future legislative action that would prevent the use of political deepfakes in federal races (as proposed in the US Senate’s Protect Elections from Deceptive AI Act) and provide candidates with an avenue for legal recourse when implicated in deepfakes. IBM also describes its support for moderation rules covering digital content like deepfakes, citing its endorsement of the European Union’s Digital Services Act.
Next, the article emphasizes the importance of deepfake law in protecting creative content. Individuals can steal copyrighted work and generate deepfakes that imitate the likeness of artists, musicians, and actors, creating an inherent disincentive for creators to produce quality work when others can exploit their effort for profit. IBM supports the NO FAKES Act in the US Senate to establish a national right to one’s personal likeness.
Lastly, deepfake legislation is necessary to secure individual privacy. Malicious actors have used deepfakes to generate pornographic material of victims without their consent, and the article identifies women and minors as the primary targets of nonconsensual pornography. IBM has therefore supported criminal and civil penalties, through the Preventing Deepfakes of Intimate Images Act, for individuals who create or threaten to publish these types of deepfakes. IBM has also endorsed the EU AI Act, which requires transparent disclosure whenever fabricated content is included in any type of deepfake.
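The AI Act specifies the disclosure obligation rather than a technical mechanism. As a minimal sketch of what machine-readable labeling could look like, the snippet below attaches a disclosure string to a generated PNG using the Pillow library; the `ai_disclosure` key name is a hypothetical convention, not part of the EU AI Act or any standard.

```python
# A minimal sketch of a machine-readable AI disclosure, assuming PNG
# output and the Pillow library; the `ai_disclosure` key is a
# hypothetical convention, not mandated by the EU AI Act.
from PIL import Image, PngImagePlugin

img = Image.new("RGB", (512, 512))  # stand-in for a generated image

meta = PngImagePlugin.PngInfo()
meta.add_text("ai_disclosure", "This image was generated by an AI system.")
img.save("synthetic_labeled.png", pnginfo=meta)

# A downstream platform or regulator can read the label back:
print(Image.open("synthetic_labeled.png").text["ai_disclosure"])
```

Plain metadata like this can be stripped in transit; provenance standards such as C2PA bind comparable disclosures to the file cryptographically so removal is detectable.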
The Princeton Legal Journal
The Princeton Legal Journal published an article detailing the legal considerations behind the passage of deepfake laws in the United States. First, the article provides context and a definition for deepfakes, tracing the word to the combination of ‘deep learning’ and ‘fake’ and describing the objective of imitating an individual’s likeness through technology. Applications for deepfakes are depicted as vast, spanning education, film, and consumer behavior. However, negative uses are also widespread. The article reports that 90–95% of deepfake videos depict nonconsensual pornography, with 90% of these deepfakes targeting women, largely minors. Without policy interventions, anyone with online visibility could be targeted by such clips. Furthermore, deepfake audio has been used to extort money from unsuspecting victims who cannot detect the vocal mimicry; individuals and companies alike can be targets of financial fraud. Lastly, individuals have deployed deepfake video and text in politics, where political misinformation has increased significantly through false statements and fabricated videos of political candidates.
Next, the article provides an overview of existing legislation on deepfakes. As of the article’s publication, no federal law had been passed specifically on deepfakes. However, the journal describes a directive in the National Defense Authorization Act compelling the Director of National Intelligence to inform the public about the use of deepfakes by foreign governments, along with their impact on national security and misinformation; the author notes that this approach ignores the risks deepfakes pose to domestic security. At the state level, legislatures have largely focused on statutes governing AI use generally rather than deepfakes specifically, with only five state laws passed that focus on deepfakes. Looking internationally, the article examines deepfake law abroad. China has adopted the strictest model of deepfake regulation with its ‘Deep Synthesis Provisions’: creators of deepfakes cannot depict an individual without that person’s permission and must disclose that the deepfake was created using AI. While no other nation has instituted a comparable prohibition, some have implemented legal protections for one’s individual likeness. German Basic Law includes a general right of personality that protects the right to one’s own image, and the United Kingdom has sought amendments to its Online Safety Bill to prohibit illicit, nonconsensual deepfakes.
Lastly, the article examines existing legal and constitutional precedent that may apply to deepfakes. First, protections for speech and expression under the First Amendment can extend to deepfakes: speech is generally protected unless it falls into unprotected categories such as defamation (libel when written, slander when spoken) or profanity. Moreover, under Section 230, online platforms that passively transmit deepfakes posted by users cannot be sanctioned as the publisher or speaker of that content. In addition, copyright protections are relevant to deepfakes. The fair use doctrine under the Copyright Act of 1976 requires evaluation of four factors before a work incorporating copyrighted material can be deemed in violation of copyright law: the purpose and character of the use, the nature of the original work, the amount and substantiality of the content borrowed, and the effect of the use on the potential market. The article further contends that deepfakes using protected content for a ‘transformative’ purpose distinct from that of the original source may be shielded by the precedent established in Campbell v. Acuff-Rose Music. Finally, defamation law is applicable to deepfakes. The article notes that defamation statutes vary from state to state and fall under civil rather than criminal law, and that content must present claims as matters of fact rather than opinion to violate defamation law. Alongside defamation, the article notes that cases may also fall under ‘false light’ law, which considers the emotional distress that results from speech rather than the content itself. In summary, the author contends that existing statutory frameworks may not be adequate to regulate deepfakes, and that more specific legislation may be required.
Combatting deepfakes: Policies to address national security threats and rights violations
Policy Paper Submitted to NIST
This paper provides policy recommendations to address threats from deepfakes. First, we provide background information about deepfakes and review the harms they pose, describing how deepfakes are currently used to proliferate sexual abuse material, commit fraud, manipulate voter behavior, and threaten national security. Second, we review previous legislative proposals designed to address deepfakes. Third, we present a comprehensive policy proposal focused on multiple parts of the deepfake supply chain, which begins with a small number of model developers, model providers, and compute providers and expands to include billions of potential deepfake creators. We describe this supply chain in greater detail and explain how entities at each step ought to take reasonable measures to prevent the creation and proliferation of deepfakes. Then, to operationalize our proposal more concretely, we provide sample legislative text. Finally, we address potential counterarguments to our proposal. Overall, deepfakes will present increasingly severe threats to global security and individual liberties; to address these threats, we call on policymakers to enact legislation that covers multiple parts of the deepfake supply chain.
Trust Your Eyes? Deepfakes Policy Brief
Center for Strategic and International Studies
The Center for Strategic and International Studies (CSIS) published a policy brief on deepfakes that highlights their harm to society and potential frameworks for a solution. First, deepfakes are defined as AI-generated alterations to video and audio that produce a realistic product falsely ascribing speech and action to an individual. The article notes specific features within deepfakes that make detection difficult, such as “facial expressions, movements, intonation, tone, stress, and rhythm.” At the same time, the barrier to entry for creating deepfakes has fallen, owing to machine-learning automation and open-source software.
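To make the lowered barrier to entry concrete, the sketch below shows roughly how little code open-source text-to-image generation now requires. It assumes the open-source diffusers and torch libraries and publicly hosted Stable Diffusion weights, none of which are named in the CSIS brief, and it uses a deliberately fictional subject.

```python
# A minimal sketch of open-source text-to-image generation, assuming
# the `diffusers` and `torch` packages and publicly hosted Stable
# Diffusion weights; illustrative only, with a fictional subject.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # runs on a single consumer GPU

image = pipe("a photorealistic portrait of a fictional news anchor").images[0]
image.save("synthetic_portrait.png")
```

Face-swap and voice-cloning pipelines are packaged with similar ease, which is the automation the brief points to.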
CSIS describes a dynamic of human trust that deepfakes exploit. Online platforms have not provided a sufficient barometer for genuine content, and human senses cannot reliably detect the intricate modifications within deepfake material. Furthermore, anonymous posting and the lack of moderation on media websites exacerbate the misinformation deepfakes cause. The article specifically highlights authoritarian and extremist entities that leverage this unrestrained environment to deepen distrust among users.
The article then turns to the threats deepfakes pose: a sample of 15,000 deepfakes collected in 2019 revealed that 96% were pornographic in nature, and phishing schemes and fraud using deepfakes are also discussed. Political harm receives attention as well; intelligence reports suggest that foreign actors may use deepfakes to promote messaging against the United States and its allies around the globe. Historical context underpins contemporary use: authoritarian leaders in the Soviet Union and China previously doctored images to create a reality conducive to their political gain. In contrast, CSIS also notes uses of deepfakes that do no harm and can even benefit society, spurring innovation in entertainment, medical training and treatment, advertising, and news broadcasting.
Lastly, the article proposes solutions to defend against deepfakes. It supports both independent AI algorithms that detect anomalies present in deepfakes and security infrastructure built into video/audio creation that reveals instances of tampering. However, current solutions are deemed limited because deepfake generation techniques advance rapidly. Alongside these technological frameworks, the article presents existing state statutes that regulate deepfakes through bans on election interference and revenge pornography. At the federal level, legislators have so far only introduced proposals to ban the authorship and dissemination of deepfakes with harmful intent; passage has stalled over questions of jurisdictional authority and the federal government’s lack of enforcement mechanisms. The article seeks a balance in potential legislation between regulating harmful deepfake use and preserving individual rights such as speech and expression. The most notable obstacle for the federal government in enacting effective policy interventions is Section 230 of the Communications Decency Act, which shields online platforms from legal liability for the content published on their websites. CSIS maintains that robust solutions will close the gap between deepfake creators and the online websites that transmit this material to the public.
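The ‘security infrastructure within video/audio creation’ that CSIS describes amounts to making media tamper-evident at the point of capture. The sketch below illustrates the core idea with a symmetric HMAC; real provenance standards such as C2PA use public-key signatures and richer manifests, and the key and byte strings here are purely illustrative.

```python
# A minimal sketch of tamper-evident media signing, assuming a secret
# key held by the capture device; real provenance schemes (e.g., C2PA)
# use public-key signatures and signed manifests rather than an HMAC.
import hashlib
import hmac

DEVICE_KEY = b"secret-key-embedded-in-capture-device"  # hypothetical

def sign_media(media_bytes: bytes) -> str:
    """Compute a signature over the raw media at creation time."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_untampered(media_bytes: bytes, signature: str) -> bool:
    """Any post-capture edit changes the bytes and breaks verification."""
    expected = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"...raw video bytes..."            # placeholder payload
sig = sign_media(original)
assert is_untampered(original, sig)            # authentic file verifies
assert not is_untampered(original + b"x", sig) # edited file fails
```

Detection algorithms attack the problem from the other end, flagging statistical anomalies in finished media, which is why the brief pairs the two approaches.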