Federal Legislation

 

Passed Legislation

Identifying Outputs of Generative Adversarial Networks Act

U.S. Congressional Research Service

This bill directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on generative adversarial networks. A generative adversarial network is a software system designed to be trained with authentic inputs (e.g., photographs) to generate similar, but artificial, outputs (e.g., deepfakes).

Specifically, the NSF must support research on manipulated or synthesized content and information authenticity, and NIST must support research for the development of measurements and standards necessary to accelerate the development of technological tools that examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.

 

TAKE IT DOWN Act

U.S. Congressional Research Service

This law criminalizes the publication of non-consensual intimate imagery (including AI-generated “digital forgeries”) and requires covered platforms to remove reported content within 48 hours. It became law on May 19, 2025.

Specifically, the bill prohibits the online publication of intimate visual depictions of an adult or minor subject where the publication is intended to cause harm, or does cause harm to, the subject through abuse or harassment. Covered platforms must establish a process by which victims of non-consensual intimate imagery may notify the platform of the existence of such visual depictions and request their removal.

 

Legislation in Consideration

DEFIANCE Act of 2025

Reintroduced (with similar core provisions) and referred to the Senate Judiciary Committee on 5/21/2025 (originally introduced in the 118th Congress as S. 3696).

To improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries, and for other purposes.

The DEFIANCE Act of 2025 expands the federal civil cause of action relating to disclosure of intimate images by defining “digital forgeries” and specifying which victims depicted in explicit deepfakes without their consent may initiate civil litigation. Specifically, the bill makes producers, distributors, solicitors, and those who possess explicit deepfakes with intent to disclose them liable parties subject to damages in an appropriate U.S. district court.

 

The Preventing Deep Fakes Scams Act

Reintroduced (with similar core provisions) and referred to the House Financial Services Committee on 2/27/2025 (originally introduced in the 118th Congress as H.R. 5808).

This bill establishes the Task Force on Artificial Intelligence in the Financial Services Sector. Members include representatives from the Department of the Treasury, the Federal Reserve Board, the Federal Deposit Insurance Corporation, and the National Credit Union Administration. The task force must report on (1) how banks and credit unions prevent fraud that utilizes artificial intelligence, (2) best practices for financial institutions to protect their customers, and (3) related legislative and regulatory recommendations.

 

Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025

Introduced in the Senate on 4/9/2025.

To require transparency with respect to content and content provenance information, to protect artistic content, and for other purposes. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 would establish a national framework to protect the authenticity and traceability of digital media content, particularly to address manipulated or AI-generated (“deepfake”) material.

 

Preventing Deepfakes of Intimate Images Act

Introduced in the House on 3/6/2025.

To prohibit the disclosure of intimate digital depictions, and for other purposes. The Preventing Deepfakes of Intimate Images Act would amend federal criminal law to make it illegal to knowingly create or share AI-generated or digitally altered non-consensual intimate images. Additionally, this bill would define “intimate digital depictions” to include realistic synthetic or manipulated content, adding these offenses under the Violence Against Women Act (VAWA) framework to ensure that AI-generated intimate content is treated with the same penalties as traditional “revenge porn.”

 

Protect Elections from Deceptive AI Act

Introduced in the Senate on 3/31/2025.

To prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes. The Protect Elections from Deceptive AI Act would prohibit individuals, political committees, and other entities from knowingly distributing materially deceptive AI-generated audio or visual media relating to federal candidates for the purpose of influencing an election or soliciting funds (an exemption applies to radio and television broadcast stations). Additionally, this bill would permit a federal candidate whose voice or likeness was falsely represented to bring a civil action for injunctive relief or damages.

 

NO FAKES Act of 2025

Introduced in the Senate on 4/9/2025.

To protect intellectual property rights in the voice and visual likeness of individuals, and for other purposes. The NO FAKES Act of 2025 would establish federal protections for an individual’s voice and visual likeness from unauthorized digital replication, including AI-generated deepfakes. Additionally, this bill directly targets the unauthorized creation or monetization of AI-generated likeness, effectively creating a federal deterrent for AI-voice clones, deepfakes, and more.

 

AI PLAN Act 

Introduced in the House on 9/18/2025.

To require a strategy to defend against the economic and national security risks posed by the use of artificial intelligence in the commission of financial crimes, including fraud and the dissemination of misinformation, and for other purposes. The AI PLAN Act would require the Secretary of the Treasury, the Secretary of Homeland Security, and the Secretary of Commerce, along with other U.S. officials, to jointly submit to Congress a report describing policies, procedures, and strategies to combat AI-related risks, including deepfakes, voice cloning, foreign election interference, and digital fraud.