Addressing Real Harm Done by Deepfakes / Hearing Wrap Up: Action Needed to Combat Proliferation of Harmful Deepfakes
Congress
On November 9, 2022, the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation hosted a hearing titled "Advances in Deepfake Technology." During the hearing, committee members and subject-matter experts discussed the use and authentication of deepfake images, specifically in the context of foreign adversaries and hostile actors. Lawmakers also discussed legislative action on the issue.
The hearing produced three main conclusions. First, advances in the production and spread of deepfakes make it harder to distinguish authentic images and videos from doctored ones, a trend that can erode confidence in communications across cyberspace. Second, women and children are at the highest risk of becoming victims of deepfake pornography and must therefore be the focus of policy interventions. Third, scaling solutions for widescale use is the greatest challenge for stakeholders: at platform volumes, even the most precise algorithms misclassify a growing number of authentic sources as deepfakes (an increase in false positives). The committee therefore favored a business model in which online platforms have an inherent incentive to remove deepfakes themselves, rather than one that prioritizes user engagement with such content.
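The scaling concern is, at bottom, a base-rate problem: when genuine content vastly outnumbers deepfakes, even a very accurate classifier flags far more authentic items than fakes. A minimal back-of-the-envelope sketch follows; all volumes and rates in it are illustrative assumptions, not figures from the hearing.

```python
# Illustrative base-rate arithmetic for deepfake detection at platform scale.
# Every number below is a hypothetical assumption for demonstration only.

daily_uploads = 1_000_000_000   # assumed items scanned per day
deepfake_rate = 1e-5            # assumed fraction of uploads that are deepfakes
tpr = 0.99                      # assumed true positive rate of the classifier
fpr = 0.001                     # assumed false positive rate of the classifier

fakes = daily_uploads * deepfake_rate
genuine = daily_uploads - fakes

true_positives = fakes * tpr        # deepfakes correctly flagged
false_positives = genuine * fpr     # authentic items wrongly flagged

precision = true_positives / (true_positives + false_positives)

print(f"Deepfakes flagged:       {true_positives:,.0f}")
print(f"Authentic items flagged: {false_positives:,.0f}")
print(f"Precision of the flags:  {precision:.1%}")
# Even with 99% detection and a 0.1% false positive rate, the wrongly
# flagged authentic items (~1,000,000) dwarf the real deepfakes caught
# (~9,900), so review queues are dominated by false positives.
```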
Increasing Threats of Deepfake Identities
Department of Homeland Security
Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events that never happened. Many applications of synthetic media represent innocent forms of entertainment, but others carry risk. The threat of deepfakes and synthetic media comes not from the technology used to create them, but from people's natural inclination to believe what they see; as a result, deepfakes and synthetic media do not need to be particularly advanced or believable to be effective in spreading mis/disinformation. Based on numerous interviews conducted with experts in the field, it is apparent that the severity and urgency of the current threat from synthetic media depends on the exposure, perspective, and position of whom you ask. The spectrum of concerns ranges from "an urgent threat" to "don't panic, just be prepared." To help customers understand how a potential threat might arise, and what that threat might look like, we considered a number of scenarios specific to the arenas of commerce, society, and national security. The likelihood of any one of these scenarios occurring and succeeding will undoubtedly increase as the cost and other resources needed to produce usable deepfakes decrease, just as synthetic media became easier to create as non-AI/ML techniques became more readily available. In line with the multifaceted nature of the problem, there is no single or universal solution, though elements of technological innovation, education, and regulation must comprise part of any detection and mitigation measures. Success will require significant cooperation among stakeholders in the private and public sectors to overcome current obstacles such as "stovepiping" and, ultimately, to protect against these emerging threats while preserving civil liberties.
Department of Homeland Security
Since deepfakes first emerged in 2017, many examples have become famous. There have also been examples of synthetic media that were not technically "deepfakes" but that highlighted the potential for deepfakes and synthetic media to be used as disinformation. Finally, in at least one publicized case from 2021, the specter of deepfakes was invoked as part of a criminal complaint, only for that aspect of the charges to be retracted later. All of these examples are included in the timeline presented in the "Increasing Threats of Deepfake Identities" report to which this addendum is attached. This addendum provides the reader with brief descriptions of each item, related visuals, and links for further information.
Phase 2: Deepfake Mitigation Measures
Department of Homeland Security
Deepfakes are a type of synthetic media, commonly generated using artificial intelligence/machine learning (AI/ML), presenting plausible and realistic videos, pictures, audio, or text of events that never happened. In Phase II of our work, we build upon our Phase I findings and offer more in-depth suggestions for organizational, legislative, and regulatory approaches to combat the impending threat of deepfake identities in three use cases. The first use case addresses content offered by creators, owners, and immediate users such as media organizations, non-government organizations (NGOs), law enforcement, and legal institutions that rely on this content. The second use case addresses content disseminated in the broadcast environment, where social media platforms and news organizations may be used as vehicles to spread false, misleading, and ultimately harmful information with broad impacts of varying magnitude. The third use case addresses content associated with real-time or live scenarios for identity proofing and verification to enable and offer services and products. The real-time or near-real-time nature of the interaction in these scenarios makes imagery, video, and audio content of particular importance. We evaluated these use cases, developed a generalized framework for combating deepfakes (including an associated checklist), and offer recommendations for future work along each of the five aspects of the framework: Establish Policies and Support Legislation; Identify Deepfakes; Demonstrate Integrity, Authenticity, and Provenance; Act Appropriately; and Engineer the Environment.
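The third use case hinges on the real-time nature of the interaction: a pre-rendered deepfake cannot anticipate an unpredictable prompt, so identity-proofing flows often pair a random liveness challenge with scoring of the response. The sketch below illustrates only that control flow; the challenge set, scoring stub, and threshold are hypothetical assumptions for illustration, not part of the DHS framework or checklist.

```python
import random
import secrets
from dataclasses import dataclass

# Hypothetical liveness challenges; production systems use vendor-specific
# checks. Everything in this sketch is illustrative.
CHALLENGES = ["turn head left", "blink twice", "read these digits aloud"]

@dataclass
class ChallengeSession:
    session_id: str
    challenge: str

def issue_challenge() -> ChallengeSession:
    # An unpredictable, per-session challenge means pre-rendered fake
    # footage cannot anticipate what it must show or say.
    return ChallengeSession(secrets.token_hex(8), random.choice(CHALLENGES))

def score_response(session: ChallengeSession, clip: bytes) -> float:
    # Placeholder: a real system would run pose, blink, and audio
    # analysis on the clip and return a confidence in [0, 1].
    raise NotImplementedError("wire in a liveness-analysis model")

def verify(session: ChallengeSession, clip: bytes,
           threshold: float = 0.9) -> bool:
    # Fail closed: if scoring is unavailable, reject the attempt.
    try:
        return score_response(session, clip) >= threshold
    except NotImplementedError:
        return False

session = issue_challenge()
print(f"Challenge for session {session.session_id}: {session.challenge}")
print("Verified:", verify(session, b"...video bytes..."))  # False: fails closed
```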
Science & Tech Spotlight: Deepfakes
US Government Accountability Office
The Government Accountability Office's Science, Technology Assessment, and Analytics team released a report on deepfakes, providing fast facts and more extensive highlights on the subject. First, the report defines deepfakes, video, photo, or audio that appears authentic but has been manipulated with artificial intelligence, and surveys the areas in which the technology is used. It presents multiple contexts in which deepfakes have been leveraged for public harm: election interference, civil unrest, the exploitation of women, and pornography. Next, the report highlights the artificial intelligence tools that make deepfakes work, including artificial neural networks, generative adversarial networks (GANs), and autoencoders. While anyone with access to a computer could create a deepfake, the report notes that more convincing deepfakes face a barrier to entry: the expertise and resources needed to apply the aforementioned AI tools. Even for deepfakes made with advanced tools, the report mentions a few quick detection methods that rely on noticeable errors, particularly in human subjects; alongside automated AI detection, visual cues include poorly defined physical features, mismatched earrings, and inconsistent eye movements. Additionally, the report discusses the opportunities and challenges surrounding the intentional use of deepfakes: deepfakes can have positive applications in specific sectors of society, but detecting them requires sufficient training data and automated algorithms, and beyond these technical challenges there are inconsistent platform policies and open legal questions underlying deepfake detection. Lastly, the report introduces considerations for policymakers devising deepfake regulations: forming public-private partnerships for deepfake detection, weighing legal precedent and constitutional safeguards for creators, developing public education about deepfakes, and defining the role of social media companies and media institutions in deepfake detection.
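To make the GAN mechanism the report describes concrete, the sketch below pairs a toy generator against a discriminator for one adversarial training step, the core loop behind many deepfake generators. It is a minimal PyTorch illustration with arbitrary toy dimensions, not a working deepfake pipeline and not code drawn from the GAO report.

```python
import torch
import torch.nn as nn

# Toy dimensions: a real image GAN would use convolutional networks and
# far larger tensors; these sizes are illustrative only.
LATENT, DATA = 16, 64

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                          nn.Linear(128, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(),
                              nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(32, DATA)  # stand-in for a batch of real samples

# Discriminator step: push real samples toward "1", generated toward "0".
fake = generator(torch.randn(32, LATENT)).detach()
d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
          loss_fn(discriminator(fake), torch.zeros(32, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to make the discriminator label fakes as "real".
fake = generator(torch.randn(32, LATENT))
g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Repeating these two steps drives the generator toward outputs the discriminator cannot distinguish from real data, which is why detection models need substantial training data to keep pace.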
Deep-Fake Audio and Video Links Make Robocalls and Scam Texts Harder to Spot
Federal Communications Commission
The Federal Communications Commission published an announcement of its declaratory ruling designating phone calls that use AI-generated voices as 'robocalls,' which are illegal under the Telephone Consumer Protection Act (TCPA). Under the ruling, voice cloning may not be used in phone calls unless the recipient has consented. Explaining its decision, the FCC noted that scammers have used AI to imitate the voices of individuals, even a target's own family members. Through 'voice cloning', malicious actors attempt to spread misinformation, extract sensitive information, and steal money. The commission identified elderly Americans as particularly vulnerable to scam calls. The FCC concluded its announcement by providing a series of cautionary tips to help consumers safeguard themselves from robocalls and robotexts. Alongside these guidelines, the FCC reaffirmed its commitment to protecting consumers and pointed to recent action against robocalls, highlighted by the commission's partnership with state attorneys general to stop AI-generated calls and texts, alongside a set of new tools provided to consumers and phone companies to target illegal calls.
Contextualizing Deepfake Threats to Organizations
Department of Defense
Threats from synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications, including National Security Systems (NSS), the Department of Defense (DoD), the Defense Industrial Base (DIB), and national critical infrastructure owners and operators. As with many technologies, synthetic media techniques can be used for both positive and malicious purposes. While there are limited indications of significant use of synthetic media techniques by malicious state-sponsored actors, the increasing availability and efficiency of these techniques for less capable malicious cyber actors indicate they will likely increase in frequency and sophistication. Synthetic media threats broadly exist across technologies associated with the use of text, video, audio, and images, which are used for a variety of purposes online and in conjunction with communications of all types. Deepfakes are a particularly concerning type of synthetic media that utilizes artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media. [1] The most substantial threats from the abuse of synthetic media include techniques that threaten an organization's brand, impersonate leaders and financial officers, and use fraudulent communications to enable access to an organization's networks, communications, and sensitive information. Organizations can take a variety of steps to identify, defend against, and respond to deepfake threats. They should consider implementing a number of technologies to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications. [2] [3] Organizations can also take steps to minimize the impact of malicious deepfake techniques, including information sharing, planning for and rehearsing responses to exploitation attempts, and personnel training. In particular, phishing using deepfakes will be an even harder challenge than it is today, and organizations should proactively prepare to identify and counter it. Several public and private consortiums also offer opportunities for organizations to get involved in building resilience to deepfake threats, including the Coalition for Content Provenance and Authenticity and Project Origin. [4] [5] This cybersecurity information sheet, authored by the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA), provides an overview of synthetic media threats, techniques, and trends. It also offers recommendations for security professionals focused on protecting organizations from these evolving threats through advice on defensive and mitigation strategies.
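The provenance techniques referenced above generally work by cryptographically binding media to its source at creation time so that any later alteration is detectable. The sketch below illustrates the underlying idea with a detached Ed25519 signature over a SHA-256 digest, using the Python cryptography package; it is a generic illustration of signed provenance, not the C2PA specification or Project Origin's actual format.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

def sign_media(private_key: Ed25519PrivateKey, media: bytes) -> bytes:
    """Publisher side: sign a digest of the media at capture/publish time."""
    digest = hashlib.sha256(media).digest()
    return private_key.sign(digest)

def verify_media(public_key, media: bytes, signature: bytes) -> bool:
    """Consumer side: any change to the bytes alters the digest and
    invalidates the signature, exposing post-publication edits."""
    digest = hashlib.sha256(media).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Illustrative round trip with stand-in media bytes.
key = Ed25519PrivateKey.generate()
clip = b"...video bytes..."
sig = sign_media(key, clip)
assert verify_media(key.public_key(), clip, sig)              # intact
assert not verify_media(key.public_key(), clip + b"x", sig)   # tampered
```

Note that a signature of this kind proves only that the media is unchanged since signing by a particular key holder; establishing who controls that key, and whether the content was authentic when signed, still requires the organizational measures the information sheet recommends.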