A Review of Modern Deepfake Detection Methods: Challenges and Future Directions
Zaynab Almutairi
2022
The article reviews existing audio deepfake (AD) detection methods and compares fake-audio datasets. It introduces the types of AD attacks and analyzes detection methods and datasets for imitation-based and synthesis-based deepfakes.
Auto Annotation of Linguistic Features for Audio Deepfake Discernment
Kifekachukwu Nwosu, Chloe Evered, Zahra Khanjani, Noshaba Bhalli, Lavon Davis, Christine Mallinson, Vandana P. Janeja
2023
The study focuses on detecting audio deepfakes through linguistic analysis. It involves annotating audio samples for specific linguistic features and evaluating an auto-annotation methodology against expert annotation.
Audio-deepfake detection: Adversarial attacks and countermeasures
Mouna Rabhi, Spiridon Bakiras, Roberto Di Pietro
2024
The study explores vulnerabilities of audio deepfake detection systems to adversarial attacks, demonstrating that current methods like Deep4SNet can be manipulated to nearly 0% detection accuracy using GAN-based attacks. A new, generalizable defense mechanism is proposed to enhance system resilience.
Awais Khan, Khalid Mahmood Malik, James Ryan, Mikul Saravanan
2023
The datasets used in the experiments include ASVspoof2019, ASVspoof2021, and VSDC, which are employed to evaluate the performance of various voice spoofing countermeasures.
Thomas Nygren, Mona Guath, Carl-Anton Werner Axelsson, Divina Frau-Meigs
2021
The study focuses on media and information literacy, evaluating how pupils can identify and assess the credibility of digital news using the InVID-WeVerify tool. It assesses the effectiveness of this tool in an educational context across multiple schools.
Fighting AI with AI: Fake Speech Detection using Deep Learning
Hafiz Malik, Raghavendar Changalvala
2023
The study evaluates the performance of a deep learning-based fake speech detection method using a dataset of cloned and bona-fide speech samples. It focuses on detecting fake speech through a deep learning model using spectrograms of the audio recordings.
Fooled twice: People cannot detect deepfakes but think they can
Nils C. Köbis, Barbora Doležalová, Ivan Soraperra
2019
The study investigates human detection abilities for deepfake videos, focusing on detection accuracy, cognitive biases, and overconfidence. Participants view 16 videos (8 authentic and 8 deepfakes) and provide responses on whether the video is a deepfake, along with confidence ratings and demographic information.
Human Perception of Audio Deepfakes
Nicolas M. Müller, Karla Pizzi, Jennifer Williams
2022
The dataset is used in a gamified online experiment where participants distinguish between real and fake audio samples. It includes both bona-fide and deepfake audio samples, with users’ classifications and AI model predictions recorded.
Zahra Khanjani, Lavon Davis, Anna Tuz, Kifekachukwu Nwosu, Christine Mallinson, Vandana P. Janeja
2023
The study utilizes a hybrid dataset composed of various spoofed audio samples, including replay attacks, text-to-speech synthesis, voice conversion, and mimicry, alongside genuine samples. The dataset is intended to support the development and testing of spoofed-audio detection techniques.
Katarína Greškovičová, Radomír Masaryk, Nikola Synak, Vladimíra Čavojová
2022
The study analyzes how different editorial styles affect the perceived credibility of health messages among adolescents, and examines moderating factors such as media literacy and scientific reasoning.
Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction
Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler, Patrick Traynor
2022
The study uses the TIMIT Acoustic-Phonetic Continuous Speech Corpus to evaluate its deepfake audio detection method, which identifies fakes by reconstructing the speaker's vocal tract from the audio signal.