Fooled twice: People cannot detect deepfakes but think they can

Authors:
Nils C. Köbis, Barbora Dolezalová, Ivan Soraperra

Where published:
iScience

Publication Date:
November 19, 2021


Dataset names (used for):

  • Main experiment, video coding: 3,000 deepfake videos and their corresponding original short clips from Kaggle’s DeepFake Detection Challenge
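
For convenience, the Kaggle-hosted DFDC data can also be fetched programmatically. Below is a minimal sketch using the official kaggle Python package; the competition slug "deepfake-detection-challenge" and the local path are assumptions, and the API requires credentials configured in ~/.kaggle/kaggle.json:

    # Minimal sketch: fetch the DeepFake Detection Challenge data via the
    # official Kaggle API (pip install kaggle). Assumes credentials in
    # ~/.kaggle/kaggle.json; the competition slug below is an assumption.
    from kaggle.api.kaggle_api_extended import KaggleApi

    api = KaggleApi()
    api.authenticate()

    # The full DFDC corpus is very large, so download selectively in practice.
    api.competition_download_files(
        "deepfake-detection-challenge",  # assumed competition slug
        path="data/dfdc",                # hypothetical target directory
    )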


Some description of the approach:
The study investigates how well people detect deepfake videos, focusing on detection accuracy, cognitive biases, and overconfidence. Each participant views 16 videos (8 authentic, 8 deepfake), judges whether each one is a deepfake, and provides confidence ratings along with demographic information.
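
To make the two headline measures concrete, here is a minimal analysis sketch. The long-format response table and all column names (participant_id, is_deepfake, guessed_deepfake, confidence) are hypothetical, not the authors' published variable names:

    # Minimal sketch: per-participant detection accuracy and overconfidence.
    # The file name and all column names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("responses.csv")  # one row per participant-video pair

    # A judgment is correct when the deepfake guess matches the ground truth.
    responses["correct"] = responses["guessed_deepfake"] == responses["is_deepfake"]

    per_participant = responses.groupby("participant_id").agg(
        accuracy=("correct", "mean"),            # share of 16 videos judged correctly
        mean_confidence=("confidence", "mean"),  # assumed 0-100 confidence scale
    )

    # Overconfidence: stated confidence minus actual accuracy (both on a 0-1 scale).
    per_participant["overconfidence"] = (
        per_participant["mean_confidence"] / 100 - per_participant["accuracy"]
    )
    print(per_participant.describe())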


Some description of the data:
For each participant, the dataset records judgments on 16 videos (8 authentic, 8 deepfake), along with confidence ratings and demographic information. The video stimuli are sourced from MIT’s DetectDeepfake project and Kaggle’s DeepFake Detection Challenge.
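
A quick sanity check implied by this structure (again with hypothetical file and column names) is that every participant contributes exactly 8 authentic and 8 deepfake judgments:

    # Minimal sanity check on the implied schema; all names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("responses.csv")  # one row per participant-video pair

    # Each participant should contribute 8 authentic and 8 deepfake judgments.
    counts = responses.groupby(["participant_id", "is_deepfake"]).size()
    assert (counts == 8).all(), "unexpected number of clips per participant"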


Keywords:
Deepfakes, detection accuracy, cognitive biases, overconfidence, AI-manipulated media, fake videos

Instances Represent:
Responses from participants on deepfake detection tasks, including demographic details and confidence ratings

Dataset Characteristics:
Includes videos from the MIT DetectDeepfake project and Kaggle’s DeepFake Detection Challenge. Contains numerical data (confidence ratings) and categorical data (demographic details, video type)

Subject Area:
Behavioral economics, cognitive psychology, digital misinformation

Associated Tasks:
Deepfake detection, confidence assessment, bias analysis

Feature Type:
Numerical (confidence ratings), Categorical (demographic details, video type)

Main Paper Link

Code Link


License: © 2021 by the authors. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.


Last Accessed: 7/7/2024

NSF Award #2346473