Publications

* Indicates undergraduate students
Italics indicate graduate students

  • Nwosu, Kifekachukwu*, Chloe Evered, Zahra Khanjani, Noshaba Bhalli, Lavon Davis, Christine Mallinson, and Vandana Janeja. “Auto Annotation of Linguistic Features for Audio Deepfake Discernment.” Assured and Trustworthy Human-Centered AI (ATHAI). Workshop paper delivered at the Fall Symposium, Association for the Advancement of Artificial Intelligence. Arlington, VA, October.
  • Mallinson, Christine, Lavon Davis, Chloe Evered, Vandana Janeja, Noshaba Basir Bhalli, Zahra Khanjani, Nehal Naqvi*, and Kifekachukwu Nwosu*. “Learning to Listen: Training Undergraduate Students for Better Discernment and Detection of Audio Deepfakes.” American Association for Applied Linguistics. Houston, TX, March.
  • Mallinson, Christine, Vandana Janeja, Zahra Khanjani, Lavon Davis, Noshaba Basir Bhalli, Chloe Evered, and Kifekachukwu Nwosu*. “Incorporating Sociolinguistic Insights and Techniques to Enhance AI-Based Methods for Audio Deepfake Detection: An Interdisciplinary Approach.” Linguistic Society of America. New York, NY, January.
  • Khanjani, Zahra, Lavon Davis, Anna Tuz, Kifekachukwu Nwosu*, Christine Mallinson, and Vandana Janeja. “Learning to Listen and Listening to Learn: Spoofed Audio Detection through Linguistic Data Augmentation.” 20th Annual IEEE International Conference on Intelligence and Security Informatics (ISI). October 2023. Accepted.

Audio deepfakes: A survey

Zahra Khanjani, Gabrielle Watson, Vandana P. Janeja

Front. Big Data, Sec. Cybersecurity and Privacy, Volume 5, 2022 (published 09 January 2023) | https://doi.org/10.3389/fdata.2022.1001063

arXiv version: https://arxiv.org/abs/2111.14203

Artificial intelligence (AI) is used to create or edit synthetic content known as “deepfakes,” which includes text, audio, and video synthesis. Deepfakes closely resemble genuine artifacts and can have a significant impact on society. This survey focuses on the deepfake landscape, covering both generation and detection techniques, with an emphasis on audio deepfakes. Generative adversarial networks (GANs), convolutional neural networks (CNNs), and deep neural networks (DNNs) are frequently used both to produce and to detect deepfakes. The article presents a thorough review of audio deepfake research from 2016 to 2021 and emphasizes the need for further research on audio deepfakes.


Audio Deepfake Perceptions in College-Going Populations

Gabrielle Watson, Zahra Khanjani, Vandana P. Janeja

A deepfake is content that has been generated or manipulated using AI methods in order to pass as real. Deepfakes come in four types: audio, video, image, and text. In this research we focus on audio deepfakes and how people perceive them. Of the several audio deepfake generation frameworks available, we chose MelGAN, a non-autoregressive and fast framework that requires relatively few parameters. This study assesses audio deepfake perceptions among college students from different majors and examines how their backgrounds and fields of study affect their perception of AI-generated deepfakes. We also analyzed the results by grade level, the complexity of the grammar used in the audio clips, the length of the clips, familiarity with the term “deepfake,” and political angle. Interestingly, the results show that when an audio clip has a political connotation, it can affect whether people judge it to be real or fake, even when the content is otherwise fairly similar.


Research in progress:

  • Learning to Listen and Listening to Learn: Spoofed Audio Detection through Linguistic Data Augmentation
  • Towards Causality in Spoofed Audio Detection
  • Expert-Defined Linguistic Features Auto-Annotations
  • Audio Signal Internal Annotation with Expert-Defined Linguistic Features
  • Auto Annotation of Expert-Defined Linguistic Features Using Matrix Profiles