Protect Your Voice Recordings Against Deepfakes with the New AntiFake Tool


Amid growing concern over deepfake technology, computer scientists at the McKelvey School of Engineering at Washington University in St. Louis have developed AntiFake, a tool designed to protect voice recordings from unauthorized speech synthesis by generative artificial intelligence (AI). The team is led by Ning Zhang, an assistant professor of computer science and engineering at McKelvey.

Unlike conventional methods that detect synthetic audio only after an attack has occurred, AntiFake takes a preventive approach: it uses adversarial techniques to make it difficult for AI tools to extract a speaker's vocal characteristics from a recording in the first place, blocking the synthesis of deceptive speech. According to Zhang, this makes voice data far harder for criminals to exploit for impersonation or synthesized speech.

The tool repurposes adversarial AI techniques originally associated with cybercriminals, intentionally perturbing the recorded audio signal just enough to mislead AI models while still sounding natural to human listeners. In testing, AntiFake achieved a protection rate of over 95% against advanced speech synthesizers, including commercial synthesizers it had not encountered before, and usability tests with 24 human participants confirmed that the tool is accessible to diverse users.
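For readers curious how this kind of adversarial protection works in principle, the sketch below illustrates the general idea in PyTorch: it optimizes a small perturbation that pushes a recording's speaker embedding away from the original while keeping the change to the waveform tiny. The `encoder` model, the `protect` function, and all parameter values here are hypothetical illustrations of the technique, not the authors' actual AntiFake implementation.

```python
# Illustrative sketch of adversarial voice protection (not the AntiFake code).
# Assumes a hypothetical PyTorch speaker-embedding model `encoder` that maps
# a waveform tensor to a fixed-size voice-characteristic vector.
import torch
import torch.nn.functional as F

def protect(waveform: torch.Tensor, encoder: torch.nn.Module,
            epsilon: float = 0.002, steps: int = 100, lr: float = 1e-3) -> torch.Tensor:
    """Add a small perturbation that pushes the recording's speaker embedding
    away from the original, so a voice cloner extracts the wrong characteristics,
    while keeping the audio change small enough to sound natural to humans."""
    original = encoder(waveform).detach()                 # embedding of the clean voice
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(waveform + delta)
        # Maximize distance from the original embedding
        # (i.e., minimize its negative).
        loss = -F.mse_loss(emb, original)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the audio still sounds right.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (waveform + delta).detach()
```

Clamping the perturbation to a small budget is a simple stand-in for the perceptual constraints a real system would need; the optimization AntiFake actually uses is described in the team's paper.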

Zhang envisions expanding AntiFake to protect longer recordings and even music, underscoring its role in the fight against disinformation. The findings were presented in the Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. The team's work highlights the need for tools that counter voice impersonation and protect recordings from malicious use as generative AI continues to advance.