Deepfake audio can fool even prepared listeners

Audio deepfakes are often already convincing, and there is every reason to expect their quality will only improve. Yet even when listening carefully, humans struggle to distinguish genuine voices from artificially generated ones. Worse, a new study indicates that people currently can't do much about it—even after trying to improve their detection skills.

According to a study published today in PLOS One, deepfaked audio already fools human listeners in roughly one of every four attempts. The troubling statistic comes courtesy of researchers at the UK's University College London, who asked over 500 volunteers to review a mix of deepfaked and genuine voices in both English and Mandarin. Some participants were played examples of deepfaked voices ahead of time to help prepare them to identify artificial clips.
