New Tool Protects Musicians From AI-Generated Deepfake Songs
Artificial intelligence models can now clone a voice with just seconds of audio, leading to a surge in deepfake songs and creating a crisis for musicians concerned about unauthorized use of their voices. However, a new tool developed by researchers at Binghamton University, State University of New York, in collaboration with Cauth AI, aims to protect artists by safeguarding their songs from generative AI cloning.
The Rise of AI-Generated Music and Deepfakes
The rapid advancement of AI music platforms like Udio, Suno, and Klay has enabled the creation of music based on existing artists’ work. This technology, while offering potential benefits, raises concerns about intellectual property rights, lost revenue for artists, and the emotional impact of having one’s voice and artistry replicated without permission. Musicians are deeply concerned about the potential for AI to devalue human creativity.
The issue extends beyond simple imitation. Spotify has acknowledged the problem of deceptive AI-generated content, including vocal deepfakes, and has removed over 75 million spammy tracks in the past year. Legal experts are grappling with the implications of deepfake audio, which blurs the line between homage and theft.
Introducing My Music My Choice (MMMC)
Researchers Umur Aybars Ciftci and Ilke Demir have developed “My Music My Choice” (MMMC), a digital safeguard designed to protect artists’ songs from AI cloning. The research, presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop: AI for Music, introduces subtle, imperceptible changes to a song’s waveform.
According to Ciftci, “Even though this AI technology has been developed for fun and entertainment, a lot of people are using it for nefarious purposes. You can easily seize someone’s voice and make them sing something that they normally don’t sing, or steal someone’s songs and make it look like it is your song to begin with.”
How MMMC Works
MMMC alters a song in a way that is undetectable to the human ear. However, when an AI model attempts to replicate the song, it produces distorted noise. The slight modifications introduced by MMMC confuse the AI, making it unable to accurately clone the vocal track. The goal is to minimize any impact on listeners while maximizing disruption for AI systems.
Ciftci explains, “We’re trying to minimize the impact on human listeners while maximizing disruption for the machines.”
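The general idea behind this kind of safeguard is adversarial perturbation: nudging each audio sample by an amount far below the threshold of hearing, chosen so that a model's internal representation of the track shifts substantially. The sketch below is a toy illustration of that principle only, not the authors' method; the sine-tone "song," the random linear "voice encoder," and the perturbation budget are all made-up stand-ins.

```python
import numpy as np

# Toy illustration of the adversarial-perturbation idea behind tools
# like MMMC. NOT the authors' method: the "song," the "encoder," and
# the budget eps are invented for demonstration.
rng = np.random.default_rng(0)

# A 1-second "song" at 16 kHz: a 440 Hz sine tone.
sr = 16_000
t = np.arange(sr) / sr
song = np.sin(2 * np.pi * 440 * t)

# Stand-in for an AI model's voice encoder: a fixed random linear map.
W = rng.normal(size=(64, sr)) / np.sqrt(sr)

def embed(x):
    return W @ x  # 64-dim "voice embedding"

# Keep every sample within +/- eps of the clean waveform (inaudible),
# while pushing the model's embedding along some direction.
eps = 1e-3
target = embed(song)
direction = rng.normal(size=64)
grad = W.T @ direction                 # input-space push for `direction`
protected = song + eps * np.sign(grad) # FGSM-style bounded perturbation

# The waveform barely changes ...
print("max sample change:", np.max(np.abs(protected - song)))
# ... but the encoder's output shifts measurably.
print("embedding shift:", np.linalg.norm(embed(protected) - target))
```

In a real system the encoder would be a neural voice-cloning model and the perturbation would be optimized iteratively against it, with a perceptual (rather than simple amplitude) constraint; the sign-of-gradient step here just shows how a tiny, bounded change to the signal can still move a model's representation.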
Testing and Future Development
The researchers have tested MMMC on 150 music tracks across various genres and plan to continue testing on larger datasets. They also intend to compare its performance against similar methods, though few alternatives currently exist. The team included Binghamton students Gerald Pena Vargas, Alicia Unterreiner, and David Ponce.
As AI technology continues to evolve, tools like MMMC will be crucial in protecting the rights and livelihoods of musicians in the face of increasingly sophisticated deepfake technology. The development of such tools represents a significant step towards ensuring a fair and sustainable future for the music industry.