
Respeecher Partners with Pindrop to Protect Voice Cloning from Harmful Uses

Written by Anna Bulakh | Feb 28, 2024 3:03:41 PM

Modern technologies such as AI- and machine-learning-based voice cloning are opening up new horizons and exciting opportunities for content creators. At the same time, they create opportunities for misuse. To actively prevent harmful uses, Respeecher has partnered with Pindrop, a leader in voice security and deepfake detection solutions. Pindrop and Respeecher will share research tools and data to maximize accuracy in detecting bad actors who use real-time voice cloning systems for fraud.

Advantages and Challenges of AI Voice Cloning

For Respeecher, the ethical use of digital voice replicas has always been a priority. The company develops solutions for a wide range of applications. Digital voice replicas are used in entertainment, including film, television, advertising, and games. With their help, new characters get voices, and actors who license the use of their voices gain an additional source of income and no longer have to be present at every stage of production.

In addition, Respeecher's technology makes it much easier to translate content into other languages while preserving the original voice. Companies can also develop brand voices this way to increase customer engagement. Finally, voice cloning helps restore the voices of people affected by serious illnesses.

Respeecher has established its leadership in developing best practices for the ethical and safe use of voice cloning through its strict consent, moderation, and data security policies. That is why it has always been essential to us that scammers cannot exploit our technology. Unfortunately, the threat is real: financial fraud, impersonation of family members, and audiojacking of live conversations are not uncommon.

 

How We Prevent Harmful Uses

As AI grows more sophisticated, the threat of highly convincing voice clones and large-scale coordinated attacks becomes increasingly concerning. Providers of voice cloning technology must prioritize the prevention of harm, maintain strict ethical standards in AI usage, and refrain from cloning voices without proper consent.

Another looming threat is real-time voice conversion, or voice swapping, which eliminates synthetic anomalies and awkward pauses, resulting in remarkably natural delivery. These artificially generated voices pose a significant challenge because they are difficult for the human ear to distinguish from genuine ones.

To overcome this challenge, Respeecher has partnered with Pindrop.

Pindrop's solution can detect cloned voices with far greater accuracy, and in real time. According to UCL research, people can identify an AI-generated voice only 73% of the time, whereas Pindrop's sophisticated deepfake detection technology reaches 99% accuracy.

The company accomplishes this with its "Liveness detection" technology, which analyzes audio 8,000 times per second to identify both expected and unexpected artifacts in the audio stream. These include signals such as the sounds created by the natural opening and closing of the human vocal tract, as well as machine-generated frequencies that fall beyond the range of human hearing.
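To make the idea more concrete, here is a minimal, hypothetical Python sketch of one such check: flagging audio frames that carry unusual energy above the range of typical human hearing. Every function name, window size, and threshold below is an illustrative assumption; this is not Pindrop's Liveness detection algorithm, which combines many more signals.

import numpy as np

def high_band_energy_ratio(frame: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 16000.0) -> float:
    """Fraction of a frame's spectral energy above cutoff_hz (illustrative cutoff)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12  # guard against silent frames
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

def flag_suspicious_frames(audio: np.ndarray, sample_rate: int = 48000,
                           checks_per_second: int = 8000,
                           threshold: float = 0.05) -> list:
    """Return indices of analysis windows whose out-of-band energy looks machine-like."""
    hop = max(1, sample_rate // checks_per_second)  # step between analysis points
    window = 512                                    # samples per window (assumption)
    flagged = []
    for i, start in enumerate(range(0, len(audio) - window, hop)):
        if high_band_energy_ratio(audio[start:start + window], sample_rate) > threshold:
            flagged.append(i)
    return flagged

A production detector would combine many acoustic and temporal cues, for example evidence of the vocal tract opening and closing, with trained models rather than a single hand-set threshold.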

 

The collaboration between Respeecher and Pindrop marks a significant stride in combating deepfakes. It introduces a solution capable of detecting voice conversion in real time, giving the industry a crucial advantage against fraudsters attempting to deceive traditional authentication and biometric systems. Moreover, the partnership lays the foundation for both companies to advocate for the ethical use of generative AI in our technologies.