by Anna Bulakh – Mar 29, 2022 10:00:00 AM • 8 min

How Respeecher Can Influence Victim Assistance and Witness Protection

Crime has a severe impact on the people affected by it. Victims suffer emotional, mental, physical, and financial harm from which some may never recover. On top of that, harm may be threatened or inflicted on witnesses, victims, and their families.

Open justice and investigative journalism can be essential to successful investigations and prosecutions, particularly in homicides and in organized and violent crime. Yet witnesses and victims may fear that if the defendant or the general public learns their identity, they or their families will be at risk of serious harm.

On camera, a witness can be hidden in shadow and their voice digitally altered, but such anonymity comes at the cost of human connection. Conventionally distorted voices cannot convey an in-depth, first-person account of something so frightening, because the technology that hides the speaker's identity also strips the voice of its character. With the advent of generative AI, however, far more nuanced and realistic voice replication is possible, preserving authenticity and connection even in situations where anonymity is necessary.

Altering voices and making them sound flat and robotic is so last century. Say hello to 21st-century voice replication technologies. 
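To see why those last-century voice disguises sound so flat, consider the simplest possible disguise: shifting pitch by resampling. The sketch below is purely illustrative (it is not any broadcaster's or Respeecher's actual method); it shows that naive resampling raises the pitch of a test tone but also shortens it, coupling pitch and timing in a way that makes disguised speech sound unnatural:

```python
import math

def naive_pitch_shift(samples, factor):
    """Resample by `factor`: pitch rises by `factor`, but duration shrinks
    by the same factor. This coupling of pitch and timing is one reason
    crude voice disguises sound robotic."""
    n = int(len(samples) / factor)
    out = []
    for i in range(n):
        pos = i * factor
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Linear interpolation between neighboring samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def estimate_freq(samples, rate):
    """Rough pitch estimate from positive-going zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings * rate / len(samples)

rate = 16000
tone = [math.sin(2 * math.pi * 220 * t / rate) for t in range(rate)]
shifted = naive_pitch_shift(tone, 1.5)
print(round(estimate_freq(tone, rate)))     # ~220 Hz
print(round(estimate_freq(shifted, rate)))  # ~330 Hz, and the clip is shorter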

The public’s right to know versus the victim’s right to privacy 

Crime stories make top headlines every day. If you’ve been a victim of or witness to a crime, you may find yourself at the center of media attention, and there are many reasons you might want to share your story:

  • Help the police catch the perpetrator
  • Recover from emotional and mental trauma
  • Raise awareness of crime
  • Warn others against becoming victims of similar crimes

In most cases, victims and witnesses aren’t willing to disclose their identity and would like to remain anonymous in order to protect themselves. For some types of crime, such as sexual assault and rape, or crimes involving young people and children, it may be hard for witnesses and victims to come forward and give an interview, even ‘off the record’.

Many victims are hesitant even to report sexual assault. There is still much work to be done to prevent such crimes and help survivors speak up. Here are some common reasons witnesses and victims stay silent:

  • Blaming themselves
  • Threats from the offender
  • Fear of being ostracized by society
  • Fear of not being believed
  • Fear of retaliation by the perpetrator
  • Fear of victim-blaming

Blurring out informants on screen and digitally altering their voices can help protect the privacy of witnesses and victims, giving them a way to report crimes. Still, research suggests that conventional voice distortion strips out the emotional component of the account, and a victim or witness's emotional expression can influence credibility and judgment. Ethical voice synthesis is a promising way to close this gap.

Voice conversion for witness protection 

Speech synthesis is the AI-generated simulation of human speech. Using machine learning, Respeecher has developed voice conversion technology that transforms any voice into a chosen target speaker’s voice.

In other words, Respeecher can transform the voice of a victim or witness, preserving their anonymity along with all of the emotion, inflection, and expression of the original recording. But how is that possible?

  1. You can choose any voice from Respeecher’s Voice Marketplace. All voices are anonymous and copyright-free. Or, you can use the voice of any other person who volunteers to act as the target speaker.
  2. Respeecher’s system analyzes the source voice recording (the voice of the victim or witness) and transforms it into the target speaker’s voice you have chosen.
  3. As a result, you get a completely anonymous recording that retains the emotional impact and intonation of the source voice.
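The three steps above can be sketched as a toy workflow. Everything in this sketch is hypothetical: the `Recording` class, the marketplace list, and the `convert` stub are illustrative stand-ins, not Respeecher's real API.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    speaker_id: str   # whose voice the audio carries
    audio_path: str

def pick_target_voice(marketplace):
    # Step 1: choose an anonymous, copyright-free target voice
    # (here, simply the first entry of a hypothetical catalog).
    return marketplace[0]

def convert(source: Recording, target_voice: str) -> Recording:
    # Step 2: the conversion system maps the source speaker's speech
    # onto the target voice, keeping intonation and emotion.
    # The actual signal processing is stubbed out in this sketch.
    return Recording(speaker_id=target_voice,
                     audio_path=source.audio_path + ".converted.wav")

marketplace = ["voice-anon-017", "voice-anon-042"]   # hypothetical IDs
source = Recording("witness-source", "statement.wav")
result = convert(source, pick_target_voice(marketplace))

# Step 3: the output carries the target identity, not the witness's.
print(result.speaker_id)  # voice-anon-017
```

The key property the sketch illustrates: the output recording is attributable only to the target voice, so the witness's identity never appears in the published audio.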

Synthetic speech raises a number of ethical questions, and at Respeecher we follow a strict code of ethics. We know that voice cloning technology can be misused to defraud people and create fake news, so we only take on projects that meet our ethical standards, and we use watermarking technology to distinguish Respeecher-generated content from the output of other synthetic voice systems.
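To illustrate what watermarking synthetic audio means in principle, here is a minimal spread-spectrum-style sketch: a faint, key-derived signature is added to the samples, and anyone holding the key can later check for it by correlation. The scheme, key, and strength values are purely illustrative and are not Respeecher's actual watermarking technology.

```python
import random

def signature(key, n):
    """Deterministic pseudorandom +/-1 sequence derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(samples, key, strength=0.05):
    """Add a faint keyed signature on top of the audio samples."""
    sig = signature(key, len(samples))
    return [s + strength * w for s, w in zip(samples, sig)]

def detect(samples, key, strength=0.05):
    """Correlate against the keyed signature; only marked audio
    correlates well above the noise floor."""
    sig = signature(key, len(samples))
    corr = sum(s * w for s, w in zip(samples, sig)) / len(samples)
    return corr > strength / 2

rng = random.Random(1)
audio = [rng.uniform(-1.0, 1.0) for _ in range(20000)]  # stand-in for speech
marked = embed(audio, key="respeecher-demo")

print(detect(marked, key="respeecher-demo"))  # True
print(detect(audio, key="respeecher-demo"))   # False
```

Without the key, the signature looks like low-level noise; with it, detection reduces to a simple correlation test, which is the general idea behind marking generated content.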

At Respeecher, we guarantee to provide:

  • Complete anonymity of the source speaker (we ourselves may not know whose voice we are converting; it isn’t necessary)
  • Removal, upon request, of all of the source speaker’s voice samples from all of our systems
  • The same information security for anonymization projects that we provide for top Hollywood projects

We have only just begun to enter the field of voice synthesis for witness and victim protection. The technology can help protect people all over the world while encouraging them to raise awareness of crime.

If you’d like to find out more about our ethical voice synthesis technology, or if you have other questions, feel free to reach out or check our voice synthesis FAQ section. Subscribe to our newsletter to receive regular email updates about our product.

Anna Bulakh
Head of Ethics and Partnerships
Blending a decade of expertise in international security with a passion for the ethical deployment of AI, I stand at the forefront of shaping how emerging technologies intersect with national resilience and security strategies. As the Head of Ethics and Partnerships at Respeecher, I focus on guiding ethical AI development. My role is centered around promoting the responsible use of AI, especially in synthetic media.