Respeecher Interviewed in LinkedIn Course ‘Understanding the Impact of Deepfake Videos’

Feb 25, 2021 7:52:37 AM

Deepfake videos are a subject of increasing interest and scrutiny worldwide. To this effect, LinkedIn Learning has produced a course about synthetic media called “Understanding the Impact of Deepfake Videos” that aims to provide a comprehensive overview of deepfake technology.

Respeecher was invited to provide insight into deepfake audio: how it’s produced, and its current and future use cases. Read on to find out the exact section of the course we’re featured in and watch our own Grant Reaber dive into the topic.

Senior staff instructor Ashley Kennedy explains what deepfake videos and deepfake audio are, and delves into the impact of this technology, its dangers and benefits, how people can learn to identify synthetic media, and what to do when we suspect such content.

Ashley Kennedy is a Managing Staff Instructor at LinkedIn, leading the team of Staff Instructors in the Business and Creative Libraries at LinkedIn Learning. She is involved in course creation, instructional design, and course production and she also created her own courses on various topics (including video, filmmaking, storytelling, social media marketing, education and more).

Ashley was kind enough to interview Respeecher’s Chief Research Officer Grant Reaber in the section called “What is Deepfake Audio”.

Generally speaking, synthetic media (also known as ‘deepfakes’) consists of manipulated media (video and/or audio) in which AI replaces a person with someone else’s likeness, making it appear as if they said or did something that never happened.

While some ethical problems regarding synthetic speech are simple, others are more difficult. We don’t simply trust our gut to tell us what we should do: our decision-making is guided by a set of ethics principles you can read all about on the Respeecher FAQ page.

Companies in the synthetic media landscape like Respeecher aim to revolutionize the way content is produced by bringing more flexibility to industries like entertainment, video games, advertising, and more. Even so, deepfakes can be used unethically to mislead audiences. This is why part of our mission at Respeecher has always been to:

  • Educate the public about the capabilities of synthetic speech technology.
  • Develop automatic detection algorithms that can detect synthetic speech even if it has not been watermarked by us.
  • Work with gatekeepers of content such as Facebook and YouTube to limit the harm of voice cloning by bad actors through prominent labeling of all synthetic content and banning of particularly unethical content.

So you can imagine our excitement when Ashley reached out to us.

The course “Understanding the Impact of Deepfake Videos” covers skills like compositing, video production, media psychology, and visual effects. So far, over 65,000 members have liked the content and over 180,000 have enrolled in the course. And these numbers keep growing.

A better understanding of voice synthesis and its benefits

In the section called “What is deepfake audio” from the first part of the course, you can watch our own Grant Reaber talk about voice synthesis, give a few examples of synthetic audio content, and demonstrate the technology using the voices of famous politicians like Barack Obama and Richard Nixon.

With speech-to-speech (STS) voice conversion technology, the speech patterns of the original speaker are preserved, but their voice is replaced with another. This gives creators more control over the emotions being expressed: STS delivers a natural performance, keeping the inflection and other characteristics of human speech.

Expressivity and this subtle control over the performance is one reason to use voice conversion. So, basically, an actor can express themselves just as much through voice conversion as they could if they were using their own voice. It's just sort of like an additional creative tool where you can change the voice of the performer.


Grant Reaber, Chief Research Officer, Respeecher

Respeecher works with movie studios and video game producers, and helps content creators generate speech that's indistinguishable from the original speaker. Uses of voice cloning include:

  • making changes to dialogue in films;
  • making voiceover updates for educational videos and other media;
  • improving voice quality for individuals who require enhancements;
  • reducing foreign accents in real time;
  • providing a voice to people who have lost the ability to speak.

Respeecher will never use the voice of a private person or an actor without permission, but we can use, for example, the voices of historical figures and politicians. We also choose to watermark our audio.
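Respeecher does not publish the details of its watermarking method. Purely as an illustration of the general idea, here is a toy sketch of how a tag can be hidden in audio and later detected; the 8-bit tag, function names, and least-significant-bit scheme are all hypothetical, not Respeecher's actual technique.

```python
# Toy illustration only: hides a short tag in the least-significant
# bits (LSBs) of 16-bit PCM sample values. Real audio watermarks are
# far more robust; this just shows the embed/detect concept.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed_watermark(samples, tag=WATERMARK):
    """Overwrite the LSB of the first len(tag) samples with the tag bits."""
    marked = list(samples)
    for i, bit in enumerate(tag):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set tag bit
    return marked

def detect_watermark(samples, tag=WATERMARK):
    """Return True if the first len(tag) sample LSBs spell out the tag."""
    return [s & 1 for s in samples[: len(tag)]] == tag

# 'audio' stands in for decoded 16-bit PCM sample values.
audio = [1000, -2000, 3000, 150, -75, 42, 8191, -8192, 500, 600]
marked = embed_watermark(audio)
print(detect_watermark(marked))  # True
print(detect_watermark(audio))   # False for this unmarked clip
```

Flipping only the least-significant bit changes each sample by at most one quantization step, which is inaudible; the trade-off is that such a naive mark is destroyed by lossy compression, which is one reason detection algorithms that work without a watermark matter too.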


Respeecher started from the idea of cloning human speech and swapping voices for the entertainment industry: filmmakers, TV producers, game developers, advertisers, podcasters, and content creators of all types.

The company was founded in 2018 by three friends and colleagues: Alex Serdiuk, Dmytro Bielievtsov, and Grant Reaber. In October 2019, we completed the Comcast NBCUniversal LIFT Labs Accelerator, and in March 2020 Respeecher received $1.5 million in funding.

We'd love to hear your voice. (We promise not to replicate it unless you give us permission.) If you want to learn more about our technology or see how we can partner up on a project, drop us a line.
