Alexa’s Ability to Mimic Dead Relatives May Be the Creepiest Thing Ever

It’s not available yet, and may never be

  • Amazon’s Alexa voice assistant can copy a voice with as little as one minute of audio.
  • You could ask Alexa to read a story in the voice of a dead parent. 
  • It’s the same idea as deep fakes, only used for good.

An Amazon Alexa box against a white background with potted plants and an anatomical drawing tool nearby.

Jan Antonin Kolar / Unsplash

Amazon Alexa's latest gimmick is learning to mimic the voice of a dead loved one, so they can speak to you from beyond the grave.

Alexa needs just a minute of spoken audio to convincingly mimic a voice. Amazon bills it as a comforting feature that can put you in touch with loved ones, but it could also be a pretty creepy experience. And it shows how easy it is to make deep fake audio that's good enough to fool us, even when the voice is one we know very well.

"Amazon has definitely entered a rather unique—and bizarre—territory with its announcement that Alexa would soon be able to learn and then use the voice of dead relatives," Bill Mann, privacy expert at Restore Privacy, told Lifewire via email. "For some people, it's not creepy at all. In fact, it can be rather touching."

Ghost in the Machine

As part of its annual re:MARS conference, Amazon showed off the feature in a short video. In it, a kid asks Alexa if grandma can keep reading him "The Wizard of Oz," every child's favorite keynote-friendly public domain work. And it's quite a touching moment. It's hard not to feel human emotions when granny starts reading.

"Humans struggle with mortality, especially in Western culture. For centuries we have tried to find ways to memorialize the dead, from death masks, to locks of hair, to old photos, to watching old movies," Andrew Selepak, a social media professor at the University of Florida, told Lifewire via email. "Deepfakes use the latest technology to create a new death mask of a deceased loved one. But, depending on one's perspective, is it creepy or a way to memorialize and hold on to someone you love after they have died?"

But a memento mori can be both comforting and creepy. A family member or friend is dead, yet you can still hear them speaking. It doesn't help that Alexa has a history of odd, and sometimes terrifying, behavior. In 2018, as NYT opinion columnist Farhad Manjoo was getting into bed, his Amazon Echo "began to wail, like a child screaming in a horror-movie dream."

Soon after, Amazon acknowledged that Alexa sometimes laughed out loud, which, along with teens and cellars, is horror movie 101. 

One can only wonder how you might feel if Alexa pulled the same tricks in grandma's voice. 

Deep Fake

The apparent ease with which Alexa learns to mimic a voice leads us to more nefarious uses of voice cloning: deep fakes. 

A child sitting on a bed, reading a book.

Annie Spratt / Unsplash

"Deepfake audio is not new, even if it is little understood and little known. The technology has been available for years to recreate an individual's voice with artificial intelligence and deep learning using relatively little actual audio from the person," says Selepak. "Such technology could also be dangerous and destructive. A disturbed individual could recreate the voice of a dead ex-boyfriend or girlfriend and use the new audio to say hateful and hurtful things."

That's just in the context of Alexa. Deep fake audio could go far beyond that, convincing people that prominent politicians believe things they don't, for example. On the other hand, the more accustomed we become to these deep fakes—perhaps in the form of these Alexa voices—the more skeptical we may be of the more nefarious ones. Then again, given how easy it is to spread lies on Facebook, perhaps not.

Amazon has not said whether this feature is coming to Alexa or if it is just a technology demo. I kind of hope it does. Tech is at its best when it is used in a humanistic manner like this, and even though the easy reaction is to call it creepy, as Selepak says, it really isn't that much different from watching old videos or listening to saved voicemails, like a character in a lazily-scripted TV show.

And if the tech for deep fakes is readily available, why not use it to comfort ourselves?
