Why Building Chatbots from Dead People is a Bad Idea

What would Marilyn Monroe say?

Key Takeaways

  • Microsoft is reportedly working on technology that could one day allow chatbots to be based on the personalities of dead people. 
  • A recent patent granted to the company describes building a bot from data posted on the web, including images and social media posts. 
  • Some observers say that building chatbots based on real personalities could contribute to the rise of fake news.

New technology could one day allow chatbots to emulate the personalities of dead people. But some observers say that such bots could contribute to the rise of fake news. 

The Independent reports that Microsoft has been granted a patent on technologies that would allow the company to build a bot using people’s "images, voice data, social media posts, [and] electronic messages." People could even make a bot of themselves using the methods in the patent. Not so fast, some experts say. 

"Technology that would allow companies like Microsoft, or corrupt governments or organizations, to create chatbots from deceased individuals is a frightening possibility," Andrew Selepak, a social media professor at the University of Florida, said in an email interview.

"It would allow these companies or governments to alter how we view the deceased and change what the deceased has said and done."

Build a Bot of Yourself

Microsoft’s patent describes creating a bot based on digital information. "The specific person [who the chatbot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.," the patent application says.

The patent also suggests that living users could program a replacement of themselves, saying, "The specific person may also correspond to oneself (e.g., the user creating/training the chatbot)." 


Microsoft’s bot technology is only the latest in a long line of highly theoretical and possibly sketchy experiments to keep people’s personalities alive after death. For example, Nectome is working on preserving the brain for memory extraction using the chemical preservative glutaraldehyde.

Another project underway at MIT, called Augmented Eternity, maps an individual based on their digital interactions and allows them to be represented as a bot. "For example, a corporate lawyer can provide her expertise to a network of clients for a reduced cost compared to her classic in-person rate sheet," according to the project’s website.

"Her clients, in this case, have the ability to 'borrow the identity' of the lawyer for an hour and consult it as a chatbot. Our machine intelligence framework will learn from each interaction and respond to the user with a high degree of relevance."

Who Gets to Live on as a Bot?

The idea of bots based on real people raises a host of thorny ethical problems. Selepak said that one problem is that famous people would be more likely to be recreated as bots. "Anyone then with the technology could change the words and actions of a deceased person who is no longer around to refute them," he added. 

Brad Smith, the CEO of software firm Wordable, agreed with Selepak’s views on chatbots in an email interview, saying it’s "not okay to make people believe they are talking to someone who is dead." 


Selepak pointed out that bots based on real personalities are just an extension of existing deepfake technology, which can create videos and photos that look like real people. 

"The possibility of creating a bot that could do this would alter not just a video or an event, but an entire person’s life and their influence on our society," he said. "Adolf Hitler could be recreated and refute the Holocaust. Ronald Reagan could be recreated and support communism."

If people are ever recreated as chatbots, they may have to watch what they say. Facebook recently suspended the account of a popular South Korean chatbot after complaints that it used hate speech.

Lee Luda, a bot with the persona of a 20-year-old female university student, reportedly made derogatory remarks towards sexual minorities. 

I’m all in favor of bots of specific historical figures like William Shakespeare, for example. But it’s all too possible that people will also create personas based on darker characters such as Hitler.
