How AI Could Make Computer Speech More Natural

Train your own software

Key Takeaways

  • Companies are racing to find ways to make computer-generated speech sound more realistic. 
  • NVIDIA recently unveiled tools that can capture the sound of natural speech by letting you train an AI with your own voice. 
  • Intonation, emotion, and musicality are the features that computer voices still lack, one expert says.
Someone working with a voice recording on a laptop computer.

CoWomen / Unsplash

Computer-generated speech might soon sound a lot more human. 

Chipmaker NVIDIA recently unveiled tools that can capture the sound of natural speech by letting you train an AI with your own voice. The software can also deliver one speaker’s words in another person’s voice. It’s part of a burgeoning push to make computer speech more realistic. 

"Advanced voice AI technology is allowing users to speak naturally, combining many inquiries into a single sentence and eliminating the need to repeat details from the original query constantly," Michael Zagorsek, the chief operating officer of speech recognition company SoundHound, told Lifewire in an email interview. 

"The addition of multiple languages, now available on most voice AI platforms, makes digital voice assistants accessible in more geographies and for more populations," he added. 

Robospeech Rising

Amazon’s Alexa and Apple’s Siri sound a lot better than computer speech from even a decade ago, but they won’t be mistaken for authentic human voices anytime soon. 

To make artificial speech sound more natural, NVIDIA’s text-to-speech research team developed a model called RAD-TTS. The system lets individuals train a text-to-speech (TTS) model on their own voice, capturing its pacing, tonality, timbre, and other qualities. 

The company used its new model to build more conversational-sounding voice narration for its I Am AI video series. 

"With this interface, our video producer could record himself reading the video script and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor—tweaking the synthesized speech to emphasize specific words and modifying the pacing of the narration to better express the video’s tone," NVIDIA wrote on its website.

Harder Than It Sounds

Making computer-generated speech sound natural is a tricky problem, experts say. 

"You need to record hundreds of hours of someone’s voice to create a computer version of it," Nazim Ragimov, the CEO of the text-to-speech software company Kukarella, told Lifewire in an email interview. "And the recording must be of high quality, recorded in a professional studio. The more hours of quality speech loaded and processed, the better the result."

Intonation, emotion, and musicality are the features that computer voices still lack, Ragimov said.

If AI can add these missing links, computer-generated speech will be "indistinguishable from the voices of real actors," he added. "That’s a work in progress. Other voices will be able to compete with radio hosts. Soon you’ll see voices that can sing and read audiobooks."

Speech technology is becoming more popular in a wide range of businesses. 

"The auto industry has been a recent adopter of voice AI as a way to create safer and more connected driving experiences," Zagorsek said.

"Since then, voice assistants have become increasingly ubiquitous as brands are seeking ways to improve customer experiences and meet the demand for easier, safer, more convenient, efficient, and hygienic methods of interacting with their products and services."

Typically, voice AI converts queries to responses in a two-step process that begins by transcribing speech into text using automatic speech recognition (ASR) and then feeding that text into a natural language understanding (NLU) model.
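The two-step process described above can be sketched in code. This is a minimal, hypothetical illustration: the `transcribe` and `understand` functions below are stand-ins invented for this example, not part of any real ASR or NLU library.

```python
# Hypothetical sketch of the conventional two-step voice AI pipeline:
# step 1 transcribes audio to text (stubbed here), step 2 maps that
# text to an intent with a toy keyword-based NLU. A real system would
# use a trained ASR engine and NLU model in place of these stand-ins.

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for automatic speech recognition (ASR)."""
    # A real implementation would decode the audio; we return a fixed string.
    return "what is the weather in paris"

def understand(text: str) -> dict:
    """Toy natural language understanding (NLU): keyword intent match."""
    if "weather" in text:
        return {"intent": "get_weather", "location": text.rsplit(" ", 1)[-1]}
    return {"intent": "unknown"}

def handle_query(audio_bytes: bytes) -> dict:
    # The two steps run strictly in sequence: ASR first, then NLU
    # on the finished transcript.
    return understand(transcribe(audio_bytes))

print(handle_query(b"\x00"))  # {'intent': 'get_weather', 'location': 'paris'}
```

The key property of this design is that the NLU step cannot begin until the ASR step has produced a complete transcript.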

Someone recording voice audio in a home studio.

Soundtrap / Unsplash

SoundHound’s approach combines these two steps into a single process that tracks speech in real time. The company claims this technique allows voice assistants to understand the meaning of user queries even before the person has finished speaking.
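The idea of understanding speech before the utterance ends can be illustrated with a toy streaming loop. This is not SoundHound's actual implementation; the trigger table and function below are invented for illustration only.

```python
# Illustrative sketch (not SoundHound's real system) of single-pass,
# incremental understanding: intent matching runs on each word as it
# streams in, so a match can fire mid-utterance instead of waiting
# for a complete transcript. The trigger words here are invented.

from typing import Iterable, Optional

INTENT_TRIGGERS = {
    "play": "play_music",
    "weather": "get_weather",
    "navigate": "start_navigation",
}

def understand_streaming(words: Iterable[str]) -> Optional[str]:
    """Return an intent as soon as a trigger word arrives."""
    for word in words:
        intent = INTENT_TRIGGERS.get(word)
        if intent is not None:
            # Matched before the full sentence was consumed.
            return intent
    return None

# The intent resolves at the word "weather", before "like today" arrives.
print(understand_streaming(["what", "is", "the", "weather", "like", "today"]))
# get_weather
```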

Future advancements in computer speech include a range of connectivity options, from embedded-only (no cloud connection required) to hybrid (embedded plus cloud) to cloud-only, which "will give more choice to companies across industries in terms of cost, privacy, and availability of processing power," Zagorsek said. 

NVIDIA said its new AI models go beyond voiceover work. 

"Text-to-speech can be used in gaming, to aid individuals with vocal disabilities, or to help users translate between languages in their own voice," the company wrote. "It can even recreate the performances of iconic singers, matching not only the melody of a song but also the emotional expression behind the vocals."
