How AI Could Help Give People With Speech Challenges Their Voices Back

New invention powers eyeglasses that read lips

  • Researchers have developed eyeglasses that let AI recognize unspoken commands.
  • AI is being used in many ways to help those with speech challenges. 
  • AI-driven software also lets users mimic natural voices.
Portrait of someone wearing futuristic, artificial intelligence powered eyeglasses.

Yana Iskayeva / Getty Images

Artificial intelligence (AI) is getting a lot of attention at the moment for the way it can let you talk with chatbots, but the technology could also help millions of Americans living with speech challenges. 

A Cornell University researcher has invented eyeglasses that use AI to recognize up to 31 unspoken commands based on lip and mouth movements. It's one of a growing number of ways that AI is being used to help people with speech disorders express themselves more easily. 

"AI can assist in reducing communication barriers and enable individuals with speech issues to actively participate in society," Joris Castermans, the CEO of AI-powered speech technology company Whispp, told Lifewire in an email interview. "For instance, AI-powered speech recognition systems can help individuals who have difficulty speaking or have speech impediments, making it simpler for them to communicate with others. This, in turn, can provide them with access to a broader range of services, such as healthcare, education, and employment."

AI Eyeglasses That Help With Speech

The Cornell gadget uses little power, runs on a smartphone, and, its inventors claim, requires just a few minutes of user training data before it recognizes commands. EchoSpeech, as the device is called, could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or a quiet library.

Using a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm, also developed by Cornell researchers, analyzes these echo profiles in real-time with about 95% accuracy.
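The paper describes a deep learning model trained on these echo profiles; the details aren't public in this article, but the core idea of matching an observed echo profile to one of 31 learned commands can be illustrated with a toy nearest-neighbor sketch (all names and dimensions here are hypothetical stand-ins, not EchoSpeech's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of 31 silent commands has a characteristic
# "echo profile" -- modeled here as a 64-sample vector. EchoSpeech's
# real profiles come from sonar reflections measured across the face.
NUM_COMMANDS, PROFILE_LEN = 31, 64
templates = rng.normal(size=(NUM_COMMANDS, PROFILE_LEN))

def recognize(echo_profile: np.ndarray) -> int:
    """Return the index of the command whose stored template best
    matches the observed echo profile (nearest neighbor by Euclidean
    distance). EchoSpeech itself uses a trained deep learning model
    rather than this simple template match."""
    dists = np.linalg.norm(templates - echo_profile, axis=1)
    return int(np.argmin(dists))

# A noisy observation of command 7 is still matched to command 7.
observation = templates[7] + rng.normal(scale=0.1, size=PROFILE_LEN)
print(recognize(observation))  # -> 7
```

The real system's advantage over a template match is that a trained network generalizes across lighting-independent but person-specific mouth movements, which is why a few minutes of per-user training data are needed.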

"For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer," Ruidong Zhang, one of EchoSpeech's creators, said in the news release. "It could give patients their voices back."

EchoSpeech is just one of many recent advances in AI-driven speech assistance. A new national institute that develops AI systems to help young children with speech or language processing challenges has been established at the University at Buffalo, thanks to a five-year, $20 million grant from the National Science Foundation. The institute includes researchers from nine universities.

The AI Institute for Exceptional Education will focus on serving the millions of children nationwide who, under the Individuals with Disabilities Education Act, require speech and language services. Researchers will develop the AI Screener, which is meant to help identify potential speech or language impairments, and the AI Orchestrator, which will act as a virtual teaching assistant.

"There are many well-known reasons that children develop language problems, like autism or intellectual disability," Carol Miller, a member of the institute, said in a press release. "But many children develop language problems that have no clear cause, and these children often fall through the cracks of our educational system. The AI Screener will be able to evaluate more video footage than a speech-language pathologist or teacher could possibly watch."

The Challenges of AI Voice Assistance

Another invention that promises to change the lives of users with speech disorders is a real-time AI voice changer based on speech-to-speech technology. It takes voice input, retains some aspects of that input, and transforms it into another voice.

"Users can not only select the output voice they want their voice to be changed into but also build their own custom voices in order to sound like they want to sound," Heath Ahrens, the CEO of a company that makes AI-powered voice-changing technology, said in an email. "This allows them to build a digital version of their own voice or alter it to sound more like they want to sound."
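Conceptually, speech-to-speech conversion separates what was said (which is kept) from how it sounds (which is swapped for the target voice). A deliberately simplified, hypothetical sketch of that split, not any vendor's actual model:

```python
from dataclasses import dataclass, replace

# Hypothetical, simplified model of speech-to-speech conversion:
# an utterance is decomposed into linguistic content (retained)
# and speaker characteristics (replaced with the target voice's).

@dataclass(frozen=True)
class Utterance:
    text: str          # linguistic content -- retained
    pitch_hz: float    # speaker characteristic -- replaced
    timbre: str        # speaker characteristic -- replaced

def convert(source: Utterance, target_voice: Utterance) -> Utterance:
    """Keep the source's content, adopt the target voice's traits."""
    return replace(source, pitch_hz=target_voice.pitch_hz,
                   timbre=target_voice.timbre)

me = Utterance("hello there", pitch_hz=95.0, timbre="hoarse")
custom_voice = Utterance("", pitch_hz=180.0, timbre="clear")
out = convert(me, custom_voice)
print(out.text, out.pitch_hz)  # -> hello there 180.0
```

Real systems do this disentangling with neural encoders over audio features rather than explicit fields, which is what lets users build a custom target voice from recordings of themselves.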

A person in a wheelchair using a smartphone.

RyanJLane / Getty Images

In the future, AI could help eliminate much of the frustration of speech disorders, predicted Castermans. One barrier to this goal is that since human communication mainly occurs in real time, AI-powered communication aids also need extremely low latency, or time delay.

"However, high-performing AI models are typically large and complicated, requiring expensive computing power to achieve low latency," he added. "We believe that establishing a systematic methodology to train tiny yet high-performing AI models will facilitate more AI integrations and contribute to creating a more inclusive society."
