Speech impairments aren’t a problem for Google’s new voice app

It understands what users are trying to say even when other people can’t.

Google is developing an app that can recognize what people with speech impairments are trying to say, making it easier for them to both talk to other people and use voice-controlled technology.

The challenge: About 7.5 million people in the U.S. have difficulty speaking and being understood due to a brain injury, a disease like amyotrophic lateral sclerosis (ALS), or some other condition.

Not only can this make it hard for them to communicate with other people, but it can also keep them from using speech-recognition technology, which is particularly disheartening given that many people with speech impairments could greatly benefit from the tech.

“For example, people who have ALS often have speech impairments and mobility impairments as the disease progresses,” Julie Cattiau, a product manager at Google AI, told the New York Times in September. “So it would be helpful for them to be able to use the technology to turn the lights on and off or change the temperature without having to move around the house.”

Users record themselves saying 500 phrases to train the app to recognize their particular speech.

Project Relate: Google’s Project Relate app aims to solve both of these problems. 

The AI can be individually trained to understand what people with speech impairments are trying to say, allowing them to take advantage of Google’s voice-controlled Assistant.

The app also has a “Listen” feature that transcribes users’ speech to text. They can then show the text to someone who’s having difficulty understanding them, or use the “Repeat” feature to repeat what they’re trying to say with a synthesized voice.

Beta testers: Google is currently recruiting adults with speech impairments that make them difficult to understand to beta test the app. Testers must use an Android phone, speak English, and live in Australia, Canada, New Zealand, or the United States.

Each tester will need to record themselves saying a list of 500 phrases to train the app to recognize their particular speech — that’ll take about 30 to 90 minutes, but it doesn’t have to be done in one session. They’ll then be asked to use the app and provide feedback on it to Google.
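Google hasn’t published the details of Project Relate’s training pipeline, but one plausible approach to this kind of personalization, fine-tuning a pretrained speech-recognition model on a single speaker’s recorded phrases, can be sketched in a few lines. The sketch below, in Python, uses the open-source Wav2Vec2 model as a stand-in; the model choice, the user_recordings data structure, and the hyperparameters are all illustrative assumptions, not details of Google’s actual system.

```python
# Minimal sketch: adapting a general speech-recognition model to one
# speaker by fine-tuning on their recorded phrases. Project Relate's
# pipeline is not public; the model, data format, and hyperparameters
# here are assumptions for illustration only.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Hypothetical per-user dataset: one (waveform, transcript) pair for each
# prompted phrase. Waveforms are 1-D float sequences sampled at 16 kHz.
user_recordings: list[tuple[list[float], str]] = []

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for waveform, transcript in user_recordings:
    # Turn the raw audio into model inputs and the transcript into labels.
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

However the real system is built, the core idea is the same: a small amount of one speaker’s data adapts a general model to their particular voice.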

The big picture: People with speech impairments aren’t the only ones being left behind by speech-recognition tech.

Today’s systems are typically trained to understand “the average North American English voice,” Frank Rudzicz, a computer scientist at the University of Toronto, told the NYT. As a result, they aren’t as adept at understanding some English speakers, such as those who have accents or who speak African American Vernacular English.

To make speech-recognition tech more universal, AI researchers will need to prioritize collecting data from people who speak “non-standard” English. Google’s Project Relate is a step in the right direction.

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.
