In the United States, 20-30% of children diagnosed with an Autism Spectrum Disorder (ASD) are unable to communicate their wants, needs, and thoughts verbally, according to the Centers for Disease Control and Prevention (CDC). Smart tech can help these children become more independent at home and even at school. What is especially exciting about smart tech is its potential to help people who are visually and speech impaired but can hear, or who are hearing impaired but can read text. Advances in artificial intelligence have made such smart devices possible, and we are only seeing the beginning of this revolution.
IoT Personal Assistants for more independence
Text-to-speech devices, or voice generators, have existed for decades. Now imagine complementing the capabilities of a voice generator with a smart assistant like the Amazon Echo or Google Home. These smart assistants use AI to power their voice controls, completing actions based on the voice command given. A child or adult with a speech disorder can use a voice generator to issue voice commands to a personal assistant.
For example, you can ask the smart assistant to turn off the lights using the text-to-speech device. Imagine ordering pizza, turning on the television, and more without having to walk to the television or phone to press a button. People who are visually and speech impaired can also use speaking devices to ask these personal assistants to search for information online or call someone.
Tap to Alexa on the Amazon Echo Show
Amazon added a new feature to the Amazon Echo Show device, called Tap to Alexa, which allows people with speech impairments to query Alexa without using their voice. All they need to do is tap the screen and choose from the preset menu options. You can also customize requests on the Echo Show to make it do things like turn on the lights.
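The idea behind such a preset menu is simple: each on-screen tile stands in for a spoken phrase. The sketch below illustrates that mapping in Python; the tile labels, phrases, and function names are hypothetical, not Amazon's implementation.

```python
# Illustrative sketch (not Amazon's implementation): a customizable
# "tap" menu that maps on-screen tiles to the spoken phrases they
# replace, so the assistant can act without hearing the user's voice.
PRESET_TILES = {
    "Lights on": "turn on the living room lights",
    "Weather": "what's the weather today",
    "Call Mom": "call Mom",
}

def handle_tap(tile_label: str) -> str:
    """Return the command phrase the assistant should act on."""
    try:
        return PRESET_TILES[tile_label]
    except KeyError:
        raise ValueError(f"No preset configured for tile: {tile_label!r}")

print(handle_tap("Lights on"))  # -> turn on the living room lights
```

Customizing a request amounts to adding another entry to the mapping.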
The future is gesture control and mind reading
Developers are working on new ways that devices like the Amazon Echo Show can be used by people with speech disorders. For example, Abhishek Singh, a researcher from NYU, trained an AI to learn the basics of sign language by standing and gesturing in front of a webcam. Once the system could read the gestures and turn them into text, he connected it to Google's text-to-speech software to read the words aloud. An Amazon Echo he set up listened and responded vocally to those words, and its spoken response was also converted to text that could be read.
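The pipeline described above has three stages: classify gestures into labels, assemble the labels into text, and hand the text to a text-to-speech engine. Here is a minimal sketch of that flow; the classifier is a stub (a real system would run a trained vision model on webcam frames), and all names are illustrative.

```python
# Hypothetical sketch of the sign-to-speech pipeline described above:
# (1) classify each webcam frame into a gesture label, (2) accumulate
# labels into text, (3) hand the text to a text-to-speech engine.

def classify_gesture(frame) -> str:
    """Stub classifier: a real one would run a neural network on pixels.
    For illustration, the frame dict carries its label directly."""
    return frame["label"]

def frames_to_text(frames) -> str:
    """Join per-frame gesture labels into a sentence."""
    return " ".join(classify_gesture(f) for f in frames)

def speak(text: str) -> None:
    """Stand-in for a TTS engine that would read the text aloud
    (e.g. the pyttsx3 library could be used here -- an assumption)."""
    print(f"[TTS] {text}")

frames = [{"label": "what"}, {"label": "time"}, {"label": "is"}, {"label": "it"}]
speak(frames_to_text(frames))
```

The Echo then simply hears the synthesized audio like any other spoken query, which is what makes the approach work without modifying the assistant itself.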
Another example comes from Dr. Nima Mesgarani of Columbia University's Zuckerman Institute, who, together with his fellow researchers, announced that a brain-computer interface had been used to turn brainwave patterns into speech with the help of a speech synthesizer. They did this by analyzing the brain waves of patients with epilepsy as the patients listened to speech, looking for patterns that algorithms could use to recreate it. The algorithm reads the patterns and speaks out the words. The researchers hope that one day a machine will be able to take the brain signals generated by a thought and turn them into synthesized verbal speech.
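At its core, this kind of decoding means learning a mapping from recorded neural activity to audio features. The toy example below illustrates that idea with a linear decoder fit to simulated data; it is not the Columbia team's method, which relies on deep networks and a vocoder, and every variable here is synthetic.

```python
# Toy illustration of neural decoding (not the Columbia team's method):
# learn a linear map from simulated "brain activity" features to
# spectrogram-like audio features, then reconstruct audio features
# from new, unseen activity.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neural, n_audio = 200, 50, 16

# Simulated data: audio features are a hidden linear function of the
# neural features plus noise (a stand-in for recorded brain responses).
true_W = rng.normal(size=(n_neural, n_audio))
X = rng.normal(size=(n_samples, n_neural))                      # neural features
Y = X @ true_W + 0.01 * rng.normal(size=(n_samples, n_audio))   # audio features

# Fit the decoder by least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Decode audio features from unseen neural activity and measure error
# against the noiseless ground truth.
X_new = rng.normal(size=(5, n_neural))
err = np.mean((X_new @ W_hat - X_new @ true_W) ** 2)
print(f"mean squared decoding error: {err:.5f}")  # small -> map recovered
```

A real system faces the much harder problems of noisy electrode recordings and turning the predicted features back into audible speech, which is where the vocoder comes in.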
Technology changes so swiftly that it is hard to predict the exact form it will take in the next 3 to 5 years. But what is clear is that in the future people with speech disorders will be able to live more independently than they do now.