Author: Anju Mohan

Published on: June 26, 2020

Category: Industry Updates


Modern innovations combine state-of-the-art technology with knowledge of how the brain produces speech to tackle an important challenge faced by many patients. Researchers have brought together several advances in machine learning to interpret patterns of brain activity and work out what someone wants to say, even if they are physically unable to speak.
We all know Stephen Hawking. Hawking had a form of motor neuron disease that paralyzed him and took away his verbal speech, but he continued to communicate using a synthesizer. He initially communicated through a handheld switch, and eventually by using a single cheek muscle. This worked for Hawking because he could still select and trigger words manually. Other individuals with ALS, locked-in syndrome, or stroke may not have even that ability to control a computer. Researchers are now moving closer to the goal of helping those patients: 'A Voice for the Voiceless'. Capturing an individual's thoughts as speech raises privacy issues, however, so a system that works from brain patterns recorded while a patient listens to speech would be a great success.


How Brain Scans Are Translated To Speech

Brain-computer interfaces translate brain patterns into machine commands. The auditory cortex is the area of the brain that is activated when we speak, listen, and even think. Speech can be reconstructed from activity in the human auditory cortex, which opens up the possibility of a speech neuroprosthetic: direct communication with the brain. In these studies, patients' auditory cortex activity was tracked as they listened to sentences read aloud by a variety of people.
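As a rough sketch of what such decoding involves, the snippet below fits a linear model that maps hypothetical auditory-cortex features (for example, per-electrode high-gamma band power) to spectrogram frames of the heard speech. The data, shapes, and feature choices are all assumptions made for illustration; this is not the researchers' actual pipeline.

```python
# A minimal decoding sketch on synthetic data (not the published method):
# map auditory-cortex features to speech-spectrogram frames with ridge
# regression, a common baseline for this kind of reconstruction.
import numpy as np
from sklearn.linear_model import Ridge

n_samples, n_electrodes, n_spec_bins = 5000, 64, 32  # assumed sizes

# Hypothetical neural features: high-gamma power per electrode, one row
# per time step (random stand-ins here).
neural_features = np.random.randn(n_samples, n_electrodes)

# Hypothetical target: log-spectrogram frames of the sentences the
# patient heard, time-aligned with the neural features.
spectrogram = np.random.randn(n_samples, n_spec_bins)

# Fit a linear decoder: neural activity at each moment -> spectrogram frame.
decoder = Ridge(alpha=1.0).fit(neural_features, spectrogram)

# At "test" time, predicted frames could be resynthesized into audio.
predicted_frames = decoder.predict(neural_features[:10])
print(predicted_frames.shape)  # (10, 32)
```

In practice, the published systems go further than a linear map: they feed the decoded parameters through deep networks and a vocoder to produce audible speech, as described next.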

A vocoder is a computer algorithm that synthesizes speech from a compact set of parameters. Brain patterns from the auditory cortex are decoded by the AI-driven vocoder to reconstruct speech, and the resulting voice is intelligible: listeners could understand and repeat this robo-voice about 75% of the time.

In the studies, patients listened to speakers reading the digits zero to nine, and their brain patterns were fed into the AI-enabled vocoder to produce speech. The result was a robotic voice from which listeners could correctly identify the spoken digits and, to great surprise, could even tell whether the original speaker was male or female (a toy sketch of this digit task follows below). To be truly useful, though, the machine must decode what a patient intends to say, not just what they hear.

There are certain limitations. The brain areas for hearing, imagining speech, and speaking overlap, so the brain signals each of these produces differ from one another. Another limitation is that the system is patient-specific: every individual produces different brain signals while listening to speech. Much research is ongoing to overcome these limitations and, in the future, to find a decoder that works for all individuals. No doubt we can hopefully await an AI-enabled vocoder that produces a voice for the voiceless.
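The toy example below mimics the digit task on synthetic data: a small neural network classifies which spoken digit (0-9) was "heard" from simulated cortical features, and reports an accuracy figure in the same spirit as the 75% intelligibility result. Everything here (feature counts, noise level, model choice) is an assumption for illustration, not the study's method.

```python
# Toy digit-identification task on synthetic "neural" data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials, n_features = 1000, 64  # assumed trial and feature counts

# Which digit (0-9) was heard on each trial.
digits = rng.integers(0, 10, size=n_trials)

# Fake cortical features: a digit-dependent pattern plus noise.
patterns = rng.standard_normal((10, n_features))
X = patterns[digits] + 0.8 * rng.standard_normal((n_trials, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    X, digits, test_size=0.3, random_state=0
)

# A small neural network stands in for the study's deep-learning decoder.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"Digit identification accuracy: {clf.score(X_test, y_test):.0%}")
```

Because the digit-dependent patterns here belong to one simulated "patient", a model trained on them will not transfer to a different set of patterns, which mirrors the patient-specific limitation described above.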

For a free consultation on IoT, Enterprise, or Telecom services, contact us at sales@thinkpalm.com

Author Bio:


Anju Mohan C K: Anju works as a Software Engineer at ThinkPalm Technologies with the ETG department. Her hobbies include travelling and gardening.
