Over the phone, we’ve had people who know us well say, “You don’t sound too well. Maybe you should see a doctor.” Now voice-assisted technology can do the same.
Instead of just using our voices to turn on the lights or to ask Alexa what the weather is like, we may soon use them to tell us when it is time to see a doctor. The technology is already in place, and several companies are racing to create the next transformation in healthcare technology – voice biomarkers.
Evolution of Voice Technology
Over the last ten years, Artificial Intelligence (AI) and Machine Learning (ML) have been used to identify vocal biomarkers of conditions like dementia, depression, autism, pulmonary disease, and even heart disease. These technologies are getting better at picking up subtle differences in how people with certain conditions speak. A biomarker is an objective medical sign, observable outside the patient, that can be measured accurately to characterize a medical state. Voice biomarkers are non-invasive diagnostic tools that give clinicians immediate results, helping to initiate fast, proactive treatment.
Machine learning has given the field of voice diagnostics a big boost. Researchers feed thousands of voice samples into a computer to search for distinguishing features that can be reliably associated with a medical condition. This lets scientists detect aberrations in voice quickly and at scale.
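To make the idea concrete, here is a minimal, hypothetical sketch (not any company's actual system): extract a couple of simple acoustic features from labeled voice samples and fit a nearest-centroid classifier that separates "steady" from "tremulous" voices. The synthetic vowel generator, the 5 Hz tremor model, and all the names are illustrative assumptions; real systems use far richer features and models.

```python
import math
import random

random.seed(0)
SR = 8000  # assumed sample rate (Hz)

def synth_voice(f0, tremor, n=8000):
    """Crude stand-in for a sustained vowel: a sine whose pitch
    wobbles by a relative depth `tremor` at 5 Hz."""
    phase, out = 0.0, []
    for i in range(n):
        f = f0 * (1.0 + tremor * math.sin(2 * math.pi * 5 * i / SR))
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase))
    return out

def features(x, frame=800):
    """Two toy features: mean and spread of the frame-wise zero-crossing
    rate. A tremulous voice shows larger frame-to-frame pitch variation."""
    zcrs = []
    for s in range(0, len(x) - frame + 1, frame):
        fr = x[s:s + frame]
        zcrs.append(sum(1 for a, b in zip(fr, fr[1:]) if a < 0 <= b) / frame)
    mean = sum(zcrs) / len(zcrs)
    var = sum((z - mean) ** 2 for z in zcrs) / len(zcrs)
    return (mean, math.sqrt(var))

# Labeled training set: 0 = "steady" voices, 1 = "tremulous" voices.
train = [(features(synth_voice(120 + random.uniform(-5, 5), 0.0)), 0) for _ in range(10)]
train += [(features(synth_voice(120 + random.uniform(-5, 5), 0.3)), 1) for _ in range(10)]

def centroid(pts):
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

cents = {lbl: centroid([f for f, l in train if l == lbl]) for lbl in (0, 1)}

def classify(x):
    """Assign a new sample to the nearest class centroid in feature space."""
    f = features(x)
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(f, cents[l])))

print("predicted class:", classify(synth_voice(118, 0.3)))  # a tremulous test voice
```

The point of the sketch is the workflow, not the features: collect labeled samples, reduce each to measurable acoustic quantities, and let the model find the boundary that separates the conditions.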
The Progress in Voice Technology
Vocalis, a voice analysis company based in Israel and the United States, has already developed a smartphone app that can detect flare-ups of chronic obstructive pulmonary disease by listening for signs that users are short of breath when speaking. It has now developed a pilot version of a digital Covid-19 screening tool that is being tested around the globe. While it does not offer a definitive diagnosis, it helps doctors gauge urgency and sort patients into those who need home quarantine, in-person medical care, or just testing. How? Just by speaking to the app!
Dr. Charles Marmar, a psychiatrist at New York University, has used ML to identify 18 vocal features associated with post-traumatic stress disorder (PTSD) in male military veterans. The system he developed was able to identify those with PTSD with nearly 90% accuracy. Says Dr. Marmar, “Voice is enormously rich in terms of carrying our emotion signals. The rate, the rhythm, the volume, the pitch, the prosody [stress and intonation] — those features, they tell you whether a patient is down and discouraged, whether they’re agitated and anxious, or whether they’re dysphoric and manic.”
So voice can unlock our emotional state. But can it be used to unlock our medical state? Max Little, a researcher in ML and signal processing at the University of Oxford, has shown that it can. A decade ago, he conducted an experiment to detect Parkinson’s disease, which still lacks a definitive diagnostic test. He used audio recordings of 43 adults, 33 of whom had Parkinson’s disease. Each participant was recorded hundreds of times sustaining the syllable “ahhh”. Speech-processing algorithms analyzed the acoustic features of these recordings and identified characteristics such as breathiness and tremulous oscillations in pitch and timbre. Using these features, the system identified the speech samples of those who had Parkinson’s disease with nearly 99% accuracy.
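The study's exact algorithms are not given here, but the kind of measurement involved, period-to-period pitch variation on a sustained "ahhh", can be sketched in a few lines. This is an illustrative toy, assuming a pure tone as a stand-in for a vowel and a 6 Hz tremor model; clinical measures (such as jitter and shimmer) are defined more carefully.

```python
import math

SR = 16000  # assumed sample rate (Hz)

def sustained_vowel(f0, tremor_depth, seconds=1.0):
    """Crude 'ahhh': a sine with a slow (6 Hz) pitch tremor
    of the given relative depth."""
    n = int(SR * seconds)
    phase, out = 0.0, []
    for i in range(n):
        f = f0 * (1.0 + tremor_depth * math.sin(2 * math.pi * 6 * i / SR))
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase))
    return out

def pitch_periods(x):
    """Times between successive positive-going zero crossings, in samples,
    using linear interpolation for sub-sample accuracy."""
    crossings = []
    for i in range(1, len(x)):
        if x[i - 1] < 0 <= x[i]:
            crossings.append(i - 1 + (-x[i - 1]) / (x[i] - x[i - 1]))
    return [b - a for a, b in zip(crossings, crossings[1:])]

def relative_pitch_variation(x):
    """Spread of the pitch-period track relative to its mean:
    near zero for a steady voice, larger for a tremulous one."""
    p = pitch_periods(x)
    mean = sum(p) / len(p)
    var = sum((v - mean) ** 2 for v in p) / len(p)
    return math.sqrt(var) / mean

steady = relative_pitch_variation(sustained_vowel(130, 0.00))
tremor = relative_pitch_variation(sustained_vowel(130, 0.05))
print(f"steady: {steady:.4f}  tremulous: {tremor:.4f}")
```

Even this toy measure cleanly separates the steady voice from the tremulous one; a real system would combine many such measurements (breathiness, timbre, amplitude) into a classifier.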
Voice biomarkers can be used to detect psychiatric disorders, neurological disorders, respiratory disorders, cardiovascular disorders, traumatic brain injury, cognitive impairment, depression and anxiety disorders, and others. In Minnesota, the Mayo Clinic has already begun tracing vocal biomarkers to improve remote health monitoring of patients with heart disease. The voice biomarkers market can be segmented by type:
- Amplitude
- Error rate
- Frequency
- Vocal rise/fall time
- Voice tremor
- Pitch
- Others
GS Lab is one of many tech companies that have helped healthcare organizations develop platforms that let users record short voice samples and turn them into analytical data, which is then used to train data science models that develop a deeper understanding of diseases and underlying health conditions. End-to-end platforms enable clients to develop new perspectives on screening methods and early detection, and companies around the world are commercializing such products.
Where is voice technology headed?
The nascent field of vocal diagnostics is growing at a rapid rate. According to Market Research Future, the “market for vocal biomarkers is driven by factors such as the growing prevalence of psychological, neurological and other diseases affecting speech such as depression, Attention Deficit, and Disruptive Behavior Disorders, Parkinson’s disease (PD), Alzheimer’s disease, Huntington’s disease, Respiratory and Cardiovascular Disorders, traumatic brain injury (TBI) etc.” The firm predicts the market for vocal/voice biomarkers will reach $2.5 billion by the end of 2023, growing at about 14.5% during 2017–2023.
Where is this technology headed? In the future, your smartphone, your robot, or Siri and Alexa could become medical devices that listen to your voice and say, “Hey, you sound like you’ve got a cold.” And that’s just the beginning. Depending on the sophistication of the speech and emotion programming, these voice assistants may be able to pick up diagnostic clues in your voice for a range of diseases and disorders, and, depending on their programming, even order medication or call your doctor. In the future, your smartphone could be your preliminary doctor, offering frontline medical diagnostics. For now, though, moving from proof of concept to product with clinical-grade accuracy is still a work in progress.