2022 Awardee: Mark V. Albert, PhD

Project: Pilot development and assessment of a precision gesture-to-speech system for speech-impaired individuals with limited mobility

Artificial intelligence has recently enabled sign language to be translated into spoken language, both from video and through wearable gloves. However, many people who are unable to speak also have motor impairments that make traditional sign language impossible. What can we do for them? Mark V. Albert, PhD, began working on this problem after meeting Hannah Thompson, a woman with severe cerebral palsy, through contacts at the Shirley Ryan AbilityLab. He has developed a gesture-to-speech system that uses mobile phones as a widely accessible prototype development platform. Unlike other gesture-to-speech systems, his employs contemporary deep learning techniques, including transfer learning and continual learning models, to build a system that can be readily tailored to an individual's movement capabilities rather than requiring a fixed, predetermined set of movements. C-STAR funding will enable this system to be tuned and tested specifically with speech-impaired participants.

Dr. Albert leads the Biomedical Artificial Intelligence Lab at the University of North Texas. His lab uses machine learning to advance medicine, with a history in wearable-device analytics that aid clinicians in treating mobility disorders, as well as broader applications of AI to improve health outcomes. In service, Dr. Albert also oversees more than 2,000 graduate students as Associate Chair for Graduate Studies in the Department of Computer Science and Engineering at UNT. He aims to be a bridge between the explosion of AI interest and tools and the health care needs that such expertise can meet.
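To make the transfer-learning idea described above concrete, here is a minimal sketch in PyTorch of how a gesture classifier might be personalized: a feature encoder pretrained on generic movement data is frozen, and only a small output head is fit to a user's own gesture recordings. This is purely illustrative; the architecture, names, and data shapes are assumptions, not the project's actual code.

```python
# Illustrative sketch only: all names and shapes are hypothetical.
import torch
import torch.nn as nn

class GestureEncoder(nn.Module):
    """Small 1D-CNN over windows of phone accelerometer/gyroscope data."""
    def __init__(self, in_channels=6, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (batch, feat_dim)
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

# Transfer learning: start from an encoder pretrained on generic movement
# data, freeze it, and train only a small head on the user's own gestures.
encoder = GestureEncoder()
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint
for p in encoder.parameters():
    p.requires_grad = False

n_user_phrases = 10  # phrases this particular user wants to trigger
head = nn.Linear(64, n_user_phrases)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def personalize(user_windows, user_labels, epochs=20):
    """Fit the head to a user's recorded gestures.

    user_windows: (N, 6, T) tensor of sensor windows
    user_labels:  (N,) tensor of phrase indices
    """
    for _ in range(epochs):
        feats = encoder(user_windows)          # frozen, generic features
        loss = loss_fn(head(feats), user_labels)
        optimizer.zero_grad()
        loss.backward()                        # updates only the head
        optimizer.step()
```

Because only the small head is trained, a few examples of whatever movements the individual can reliably produce may suffice to map them to spoken phrases, which is the appeal of tailoring over fixed, predetermined gesture sets.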