Opti-Speech: real-time visual feedback to guide speech therapy
Opti-Speech is an interactive system that uses tongue, lip, and jaw motion capture from 3D electromagnetic articulography (EMA) to animate a realistic 3D avatar. Users receive real-time visual feedback of their tongue and jaw movements during speech therapy, which helps both the user and the speech therapist guide correct tongue positioning for speech sounds.
Tongue movements can be guided with customizable shape targets, derived either from datasets of typical speakers or from a user's own best productions.
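As a rough illustration of how a shape target might work, the sketch below checks whether a streamed EMA sensor position falls inside a spherical target region. This is a minimal, hypothetical example: the function names, the head-based coordinate frame, and the spherical-target representation are all illustrative assumptions, not the actual Opti-Speech implementation.

```python
import math

def within_target(sensor_xyz, target_center, target_radius_mm):
    """Return True if an EMA sensor position (x, y, z in mm) lies
    inside a spherical target region.

    Hypothetical sketch: real shape targets could be arbitrary 3D
    regions rather than spheres.
    """
    dx, dy, dz = (s - c for s, c in zip(sensor_xyz, target_center))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= target_radius_mm

# Example: a 5 mm tongue-tip target, e.g. derived from a typical
# speaker's average /t/ closure position (coordinates are made up).
target_center = (10.0, 0.0, 55.0)  # mm, in an assumed head-based frame
print(within_target((12.0, 1.0, 53.0), target_center, 5.0))  # inside
print(within_target((20.0, 0.0, 55.0), target_center, 5.0))  # outside
```

In a real-time loop, a check like this could run on every incoming EMA sample and recolor the target on the avatar when the sensor enters it, giving the immediate hit/miss feedback described above.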
Opti-Speech is currently available only for use in research labs, but ongoing development, including a 30-subject efficacy study, is aimed at producing a commercially available clinical system.
Katz, Mehta. Visual feedback of tongue movement for novel speech sound learning. Frontiers in Human Neuroscience, 9:612, 2015.
Katz, et al. Opti-Speech: A real-time, 3D visual feedback system for speech training. Interspeech, 2014.
Campbell, et al. Opti-Speech: A visual biofeedback system for speech treatment. Oral Presentation at Motor Speech Conference, 2016.
Vick. Speech motor learning without auditory perceptual tuning: An Opti-Speech trial. Poster Presentation at Motor Speech Conference, 2016.
Testing of Opti-Speech with both typical speakers and patients with dysarthria has established the feasibility of using visual feedback of tongue movement during speech production as a viable therapeutic tool.
For more information about the Opti-Speech system, contact optispeech@vulintus.com.
Development of the Opti-Speech system was supported by Small Business Innovation Research (SBIR) grants R43DC013467 and R44DC013467 from the National Institute on Deafness and Other Communication Disorders.