EMG of Speech
USC and TUM Collaboration
Silent Computational Paralinguistics (SCP) is an emerging field that explores how muscle movements in the face and neck, captured through electromyography (EMG), can reveal information about a person's speech and emotions even when no sound is produced. This research has promising implications for communication technology, including assistive tools for individuals with speech impairments, silent speech interfaces, and emotion-aware systems. By analyzing facial muscle signals, SCP aims to decode unspoken words and to capture the subtle ways emotions shape speech production.

Early studies have shown that speaker traits and states can be recognized from EMG signals alone. We are working to refine these techniques, collect more diverse data, and develop better machine learning models to improve accuracy. One of the biggest challenges in this field is bridging the gap between EMG signals and audible speech. Machine learning plays a key role here, mapping these silent signals to meaningful representations of language and expression, and advances in deep learning are bringing SCP closer to real-world applications.
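To make that mapping concrete, the sketch below shows one hypothetical form such a pipeline could take: multi-channel EMG is sliced into overlapping frames, each frame is summarized with simple features, and a small recurrent network maps the frame sequence to an utterance-level label. All signal shapes, feature choices, layer sizes, and label counts are illustrative assumptions, not the actual models used in this project.

```python
# Minimal sketch of an EMG-to-label pipeline. Everything here (channel count,
# sampling rate, features, model size) is a hypothetical placeholder; real SCP
# systems involve careful electrode placement, filtering, and alignment.
import numpy as np
import torch
import torch.nn as nn

def emg_windows(signal: np.ndarray, win: int = 200, hop: int = 100) -> np.ndarray:
    """Slice a multi-channel EMG recording (channels x samples) into
    overlapping frames and summarize each frame with simple statistics."""
    frames = []
    for start in range(0, signal.shape[1] - win + 1, hop):
        frame = signal[:, start:start + win]
        # Root-mean-square energy and zero-crossing rate per channel as toy features.
        rms = np.sqrt((frame ** 2).mean(axis=1))
        zcr = (np.diff(np.sign(frame), axis=1) != 0).mean(axis=1)
        frames.append(np.concatenate([rms, zcr]))
    return np.stack(frames)  # shape: (num_frames, 2 * num_channels)

class EMGClassifier(nn.Module):
    """Tiny recurrent model from EMG feature frames to an utterance-level label."""
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(x)       # h: (num_layers, batch, hidden)
        return self.out(h[-1])   # logits per utterance

if __name__ == "__main__":
    # Fake 8-channel recording, 1 s at 2 kHz, purely for shape-checking.
    fake_emg = np.random.randn(8, 2000)
    feats = torch.tensor(emg_windows(fake_emg), dtype=torch.float32).unsqueeze(0)
    model = EMGClassifier(feat_dim=feats.shape[-1], num_classes=10)
    print(model(feats).shape)  # torch.Size([1, 10])
```

In practice, the same frame-level features could instead feed a sequence-to-sequence model that predicts phoneme or word sequences rather than a single label; the classifier above is only the simplest end-to-end illustration.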
As the field grows, collaboration between researchers is crucial for refining methods, expanding datasets, and improving model performance. In this project, USC SAIL is collaborating with the Technical University of Munich on both data collection and analysis. With continued progress, SCP could transform how we think about speech and communication, making silent speech as expressive and understandable as spoken language.
- Supported by Bavaria California Technology Center (BaCaTeC).