How does the brain decipher voice information in spoken language?

The field of social cognitive neuroscience is developing rapidly, with a recent surge of interest in how humans communicate emotions and respond to emotional stimuli. In this research program, we investigate a topic that has been somewhat neglected in this growing field: how emotions are expressed and understood through the human voice during speech, and how the related mental functions are organized in the brain.

To recognize vocal expressions of emotion, such as those conveying anger or joy, listeners must process the dynamic acoustic properties of speech: ongoing fluctuations in pitch, loudness, and rhythm that unfold over time in emotionally meaningful ways. The fact that emotional expressions in the voice are uniquely represented across time raises a critical empirical question: how quickly do we detect emotions when listening to a speaker's voice, and what neural mechanisms are involved?

Another question we are addressing is: when listeners recognize vocal expressions of emotion, does this information guide their visual attention and/or their judgements of visual stimuli (e.g., facial expressions) in systematic ways? Answering this question will tell us much about natural social interactions, in which humans typically encounter emotional cues in more than one sensory modality and must integrate these cues in socially meaningful and adaptive ways.

Most of our studies involve young, healthy adults, and our questions are tested from different vantage points using behavioural approaches, eye-tracking, electrophysiology (ERPs), and neuroimaging. This research will lead to a more sophisticated model of the neurocognitive mechanisms that support emotional communication through the voice, and, in broad terms, it will shed light on the uniquely human capacity to communicate both linguistic and emotional meanings using complex auditory signals.


Research funded by: