We are just beginning to understand how vocal expressions of emotion are processed during on-line speech comprehension, and how vocal emotion cues are integrated with other social cues such as facial expressions.
The Facial Affect Decision Task (FADT) is an effective method for investigating implicit relationships between the emotional meanings of two stimuli; for example, an emotionally inflected prime utterance and a target facial expression. Participants view emotional faces (e.g., a happy or sad expression) or 'grimace' faces and must judge whether the facial expression conveys an emotional meaning (yes/no response). By manipulating parameters of the emotionally inflected utterance and/or the task, one can examine how the speed and accuracy of the 'facial affect decision' is influenced by the emotional relationship between the utterance and the face (congruent vs. incongruent), by the duration of vocal cues in the prime, or by other factors. This method provides new information about how emotional prosody is implicitly processed during speech recognition in real time.
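The congruence logic of the paradigm can be illustrated with a minimal analysis sketch. All data, field names, and values here are hypothetical and are not drawn from any published FADT study; the sketch only shows how a congruency priming effect on response times might be computed, assuming faster correct 'yes' decisions when prime prosody and target face match:

```python
from statistics import mean

# Hypothetical FADT trials: emotional prosody of the prime utterance, emotion of
# the target face, the participant's facial affect decision, and response time (ms).
trials = [
    {"prime": "happy", "face": "happy", "response": "yes", "rt": 610},
    {"prime": "happy", "face": "sad",   "response": "yes", "rt": 680},
    {"prime": "sad",   "face": "sad",   "response": "yes", "rt": 620},
    {"prime": "sad",   "face": "happy", "response": "no",  "rt": 700},
]

def is_congruent(trial):
    """A trial is congruent when prime prosody and face express the same emotion."""
    return trial["prime"] == trial["face"]

def mean_rt(trials, congruent):
    """Mean RT for correct 'yes' decisions, split by prime-face congruence."""
    rts = [t["rt"] for t in trials
           if is_congruent(t) == congruent and t["response"] == "yes"]
    return mean(rts) if rts else None

# Priming effect: incongruent minus congruent RT; a positive value would indicate
# facilitation by emotionally congruent vocal primes.
priming_effect = mean_rt(trials, congruent=False) - mean_rt(trials, congruent=True)
```

In a real study the same contrast would be computed per participant and submitted to inferential statistics; this fragment only makes the congruent/incongruent partition concrete.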