Research publications


2013

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Ménard, L., Toupin, C., Baum, S., Drouin, S., Aubin, J., & Tiede, M.) (2013). Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 134, 2975-2987.

Abstract: In a previous paper [Ménard et al., J. Acoust. Soc. Am. 126, 1406–1414 (2009)], it was demonstrated that, despite enhanced auditory discrimination abilities for synthesized vowels, blind adult French speakers produced vowels that were closer together in the acoustic space than those produced by sighted adult French speakers, suggesting finer control of speech production in the sighted speakers. The goal of the present study is to further investigate the articulatory effects of visual deprivation on vowels produced by 11 blind and 11 sighted adult French speakers. Synchronous ultrasound, acoustic, and video recordings of the participants articulating the ten French oral vowels were made. Results show that sighted speakers produce vowels that are spaced significantly farther apart in the acoustic vowel space than blind speakers. Furthermore, blind speakers use smaller differences in lip protrusion but larger differences in tongue position and shape than their sighted peers to produce rounding and place of articulation contrasts. Trade-offs between lip and tongue positions were examined. Results are discussed in the light of the perception-for-action control theory.

Link to article

Dr. Meghan Clayards
CLAYARDS, M. (Brosseau-Lapré, F., Rvachew, S., Clayards, M., & Dickson, D.) (2013). Stimulus variability and perceptual learning of non-native vowel categories. Applied Psycholinguistics, 34(3), 419-441. doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/ə/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L. (Blais, M-J., & Gonnerman, L.M.) (2013). Explicit and implicit semantic processing of verb-particle constructions by French-English bilinguals. Bilingualism: Language and Cognition, 16, 829-846.

Abstract: Verb–particle constructions are a notoriously difficult aspect of English to acquire for second-language (L2) learners. The present study investigated whether L2 English speakers are sensitive to gradations in semantic transparency of verb–particle constructions (e.g., finish up vs. chew out). French–English bilingual participants (first language: French, second language: English) completed an off-line similarity ratings survey, as well as an on-line masked priming task. Results of the survey showed that bilinguals’ similarity ratings became more native-like as their English proficiency levels increased. Results from the masked priming task showed that response latencies from high, but not low-proficiency bilinguals were similar to those of monolinguals, with mid- and high-similarity verb–particle/verb pairs (e.g., finish up/finish) producing greater priming than low-similarity pairs (e.g., chew out/chew). Taken together, the results suggest that L2 English speakers develop both explicit and implicit understanding of the semantic properties of verb–particle constructions, which approximates the sensitivity of native speakers as English proficiency increases.

Link to article

--(Rvachew, S., *Marquis, A., *Brosseau-Lapré, F., *Paul, M., Royle, P., Gonnerman, L.M.) (2013). Speech articulation performance of francophone children in the early school years: Norming of the Test de Dépistage Francophone de Phonologie. Clinical Linguistics and Phonetics, 27, 950-968.

Abstract: Good quality normative data are essential for clinical practice in speech-language pathology but are largely lacking for French-speaking children. We investigated speech production accuracy by French-speaking children attending kindergarten (maternelle) and first grade (première année). The study aimed to provide normative data for a new screening test – the Test de Dépistage Francophone de Phonologie. Sixty-one children named 30 pictures depicting words selected to be representative of the distribution of phonemes, syllable shapes and word lengths characteristic of Québec French. Percent consonants correct was approximately 90% and did not change significantly with age, although younger children produced significantly more syllable structure errors than older children. Given that the word set reflects the segmental and prosodic characteristics of spoken Québec French, and that ceiling effects were not observed, these results further indicate that phonological development is not complete by the age of seven years in French-speaking children.

Link to article

--(Kolne, K.L.D, *Hill, K.J., & Gonnerman, L.M.) (2013). The role of morphology in spelling: Long-term effects of training. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society (pp. 2766-2771). Austin, TX: Cognitive Science Society.

Abstract: We directly compared the effectiveness of a spelling intervention focused on morphological structure with one that emphasized the meanings of complex words, to differentiate their relative contributions to spelling acquisition in grade 3 and grade 5. We found that the morphology intervention provided a greater improvement than the vocabulary intervention, especially for children in grade 5. To compare the long-term effects of the two interventions, we tested the children’s spelling ability six-months after the conclusion of the intervention program. Results show that both grades maintain an increase in spelling accuracy compared to their pre-intervention performance. Additionally, the children in grade 5 who received morphological instruction retained more spelling knowledge than those who received the vocabulary instruction. These results suggest that teaching children about the structure of complex words supports their spelling ability in the long-term, providing evidence for the important role of morphological knowledge in literacy development.

Link to article

Dr. Vincent Gracco
GRACCO, V.L. (Tremblay, P., Deschamps, I., & Gracco, V.L.) (2013). Regional heterogeneity in the processing and the production of speech in the human planum temporale. Cortex, 49, 143-157.

Abstract:
INTRODUCTION:
The role of the left planum temporale (PT) in auditory language processing has been a central theme in cognitive neuroscience since the first descriptions of its leftward neuroanatomical asymmetry. While it is clear that PT contributes to auditory language processing, there is still some uncertainty about its role in spoken language production.

METHODS:
Here we examine activation patterns of the PT for speech production, speech perception and single word reading to address potential hemispheric and regional functional specialization in the human PT. To this aim, we manually segmented the left and right PT in three non-overlapping regions (medial, lateral and caudal PT) and examined, in two complementary experiments, the contribution of exogenous and endogenous auditory input on PT activation under different speech processing and production conditions.

RESULTS:
Our results demonstrate that different speech tasks are associated with different regional functional activation patterns of the medial, lateral and caudal PT. These patterns are similar across hemispheres, suggesting bilateral processing of the auditory signal for speech at the level of PT.

CONCLUSIONS:
Results of the present studies stress the importance of considering the anatomical complexity of the PT in interpreting fMRI data.

Link to article

--(Beal, D., Gracco, V.L., Brettschneider, J., Kroll, R.M., & De Nil, L.) (2013). A voxel-based morphometry (VBM) analysis of regional grey and white matter volume abnormalities within the speech production network of children who stutter. Cortex, 49, 2151-2161.

Abstract: It is well documented that neuroanatomical differences exist between adults who stutter and their fluently speaking peers. Specifically, adults who stutter have been found to have more grey matter volume (GMV) in speech-relevant regions including the inferior frontal gyrus, insula and superior temporal gyrus (Beal et al., 2007; Song et al., 2007). Despite stuttering having its onset in childhood, only one study has investigated the neuroanatomical differences between children who do and do not stutter. Chang et al. (2008) reported that children who stutter had less GMV in the bilateral inferior frontal gyri and middle temporal gyrus relative to fluently speaking children. Thus, it appears that children who stutter present with unique neuroanatomical abnormalities as compared to those of adults who stutter. In order to better understand the neuroanatomical correlates of stuttering earlier in its development, near the time of onset, we used voxel-based morphometry to examine volumetric differences between 11 children who stutter and 11 fluent children. Children who stutter had less GMV in the bilateral inferior frontal gyri and left putamen but more GMV in the right Rolandic operculum and superior temporal gyrus relative to fluent children. Children who stutter also had less white matter volume bilaterally in the forceps minor of the corpus callosum. We discuss our findings of widespread anatomic abnormalities throughout the cortical network for speech motor control within the context of the speech motor skill limitations identified in people who stutter (Namasivayam and van Lieshout, 2008; Smits-Bandstra et al., 2006).

Link to article

--(Sato, M., Troille, E., Ménard, L., Cathiard, M-A., & Gracco, V.L.) (2013). Silent articulation modulates auditory and audiovisual speech perception. Experimental Brain Research, doi:10.1007/s00221-013-3510-8.

Abstract: The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds, when embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.

Link to article

--(Smits-Bandstra, S., & Gracco, V.L.) (2013). Verbal implicit sequence learning in persons who stutter and persons with Parkinson's disease. Journal of Motor Behavior, 45(5), 381-393.

Abstract: The authors investigated the integrity of implicit learning systems in 14 persons with Parkinson's disease (PPD), 14 persons who stutter (PWS), and 14 control participants. In a 120-min session participants completed a verbal serial reaction time task, naming aloud 4 syllables in response to 4 visual stimuli. Unbeknownst to participants, the syllables formed a repeating 8-item sequence. PWS and PPD demonstrated slower reaction times for early but not late learning trials relative to controls, reflecting delays but not deficiencies in general learning. PPD also demonstrated less accuracy in general learning relative to controls. All groups demonstrated similar limited explicit sequence knowledge. Both PWS and PPD demonstrated significantly less implicit sequence learning relative to controls, suggesting that stuttering may be associated with compromised functional integrity of the cortico-striato-thalamo-cortical loop.

Link to article

--(Grabski, K., Tremblay, P., Gracco, V.L., Girin, L., Granjon, L., Sato, M.) (2013). A mediating role of the auditory dorsal pathway in selective adaptation to speech: a state-dependent transcranial magnetic stimulation study. Brain Research, 1515: 55-65.

Abstract: In addition to sensory processing, recent neurobiological models of speech perception postulate the existence of a left auditory dorsal processing stream, linking auditory speech representations in the auditory cortex with articulatory representations in the motor system, through sensorimotor interaction interfaced in the supramarginal gyrus and/or the posterior part of the superior temporal gyrus. The present state-dependent transcranial magnetic stimulation study is aimed at determining whether speech recognition is indeed mediated by the auditory dorsal pathway, by examining the causal contribution of the left ventral premotor cortex, supramarginal gyrus and posterior part of the superior temporal gyrus during an auditory syllable identification/categorization task. To this aim, participants listened to a sequence of /ba/ syllables before undergoing a two-alternative forced-choice auditory syllable decision task on ambiguous syllables (ranging along the categorical boundary between /ba/ and /da/). Consistent with previous studies on selective adaptation to speech, following adaptation to /ba/, participants' responses were biased towards /da/. In contrast, in a control condition without prior auditory adaptation, no such bias was observed. Crucially, compared to the results observed without stimulation, single-pulse transcranial magnetic stimulation delivered at the onset of each target stimulus interacted with the initial state of each of the stimulated brain areas by enhancing the adaptation effect. These results demonstrate that the auditory dorsal pathway contributes to auditory speech adaptation.

Link to article

--(Arnaud, L., Sato, M., Ménard, L., & Gracco, V.L.) (2013). Speech adaptation reveals enhanced neural processing in the associative occipital and parietal cortex of congenitally blind adults. PLoS ONE, 8(5), e64553. doi:10.1371/journal.pone.0064553

Abstract: In the congenitally blind (CB), sensory deprivation results in cross-modal plasticity, with visual cortical activity observed for various auditory tasks. This reorganization has been associated with enhanced auditory abilities and the recruitment of visual brain areas during sound and language processing. The questions we addressed are whether visual cortical activity might also be observed in CB during passive listening to auditory speech and whether cross-modal plasticity is associated with adaptive differences in neuronal populations compared to sighted individuals (SI). We focused on the neural substrate of vowel processing in CB and SI adults using a repetition suppression (RS) paradigm. RS has been associated with enhanced or accelerated neural processing efficiency and synchronous activity between interacting brain regions. We evaluated whether cortical areas in CB were sensitive to RS during repeated vowel processing and whether there were differences across the two groups. In accordance with previous studies, both groups displayed a RS effect in the posterior temporal cortex. In the blind, however, additional occipital, temporal and parietal cortical regions were associated with predictive processing of repeated vowel sounds. The findings suggest a more expanded role for cross-modal compensatory effects in blind persons during sound and speech processing and a functional transfer of specific adaptive properties across neural regions as a consequence of sensory deprivation at birth.

Link to article

--(Mollaei, F., Shiller, D., & Gracco, V.L.) (2013). Sensorimotor adaptation of speech in Parkinson’s disease. Movement Disorders, doi:10.1002/mds.25588.

Abstract: The basal ganglia are involved in establishing motor plans for a wide range of behaviors. Parkinson's disease (PD) is a manifestation of basal ganglia dysfunction associated with a deficit in sensorimotor integration and difficulty in acquiring new motor sequences, thereby affecting motor learning. Previous studies of sensorimotor integration and sensorimotor adaptation in PD have focused on limb movements using visual and force-field alterations. Here, we report the results from a sensorimotor adaptation experiment investigating the ability of PD patients to make speech motor adjustments to a constant and predictable auditory feedback manipulation. Participants produced speech while their auditory feedback was altered and maintained in a manner consistent with a change in tongue position. The degree of adaptation was associated with the severity of motor symptoms. The patients with PD exhibited adaptation to the induced sensory error; however, the degree of adaptation was reduced compared with healthy, age-matched control participants. The reduced capacity to adapt to a change in auditory feedback is consistent with reduced gain in the sensorimotor system for speech and with previous studies demonstrating limitations in the adaptation of limb movements after changes in visual feedback among patients with PD.

Link to article

--(Klepousniotou, E., Gracco, V.L., & Pike, G.B.) (2013). Pathways to lexical ambiguity: fMRI evidence for bilateral fronto-parietal involvement in language processing. Brain & Language, 123, 11-21.

Abstract: Numerous functional neuroimaging studies reported increased activity in the pars opercularis and the pars triangularis (Brodmann’s areas 44 and 45) of the left hemisphere during the performance of linguistic tasks. The role of these areas in the right hemisphere in language processing is not understood and, although there is evidence from lesion studies that the right hemisphere is involved in the appreciation of semantic relations, no specific anatomical substrate has yet been identified. This event-related functional magnetic resonance imaging study compared brain activity during the performance of language processing trials in which either dominant or subordinate meaning activation of ambiguous words was required. The results show that the ventral part of the pars opercularis both in the left and the right hemisphere is centrally involved in language processing. In addition, they highlight the bilateral co-activation of this region with the supramarginal gyrus of the inferior parietal lobule during the processing of this type of linguistic material. This study, thus, provides the first evidence of co-activation of Broca’s region and the inferior parietal lobule, succeeding in further specifying the relative contribution of these cortical areas to language processing.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., & Shaw, H.) (published online 18 Dec 2012). Acoustic marking of prominence: How do preadolescent speakers with and without high-functioning autism mark contrast in an interactive task? Language and Cognitive Processes.

Abstract: The acoustic correlates of discourse prominence have garnered much interest in recent adult psycholinguistics work, and the relative contributions of amplitude, duration and pitch to prominence have also been explored in research with young children. In this study, we bridge these two age groups by examining whether specific acoustic features are related to the discourse function of marking contrastive stress by preadolescent speakers, via speech obtained in a referential communication task that presented situations of explicit referential contrast. In addition, we broach the question of listener-oriented versus speaker-internal factors in the production of contrastive stress by examining both speakers who are developing typically and those with high-functioning autism (HFA). Diverging from conventional expectations and early reports, we found that speakers with HFA, like their typically developing peers (TYP), appropriately marked prominence in the expected location, on the pre-nominal adjective, in instructions such as “Pick up the BIG cup”. With respect to the use of specific acoustic features, both groups of speakers employed amplitude and duration to mark the contrastive element, whereas pitch was not produced selectively to mark contrast by either group. However, the groups also differed in their relative reliance on acoustic features, with HFA speakers relying less consistently on amplitude than TYP speakers, and TYP speakers relying less consistently on duration than HFA speakers. In summary, the production of contrastive stress was found to be globally similar across groups, with fine-grained differences in the acoustic features employed to do so. These findings are discussed within a developmental framework of the production of acoustic features for marking discourse prominence, and with respect to the variations among speakers with autism spectrum disorders that may lead to appropriate production of contrastive stress.

Link to article

-- (Bang, J., Burns, J. & Nadig, A.) (2013). Conveying subjectivity in conversation: Mental state terms and personal narratives in typical development and children with high functioning autism. Journal of Autism and Developmental Disorders, 43 (7), 1732-1740.

Abstract: Mental state terms and personal narratives are conversational devices used to communicate subjective experience in conversation. Pre-adolescents with high-functioning autism (HFA, n = 20) were compared with language-matched typically-developing peers (TYP, n = 17) on production of mental state terms (i.e., perception, physiology, desire, emotion, cognition) and personal narratives (sequenced retelling of life events) during short conversations. HFA and TYP participants did not differ in global use of mental state terms, nor did they exhibit reduced production of cognitive terms in particular. Participants with HFA produced significantly fewer personal narratives. They also produced a smaller proportion of their mental state terms during personal narratives. These findings underscore the importance of assessing and developing qualitative aspects of conversation in highly verbal individuals with autism.

Link to article

--(Bani Hani, H., Gonzalez-Barrero, A. & Nadig, A.) (2013). Children’s referential understanding of novel words and parent labelling behaviours: similarities across children with and without autism spectrum disorders. Journal of Child Language, 40 (5), 971-1002.

Abstract: This study examined two facets of the use of social cues for early word learning in parent–child dyads, where children had an Autism Spectrum Disorder (ASD) or were typically developing. In Experiment 1, we investigated word learning and generalization by children with ASD (age range: 3;01–6;02) and typically developing children (age range: 1;02–4;09) who were matched on language ability. In Experiment 2, we examined verbal and non-verbal parental labeling behaviors. First, we found that both groups were similarly able to learn a novel label using social cues alone, and to generalize this label to other representations of the object. Children who utilized social cues for word learning had higher language levels. Second, we found that parental cues used to introduce object labels were strikingly similar across groups. Moreover, parents in both groups adapted labeling behavior to their child's language level, though this surfaced in different ways across groups.

Link to article

Dr. Marc Pell
PELL, M.D. (Garrido-Vásquez, P., Pell, M.D., Paulmann, S., Strecker, K., Schwarz, J., & Kotz, S.A.) (2013). An ERP study of vocal emotion processing in asymmetric Parkinson's disease. Social Cognitive and Affective Neuroscience, 8(8), 918-927.

Abstract: Parkinson's disease (PD) has been related to impaired processing of emotional speech intonation (emotional prosody). One distinctive feature of idiopathic PD is motor symptom asymmetry, with striatal dysfunction being strongest in the hemisphere contralateral to the most affected body side. It is still unclear whether this asymmetry may affect vocal emotion perception. Here, we tested 22 PD patients (10 with predominantly left-sided [LPD] and 12 with predominantly right-sided motor symptoms) and 22 healthy controls in an event-related potential study. Sentences conveying different emotional intonations were presented in lexical and pseudo-speech versions. Task varied between an explicit and an implicit instruction. Of specific interest was emotional salience detection from prosody, reflected in the P200 component. We predicted that patients with predominantly right-striatal dysfunction (LPD) would exhibit P200 alterations. Our results support this assumption. LPD patients showed enhanced P200 amplitudes, and specific deficits were observed for disgust prosody, explicit anger processing and implicit processing of happy prosody. Lexical speech was predominantly affected while the processing of pseudo-speech was largely intact. P200 amplitude in patients correlated significantly with left motor scores and asymmetry indices. The data suggest that emotional salience detection from prosody is affected by asymmetric neuronal degeneration in PD.

Link to article

--(Rigoulot, S., *Wassiliwizky, E., & Pell, M.D.) (2013). Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition. Frontiers in Psychology, 4, 1-14. doi:10.3389/fpsyg.2013.00367

Abstract: Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400-1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.

Link to article

Dr. Linda Polka
POLKA, L. (Nazzi, T., Mersad, K., Sundara, M., Iakimova, G., & Polka, L.) (2013). Early word segmentation in infants acquiring Parisian French: task-dependent and dialect-specific aspects. Journal of Child Language, 1-24.

Abstract: Six experiments explored Parisian French-learning infants' ability to segment bisyllabic words from fluent speech. The first goal was to assess whether bisyllabic word segmentation emerges later in infants acquiring European French compared to other languages. The second goal was to determine whether infants learning different dialects of the same language have partly different segmentation abilities, and whether segmenting a non-native dialect has a cost. Infants were tested on standard European or Canadian French stimuli, in the word-passage or passage-word order. Our study first establishes an early onset of segmentation abilities: Parisian infants segment bisyllabic words at age 0;8 in the passage-word order only (revealing a robust order of presentation effect). Second, it shows that there are differences in segmentation abilities across Parisian and Canadian French infants, and that there is a cost for cross-dialect segmentation for Parisian infants. We discuss the implications of these findings for understanding word segmentation processes.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Marquis, A., Brosseau‐Lapré, F., Royle, P., Paul, M., & Gonnerman, L. M.) (2013). Speech articulation performance of francophone children in the early school years: Norming of the Test de Dépistage Francophone de Phonologie. Clinical Linguistics & Phonetics, Early Online, 1‐19. doi:10.3109/02699206.2013.830149.

Abstract: Good quality normative data are essential for clinical practice in speech-language pathology but are largely lacking for French-speaking children. We investigated speech production accuracy by French-speaking children attending kindergarten (maternelle) and first grade (première année). The study aimed to provide normative data for a new screening test – the Test de Dépistage Francophone de Phonologie. Sixty-one children named 30 pictures depicting words selected to be representative of the distribution of phonemes, syllable shapes and word lengths characteristic of Québec French. Percent consonants correct was approximately 90% and did not change significantly with age, although younger children produced significantly more syllable structure errors than older children. Given that the word set reflects the segmental and prosodic characteristics of spoken Québec French, and that ceiling effects were not observed, these results further indicate that phonological development is not complete by the age of seven years in French-speaking children.

Link to article

--(Brosseau‐Lapré, F., & Rvachew, S.) (2013). Cross‐linguistic comparison of speech errors produced by English‐ and French‐speaking preschool age children with developmental phonological disorders. International Journal of Speech‐Language Pathology, Early Online, 1‐11.

Abstract: Twenty-four French-speaking children with developmental phonological disorders (DPD) were matched on percentage of consonants correct (PCC)-conversation, age, and receptive vocabulary measures to English-speaking children with DPD in order to describe how speech errors are manifested differently in these two languages. The participants' productions of consonants on a single-word test of articulation were compared in terms of feature-match ratios for the production of target consonants, and type of errors produced. Results revealed that the French-speaking children had significantly lower match ratios for the major sound class features [+ consonantal] and [+ sonorant]. The French-speaking children also obtained significantly lower match ratios for [+ voice]. The most frequent type of errors produced by the French-speaking children was syllable structure errors, followed by segment errors, and a few distortion errors. On the other hand, the English-speaking children made more segment than syllable structure and distortion errors. The results of the study highlight the need to use test instruments with French-speaking children that reflect the phonological characteristics of French at multiple levels of the phonological hierarchy.

Link to article

--(Brosseau‐Lapré, F., Rvachew, S., Clayards, M. & Dickson, D.) (2013). Stimulus variability and perceptual learning of non‐native vowel categories. Applied Psycholinguistics, 34, 419‐441. doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/ə/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (Royle, P., Drury, J. E., & Steinhauer, K.) (2013). ERPs and task effects in the auditory processing of gender agreement and semantics in French. The Mental Lexicon, 8(2), 216‐244.

Abstract: We investigated task effects on violation ERP responses to Noun-Adjective gender mismatches and lexical/conceptual semantic mismatches in a combined auditory/visual paradigm in French. Participants listened to sentences while viewing pictures of objects. This paradigm was designed to investigate language processing in special populations (e.g., children) who may not be able to read or to provide stable behavioural judgment data. Our main goal was to determine how ERP responses to our target violations might differ depending on whether participants performed a judgment task (Task) versus listening for comprehension (No-Task). Characterizing the influence of the presence versus absence of judgment tasks on violation ERP responses allows us to meaningfully interpret data obtained using this paradigm without a behavioural task and relate them to judgment-based paradigms in the ERP literature. We replicated previously observed ERP patterns for semantic and gender mismatches, and found that the task especially affected the later P600 component.

Link to article

--(Nickels, S., Opitz, B., & Steinhauer, K.) (2013). ERPs show that classroom‐instructed late second language learners rely on the same prosodic cues in syntactic parsing as native speakers. Neuroscience Letters, 557, 107‐111.

Abstract: The loss of brain plasticity after a 'critical period' in childhood has often been argued to prevent late language learners from using the same neurocognitive mechanisms as native speakers and, therefore, from attaining a high level of second language (L2) proficiency [7,11]. However, more recent behavioral and electrophysiological research has challenged this 'Critical Period Hypothesis', demonstrating that even late L2 learners can display native-like performance and brain activation patterns [17], especially after longer periods of immersion in an L2 environment. Here we use event-related potentials (ERPs) to show that native-like processing can also be observed in the largely under-researched domain of speech prosody - even when L2 learners are exposed to their second language almost exclusively in a classroom setting. Participants listened to spoken sentences whose prosodic boundaries would either cooperate or conflict with the syntactic structure. Previous work had shown that this paradigm is difficult for elderly native speakers; however, German L2 learners of English showed very similar ERP components for on-line prosodic phrasing as well as for prosody-syntax mismatches (garden path effects) as the control group of native speakers. These data suggest that L2 immersion is not always necessary to master complex L2 speech processing in a native-like way.

Link to article

--(Bowden, H. W., Steinhauer, K., Sanz, C., & Ullman, M. T.) (2013). Native‐like brain processing of syntax can be attained by university foreign language learners. Neuropsychologia, 51(13), 2492‐2511.

Abstract: Using event-related potentials (ERPs), we examined the neurocognition of late-learned second language (L2) Spanish in two groups of typical university foreign-language learners (as compared to native (L1) speakers): one group with only one year of college classroom experience, and low-intermediate proficiency (L2 Low), and another group with over three years of college classroom experience as well as 1–2 semesters of immersion experience abroad, and advanced proficiency (L2 Advanced). Semantic violations elicited N400s in all three groups, whereas syntactic word-order violations elicited LAN/P600 responses in the L1 and L2 Advanced groups, but not the L2 Low group. Indeed, the LAN and P600 responses were statistically indistinguishable between the L1 and L2 Advanced groups. The results support and extend previous findings. Consistent with previous research, the results suggest that L2 semantic processing always depends on L1-like neurocognitive mechanisms, whereas L2 syntactic processing initially differs from L1, but can shift to native-like processes with sufficient proficiency or exposure, and perhaps with immersion experience in particular. The findings further demonstrate that substantial native-like brain processing of syntax can be achieved even by typical university foreign-language learners.

Link to article

--(Courteau, E., Royle, P., Gascon, A., Marquis, A., Drury, J.E., & Steinhauer, K.) (2013). Gender concord and semantic processing in French children: An auditory ERP study. In S. Baiz, N. Goldman & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (Vol. 1, pp. 87‐99). Boston: Cascadilla.

Abstract: The present study used event-related brain potentials (ERPs) to investigate language processing in young children, focusing on gender agreement (determiner-noun and noun-adjective) and conceptual semantics in French. Electrophysiological measurement techniques provide a valuable addition to our methodological toolkit for studying agreement processing in this population, in particular concerning noun-adjective agreement (concord), since other traditional sources of data have tended to be uninformative. Although children arguably exhibit systematic constraints on their linguistic behavior, this is not always evident in the laboratory (e.g., where task demands may mask the presence of linguistic knowledge) or in investigations of child language corpora. For example, although French-speaking children seem to master adjective and determiner concord early on, productive use of gender-marked adjectives is not clearly supported in the corpus, where determiner use predominates (Valois & Royle, 2009), or in elicitation, where idiosyncratic gender marking on adjectives may result in variable mastery of feminine forms (Royle & Valois, 2010). Here we report on an auditory/visual ERP study that shows that the processing of gender agreement can be reliably tapped in young French children.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E. & Brandeker, M.) (2013). The effect of bilingual exposure versus language impairment on nonword repetition and sentence imitation scores. Journal of Communication Disorders, 46, 1-16.

Abstract: Purpose:
Nonword repetition (NWR) and sentence imitation (SI) are increasingly used as diagnostic tools for the identification of Primary Language Impairment (PLI). They may be particularly promising diagnostic tools for bilingual children if performance on them is not highly affected by bilingual exposure. Two studies were conducted which examined (1) the effect of amount of bilingual exposure on performance on French and English nonword repetition and sentence imitation in 5-year-old French-English bilingual children and (2) the diagnostic accuracy of the French versions of these measures and of receptive vocabulary in 5-year-old monolingual French-speakers and bilingual speakers with and without PLI, carefully matched on language exposure.

Method:
Study 1 included 84 5-year-olds acquiring French and English simultaneously, differing in their amount of exposure to the two languages but equated on age, nonverbal cognition and socio-economic status. Children were administered French and English tests of NWR and SI. In Study 2, monolingual and bilingual children with and without PLI (four groups, n = 14 per group) were assessed for NWR, SI, and receptive vocabulary in French to determine diagnostic accuracy.

Results:
Study 1: Both processing measures, but in particular NWR, were less affected by previous exposure than vocabulary measures. Bilingual children with varying levels of exposure were unaffected by the length of nonwords. Study 2: In contrast to receptive vocabulary, NWR and SI correctly distinguished children with PLI from children with typical development (TD) regardless of bilingualism. Sensitivity levels were acceptable, but specificity was lower.

Conclusions:
Bilingual children perform differently than children with PLI on NWR and SI. In contrast to children with PLI, bilingual children with a large range of previous exposure levels achieve high NWR scores and are unaffected by the length of the nonwords.

Link to article

2012

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Bélanger, N., Mayberry, R., & Baum, S.) (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16, 263-285.

Abstract: Deaf people often achieve low levels of reading skills. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers is not yet fully supported in the literature. We investigated skilled and less skilled adult deaf readers’ use of orthographic and phonological codes in reading. Experiment 1 used a masked priming paradigm to investigate automatic use of these codes during visual word processing. Experiment 2 used a serial recall task to determine whether orthographic and phonological codes are used to maintain words in memory. Skilled hearing, skilled deaf, and less skilled deaf readers used orthographic codes during word recognition and recall, but only skilled hearing readers relied on phonological codes during these tasks. It is important to note that skilled and less skilled deaf readers performed similarly in both tasks, indicating that reading difficulties in deaf adults may not be linked to the activation of phonological codes during reading.

Link to article

-- (Zatorre, R. & Baum, S.) (2012). Musical melody and speech intonation: Singing a different tune? PLoS Biology, 10(7): e1001372. doi:10.1371/journal.pbio.1001372.

Abstract: Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L. (Gonnerman, L.M.) (2012). The roles of efficiency and complexity in the processing of verb particle constructions. Journal of Speech Sciences, 2, 3-31.

Abstract: Recent theories have proposed that processing difficulty affects both individuals’ choice of grammatical structures and the distribution of these structures across languages of the world (Hawkins, 2004). Researchers have proposed that performance constraints, such as efficiency, integration, and storage costs, drive languages to choose word orders that minimize processing demands for individual speakers (Hawkins, 1994; Gibson, 2000). This study investigates whether three performance factors, adjacency, dependency, and complexity, affect reading times of sentences with verb-particle constructions. Results indicate that it is more difficult to process dependent verb-particles in shifted sentences that contain more complex intervening noun phrases. These findings demonstrate how performance factors interact and how the relative weight of each affects processing. The results also support the notion that processing ease affects grammaticalization, such that those structures which are more easily processed by individuals (subject relatives and adjacent dependent constituents) are more common across languages (Keenan & Hawkins, 1987).

Link to article

-- (Blais, M-J., & Gonnerman, L.M.,) (2012). The role of semantic transparency in the processing of verb particle constructions by French-English bilinguals. In N. Miyake, D. Peebles, & R.P. Cooper (Eds.), Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society (pp. 1338-1343). Austin, TX: Cognitive Science Society.

Abstract: Verb-particle constructions (phrasal verbs) are a notoriously difficult aspect of English to acquire for second-language (L2) learners. This study was conducted to assess whether L2 English speakers would show sensitivity to the subtle semantic properties of these constructions, namely the gradations in semantic transparency of different verb-particle constructions (e.g., finish up vs. chew out). L1 French, L2 English bilingual participants completed an off-line (explicit) survey of similarity ratings, as well as an on-line (implicit) masked priming task. Bilinguals showed less agreement in their off-line ratings of semantic similarity, but their ratings were generally similar to those of monolinguals. On the masked priming task, the more proficient bilinguals showed a pattern of effects parallel to monolinguals, indicating similar sensitivity to semantic similarity at an implicit level. These findings suggest that the properties of verb-particle constructions can be both implicitly and explicitly grasped by L2 speakers whose L1 lacks phrasal verbs.

Link to article

-- (Marquis, A., Royle, P., Gonnerman, L. & Rvachew, S.) (2012). La conjugaison du verbe en début de scolarisation. Travaux interdisciplinaires sur la parole et le langage, 28, 2-13.

Abstract: We evaluated 35 Québec French children on their ability to produce regular, sub-regular, and irregular passé composé verb forms (ending in -é, -i, -u or other). An elicitation task was administered to children attending preschool or first grade. Target verbs were presented, along with images representing them, in infinitive (e.g., Marie va cacher ses poupées ‘Mary aux.pres. hide-inf. her dolls’= ‘Mary will hide her dolls’) and present tense (ex. Marie cache toujours ses poupées ‘Mary hide-3s. always her dolls’= ‘Mary always hides her dolls’) contexts, in order to prime the appropriate inflectional ending. Children were asked to produce target verb forms in the passé composé (perfect past) by answering the question ‘What did he/she do yesterday?’. Results show no reduction of erroneous productions or error types with age. Response patterns highlight morphological pattern frequency effects, in addition to productivity and reliability effects, on children’s mastery of French conjugation. These data have consequences for psycholinguistic models of regular and irregular morphology processing and acquisition.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Klepousniotou, E., Pike, G.B., Steinhauer, K., & Gracco, V.L.) (2012). Not all ambiguous words are created equal: An EEG investigation of homonymy and polysemy. Brain & Language, 123(1), 1-7.

Abstract: Event-related potentials (ERPs) were used to investigate the time-course of meaning activation of different types of ambiguous words. Unbalanced homonymous ("pen"), balanced homonymous ("panel"), metaphorically polysemous ("lip"), and metonymically polysemous words ("rabbit") were used in a visual single-word priming delayed lexical decision task. The theoretical distinction between homonymy and polysemy was reflected in the N400 component. Homonymous words (balanced and unbalanced) showed effects of dominance/frequency with reduced N400 effects predominantly observed for dominant meanings. Polysemous words (metaphors and metonymies) showed effects of core meaning representation with both dominant and subordinate meanings showing reduced N400 effects. Furthermore, the division within polysemy, into metaphor and metonymy, was supported. Differences emerged in meaning activation patterns with the subordinate meanings of metaphor inducing differentially reduced N400 effects moving from left hemisphere electrode sites to right hemisphere electrode sites, potentially suggesting increased involvement of the right hemisphere in the processing of figurative meaning.

Link to article

-- (Beal, D., Gracco, V.L., Brettschneider, J., Kroll, R.M., & De Nil, L.) (2012). A voxel-based morphometry (VBM) analysis of regional grey and white matter volume abnormalities within the speech production network of children who stutter. Cortex. doi:10.1016/j.cortex.2012.08.013

Abstract: It is well documented that neuroanatomical differences exist between adults who stutter and their fluently speaking peers. Specifically, adults who stutter have been found to have more grey matter volume (GMV) in speech relevant regions including inferior frontal gyrus, insula and superior temporal gyrus (Beal et al., 2007; Song et al., 2007). Despite stuttering having its onset in childhood only one study has investigated the neuroanatomical differences between children who do and do not stutter. Chang et al. (2008) reported children who stutter had less GMV in the bilateral inferior frontal gyri and middle temporal gyrus relative to fluently speaking children. Thus it appears that children who stutter present with unique neuroanatomical abnormalities as compared to those of adults who stutter. In order to better understand the neuroanatomical correlates of stuttering earlier in its development, near the time of onset, we used voxel-based morphometry to examine volumetric differences between 11 children who stutter and 11 fluent children. Children who stutter had less GMV in the bilateral inferior frontal gyri and left putamen but more GMV in right Rolandic operculum and superior temporal gyrus relative to fluent children. Children who stutter also had less white matter volume bilaterally in the forceps minor of the corpus callosum. We discuss our findings of widespread anatomic abnormalities throughout the cortical network for speech motor control within the context of the speech motor skill limitations identified in people who stutter (Namasivayam and van Lieshout, 2008; Smits-Bandstra et al., 2006).

Link to article

Dr. Aparna Nadig
NADIG, A. (Bourguignon, N., Nadig, A. & Valois, D.) (2012). The Biolinguistics of Autism: Emergent Perspectives. Biolinguistics, 6 (2), 124-165.

Abstract: This contribution attempts to import the study of autism into the biolinguistics program by reviewing the current state of knowledge on its neurobiology, physiology and verbal phenotypes from a comparative vantage point. A closer look at alternative approaches to the primacy of social cognition impairments in autism spectrum disorders suggests fundamental differences in every aspect of language comprehension and production, suggesting productive directions of research in auditory and visual speech processing as well as executive control. Strong emphasis is put on the great heterogeneity of autism phenotypes, raising important caveats towards an all-or-nothing classification of autism. The study of autism brings interesting clues about the nature and evolution of language, in particular its ontological connections with musical and visual perception as well as executive functions and generativity. Success in this endeavor hinges upon expanding beyond the received wisdom of autism as a purely social disorder and favoring a “cognitive style” approach increasingly called for both inside and outside the autistic community.

Link to article

-- (Nadig, A. & Shaw, H.) (2012). Expressive prosody in high-functioning autism: Increased pitch range and what it means to listeners. Journal of Autism and Developmental Disorders, 42 (4), 499-511.

Abstract: Are there consistent markers of atypical prosody in speakers with high functioning autism (HFA) compared to typically-developing speakers? We examined: (1) acoustic measurements of pitch range, mean pitch and speech rate in conversation, (2) perceptual ratings of conversation for these features and overall prosody, and (3) acoustic measurements of speech from a structured task. Increased pitch range was found in speakers with HFA during both conversation and structured communication. In global ratings listeners rated speakers with HFA as having atypical prosody. Although the HFA group demonstrated increased acoustic pitch range, listeners did not rate speakers with HFA as having increased pitch variation. We suggest that the quality of pitch variation used by speakers with HFA was non-conventional and thus not registered as such by listeners.

Link to article

Dr. Marc Pell
PELL, M.D. (Liu, P. & Pell, M.D.) (2012). Recognizing vocal emotions in Mandarin Chinese: A validated database of Chinese vocal emotional stimuli. Behavior Research Methods, 44, 1042-1051.

Abstract: To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan [dot] liu [at] mail [dot] mcgill [dot] ca.

Link to article

-- (Schwartz, R. & Pell, M.D.) (2012). Emotional speech processing at the intersection of prosody and semantics. PLoS ONE, 7 (10): e47279.

Abstract: The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the relative contributions of processing utterances with single-channel (prosody-only) versus multi-channel (prosody and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel as opposed to single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosody and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing.

Link to article

-- (Pell, M.D., Robin, J., & Paulmann, S.) (2012). How quickly do listeners recognize emotional prosody in their native versus a foreign language? Speech Prosody 6th International Conference Proceedings, Shanghai, China.

Abstract: This study investigated whether the recognition of emotions from speech prosody occurs in a similar manner and has a similar time course when adults listen to their native language versus a foreign language. Native English listeners were presented emotionally-inflected pseudo-utterances produced in English or Hindi which had been gated to different time durations (200, 400, 500, 600, 700 ms). Analyses examined how accurately participants recognized emotions in each language condition, whether particular emotions could be identified from shorter time segments, and whether this was influenced by language experience. Results demonstrated that listeners recognized emotions reliably in both their native and in a foreign language; however, they showed an advantage in accuracy and speed for detecting some, but not all, emotions in the native language condition.

Link to article

-- (Rigoulot, S. & Pell, M.D.) (2012). Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces. PLoS ONE, 7 (1): e30740.

Abstract: Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

Link to article

-- (Paulmann, S., Titone, D., & Pell, M.D.) (2012). How emotional prosody guides your way: evidence from eye movements. Speech Communication, 54, 92-107.

Abstract: This study investigated cross-modal effects of emotional voice tone (prosody) on face processing during instructed visual search. Specifically, we evaluated whether emotional prosodic cues in speech have a rapid, mandatory influence on eye movements to an emotionally-related face, and whether these effects persist as semantic information unfolds. Participants viewed an array of six emotional faces while listening to instructions spoken in an emotionally congruent or incongruent prosody (e.g., “Click on the happy face” spoken in a happy or angry voice). The duration and frequency of eye fixations were analyzed when only prosodic cues were emotionally meaningful (pre-emotional label window: “Click on the/…”), and after emotional semantic information was available (post-emotional label window: “…/happy face”). In the pre-emotional label window, results showed that participants made immediate use of emotional prosody, as reflected in significantly longer frequent fixations to emotionally congruent versus incongruent faces. However, when explicit semantic information in the instructions became available (post-emotional label window), the influence of prosody on measures of eye gaze was relatively minimal. Our data show that emotional prosody has a rapid impact on gaze behavior during social information processing, but that prosodic meanings can be overridden by semantic cues when linguistic information is task relevant.

Link to article

-- (Jaywant, A. & Pell, M.D.) (2012). Categorical processing of negative emotions from speech prosody. Speech Communication, 54, 1-10.

Abstract: Everyday communication involves processing nonverbal emotional cues from auditory and visual stimuli. To characterize whether emotional meanings are processed with category-specificity from speech prosody and facial expressions, we employed a cross-modal priming task (the Facial Affect Decision Task; Pell, 2005a) using emotional stimuli with the same valence but that differed by emotion category. After listening to angry, sad, disgusted, or neutral vocal primes, subjects rendered a facial affect decision about an emotionally congruent or incongruent face target. Our results revealed that participants made fewer errors when judging face targets that conveyed the same emotion as the vocal prime, and responded significantly faster for most emotions (anger and sadness). Surprisingly, participants responded slower when the prime and target both conveyed disgust, perhaps due to attention biases for disgust-related stimuli. Our findings suggest that vocal emotional expressions with similar valence are processed with category specificity, and that discrete emotion knowledge implicitly affects the processing of emotional faces between sensory modalities.

Link to article

Dr. Linda Polka
POLKA, L. (Nazzi, T., Goyet, L., Sundara, M., & Polka, L.) (2012). Différences linguistiques et dialectales dans la mise en place des procédures de segmentation de la parole. Enfance, 127-146.

Abstract: This paper presents a review of recent studies investigating the issue of the early segmentation of continuous speech into words, a step in language acquisition that is a prerequisite for lexical acquisition. After having underlined the importance of this issue, we present studies having explored young infants’ use of two major segmentation cues: distributional cues and rhythmic unit cues. The first cue is considered to be non-specific to the language spoken in the infant’s environment, while the second cue differs across languages. The first cue thus predicts similar developmental trajectories for segmentation across languages, while the second cue predicts different types of developmental trajectories according to the rhythmic type of the language in acquisition. It was found that segmentation abilities emerge around 8 months of age and develop during the months that follow, and that the weight of the different cues vary across languages, according to the developmental period, and probably as a function of dialectal differences within a given language. We then discuss the fact that word form segmentation requires in all likeliness the combined use of different segmentation cues from the youngest age. We conclude by delineating some pending issues to be addressed in future research.

Link to article

-- (Polka, L. & Sundara, M.) (2012). Word segmentation in monolingual infants acquiring Canadian English and Canadian French: Native language, cross-language, and cross-dialect comparisons. Infancy, 17(2), 198-232.

Abstract: In five experiments, we tested segmentation of word forms from natural speech materials by 8-month-old monolingual infants who are acquiring Canadian French or Canadian English. These two languages belong to different rhythm classes; Canadian French is syllable-timed and Canadian English is stress-timed. Findings of Experiments 1, 2, and 3 show that 8-month-olds acquiring either Canadian French or Canadian English can segment bisyllabic words in their native language. Thus, word segmentation is not inherently more difficult in a syllable-timed compared to a stress-timed language. Experiment 4 shows that Canadian French-learning infants can segment words in European French. Experiment 5 shows that neither Canadian French- nor Canadian English-learning infants can segment bisyllabic words in the other language. Thus, segmentation abilities of 8-month-olds acquiring either a stress-timed or syllable-timed language are language specific.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Marquis, A., Royle, P., Gonnerman, L. & Rvachew, S.) (2012). La conjugaison du verbe en début de scolarisation. Travaux interdisciplinaires sur la parole et le langage, 28, 2-13.

Abstract: We evaluated 35 Québec French children on their ability to produce regular, sub-regular, and irregular passé composé verb forms (ending in -é, -i, -u or other). An elicitation task was administered to children attending preschool or first grade. Target verbs were presented, along with images representing them, in infinitive (e.g., Marie va cacher ses poupées ‘Mary aux.pres. hide-inf. her dolls’ = ‘Mary will hide her dolls’) and present tense (e.g., Marie cache toujours ses poupées ‘Mary hide-3s. always her dolls’ = ‘Mary always hides her dolls’) contexts, in order to prime the appropriate inflectional ending. Children were asked to produce target verb forms in the passé composé (perfect past) by answering the question ‘What did he/she do yesterday?’. Results show no reduction of erroneous productions or error types with age. Response patterns highlight morphological pattern frequency effects, in addition to productivity and reliability effects, on children’s mastery of French conjugation. These data have consequences for psycholinguistic models of regular and irregular morphology processing and acquisition.

Link to article

-- (Rvachew, S. & Brosseau-Lapré, F.) (2012). An input-focused intervention for children with developmental phonological disorders. Perspectives on Language Learning and Education, 19, 31-35.

Abstract: In this article, we consider recent advances in theory and practice related to developmental phonological disorders (DPD). We consider the benefits of structured speech input to address DPD and provide a summary of a recent study designed to address phonological disorders in children using input-focused intervention. Results revealed that input-focused intervention produced gains similar to those from intervention focused on speech production practice. We then discuss clinical implications.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (White, E.J., Genesee, F., & Steinhauer, K.) (2012). Brain Responses Before and After Intensive Second Language Learning: Proficiency Based Changes and First Language Background Effects in Adult Learners. PLoS ONE, 7(12), e52318.

Abstract: This longitudinal study tracked the neuro-cognitive changes associated with second language (L2) grammar learning in adults in order to investigate how L2 processing is shaped by a learner’s first language (L1) background and L2 proficiency. Previous studies using event-related potentials (ERPs) have argued that late L2 learners cannot elicit a P600 in response to L2 grammatical structures that do not exist in the L1 or that are different in the L1 and L2. We tested whether the neuro-cognitive processes underlying this component become available after intensive L2 instruction. Korean and Chinese late L2 learners of English were tested at the beginning and end of a 9-week intensive English-L2 course. ERPs were recorded while participants read English sentences containing violations of regular past tense (a grammatical structure that operates differently in Korean and does not exist in Chinese). Whereas no P600 effects were present at the start of instruction, by the end of instruction significant P600s were observed for both L1 groups. Latency differences in the P600 exhibited by Chinese and Korean speakers may be attributed to differences in L1–L2 reading strategies. Across all participants, larger P600 effects at session 2 were associated with (1) higher levels of behavioural performance on an online grammaticality judgment task and (2) correct, rather than incorrect, behavioural responses. These findings suggest that the neuro-cognitive processes underlying the P600 (e.g., “grammaticalization”) are modulated by individual levels of L2 behavioural performance and learning.

Link to article

-- (Royle, P., Drury, J.E., Bourguignon, N., & Steinhauer, K.) (2012). The temporal dynamics of inflected word recognition: A masked ERP priming study of French verbs. Neuropsychologia, 50, 3542–3553. doi: 10.1016/j.neuropsychologia.2012.09.007

Abstract: Morphological aspects of human language processing have been suggested by some to be reducible to the combination of orthographic and semantic effects, while others propose that morphological structure is represented separately from semantics and orthography and involves distinct neuro-cognitive processing mechanisms. Here we used event-related brain potentials (ERPs) to investigate semantic, morphological and formal (orthographic) processing conjointly in a masked priming paradigm. We directly compared morphological to both semantic and formal/orthographic priming (shared letters) on verbs. Masked priming was used to reduce strategic effects related to prime perception and to suppress semantic priming effects. The three types of priming led to distinct ERP and behavioral patterns: semantic priming was not found, while formal and morphological priming resulted in diverging ERP patterns. These results are consistent with models of lexical processing that make reference to morphological structure. We discuss how they fit in with the existing literature and how unresolved issues could be addressed in further studies.

Link to article

-- (Klepousniotou, E., Pike, G.B., Steinhauer, K., & Gracco, V.L.) (2012). Not all ambiguous words are created equal: An EEG investigation of homonymy and polysemy. Brain & Language, 123(1), 1-7.

Abstract: Event-related potentials (ERPs) were used to investigate the time-course of meaning activation of different types of ambiguous words. Unbalanced homonymous ("pen"), balanced homonymous ("panel"), metaphorically polysemous ("lip"), and metonymically polysemous words ("rabbit") were used in a visual single-word priming delayed lexical decision task. The theoretical distinction between homonymy and polysemy was reflected in the N400 component. Homonymous words (balanced and unbalanced) showed effects of dominance/frequency with reduced N400 effects predominantly observed for dominant meanings. Polysemous words (metaphors and metonymies) showed effects of core meaning representation with both dominant and subordinate meanings showing reduced N400 effects. Furthermore, the division within polysemy, into metaphor and metonymy, was supported. Differences emerged in meaning activation patterns with the subordinate meanings of metaphor inducing differentially reduced N400 effects moving from left hemisphere electrode sites to right hemisphere electrode sites, potentially suggesting increased involvement of the right hemisphere in the processing of figurative meaning.

Link to article

-- (Bourguignon, N., Drury, J.E., Valois, D., & Steinhauer, K.) (2012). Decomposing animacy reversals between Agents and Experiencers: An ERP study. Brain and Language, 122, 179-189.

Abstract: The present study aimed to refine current hypotheses regarding thematic reversal anomalies, which have been found to elicit either N400 or – more frequently – “semantic-P600” (sP600) effects. Our goal was to investigate whether distinct ERP profiles reflect aspectual-thematic differences between Agent-Subject Verbs (ASVs; e.g., ‘to eat’) and Experiencer-Subject Verbs (ESVs; e.g., ‘to love’) in English. Inanimate subject noun phrases created reversal anomalies on both ASV and ESV. Animacy-based prominence effects and semantic association were controlled to minimize their contribution to any ERP effects. An N400 was elicited by the target verb in the ESV but not the ASV anomalies, supporting the hypothesis of a distinctive aspectual-thematic structure between ESV and ASV. Moreover, the N400 finding for English ESV shows that, in contrast to previous claims, the presence versus absence of N400s for this kind of anomaly cannot be exclusively explained in terms of typological differences across languages.

Link to article

-- (Morgan-Short, K., Steinhauer, K., Sanz, C., & Ullman, M.T.) (2012). Explicit and implicit second language training differentially affect the achievement of native-language brain patterns. Journal of Cognitive Neuroscience, 24 (4), 933-947.

Abstract: It is widely believed that adults cannot learn a foreign language in the same way that children learn a first language. However, recent evidence suggests that adult learners of a foreign language can come to rely on native-like language brain mechanisms. Here, we show that the type of language training crucially impacts this outcome. We used an artificial language paradigm to examine longitudinally whether explicit training (that approximates traditional grammar-focused classroom settings) and implicit training (that approximates immersion settings) differentially affect neural (electrophysiological) and behavioral (performance) measures of syntactic processing. Results showed that performance of explicitly and implicitly trained groups did not differ at either low or high proficiency. In contrast, electrophysiological (ERP) measures revealed striking differences between the groups' neural activity at both proficiency levels in response to syntactic violations. Implicit training yielded an N400 at low proficiency, whereas at high proficiency, it elicited a pattern typical of native speakers: an anterior negativity followed by a P600 accompanied by a late anterior negativity. Explicit training, by contrast, yielded no significant effects at low proficiency and only an anterior positivity followed by a P600 at high proficiency. Although the P600 is reminiscent of native-like processing, this response pattern as a whole is not. Thus, only implicit training led to an electrophysiological signature typical of native speakers. Overall, the results suggest that adult foreign language learners can come to rely on native-like language brain mechanisms, but that the conditions under which the language is learned may be crucial in attaining this goal.

Link to article

-- (Steinhauer, K. & Drury, J.E.) (2012). On the early left-anterior negativity (ELAN) in syntax studies. Brain and Language. 120 (2), 135-162.

Abstract: Within the framework of Friederici's (2002) neurocognitive model of sentence processing, the early left anterior negativity (ELAN) in event-related potentials (ERPs) has been claimed to be a brain marker of syntactic first-pass parsing. As ELAN components seem to be exclusively elicited by word category violations (phrase structure violations), they have been taken as strong empirical support for syntax-first models of sentence processing and have gained considerable impact on psycholinguistic theory in a variety of domains. The present article reviews relevant ELAN studies and raises a number of serious issues concerning the reliability and validity of the findings. We also discuss how baseline problems and contextual factors can contribute to early ERP effects in studies examining word category violations. We conclude that, despite the apparent wealth of ELAN data, the functional significance of these findings remains largely unclear. The present paper does not claim to have falsified the existence of ELANs or syntax-related early frontal negativities. However, by separating facts from myths, the paper attempts to make a constructive contribution to how future ERP research in the area of syntax processing may better advance our understanding of online sentence comprehension.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Elin Thordardottir & Anna Gudrun Juliusdottir) (2012). Icelandic as a second language: A longitudinal study of language knowledge and processing by school-age children. International Journal of Bilingual Education and Bilingualism, 1-25. doi: 10.1080/13670050.2012.693062

Abstract: School-age children (n=39) acquiring Icelandic as a second language were tested yearly over three years on Icelandic measures of language knowledge and language processing. Comparison with native speaker norms revealed large and significant differences for the great majority of the children. Those who scored within the normal monolingual range had a mean length of residence (LOR) of close to 8 years and had arrived in the country at an early age. Raw test scores revealed significant improvement across test times. However, the rate of learning was not sufficiently fast for the gap relative to native speakers to diminish over time. Effects of age at arrival and LOR were difficult to tease out. However, children arriving in the country in adolescence performed consistently less well than children with the same LOR arriving in mid childhood. In spite of low scores on standardized tests of language knowledge, the L2 learners scored uniformly high on an Icelandic test of nonword repetition. The acquisition of Icelandic as an L2 appears to occur at a slower rate than the L2 acquisition of English. This may be related to the grammatical complexity of the language as well as to the low global economic value of the Icelandic language.

Link to article

2011

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Thibeault, M., Baum, S., Ménard, L., Richard, G., & McFarland, D.) (2011). Articulatory movements during speech adaptation to palatal perturbation. Journal of the Acoustical Society of America, 129, 2112-2120.

Abstract: Previous work has established that speakers have difficulty making rapid compensatory adjustments in consonant production (especially in fricatives) for structural perturbations of the vocal tract induced by artificial palates with thicker-than-normal alveolar regions. The present study used electromagnetic articulography and simultaneous acoustic recordings to estimate tongue configurations during production of [s š t k] in the presence of a thin and a thick palate, before and after a practice period. Ten native speakers of English participated in the study. In keeping with previous acoustic studies, fricatives were more affected by the palate than were the stops. With the thick palate, the center of gravity was lowered, the jaw was lower, and the tongue moved further backwards and downwards. Center of gravity measures revealed complete adaptation after training, and, with practice, subjects decreased interlabial distance. The fact that adaptation effects were found for [k], which is produced with an articulatory gesture not directly impeded by the palatal perturbation, suggests a more global sensorimotor recalibration that extends beyond the specific articulatory target.

Link to article

-- (Pauker, E., Itzhak, I., Baum, S. R., & Steinhauer, K.) (2011). Co-operating and conflicting prosody in spoken English garden path sentences: Evidence from event-related potentials. Journal of Cognitive Neuroscience, 23, 2731-2751.

Abstract: In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

Link to article

Dr. Meghan Clayards
CLAYARDS, M. (Niebuhr, O., Clayards, M., Meunier, C., & Lancia, L.) (2011). On place assimilation within sibilant sequences – comparing French and English. Journal of Phonetics, 39, 429-451.

Abstract: Two parallel acoustic analyses were performed for French and English sibilant sequences, based on comparably structured read-speech corpora. They comprised all sequences of voiced and voiceless alveolar and postalveolar sibilants that can occur across word boundaries in the two languages, as well as the individual alveolar and postalveolar sibilants, combined with preceding or following labial consonants across word boundaries. The individual sibilants provide references in order to determine type and degree of place assimilation in the sequences. Based on duration and centre-of-gravity measurements that were taken for each sibilant and sibilant sequence, we found clear evidence for place assimilation not only for English, but also for French. In both languages the assimilation manifested itself gradually in the time as well as in the frequency domain. However, while in English assimilation occurred strictly regressively and primarily towards postalveolar, French assimilation was solely towards postalveolar, but in both regressive and progressive directions. Apart from these basic differences, the degree of assimilation in French and English was independent of simultaneous voice assimilation but varied considerably between the individual speakers. Overall, the context-dependent and speaker-specific assimilation patterns match well with previous findings.

Link to article

-- (Bejjanki, V.R., Clayards, M., Knill, D.C., & Aslin, R.N.) (2011). Cue integration in categorical tasks: insights from audio-visual speech perception. PLoS ONE, 6(5), e19812. doi: 10.1371/journal.pone.0019812

Abstract: Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.

Link to article

-- (Clayards, M., & Doty, E.) (2011). Automatic analysis of sibilant assimilation in English. Proceedings of Acoustics Week in Canada. Canadian Acoustics, 39(3), 194-195.

Dr. Vincent Gracco
GRACCO, V. (Shum M., Shiller D., Baum S., & Gracco V.) (2011) Sensorimotor integration for speech motor learning involves the inferior parietal cortex. European Journal of Neuroscience, 34(11), 1817-1822.

Abstract: Sensorimotor integration is important for motor learning. The inferior parietal lobe, through its connections with the frontal lobe and cerebellum, has been associated with multisensory integration and sensorimotor adaptation for motor behaviors other than speech. In the present study, the contribution of the inferior parietal cortex to speech motor learning was evaluated using repetitive transcranial magnetic stimulation (rTMS) prior to a speech motor adaptation task. Subjects' auditory feedback was altered in a manner consistent with the auditory consequences of an unintended change in tongue position during speech production, and adaptation performance was used to evaluate sensorimotor plasticity and short-term learning. Prior to the feedback alteration, rTMS or sham stimulation was applied over the left supramarginal gyrus (SMG). Subjects who underwent the sham stimulation exhibited a robust adaptive response to the feedback alteration whereas subjects who underwent rTMS exhibited a diminished adaptive response. The results suggest that the inferior parietal region, in and around SMG, plays a role in sensorimotor adaptation for speech. The interconnections of the inferior parietal cortex with inferior frontal cortex, cerebellum and primary sensory areas suggest that this region may be an important component in learning and adapting sensorimotor patterns for speech.

Link to article

-- (Tremblay, P., Deschamps, I., & Gracco, V.L.) (2011). Regional heterogeneity in the processing and the production of speech in the human planum temporale. Cortex, doi: 10.1016/j.cortex.2011.09.004

Abstract:
INTRODUCTION: The role of the left planum temporale (PT) in auditory language processing has been a central theme in cognitive neuroscience since the first descriptions of its leftward neuroanatomical asymmetry. While it is clear that PT contributes to auditory language processing there is still some uncertainty about its role in spoken language production.

METHODS: Here we examine activation patterns of the PT for speech production, speech perception and single word reading to address potential hemispheric and regional functional specialization in the human PT. To this aim, we manually segmented the left and right PT in three non-overlapping regions (medial, lateral and caudal PT) and examined, in two complementary experiments, the contribution of exogenous and endogenous auditory input on PT activation under different speech processing and production conditions.

RESULTS: Our results demonstrate that different speech tasks are associated with different regional functional activation patterns of the medial, lateral and caudal PT. These patterns are similar across hemispheres, suggesting bilateral processing of the auditory signal for speech at the level of PT.

CONCLUSIONS: Results of the present studies stress the importance of considering the anatomical complexity of the PT in interpreting fMRI data.

Link to article

-- (Beal, D., Quraan, M., Cheyne, D., Taylor, M., Gracco, V.L., & De Nil, L.) (2011). Speech-induced suppression of evoked auditory fields in children who stutter. NeuroImage, 54(4), 2994-3003.

Abstract: Auditory responses to speech sounds that are self-initiated are suppressed compared to responses to the same speech sounds during passive listening. This phenomenon is referred to as speech-induced suppression, a potentially important feedback-mediated speech-motor control process. In an earlier study, we found that both adults who do and do not stutter demonstrated a reduced amplitude of the auditory M50 and M100 responses to speech during active production relative to passive listening. It is unknown if auditory responses to self-initiated speech-motor acts are suppressed in children or if the phenomenon differs between children who do and do not stutter. As stuttering is a developmental speech disorder, examining speech-induced suppression in children may identify possible neural differences underlying stuttering close to its time of onset. We used magnetoencephalography to determine the presence of speech-induced suppression in children and to characterize the properties of speech-induced suppression in children who stutter. We examined the auditory M50 as this was the earliest robust response reproducible across our child participants and the most likely to reflect a motor-to-auditory relation. Both children who do and do not stutter demonstrated speech-induced suppression of the auditory M50. However, children who stutter had a delayed auditory M50 peak latency to vowel sounds compared to children who do not stutter, indicating a possible deficiency in their ability to efficiently integrate auditory speech information for the purpose of establishing neural representations of speech sounds.

Link to article

-- (Feng, Y., Gracco, V.L., & Max, L.) (2011). Integration of auditory and somatosensory error signals in the neural control of speech movements. Journal of Neurophysiology, 106(2), 667-679.

Abstract: We investigated auditory and somatosensory feedback contributions to the neural control of speech. In Task I, sensorimotor adaptation was studied by perturbing one of these sensory modalities or both modalities simultaneously. The first formant frequency (F1) in the auditory feedback was shifted up by a real-time processor and/or the extent of jaw opening was increased or decreased with a force field applied by a robotic device. All eight subjects lowered F1 to compensate for the up-shifted F1 in the feedback signal regardless of whether or not the jaw was perturbed. Adaptive changes in subjects' acoustic output resulted from adjustments in articulatory movements of the jaw or tongue. Adaptation in jaw opening extent in response to the mechanical perturbation occurred only when no auditory feedback perturbation was applied or when the direction of adaptation to the force was compatible with the direction of adaptation to a simultaneous acoustic perturbation. In Tasks II and III, subjects' auditory and somatosensory precision and accuracy were estimated. Correlation analyses showed that the relationships (a) between F1 adaptation extent and auditory acuity for F1 and (b) between jaw position adaptation extent and somatosensory acuity for jaw position were weak and statistically not significant. Taken together, the combined findings from this work suggest that, in speech production, sensorimotor adaptation updates the underlying control mechanisms in such a way that the planning of vowel-related articulatory movements takes into account a complex integration of error signals from previous trials but likely with a dominant role for the auditory modality.

Link to article

Dr. Marc Pell
PELL, M.D. & Kotz, S.A. (2011). On the time course of vocal emotion recognition. PLoS ONE, 6(11), e27256. doi: 10.1371/journal.pone.0027256

Abstract: How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the "identification point" for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.

Link to article

-- (Jesso, S., Morlog, D., Ross, S., Pell, M.D., Pasternak, S., Mitchell, D., Kertesz, A., & Finger, E.) (2011). The effects of oxytocin on social cognition and behaviour in frontotemporal dementia. Brain, 134, 2493-2501.

Abstract: Patients with behavioural variant frontotemporal dementia demonstrate abnormalities in behaviour and social cognition, including deficits in emotion recognition. Recent studies suggest that the neuropeptide oxytocin is an important mediator of social behaviour, enhancing prosocial behaviours and some aspects of emotion recognition across species. The objective of this study was to assess the effects of a single dose of intranasal oxytocin on neuropsychiatric behaviours and emotion processing in patients with behavioural variant frontotemporal dementia. In a double-blind, placebo-controlled, randomized cross-over design, 20 patients with behavioural variant frontotemporal dementia received one dose of 24 IU of intranasal oxytocin or placebo and then completed emotion recognition tasks known to be affected by frontotemporal dementia and by oxytocin. Caregivers completed validated behavioural ratings at 8 h and 1 week following drug administrations. A significant improvement in scores on the Neuropsychiatric Inventory was observed on the evening of oxytocin administration compared with placebo and compared with baseline ratings. Oxytocin was also associated with reduced recognition of angry facial expressions by patients with behavioural variant frontotemporal dementia. Together these findings suggest that oxytocin is a potentially promising, novel symptomatic treatment candidate for patients with behavioural variant frontotemporal dementia and that further study of this neuropeptide in frontotemporal dementia is warranted.

Link to article

-- (Pell, M.D., Jaywant, A., Monetta, L., & Kotz, S.A.) (2011). Emotional speech processing: disentangling the effects of prosody and semantic cues. Cognition & Emotion, 25 (5), 834-853.

Abstract: To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody-semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

Link to article

-- (Paulmann, S. & Pell, M.D.) (2011). Is there an advantage for recognizing multi-modal emotional stimuli? Motivation and Emotion, 35 (2), 192-201.

Abstract: Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2011). Recognizing sarcasm without language: A cross-linguistic study of English and Cantonese. Pragmatics & Cognition, 19 (2), 203-223. (Special issue on “Prosody and Humour”.)

Abstract: The goal of the present research was to determine whether certain speaker intentions conveyed through prosody in an unfamiliar language can be accurately recognized. English and Cantonese utterances expressing sarcasm, sincerity, humorous irony, or neutrality through prosody were presented to English and Cantonese listeners unfamiliar with the other language. Listeners identified the communicative intent of utterances in both languages in a crossed design. Participants successfully identified sarcasm spoken in their native language but identified sarcasm at near-chance levels in the unfamiliar language. Both groups were relatively more successful at recognizing the other attitudes when listening to the unfamiliar language (in addition to the native language). Our data suggest that while sarcastic utterances in Cantonese and English share certain acoustic features, these cues are insufficient to recognize sarcasm between languages; rather, this ability depends on (native) language experience.

Link to article

Dr. Linda Polka
POLKA, L. (Best, C., Bradlow, A., Guion, S., & Polka, L.) (2011). Using the lens of phonetic experience to resolve phonological forms. Journal of Phonetics, 39, 453-455.

Abstract: This special issue of the Journal contains a selection of papers developed from original presentations at the 2nd ASA Special Workshop on Speech with the theme of Cross-Language Speech Perception and Variations in Linguistic Experience. The papers represent major theoretical and empirical contributions that converge upon the common theme of how our perception of phonological forms is guided and constrained by our experience with the phonetic details of the language(s) we have learned. Several of the papers presented here offer key theoretical advances and lay out novel or newly expanded frameworks that increase our understanding of speech perception as shaped by universal, first language acquisition abilities, general learning mechanisms, and language-specific perceptual tuning. Others offer careful empirical investigations of language learning by simultaneous bilinguals, as well as by later second language learners, and discuss their new findings in light of the theoretical proposals. The work presented here will provide a stimulating and thoughtful impetus toward further progress on the fundamentally significant issue of understanding how language experience shapes our perception of phonetic details and phonological structure in spoken language.

Link to article

-- (Polka, L., & Bohn, O-S.) (2011). Natural Referent Vowel (NRV) framework: An emerging view of early phonetic development. Journal of Phonetics, 39, 467-478.

Abstract: The aim of this paper is to provide an overview of an emerging new framework for understanding early phonetic development: the Natural Referent Vowel (NRV) framework. The initial support for this framework was the finding that directional asymmetries occur often in infant vowel discrimination. The asymmetries point to an underlying perceptual bias favoring vowels that fall closer to the periphery of the F1/F2 vowel space. In Polka and Bohn (2003) we reviewed the data on asymmetries in infant vowel perception and proposed that certain vowels act as natural referent vowels and play an important role in shaping vowel perception. In this paper we review findings from studies of infant and adult vowel perception that emerged since Polka and Bohn (2003), from other labs and from our own work, and we formally introduce the NRV framework. We outline how this framework connects with linguistic typology and other models of speech perception and discuss the challenges and promise of NRV as a conceptual tool for advancing our understanding of phonetic development.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (MacLeod, A.A.N., Laukys, K., & Rvachew, S.) (2011). The impact of bilingual language learning on whole-word complexity and segmental accuracy among children aged 18 and 36 months. International Journal of Speech-Language Pathology, 13(6), 490-499.

Abstract: This study investigates the phonological acquisition of 19 monolingual English children and 21 English–French bilingual children at 18 and 36 months. It contributes to the understanding of age-related changes to phonological complexity and to differences due to bilingual language development. In addition, preliminary normative data is presented for English children and English–French bilingual children. Five measures were targeted to represent a range of indices of phonological development: the phonological mean length of utterance (pMLU) of the adult target, the pMLU produced by the child, the proportion of whole-word proximity (PWP), proportion of consonants correct (PCC), and proportion of whole words correct (PWC). The measures of children's productions showed improvements from 18 to 36 months; however, the rate of change varied across the measures, with PWP improving faster, then PCC, and finally PWC. The results indicated that bilingual children can keep pace with their monolingual peers at both 18 months and 36 months of age, at least in their dominant language. Based on these findings, discrepancies with monolingual phonological development that one might observe in a bilingual child's non-dominant language could be explained by reduced exposure to the language rather than a general slower acquisition of phonology.

Link to article

-- (Rvachew, S., Mattock, K., Clayards, M., Chiang, P., & Brosseau-Lapré, F.) (2011). Perceptual considerations in multilingual adult and child speech acquisition (pp. 58-68). In S. McLeod & B.A. Goldstein (Eds.), Multilingual Aspects of Speech Sound Disorders in Children. Bristol, UK: Multilingual Matters.

Book description: Multilingual Aspects of Speech Sound Disorders in Children explores both multilingual and multicultural aspects of children with speech sound disorders. The 30 chapters have been written by 44 authors from 16 different countries about 112 languages and dialects. The book is designed to translate research into clinical practice. It is divided into three sections: (1) Foundations, (2) Multilingual speech acquisition, (3) Speech-language pathology practice. An introductory chapter discusses cross-linguistic and multilingual aspects of speech sound disorders in children. Subsequent chapters address speech sound acquisition, how the disorder manifests in different languages, cultural contexts, and speakers, and address diagnosis, assessment and intervention. The research chapters synthesize available research across a wide range of languages. A unique feature of this book is the set of chapters that translate research into clinical practice. These chapters provide real-life vignettes for specific geographical or linguistic contexts.

Book information:
ISBN: 9781847695123

Link to book

-- (Brosseau-Lapré, F., Rvachew, S., Clayards, M., & Dickson, D.) (2011). Stimulus variability and perceptual learning of non-native vowel categories. Applied Psycholinguistics, doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/?/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

-- (Rvachew, S. & Brosseau-Lapré, F.) (2011). Preschoolers with phonological disorders learn language and literacy skills in 12 weeks. Communiqué, 25(3), 18-19.
Dr. Karsten Steinhauer
STEINHAUER, K. (Hwang, H. & Steinhauer, K.) (2011). Phrase length matters: The interplay between implicit prosody and syntax in Korean ‘garden path’ sentences. Journal of Cognitive Neuroscience, 23 (11), 3555-3575. (doi:10.1162/jocn_a_00001)

Abstract: In spoken language comprehension, syntactic parsing decisions interact with prosodic phrasing, which is directly affected by phrase length. Here we used ERPs to examine whether a similar effect holds for the on-line processing of written sentences during silent reading, as suggested by theories of "implicit prosody." Ambiguous Korean sentence beginnings with two distinct interpretations were manipulated by increasing the length of sentence-initial subject noun phrases (NPs). As expected, only long NPs triggered an additional prosodic boundary reflected by a closure positive shift (CPS) in ERPs. When sentence materials further downstream disambiguated the initially dispreferred interpretation, the resulting P600 component reflecting processing difficulties ("garden path" effects) was smaller in amplitude for sentences with long NPs. Interestingly, additional prosodic revisions required only for the short subject disambiguated condition (the delayed insertion of an implicit prosodic boundary after the subject NP) were reflected by a frontal P600-like positivity, which may be interpreted in terms of a delayed CPS brain response. These data suggest that the subvocally generated prosodic boundary after the long subject NP facilitated the recovery from a garden path, thus primarily supporting one of two competing theoretical frameworks on implicit prosody. Our results underline the prosodic nature of the cognitive processes underlying phrase length effects and contribute cross-linguistic evidence regarding the on-line use of implicit prosody for parsing decisions in silent reading.

Link to article

-- (Pauker, E., Itzhak, I., Baum, S.R., & Steinhauer, K.) (2011). Effects of cooperating and conflicting prosody in spoken English garden path sentences: ERP evidence for the boundary deletion hypothesis. Journal of Cognitive Neuroscience, 23 (10), 2731-2751. (doi: 10.1162/jocn.2011.21610)

Abstract: In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

Link to article

-- (Steinhauer, K.) (2011). Combining Behavioral Measures and Brain Potentials to Study Categorical Prosodic Boundary Perception and Relative Boundary Strength. Proceedings of the 17th International Congress of Phonetic Sciences (ICPhS XVII), Hong Kong (China), pp. 1898-1901.

Abstract: Two controversial issues in speech prosody research concern (i) the traditional notion of categorical boundary perception (i.e., intermediate phrase [ip] boundaries versus intonation phrase boundaries [IPh]), and (ii) the suggestion that the relative strength of competing boundaries (rather than the mere presence of boundaries) may account for prosody effects on sentence interpretation. An alternative to qualitatively distinct boundary categories is the idea of a “gradient quantitative boundary size” (e.g., Wagner & Crivellaro [14]), which may also imply a graded spectrum of relative strength effects. Based on promising behavioral data supporting this view, we propose to study these predictions in more detail using event-related potentials (ERPs). In phonetics and phonology, these electrophysiological measures have been shown to provide an excellent tool to investigate online processes across the entire time course of a spoken utterance, with a temporal resolution in the range of milliseconds. Thus, ERPs are expected to reflect both the real-time processing and integration at the boundary positions as well as its subsequent effects on sentence interpretation.

Link to article

-- (Prévost, A.E., Goad, H., & Steinhauer, K.) (2011). Prosodic transfer: An event-related potentials approach. Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznan, Poland, 1-3 May 2010. (eds. Katarzyna Dziubalska-Kolaczyk, Magdalena Wrembel, Malgorzata Kul), 361-366. (ISBN: 978-83-928167-9-9)

Abstract: This study investigates the possible electrophysiological evidence of the influence of L1 prosodic structure on a speaker's second language, specifically in the context of the Prosodic Transfer Hypothesis of Goad & White (2004, 2009), with Turkish as the L1 and English as the L2. Turkish prosodic structure differs from English in its treatment of articles in ways that suggest that Turkish articles are affixal clitics whereas English articles are free clitics. Crucially, it follows that a correct English article-adjective-noun sequence violates Turkish prosody, since adjectives cannot intervene between articles and noun heads in Turkish, and therefore that Turkish speakers will be unable to correctly prosodify the sequence. Behavioural production evidence in which Turkish speakers delete, substitute, or stress the English article in asymmetrical ways predictable by prosodic structure robustly supports this claim. The current experiment uses ERP recording to elucidate the online processing of Turkish speakers hearing English sentences that either do or do not violate Turkish prosodic structure, with the aim of demonstrating real-time neural responses to L1-L2 prosodic mismatch.

Link to article

-- (Steinhauer, K. & Connolly, J.F.) (2011). Event-related potentials in the study of language. In: Whitaker, H. (ed.), 191-203. Concise Encyclopedia of Brain and Language. Oxford: Elsevier.

Book description: This volume describes, in up-to-date terminology and authoritative interpretation, the field of neurolinguistics, the science concerned with the neural mechanisms underlying the comprehension, production and abstract knowledge of spoken, signed or written language. An edited anthology of 165 articles from the award-winning Encyclopedia of Language and Linguistics 2nd edition, Encyclopedia of Neuroscience 4th Edition and Encyclopedia of the Neurological Sciences and Neurological Disorders, it provides the most comprehensive one-volume reference solution for scientists working with language and the brain ever published.

Book information:
ISBN-10: 0080964982
ISBN-13: 9780080964980

Link to article

-- (Prévost, A.E., Goad, H., & Steinhauer, K.) (2011). Prosodic transfer: An event-related potentials approach. Achievements and Perspectives in SLA of speech II: New Sounds 2010 (eds. Magdalena Wrembel, Malgorzata Kul, Katarzyna Dziubalska-Kolaczyk), 217-226. Peter Lang (ISBN 978-3-631-60723-7 hb).

Book description: This publication constitutes a selection of papers presented at the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, held in Poznan, Poland. It consists of two volumes, presenting state-of-the-art achievements and perspectives for future research related to the acquisition of second language phonetics and phonology. The key issues include the development of explanatory frameworks of phonological SLA, the expanded scope of domains under investigation, modern methods applied in phonological research, and a new take on the causal variables related to ultimate proficiency in L2 speech. This second volume contains a selection of 26 articles that cover a wide variety of themes including L2 speech perception and production, segmental and prosodic features, as well as factors related to individual variability and foreign accent.

Book information:
ISBN: 978-3-631-60723-7 hb

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2011). The relationship between bilingual exposure and vocabulary development. International Journal of Bilingualism, 14(5), 426-445. doi:10.1177/1367006911403202

Abstract: The relationship between amount of bilingual exposure and performance in receptive and expressive vocabulary in French and English was examined in 5-year-old Montreal children acquiring French and English simultaneously as well as in monolingual children. The children were equated on age, socio-economic status, nonverbal cognition, and on minority/majority language status (both languages have equal status), but differed in the amount of exposure they had received to each language spanning the continuum of bilingual exposure levels. A strong relationship was found between amount of exposure to a language and performance in that language. This relationship was different for receptive and expressive vocabulary. Children having been exposed to both languages equally scored comparably to monolingual children in receptive vocabulary, but greater exposure was required to match monolingual standards in expressive vocabulary. Contrary to many previous studies, the bilingual children were not found to exhibit a significant gap relative to monolingual children in receptive vocabulary. This was attributed to the favorable language-learning environment for French and English in Montreal and might also be related to the fact that the two languages are fairly closely related. Children with early and late onset (before 6 months and after 20 months) of bilingual exposure who were equated on overall amount of exposure to each language did not differ significantly on any vocabulary measure.

Link to article

-- (Thordardottir, E., Kehayia, E., Mazer, B., Lessard, N., Majnemer, A., Sutton, A., Trudeau, N., & Chilingarian, G.) (2011). Sensitivity and specificity of French language measures for the identification of Primary Language Impairment at age 5. Journal of Speech, Language and Hearing Research, 54, 580-597.

Abstract:
PURPOSE: Research on the diagnostic accuracy of different language measures has focused primarily on English. This study examined the sensitivity and specificity of a range of measures of language knowledge and language processing for the identification of primary language impairment (PLI) in French-speaking children. Because of the lack of well-documented language measures in French, it is difficult to accurately identify affected children, and thus research in this area is impeded.

METHOD: The performance of 14 monolingual French-speaking children with confirmed, clinically identified PLI (M = 61.4 months of age, SD = 7.2 months) on a range of language and language processing measures was compared with the performance of 78 children with confirmed typical language development (M age = 58.9 months, SD = 5.7). These included evaluations of receptive vocabulary, receptive grammar, spontaneous language, narrative production, nonword repetition, sentence imitation, following directions, rapid automatized naming, and digit span. Sensitivity, specificity, and likelihood ratios were determined at 3 cutoff points: (a) -1 SD, (b) -1.28 SD, and (c) -2 SD below mean values. Receiver operating characteristic curves were used to identify the most accurate cutoff for each measure.

RESULTS: Significant differences between the PLI and typical language development groups were found for the majority of the language measures, with moderate to large effect sizes. The measures differed in their sensitivity and specificity, as well as in which cutoff point provided the most accurate decision. Ideal cutoff points were in most cases between the mean and -1 SD. Sentence imitation and following directions appeared to be the most accurate measures.

CONCLUSIONS: This study provides evidence that standardized measures of language and language processing provide accurate identification of PLI in French. The results are strikingly similar to previous results for English, suggesting that in spite of structural differences between the languages, PLI in both languages involves a generalized language delay across linguistic domains, which can be identified in a similar way using existing standardized measures.

Link to article
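The diagnostic-accuracy computation described in the abstract above (sensitivity and specificity of a measure at a fixed cutoff) can be sketched in a few lines. The scores and group sizes below are invented for illustration; nothing here comes from the study's data.

```python
# Sensitivity/specificity of a norm-referenced language score at a cutoff.
# All numbers are hypothetical; lower z-score = poorer performance.

def sensitivity_specificity(impaired_scores, typical_scores, cutoff):
    """Classify score <= cutoff as impaired; return (sensitivity, specificity)."""
    true_pos = sum(1 for s in impaired_scores if s <= cutoff)  # impaired flagged
    true_neg = sum(1 for s in typical_scores if s > cutoff)    # typical cleared
    return true_pos / len(impaired_scores), true_neg / len(typical_scores)

# Hypothetical z-scores for a clinical (PLI) and a typically developing group.
pli_scores = [-2.1, -1.6, -1.3, -0.9, -0.4]
td_scores = [-1.1, -0.6, -0.2, 0.1, 0.4, 0.8, 1.2, 1.5]

# The three cutoffs examined in the study design: -1, -1.28, and -2 SD.
for cutoff in (-1.0, -1.28, -2.0):
    sens, spec = sensitivity_specificity(pli_scores, td_scores, cutoff)
    print(f"cutoff {cutoff:+.2f} SD: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Stricter (more negative) cutoffs trade sensitivity for specificity, which is why the study compared cutoffs and used ROC curves to find the most accurate one per measure.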

2010

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Dwivedi, V., Drury, J., Molnar, M., Phillips, N., Baum, S., & Steinhauer, K.) (2010). ERPs reveal sensitivity to hypothetical contexts in spoken discourse. Neuroreport, 21, 791-795.

Abstract: We used event-related potentials to examine the interaction between two dimensions of discourse comprehension: (i) referential dependencies across sentences (e.g. between the pronoun 'it' and its antecedent 'a novel' in: 'John is reading a novel. It ends quite abruptly'), and (ii) the distinction between reference to events/situations and entities/individuals in the real/actual world versus in hypothetical possible worlds. Cross-sentential referential dependencies are disrupted when the antecedent for a pronoun is embedded in a sentence introducing hypothetical entities (e.g. 'John is considering writing a novel. It ends quite abruptly'). An earlier event-related potential reading study showed such disruptions yielded a P600-like frontal positivity. Here we replicate this effect using auditorily presented sentences and discuss the implications for our understanding of discourse-level language processing.

Link to article

-- (Dwivedi, V., Phillips, N., Einagel, S., & Baum, S.) (2010). The neural underpinnings of linguistic ambiguity. Brain Research, 1311, 93-109.

Abstract: We used event-related brain potentials (ERPs) in order to investigate how definite NP anaphors are integrated into semantically ambiguous contexts. Although sentences such as Every kid climbed a tree lack any syntactic or lexical ambiguity, these structures exhibit two possible meanings, where either many trees or only one tree was climbed. This semantic ambiguity is the result of quantifier scope ambiguity. Previous behavioural studies have shown that a plural definite NP continuation is preferred (as reflected in a continuation sentence, e.g., The trees were in the park) over singular NPs (e.g., The tree was in the park). This study aimed to identify the neurophysiological pattern associated with the integration of the continuation sentences, as well as the time course of this process. We examined ERPs elicited by the noun and verb in continuation sentences following ambiguous and unambiguous context sentences. A sustained negative shift was most evident at the Verb position in sentences exhibiting scope ambiguity. Furthermore, this waveform did not differentiate itself until 900 ms after the presentation of the Noun, suggesting that the parser waits to assign meaning in contexts exhibiting quantifier scope ambiguity, leaving such contexts as underspecified representations.

Link to article

-- (Itzhak, I., Pauker, E., Drury, J., Baum, S., & Steinhauer, K.) (2010). Interactions of prosody and transitivity bias in the processing of closure ambiguities in spoken sentences: ERP evidence. Neuroreport, 21, 8-13.
-- (Steinhauer, K., Pauker, E., Itzhak, I., Abada, S., & Baum, S.) (2010). Prosody-syntax interactions in aging: Event-related potentials reveal dissociations between on-line and off-line measures. Neuroscience Letters, 472, 133-138.

Abstract: This study used ERPs to determine whether older adults use prosody in resolving early and late closure ambiguities comparably to young adults. Participants made off-line acceptability judgments on well-formed sentences or those containing prosody-syntax mismatches. Behaviorally, both groups identified mismatches, but older subjects accepted mismatches significantly more often than younger participants. ERP results demonstrate CPS components and garden-path effects (P600s) in both groups; however, older adults displayed no N400 and more anterior P600 components. The data provide the first electrophysiological evidence suggesting that older adults process and integrate prosodic information in real-time, despite off-line behavioral differences. Age-related differences in neurocognitive processing mechanisms likely contribute to this dissociation.

Link to article

Dr. Meghan Clayards
CLAYARDS, M. (2010). Using probability distributions to account for recognition of canonical and reduced word forms. Proceedings of the Annual Meeting of the Linguistic Society of America, Baltimore, MD.

Abstract: The frequency of a word form influences how efficiently it is processed, but canonical forms often show an advantage over reduced forms even when the reduced form is more frequent. This paper addresses this paradox by considering a model in which representations of lexical items consist of a distribution over forms. Optimal inference given these distributions accounts for item based differences in recognition of phonological variants and canonical form advantage.

Link to article
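The recognition-as-inference idea summarized above can be sketched as Bayesian inference over a lexicon in which each item is a distribution over surface forms. The words, probabilities, and function below are invented for illustration and do not reproduce the paper's actual model or data.

```python
# Toy Bayesian recognizer: each lexical item is a prior probability plus a
# distribution over its surface forms (all numbers hypothetical).

def recognize(heard_form, lexicon):
    """Return the posterior P(word | heard form) by Bayes' rule."""
    joint = {word: prior * forms.get(heard_form, 0.0)
             for word, (prior, forms) in lexicon.items()}
    total = sum(joint.values())
    return {word: p / total for word, p in joint.items()} if total else joint

# Two words that share a reduced form "probly" but have distinct canonical forms.
lexicon = {
    "probably": (0.5, {"probably": 0.5, "probly": 0.5}),
    "probable": (0.5, {"probable": 0.7, "probly": 0.3}),
}

canonical = recognize("probably", lexicon)  # unambiguous: posterior 1.0
reduced = recognize("probly", lexicon)      # ambiguous between the two words
```

Under these made-up numbers the canonical token is identified with certainty while the shared reduced token yields a split posterior, the kind of item-based canonical-form advantage the abstract describes.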

Dr. Vincent Gracco
GRACCO, V. (Shiller, D., Gracco, V.L., & Rvachew, S.) (2010). Auditory-motor learning during speech production in 9-11 year-old children. PLoS ONE, 5(9), e12975.

Abstract:
BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

Link to article

-- (Beal, D., Cheyne, D., Gracco, V.L., & De Nil, L.) (2010). Auditory evoked responses to vocalization during passive listening and active generation in adults who stutter. NeuroImage, 52, 1645-1653.

Abstract: We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering.

Link to article

-- (Tiede, M., Boyce, S., Espy-Wilson, C., & Gracco, V.L.) (2010). Variability of North American English /r/ production in response to palatal perturbation. In B. Maassen & P. H.H.M. van Lieshout (Eds.), Speech Motor Control: New Developments in Basic and Applied Research (pp. 53-67). Oxford University Press.

Book description:
Speaking is not only the basic mode of communication, but also the most complex motor skill humans can perform. Disorders of speech and language are the most common sequelae of brain disease or injury, a condition faced by millions of people each year. Health care practitioners need to interact with basic scientists in order to develop and evaluate new methods of clinical diagnosis and therapy to help their patients overcome or compensate for their communication difficulties. In recent years, collaboration among those in the disciplines of neurophysiology, cognitive psychology, mathematical modelling, neuroscience, and speech science has helped accelerate progress in the field.

This book presents the latest and most important theoretical developments in the area of speech motor control, offering new insights by leaders in their field into speech disorders. The scope of this book is broad, presenting state-of-the-art research in the areas of modelling, genetics, brain imaging, and behavioral experimentation, in addition to clinical applications.

The book will be valuable for researchers and clinicians in speech-language pathology, cognitive neuroscience, clinical psychology, and neurology.

Book information:
ISBN13: 978-0-19-923579-7
ISBN10: 0-19-923579-1

Link to book

-- (Tremblay, P., & Gracco, V.L.) (2010). On the selection of words and oral motor responses: evidence of a response-independent fronto-parietal network. Cortex, 46(1), 15-28.

Abstract: Several brain areas including the medial and lateral premotor areas, and the prefrontal cortex, are thought to be involved in response selection. It is unclear, however, what the specific contribution of each of these areas is. It is also unclear whether the response selection process operates independently of response modality or whether a number of specialized processes are recruited depending on the behaviour of interest. In the present study, the neural substrates for different response selection modes (volitional and stimulus-driven) were compared, using sparse-sampling functional magnetic resonance imaging, for two different response modalities: words and comparable oral motor gestures. Results demonstrate that response selection relies on a network of prefrontal, premotor and parietal areas, with the pre-supplementary motor area (pre-SMA) at the core of the process. Overall, this network is sensitive to the manner in which responses are selected, despite the absence of a medio-lateral axis, as was suggested by Goldberg (1985). In contrast, this network shows little sensitivity to the modality of the response, suggestive of a domain-general selection process. Theoretical implications of these results are discussed.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Lee, I., Bosshart, K. & Ozonoff, S.) (2010). How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism. Neuropsychologia, 48(9), 2730-2739.

Abstract: Conversation is a primary area of difficulty for individuals with high-functioning autism (HFA) although they have unimpaired formal language abilities. This likely stems from the unstructured nature of face-to-face conversation as well as the need to coordinate other modes of communication (e.g. eye gaze) with speech. We conducted a quantitative analysis of both verbal exchange and gaze data obtained from conversations between children with HFA and an adult, compared with those of typically developing children matched on language level. We examined a new question: how does speaking about a topic of interest affect reciprocity of verbal exchange and eye gaze? Conversations on generic topics were compared with those on individuals' circumscribed interests, particularly intense interests characteristic of HFA. Two opposing hypotheses were evaluated. Speaking about a topic of interest may improve reciprocity in conversation by increasing participants' motivation and engagement. Alternatively, it could engender more one-sided interaction, given the engrossing nature of circumscribed interests. In their verbal exchanges HFA participants demonstrated decreased reciprocity during the interest topic, evidenced by fewer contingent utterances and more monologue-style speech. Moreover, a measure of stereotyped behaviour and restricted interest symptoms was inversely related to reciprocal verbal exchange. However, both the HFA and comparison groups looked significantly more to their partner's face during the interest than generic topic. Our interpretation of results across modalities is that circumscribed interests led HFA participants to be less adaptive to their partner verbally, but speaking about a highly practiced topic allowed for increased gaze to the partner. The function of this increased gaze to partner may differ for the HFA and comparison groups.

Link to article

Dr. Marc Pell
PELL, M.D. (Paulmann, S. & Pell, M.D.) (2010). Dynamic emotion processing in Parkinson’s disease as a function of channel availability. Journal of Clinical and Experimental Neuropsychology, 32(8), 822-835.

Abstract: Parkinson's disease (PD) is linked to impairments for recognizing emotional expressions, although the extent and nature of these communication deficits are uncertain. Here, we compared how adults with and without PD recognize dynamic expressions of emotion in three channels, involving lexical-semantic, prosody, and/or facial cues (each channel was investigated individually and in combination). Results indicated that while emotion recognition increased with channel availability in the PD group, patients performed significantly worse than healthy participants in all conditions. Difficulties processing dynamic emotional stimuli in PD could be linked to striatal dysfunction, which reduces efficient binding of sequential information in the disease.

Link to article

-- (Paulmann, S. & Pell, M.D.) (2010). Contextual influences of emotional speech prosody on face processing: how much is enough? Cognitive, Affective and Behavioral Neuroscience, 10, 230-242.

Abstract: The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always lead to a processing advantage when prosodic information is very short in duration.

Link to article

-- (Dimoska, A., McDonald, S., Pell, M.D., Tate, R., & James, C.) (2010). Recognizing vocal expressions of emotion in patients with social skills deficits following traumatic brain injury. Journal of the International Neuropsychological Society, 16, 369-382.

Abstract: Perception of emotion in voice is impaired following traumatic brain injury (TBI). This study examined whether an inability to concurrently process semantic information (the "what") and emotional prosody (the "how") of spoken speech contributes to impaired recognition of emotional prosody and whether impairment is ameliorated when little or no semantic information is provided. Eighteen individuals with moderate-to-severe TBI showing social skills deficits during inpatient rehabilitation were compared with 18 demographically matched controls. Participants completed two discrimination tasks using spoken sentences that varied in the amount of semantic information: that is, (1) well-formed English, (2) a nonsense language, and (3) low-pass filtered speech producing "muffled" voices. Reducing semantic processing demands did not improve perception of emotional prosody. The TBI group were significantly less accurate than controls. Impairment was greater within the TBI group when accessing semantic memory to label the emotion of sentences, compared with simply making "same/different" judgments. Findings suggest an impairment of processing emotional prosody itself rather than semantic processing demands which leads to an over-reliance on the "what" rather than the "how" in conversational remarks. Emotional recognition accuracy was significantly related to the ability to inhibit prepotent responses, consistent with neuroanatomical research suggesting similar ventrofrontal systems subserve both functions.

Link to article

-- (Jaywant, A. & Pell, M.D.) (2010). Listener impressions of speakers with Parkinson’s disease. Journal of the International Neuropsychological Society, 16, 49-57.

Abstract: Parkinson’s disease (PD) has several negative effects on speech production and communication. However, few studies have looked at how speech patterns in PD contribute to linguistic and social impressions formed about PD patients from the perspective of listeners. In this study, discourse recordings elicited from nondemented PD speakers (n = 18) and healthy controls (n = 17) were presented to 30 listeners unaware of the speakers’ disease status. In separate conditions, listeners rated the discourse samples based on their impressions of the speaker or of the linguistic content. Acoustic measures of the speech samples were analyzed for comparison with listeners’ perceptual ratings. Results showed that although listeners rated the content of Parkinsonian discourse as linguistically appropriate (e.g., coherent, well-organized, easy to follow), the PD speakers were perceived as significantly less interested, less involved, less happy, and less friendly than healthy speakers. Negative social impressions demonstrated a relationship to changes in vocal intensity (loudness) and temporal characteristics (dysfluencies) of Parkinsonian speech. Our findings emphasize important psychosocial ramifications of PD that are likely to limit opportunities for communication and social interaction for those affected, because of the negative impressions drawn by listeners based on their speaking voice.

Link to article

-- (Dara, C. & Pell, M.D.) (2010). Hemispheric contributions for processing pitch and speech rate cues to emotion: fMRI data. Speech Prosody 5th International Conference Proceedings, Chicago, USA.

Abstract: To determine the neural mechanisms involved in vocal emotion processing, the current study employed functional magnetic resonance imaging (fMRI) to investigate the neural structures engaged in processing acoustic cues to infer emotional meaning. Two critical acoustic cues – pitch and speech rate – were systematically manipulated and presented in a discrimination task. Results confirmed that a bilateral network constituting frontal and temporal regions is engaged when discriminating vocal emotion expressions; however, we observed greater sensitivity to pitch cues in the right mid superior temporal gyrus/sulcus (STG/STS), whereas activation in both left and right mid STG/STS was observed for speech rate processing.

Link to article

-- (Pell, M.D., Jaywant, A., Monetta, L., & Kotz, S.A.) (2010). The contributions of prosody and semantic context in emotional speech processing. Speech Prosody 5th International Conference Proceedings, Chicago, USA.

Abstract: The present study examined the relative contributions of prosody and semantic context in the implicit processing of emotions from spoken language. In three separate tasks, we compared the degree to which happy and sad emotional prosody alone, emotional semantic context alone, and combined emotional prosody and semantic information would prime subsequent decisions about an emotionally congruent or incongruent facial expression. In all three tasks, we observed a congruency effect, whereby prosodic or semantic features of the prime facilitated decisions about emotionally-congruent faces. However, the extent of this priming was similar in the three tasks. Our results imply that prosody and semantic cues hold similar potential to activate emotion-related knowledge in memory when they are implicitly processed in speech, due to underlying connections in associative memory shared by prosody, semantics, and facial displays of emotion.

Link to article

Dr. Linda Polka
POLKA, L. (Mattock, K., Polka, L., & Rvachew, S.) (2010). The first steps in word learning are easier when the shoes fit: Comparing monolingual and bilingual infants. Developmental Science, 13(1), 229-243.

Abstract: English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a / b / vs. / g / contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled ‘bowce’ or ‘gowce’ and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability was the key factor to success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Shiller, D., & Gracco, V.) (2010). Auditory-motor learning during speech production in 9-11 year-old children. PLoS ONE, 5(9), e12975.

Abstract:
BACKGROUND: Hearing ability is essential for normal speech development, however the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9- to 11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

Link to article

-- (Shiller, D. M., Rvachew, S., & Brosseau-Lapré, F.) (2010). Importance of the auditory perceptual target to the achievement of speech production accuracy. Canadian Journal of Speech-Language Pathology and Audiology, 34, 181-192.

Abstract: The purpose of this paper is to discuss the clinical implications of a model of the segmental component of speech motor control called the DIVA model (Directions into Velocities of Articulators). The DIVA model is implemented on the assumption that the infant has perceptual knowledge of the auditory targets in place before learning accurate production of speech sounds and suggests that difficulties with speech perception would lead to imprecise speech and inaccurate articulation. We demonstrate through a literature review that children with speech delay, on average, have significant difficulty with perceptual knowledge of speech sounds that they misarticulate. We hypothesize, on the basis of the DIVA model, that a child with speech delay who has good perceptual knowledge of a phonological target will learn to make the appropriate articulatory adjustments to achieve phonological goals. We support the hypothesis with two case studies. The first case study involved short-term learning in a laboratory task by a child with speech delay. Although the child misarticulated sibilants, he had good perceptual and articulatory knowledge of vowels. He demonstrated that he was fully capable of spontaneously adapting his articulatory patterns to compensate for altered feedback of his own speech output. The second case study involved longer-term learning during speech therapy. This francophone child received 6 weeks of intervention that was largely directed at improving her perceptual knowledge of /?/, leading to significant improvements in her ability to produce this phoneme correctly, both during minimal pair activities in therapy and during post-treatment testing.

Link to article

-- (Mortimer, J., & Rvachew, S.) (2010). A longitudinal investigation of morpho-syntax in children with Speech Sound Disorders. Journal of Communication Disorders, 43, 61-76.

Abstract:
PURPOSE: The intent of this study was to examine the longitudinal morpho-syntactic progression of children with Speech Sound Disorders (SSD) grouped according to Mean Length of Utterance (MLU) scores.

METHODS: Thirty-seven children separated into four clusters were assessed in their pre-kindergarten and Grade 1 years. Cluster 1 were children with typical development; the other clusters were children with SSD. Cluster 2 had good pre-kindergarten MLU; Clusters 3 and 4 had low MLU scores in pre-kindergarten, and (respectively) good and poor MLU outcomes.

RESULTS: Children with SSD in pre-kindergarten had lower Developmental Sentence Scores (DSS) and made fewer attempts at finite embedded clauses than children with typical development. All children with SSD, especially Cluster 4, had difficulty with finite verb morphology.

CONCLUSIONS: Children with SSD and typical MLU may be weak in some areas of syntax. Children with SSD who have low MLU scores and poor finite verb morphology skills in pre-kindergarten may be at risk for poor expressive language outcomes. However, these results need to be replicated with larger groups.

LEARNING OUTCOMES: The reader should (1) have a general understanding of findings from studies on morpho-syntax and SSD conducted over the last half century, (2) be aware of some potential areas of morpho-syntactic weakness in young children with SSD who nonetheless have typical MLU, and (3) be aware of some potential longitudinal predictors of continued language difficulty in young children with SSD and poor MLU.

Link to article

-- (Rvachew, S. & Bernhardt, M.) (2010). Clinical implications of the dynamic systems approach to phonological development. American Journal of Speech-Language Pathology, 19, 34-50.

Abstract:
Purpose: To examine treatment outcomes in relation to the complexity of treatment goals for children with speech sound disorders.

Method: The clinical implications of dynamic systems theory in contrast with learnability theory are discussed, especially in the context of target selection decisions for children with speech sound disorders. Detailed phonological analyses of pre- and posttreatment speech samples are provided for 6 children who received treatment in a previously published randomized controlled trial of contrasting approaches to target selection (Rvachew & Nowak, 2001). Three children received treatment for simple target phonemes that did not introduce any new feature contrasts into the children's phonological systems. Three children received treatment for complex targets that represented feature contrasts that were absent from the children's phonological systems.

Results: Children who received treatment for simple targets made more progress toward the acquisition of the target sounds and demonstrated emergence of complex untreated segments and feature contrasts. Children who received treatment for complex targets made little measurable gain in phonological development.

Conclusions: Treatment outcomes will be enhanced if the clinician selects treatment targets at the segmental and prosodic levels of the phonological system in such a way as to stabilize the child's knowledge of subcomponents that form the foundation for the emergence of more complex phoneme contrasts.

Link to article

-- (Mattock, K., Polka, L., & Rvachew, S.) (2010). The first steps in word learning are easier when the shoes fit: Comparing monolingual and bilingual infants. Developmental Science, 13, 229-243.

Abstract: English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a / b / vs. / g / contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled ‘bowce’ or ‘gowce’ and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability was the key factor to success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

Link to article

-- (Rvachew, S. & Brosseau-Lapre, F.) (2010). Speech perception intervention. In L. Williams, S. McLeod, & R. McCauley (Eds.), Treatment of Speech Sound Disorders in Children (pp. 295-314). Baltimore, Maryland: Paul Brookes Publishing Co.
 
Dr. Karsten Steinhauer
STEINHAUER, K. (Dwivedi, V., Drury, J., Molnar, M., Phillips, N., Baum, S., & Steinhauer, K.) (2010). ERPs reveal sensitivity to hypothetical contexts in spoken discourse. Neuroreport, 21, 791-795.

Abstract: We used event-related potentials to examine the interaction between two dimensions of discourse comprehension: (i) referential dependencies across sentences (e.g. between the pronoun 'it' and its antecedent 'a novel' in: 'John is reading a novel. It ends quite abruptly'), and (ii) the distinction between reference to events/situations and entities/individuals in the real/actual world versus in hypothetical possible worlds. Cross-sentential referential dependencies are disrupted when the antecedent for a pronoun is embedded in a sentence introducing hypothetical entities (e.g. 'John is considering writing a novel. It ends quite abruptly'). An earlier event-related potential reading study showed such disruptions yielded a P600-like frontal positivity. Here we replicate this effect using auditorily presented sentences and discuss the implications for our understanding of discourse-level language processing.

Link to article

-- (Steinhauer, K., Drury, J.E., Portner, P., Walenski, M., & Ullman, M.T.) (2010). Syntax, concepts, and logic in the temporal dynamics of language comprehension: Evidence from event-related potentials. Neuropsychologia, 48(6), 1525-1542.

Abstract: Logic has been intertwined with the study of language and meaning since antiquity, and such connections persist in present day research in linguistic theory (formal semantics) and cognitive psychology (e.g., studies of human reasoning). However, few studies in cognitive neuroscience have addressed logical dimensions of sentence-level language processing, and none have directly compared these aspects of processing with syntax and lexical/conceptual-semantics. We used ERPs to examine a violation paradigm involving "Negative Polarity Items" or NPIs (e.g., ever/any), which are sensitive to logical/truth-conditional properties of the environments in which they occur (e.g., presence/absence of negation in: John hasn't ever been to Paris, versus: John has *ever been to Paris). Previous studies examining similar types of contrasts found a mix of effects on familiar ERP components (e.g., LAN, N400, P600). We argue that their experimental designs and/or analyses were incapable of separating which effects are connected to NPI-licensing violations proper. Our design enabled statistical analyses teasing apart genuine violation effects from independent effects tied solely to lexical/contextual factors. Here unlicensed NPIs elicited a late P600 followed in onset by a late left anterior negativity (or "L-LAN"), an ERP profile which has also appeared elsewhere in studies targeting logical semantics. Crucially, qualitatively distinct ERP-profiles emerged for syntactic and conceptual semantic violations which we also tested here. We discuss how these findings may be linked to previous findings in the ERP literature. Apart from methodological recommendations, we suggest that the study of logical semantics may aid advancing our understanding of the underlying neurocognitive etiology of ERP components.

Link to article

-- (Steinhauer, K., Pauker, E., Itzhak, I., Abada, S., & Baum, S.) (2010). Prosody-syntax interactions in aging: Event-related potentials reveal dissociations between on-line and off-line measures. Neuroscience Letters, 472, 133-138.

Abstract: This study used ERPs to determine whether older adults use prosody in resolving early and late closure ambiguities comparably to young adults. Participants made off-line acceptability judgments on well-formed sentences or those containing prosody-syntax mismatches. Behaviorally, both groups identified mismatches, but older subjects accepted mismatches significantly more often than younger participants. ERP results demonstrate CPS components and garden-path effects (P600s) in both groups; however, older adults displayed no N400 and more anterior P600 components. The data provide the first electrophysiological evidence suggesting that older adults process and integrate prosodic information in real-time, despite off-line behavioral differences. Age-related differences in neurocognitive processing mechanisms likely contribute to this dissociation.

Link to article

-- (Morgan-Short, K., Sanz, C., Steinhauer, K., & Ullman, M.T.) (2010). Second language acquisition of gender agreement in explicit and implicit training conditions: An event-related potential study. Language Learning, 60, 154-193.

Abstract: This study employed an artificial language learning paradigm together with a combined behavioral/event-related potential (ERP) approach to examine the neurocognition of the processing of gender agreement, an aspect of inflectional morphology that is problematic in adult second language (L2) learning. Subjects learned to speak and comprehend an artificial language under either explicit (classroomlike) or implicit (immersionlike) training conditions. In each group, both noun-article and noun-adjective gender agreement processing were examined behaviorally and with ERPs at both low and higher levels of proficiency. Results showed that the two groups learned the language to similar levels of proficiency but showed somewhat different ERP patterns. At low proficiency, both types of agreement violations (adjective, article) yielded N400s, but only for the group with implicit training. Additionally, noun-adjective agreement elicited a late N400 in the explicit group at low proficiency. At higher levels of proficiency, noun-adjective agreement violations elicited N400s for both the explicit and implicit groups, whereas noun-article agreement violations elicited P600s for both groups. The results suggest that interactions among linguistic structure, proficiency level, and type of training need to be considered when examining the development of aspects of inflectional morphology in L2 acquisition.

Link to article

-- (Itzhak, I., Pauker, E., Drury, J.E., Baum, S.R., & Steinhauer, K.) (2010). Event-related potentials show online influence of lexical biases on prosodic processing. NeuroReport, 21, 8-13.

Abstract: This event-related potential study examined how the human brain integrates (i) structural preferences, (ii) lexical biases, and (iii) prosodic information when listeners encounter ambiguous 'garden path' sentences. Data showed that in the absence of overt prosodic boundaries, verb-intrinsic transitivity biases influence parsing preferences (late closure) online, resulting in a larger P600 garden path effect for transitive than intransitive verbs. Surprisingly, this lexical effect was mediated by prosodic processing: a closure positive shift brain response was elicited in the total absence of acoustic boundary markers for transitively biased sentences only. Our results suggest early interactive integration of hierarchically organized processes rather than purely independent effects of lexical and prosodic information. As a primacy of prosody would predict, overt speech boundaries overrode both structural preferences and transitivity biases.

Link to article

-- (Royle, P., Drury, J.E., Bourguignon, N. & Steinhauer, K.) (2010). Morphology and word recognition: An ERP approach. In H. Melinda (Ed.), Proceedings of the 2010 annual conference of the Canadian Linguistic Association, 1-13.
-- (Abada, S., Steinhauer, K., Drury, J.E., & Baum, S.R.) (2010). Age differences in electrophysiological correlates of cross-modal interpretation. Speech Prosody 2010 Proceedings, 100346, pp. 1-4.

Abstract: Research shows that older adults may be more sensitive than young adults to prosody, although performance varies depending on task requirements. Here we used electroencephalography to examine responses to simple phrases produced with an Early or Late boundary, presented with matching or mismatching visual displays. While some older adults successfully detected prosodic mismatches, many failed to do so. Nonetheless, mismatches elicited a P600-like positivity in all participants. Those individuals who accurately judged prosody also displayed a second negative-going prosodic mismatch response. Findings show that older adults vary in their reliance on prosody, as reflected both in behavioral and ERP responses.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2010). Towards evidence-based practice in language intervention for bilingual children. Journal of Communication Disorders, 43, 523-537.

Abstract: Evidence-based practice requires that clinical decisions be based on evidence from rigorously controlled research studies. At this time, very few studies have directly examined the efficacy of clinical intervention methods for bilingual children. Clinical decisions for this population cannot, therefore, be based on the strongest forms of research evidence, but must be inferred from other sources. This article reviews the available intervention research on bilingual children, the current clinical recommendations for this population, and the strength of the empirical and theoretical support on which these recommendations are based. Finally, future directions are suggested for documenting current methods of intervention and developing optimal methods for different groups of bilingual children. Although the current research base is limited, the few studies available to date uniformly suggest that interventions that include a focus on both languages are superior to those that focus on only one language. The available research offers little guidance, however, as to the particular treatment methods that may be most appropriate. Further research is required to examine efficacy with larger numbers of children and children of various bilingual backgrounds. It is suggested that efforts to develop and test intervention methods for bilingual children must carefully consider the linguistic heterogeneity of bilingual children and the cultural variation in communication styles, child rearing practices, and child rearing beliefs. This will lead to the development of treatment methods that are more suitable for other languages and cultures. LEARNING OUTCOMES: Readers will become familiar with current recommendations for the treatment of bilingual children with language impairment, including which language or languages to use, the requirement for cultural sensitivity, and specific procedures that may be beneficial for bilingual populations.
The heterogeneity of the bilingual population of children is highlighted. Readers will gain an understanding of the strength of research evidence backing up recommended practices, as well as of gaps in our current knowledge base and directions for further development and research.

Link to article

-- (MacLeod, A., Sutton, A., Trudeau, N., & Thordardottir, E.) (2010). Phonological development in Québécois French: A cross-sectional study of preschool-aged children. International Journal of Speech-Language Pathology, Early Online, 1-17.

Abstract: This study provides a systematic description of French consonant acquisition in a large cohort of pre-school aged children: 156 children aged 20–53 months participated in a picture-naming task. Five analyses were conducted to study consonant acquisition: (1) consonant inventory, (2) consonant accuracy, (3) consonant acquisition, (4) a comparison of consonant inventory to consonant acquisition, and (5) a comparison to English cross-sectional data. Results revealed that more consonants emerge at an earlier age in word-initial position, followed by medial position, and then word-final position. Consonant accuracy underwent the greatest changes before the age of 36 months, and achieved a relative plateau towards 42 months. The acquisition of consonants revealed that four early consonants were acquired before the age of 36 months (i.e., /t, m, n, z/); 12 intermediate consonants were acquired between 36 and 53 months (i.e., /p, b, d, k, , ?, f, v, , l, w, ?/); and four consonants were acquired after 53 months (/s, ?, ?, j/). In comparison to English data, language-specific patterns emerged that influence the order and pace of phonological acquisition. These findings highlight the important role of language-specific developmental data in understanding the course of consonant acquisition.

Link to article

-- (Namazi, M. & Thordardottir, E.) (2010). A working memory, not a bilingual advantage in controlled attention. International Journal of Bilingual Education and Bilingualism, 13, 597-616.

Abstract: We explored the relationship between working memory (WM) and visually controlled attention (CA) in young bilingual and monolingual children. Previous research has shown that balanced bilingual children outperform monolinguals in CA. However, it is unclear whether this advantage is truly associated with bilingualism or whether potential WM and/or language differences led to the observed effects. Therefore, we examined whether bilingual and monolingual children differ on a visual measure of CA after potential differences in verbal and visual WM had been accounted for. We also looked at the relationship between visual CA and visual WM. Fifteen French monolingual children, 15 English monolingual children, and 15 early simultaneous bilingual children completed verbal short-term memory, verbal WM, visual WM, and visual CA tasks. Detailed information regarding language exposure was collected and abilities in each language were evaluated. A bilingual advantage was not found; that is, monolingual and bilingual children were equally successful in ignoring the irrelevant perceptual distraction on the Simon Task. However, children with better visual WM scores were also faster and more accurate on the Simon Task. Furthermore, visual WM correlated significantly with the visual CA task.

Link to article

-- (Thordardottir, E., Kehayia, E., Lessard, N., Sutton, A. & Trudeau, N.) (2010). Typical performance on tests of language knowledge and language processing of French-speaking 5-year-olds. Canadian Journal of Speech Language Pathology and Audiology, 34, 5-16.

Abstract: The evaluation of the language skills of francophone children for clinical and research purposes is complicated by a lack of appropriate norm-referenced assessment tools. The purpose of this study was the collection of normative data for measures assessing major areas of language for 5-year-old monolingual speakers of Quebec French. Children in three age groups (4;6, 5;0 and 5;6 years, n=78) were administered tests of language knowledge and linguistic processing, addressing vocabulary, morphosyntax, syntax, narrative structure, nonword repetition, sentence imitation, rapid automatized naming, following directions, and short-term memory. The assessment measures were drawn from existing tools and from tools developed for this study, and included formal tests as well as spontaneous language measures. Normative data are presented for the three age groups. Results showed a systematic increase with age for most of the measures. Correlational analysis revealed relationships of varying strength between the measures, indicating some overlap between the measures, but also suggesting that the measures differ in the linguistic skills they tap into. The normative data presented will facilitate the language assessment of French-speaking 5-year-olds, permitting their performance to be compared to the normal range of typically developing monolingual French-speaking children and allowing the documentation of children’s profiles of relative strengths and weaknesses within language.

Link to article

2009

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Bélanger, N., Baum, S. & Titone, D.) (2009). Use of prosodic cues in the production of idiomatic and literal sentences by individuals with right- and left-hemisphere damage. Brain & Language, 110, 38-42.

Abstract: The neural bases of prosody during the production of literal and idiomatic interpretations of literally plausible idioms were investigated. Left- and right-hemisphere-damaged participants and normal controls produced literal and idiomatic versions of idioms (e.g., He hit the books). All groups modulated duration to distinguish the interpretations. LHD patients, however, showed typical speech timing difficulties. RHD patients did not differ from the normal controls. The results partially support a differential lateralization of prosodic cues in the two cerebral hemispheres [Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963-970]. Furthermore, extended final word lengthening appears to mark idiomaticity.

Link to article

-- (Ménard, L., Dupont, S., Baum, S., Aubin, J., & Schwartz, J-L.) (2009). Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 126, 1406-1414.

Abstract: The goal of this study is to investigate the production and perception of French vowels by blind and sighted speakers. Twelve blind adults and twelve sighted adults served as subjects. The auditory-perceptual abilities of each subject were evaluated by discrimination tests (AXB). At the production level, ten repetitions of the ten French oral vowels were recorded. Formant values and fundamental frequency values were extracted from the acoustic signal. Measures of contrasts between vowel categories were computed and compared for each feature (height, place of articulation, roundedness) and group (blind, sighted). The results reveal a significant effect of group (blind vs sighted) on production, with sighted speakers producing vowels that are spaced further apart in the vowel space than those of blind speakers. A group effect emerged for a subset of the perceptual contrasts examined, with blind speakers having higher peak discrimination scores than sighted speakers. Results suggest an important role of visual input in determining speech goals.

Link to article

-- (Shiller, D., Sato, M., Gracco, V., & Baum, S.) (2009). Perceptual recalibration of speech sounds following speech motor learning. Journal of the Acoustical Society of America, 125, 1103-1113.

Abstract: The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/ stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L. (Almor, A., Aronoff, J.M., MacDonald, M.C., Gonnerman, L.M., Kempler, D., Hintiryan, H., Hayes, U.L., Arunachalam, S., & Andersen, E.S.) (2009). A common mechanism in verb and noun naming deficits in Alzheimer's patients. Brain and Language, 111, 8-19.

Abstract: We tested the ability of Alzheimer's patients and elderly controls to name living and non-living nouns, and manner and instrument verbs. Patients' error patterns and relative performance with different categories showed evidence of graceful degradation for both nouns and verbs, with particular domain-specific impairments for living nouns and instrument verbs. Our results support feature-based, semantic representations for nouns and verbs and support the role of inter-correlated features in noun impairment, and the role of noun knowledge in instrument verb impairment.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Tremblay, P., & Gracco, V.L.) (2009). The essential role of the pre-SMA in the production of words and non-speech oral motor gestures, as revealed by repetitive transcranial magnetic stimulation (rTMS). Brain Research, 1268, 112-124.

Abstract: An emerging theoretical perspective, largely based on neuroimaging studies, suggests that the pre-SMA is involved in planning cognitive aspects of motor behavior and language, such as linguistic and non-linguistic response selection. Neuroimaging studies, however, cannot indicate whether a brain region is equally important to all tasks in which it is activated. In the present study, we tested the hypothesis that the pre-SMA is an important component of response selection, using an interference technique. High-frequency repetitive TMS (10 Hz) was used to interfere with the functioning of the pre-SMA during tasks requiring selection of words and oral gestures under different selection modes (forced, volitional) and attention levels (high attention, low attention). Results show that TMS applied to the pre-SMA interferes selectively with the volitional selection condition, resulting in longer RTs. The low- and high-attention forced selection conditions were unaffected by TMS, demonstrating that the pre-SMA is sensitive to selection mode but not attentional demands. TMS similarly affected the volitional selection of words and oral gestures, reflecting the response-independent nature of the pre-SMA contribution to response selection. The implications of these results are discussed.

Link to article

-- (Shiller, D., Sato, M., Gracco, V., & Baum, S.) (2009). Perceptual recalibration of speech sounds following speech motor learning. Journal of the Acoustical Society of America, 125, 1103-1113.

Abstract: The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/ stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.

Link to article

-- (Sato, M., Tremblay, P., & Gracco, V.L.) (2009). A mediating role of the premotor cortex in phoneme segmentation. Brain and Language, 111, 1-7.

Abstract: Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech perception under normal listening conditions remains debated. To directly test this hypothesis, we applied rTMS to the left ventral premotor cortex and participants performed auditory speech tasks involving the same set of syllables but differing in the use of phonemic segmentation processes. Compared to sham stimulation, rTMS applied over the ventral premotor cortex resulted in slower phoneme discrimination requiring phonemic segmentation. No effect was observed in phoneme identification and syllable discrimination tasks that could be performed without need for phonemic segmentation. The findings demonstrate a mediating role of the ventral premotor cortex in speech segmentation under normal listening conditions and are interpreted in relation to theories assuming a link between perception and action in the human speech processing system.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Vivanti, G. & Ozonoff, S.) (2009). Adaptation of object descriptions to a partner under increasing communicative demands: A comparison of children with and without autism. Autism Research, 2, 1-14.

Abstract: This study compared the object descriptions of school-age children with high-functioning autism (HFA) with those of a matched group of typically developing children. Descriptions were elicited in a referential communication task where shared information was manipulated, and in a guessing game where clues had to be provided about the identity of an object that was hidden from the addressee. Across these tasks, increasingly complex levels of audience design were assessed: (1) the ability to give adequate descriptions from one's own perspective, (2) the ability to adjust descriptions to an addressee's perspective when this differs from one's own, and (3) the ability to provide indirect yet identifying descriptions in a situation where explicit labeling is inappropriate. Results showed that there were group differences in all three cases, with the HFA group giving less efficient descriptions with respect to the relevant context than the comparison group. More revealing was the identification of distinct adaptation profiles among the HFA participants: those who had difficulty with all three levels, those who displayed Level 1 audience design but poor Level 2 and Level 3 design, and those who demonstrated all three levels of audience design, like the majority of the comparison group. Higher structural language ability, rather than symptom severity or social skills, differentiated those HFA participants with typical adaptation profiles from those who displayed deficient audience design, consistent with previous reports of language use in autism.

Link to article

Dr. Marc Pell
PELL, M. (Paulmann, S. & Pell, M.D.) (2009). Facial expression decoding as a function of emotional meaning status: ERP evidence. NeuroReport, 20, 1603-1608.

Abstract: To further specify the time course of (emotional) face processing, this study compared event-related potentials elicited by faces conveying prototypical basic emotions, nonprototypical affective expressions (grimaces), and neutral faces. Results showed that prototypical and nonprototypical facial expressions could each be differentiated from neutral expressions in three different event-related potential component amplitudes (P200, early negativity, and N400), which are believed to index distinct processing stages in facial expression decoding. On the basis of the distribution of effects, our results suggest that early processing is mediated by shared neural generators for prototypical and nonprototypical facial expressions; however, later processing stages seem to engage distinct subsystems for the three facial expression types investigated according to their emotionality and meaning status.

Link to article

-- (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2009). Comparative processing of emotional prosody and semantics following basal ganglia infarcts: ERP evidence of selective impairments for disgust and fear. Brain Research, 1295, 159-169.

Abstract: There is evidence from neuroimaging and clinical studies that functionally link the basal ganglia to emotional speech processes. However, in most previous studies, explicit tasks were administered. Thus, the underlying mechanisms substantiating emotional speech are not separated from possibly process-related task effects. Therefore, the current study tested emotional speech processing in an event-related potential (ERP) experiment using an implicit emotional processing task (probe verification). The interactive time course of emotional prosody in the context of emotional semantics was investigated using a cross-splicing method. As previously demonstrated, combined prosodic and semantic expectancy violations elicit N400-like negativities irrespective of emotional categories in healthy listeners. In contrast, basal ganglia patients show this negativity only for the emotions of happiness and anger, but not for fear or disgust. The current data serve as first evidence that lesions within the left basal ganglia affect the comparative online processing of fear and disgust prosody and semantics. Furthermore, the data imply that previously reported emotional speech recognition deficits in basal ganglia patients may be due to misaligned processing of emotional prosody and semantics.

Link to article

-- (Pell, M.D., Paulmann, S., Dara, C., Alasseri, A., & Kotz, S.A.) (2009). Factors in the recognition of vocally expressed emotions: a comparison of four languages. Journal of Phonetics, 37, 417-435.

Abstract: To understand how language influences the vocal communication of emotion, we investigated how discrete emotions are recognized and acoustically differentiated in four language contexts—English, German, Hindi, and Arabic. Vocal expressions of six emotions (anger, disgust, fear, sadness, happiness, pleasant surprise) and neutral expressions were elicited from four native speakers of each language. Each speaker produced pseudo-utterances (“nonsense speech”) which resembled their native language to express each emotion type, and the recordings were judged for their perceived emotional meaning by a group of native listeners in each language condition. Emotion recognition and acoustic patterns were analyzed within and across languages. Although overall recognition rates varied by language, all emotions could be recognized strictly from vocal cues in each language at levels exceeding chance. Anger, sadness, and fear tended to be recognized most accurately irrespective of language. Acoustic and discriminant function analyses highlighted the importance of speaker fundamental frequency (i.e., relative pitch level and variability) for signalling vocal emotions in all languages. Our data emphasize that while emotional communication is governed by display rules and other social variables, vocal expressions of ‘basic’ emotion in speech exhibit modal tendencies in their acoustic and perceptual attributes which are largely unaffected by language or linguistic similarity.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2009). Acoustic markers of sarcasm in Cantonese and English. Journal of the Acoustical Society of America, 126, 1394-1405.

Abstract: The goal of this study was to identify acoustic parameters associated with the expression of sarcasm by Cantonese speakers, and to compare the observed features to similar data on English [Cheang, H. S. and Pell, M. D. (2008). Speech Commun. 50, 366-381]. Six native Cantonese speakers produced utterances to express sarcasm, humorous irony, sincerity, and neutrality. Each utterance was analyzed to determine the mean fundamental frequency (F0), F0-range, mean amplitude, amplitude-range, speech rate, and harmonics-to-noise ratio (HNR) (to probe voice quality changes). Results showed that sarcastic utterances in Cantonese were produced with an elevated mean F0, and reductions in amplitude- and F0-range, which differentiated them most from sincere utterances. Sarcasm was also spoken with a slower speech rate and a higher HNR (i.e., less vocal noise) than the other attitudes in certain linguistic contexts. Direct Cantonese-English comparisons revealed one major distinction in the acoustic pattern for communicating sarcasm across the two languages: Cantonese speakers raised mean F0 to mark sarcasm, whereas English speakers lowered mean F0 in this context. These findings emphasize that prosody is instrumental for marking non-literal intentions in speech such as sarcasm in Cantonese as well as in other languages. However, the specific acoustic conventions for communicating sarcasm seem to vary among languages.

Link to article

-- (Monetta, L., Grindrod, C. & Pell, M.D.) (2009). Irony comprehension and theory of mind deficits in patients with Parkinson’s disease. Cortex, 45(8), 972-981. (Special Issue on “Parkinson’s disease, Language, and Cognition”)

Abstract: Many individuals with Parkinson's disease (PD) are known to have difficulties in understanding pragmatic aspects of language. In the present study, a group of eleven non-demented PD patients and eleven healthy control (HC) participants were tested on their ability to interpret communicative intentions underlying verbal irony and lies, as well as on their ability to infer first- and second-order mental states (i.e., theory of mind). Following Winner et al. (1998), participants answered different types of questions about the events which unfolded in stories which ended in either an ironic statement or a lie. Results showed that PD patients were significantly less accurate than HC participants in assigning second-order beliefs during the story comprehension task, suggesting that the ability to make a second-order mental state attribution declines in PD. The PD patients were also less able to distinguish whether the final statement of a story should be interpreted as a joke or a lie, suggesting a failure in pragmatic interpretation abilities. The implications of frontal lobe dysfunction in PD as a source of difficulties with working memory, mental state attributions, and pragmatic language deficits are discussed in the context of these findings.

Link to article

-- (Pell, M.D., Monetta, L., Paulmann, S., & Kotz, S.A.) (2009). Recognizing emotions in a foreign language. Journal of Nonverbal Behavior, 33(2), 107-120.

Abstract: Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face and it is assumed that these emotions can be recognized from a speaker’s voice, regardless of an individual’s culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances (“nonsense speech”) produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (“in-group advantage”). Our findings argue that the ability to understand vocally-expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

Link to article

Dr. Linda Polka
POLKA, L. (Shahnaz, N., Bork, L., Polka, L., Longridge, N., Westerberg, B., & Bell, D.) (2009). Energy reflectance (ER) and tympanometry in normal and otosclerotic ears. Ear and Hearing, 30, 219-233.

Abstract: Objective: The major goal of this study was to examine differences in the middle ear mechano-acoustical properties of normal ears and ears with surgically confirmed otosclerosis using conventional and multifrequency tympanometry (MFT) as well as energy reflectance (ER). Second, we sought to compare ER, standard tympanometry, and MFT in their ability to distinguish healthy and otosclerotic ears, examining both overall test performance (sensitivity and specificity) and receiver-operating characteristic analyses.

Design: Sixty-two normal-hearing adults and 28 patients diagnosed with otosclerosis served as subjects. Tympanometric data were gathered on a clinical immittance machine, the Virtual 310, equipped with a high-frequency option. Two of the parameters, static admittance and tympanometric width, were measured automatically at a standard 226 Hz frequency. The remaining two parameters, resonant frequency and the frequency corresponding to an admittance phase angle of 45 degrees (F45°), were derived from MFT, multicomponent tympanometry, using a mathematical approach similar to the method used in GSI Tympstar Version 2. ER data were gathered using Mimosa Acoustics (RMS-system v4.0.4.4) equipment.

Results: Analyses of receiver-operating characteristic plots confirmed the advantage of the MFT measures of resonant frequency and F45° over the standard low-frequency measures of static admittance and tympanometric width with respect to distinguishing otosclerotic ears from normal ears. The F45° measure was also found to be the best single tympanometric index for making this distinction. ER below 1 kHz was significantly higher in otosclerotic ears than in normal ears, indicating that most of the incident energy below 1 kHz is reflected back into the ear canal in otosclerotic ears. ER patterns exceeding the 90th percentile of the normal ears across all frequencies correctly identified 82% of the otosclerotic ears while maintaining a low false-alarm rate (17.2%); this measure thus outperformed the other individual tympanometric parameters. The combination of ER and F45° was able to distinguish all otosclerotic ears. Correlations and the individual patterns of test performance revealed that the information provided by ER is supplemental to that provided by conventional tympanometry and MFT with respect to distinguishing otosclerotic ears from normal ears.

Conclusion: The present findings show that the overall changes of ER across frequencies can distinguish otosclerotic ears from normal ears and from other sources of conductive hearing loss. Incorporating ER in general practice will improve the identification of otosclerotic ears when conventional tympanometry and MFT may fail to do so. To further improve the false alarm rate, ER should be interpreted in conjunction with other audiologic test batteries because it is unlikely that signs of a conductive component, including abnormal middle ear muscle reflex and ER responses, would be observed in an ear with normal middle ear function.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (2009). Perceptually based interventions. In C. Bowen, Children's speech sound disorders (pp. 152-155). Oxford: Wiley-Blackwell.

Book description: Caroline Bowen’s Children’s Speech Sound Disorders will be welcomed by experienced and novice clinicians, clinical educators, and students in the field of speech-language pathology/speech and language therapy for its practical, clinical focus. Drawing on the evidence base where possible, and making important theory-to-practice links overt, Bowen enhances her comprehensive account of the assessment and clinical management of children with protracted or problematic speech development with the addition of forty-nine expert essays. These unique contributions are authored by fifty-one internationally respected academics, clinicians, researchers, and thinkers representing a range of work settings, expertise, paradigms, and theoretical orientations. In response to frequently asked questions about their work, they address key theoretical, assessment, intervention, and service delivery issues.

Book information:
Publication Date: June 18, 2009
ISBN-10: 0470723645
ISBN-13: 978-0470723647
Edition: 1st

Dr. Karsten Steinhauer
STEINHAUER, K. (Palmer, C., Jewett, L., Steinhauer, K.) (2009). Contextual effects on electrophysiological response to musical accents. The Neurosciences and Music III: Disorders and Plasticity, Annals of the New York Academy of Sciences, 1169, 470-480.

Abstract: Listeners' aesthetic and emotional responses to music typically occur in the context of long musical passages that contain structures defined in terms of the events that precede them. We describe an electrophysiological study of listeners' brain responses to musical accents that coincided or not in longer musical sequences. Musically trained listeners performed a timbre-change detection task in which a single-tone timbre change was positioned within 4-bar melodies composed of 350-ms tones to coincide or not with melodic contour accents and temporal accents (induced with temporal gaps). Event-related potential responses to (task-relevant) attended timbre changes elicited an early negativity (MMN/N2b) around 200 ms and a late positive component around 350 ms (P300), reflecting updating of the timbre change in working memory. The amplitudes of both components changed systematically across the sequence, consistent with expectancy-based context effects. Furthermore, melodic contour changes modulated the MMN/N2b response (but not the P300) to timbre changes in later sequence positions. In contrast, task-irrelevant temporal gaps elicited an MMN that was not modulated by position within the context; the absence of a P300 indicated that temporal-gap accents were not updated in working memory. Listeners' neural responses to musical structure changed systematically as sequential predictability and listeners' expectations changed across the melodic context.

Link to article

-- (Steinhauer, K., White, E. & Drury, J.E.) (2009). Temporal dynamics of late second language acquisition: Evidence from event-related brain potentials. Second Language Research, 25(1), 13-41.

Abstract: The ways in which age of acquisition (AoA) may affect (morpho)syntax in second language acquisition (SLA) are discussed. We suggest that event-related brain potentials (ERPs) provide an appropriate online measure to test some such effects. ERP findings of the past decade are reviewed with a focus on recent and ongoing research. It is concluded that, in contrast to previous suggestions, there is little evidence for a strict critical period in the domain of late acquired second language (L2) morphosyntax. As illustrated by data from our lab and others, proficiency rather than AoA seems to predict brain activity patterns in L2 processing, including native-like activity at very high levels of proficiency. Further, a strict distinction between linguistic structures that late L2 learners can vs. cannot learn to process in a native-like manner (Clahsen and Felser, 2006a; 2006b) may not be warranted. Instead, morphosyntactic real-time processing in general seems to undergo dramatic, but systematic, changes with increasing proficiency levels. We describe the general dynamics of these changes (and the corresponding ERP components) and discuss how ERP research can advance our current understanding of SLA in general.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2009). Fallorðaspilið [The case-marking game] (S.Guðmundsdóttir, Ed.). Kópavogur, Iceland: Námsgagnastofnun [The National Centre for Educational Materials]. Educational game.


2008

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Abada, S., Baum, S., & Titone, D.) (2008). The effects of central and peripheral feature semantic biasing contexts on phonetic identification in younger and older listeners. Experimental Aging Research, 34, 232-250.

Abstract: It has often been reported that older listeners have difficulty discriminating between phonetically similar items, but may rely on contextual cues as a compensatory mechanism. The present study examined the effects of different degrees of semantic bias on speech perception in groups of younger and older listeners. Stimuli from two /g/-/k/ voice onset time (VOT) continua were presented at the end of biasing and neutral sentences. Results indicated that context strongly influenced phonetic identification in older listeners; this was true for younger listeners only in the case of less-than-ideal stimuli. Findings are discussed in relation to theories concerning age-related changes in speech processing.

Link to article

-- (Gracco, V., Klepousniotou, E., Itzhak, I., & Baum, S.) (2008). Sensorimotor and motorsensory interactions in speech. In Sock, Fuchs, & Laprie (Eds.), Proceedings of the 8th International Seminar on Speech Production. Strasbourg, France: INRIA.

Abstract: A long-standing issue in psycholinguistics is whether language production and language comprehension share a common neural substrate. Recent neuroimaging studies of speech appear to support overlap of brain regions for both production and perception. However, what is not known is how to interpret the perceptual activation of motor regions. In the following, the brain regions associated with producing heard speech are described to identify the sensorimotor components of the speech motor network. The brain regions associated with speech production are then examined for their activation during passive perception of lexical items presented as heard words, pictures and printed text. A number of overlapping cortical and subcortical areas were activated during both perception and production. Interestingly, all brain areas associated with passive perception increased their activation for speech production. The increased activation in the classical sensory/perceptual areas for production suggests an interactive process in which motor areas project back to sensory/perceptual areas reflecting a binding of perception (sensory) and production (motor) regions within the network.

Link to article

-- (Taler, V., Baum, S., Saumier, D., & Chertkow, H.) (2008). Comprehension of grammatical and emotional prosody is impaired in Alzheimer’s disease. Neuropsychology, 22, 188-195.

Abstract: Previous research has demonstrated impairment in comprehension of emotional prosody in individuals diagnosed with Alzheimer's disease (AD). The present pilot study further explored the prosodic processing impairment in AD, aiming to extend our knowledge to encompass both grammatical and emotional prosody processing. As expected, impairments were seen in emotional prosody. AD individuals were also found to be impaired in detecting sentence modality, suggesting that impairments in affective prosody processing in AD may be ascribed to a more general prosodic processing impairment, specifically in comprehending prosodic information signaled across the sentence level. AD participants were at a very mild stage of the disease, suggesting that prosody impairments occur early in the disease course.

Link to article

Dr. Vincent Gracco
GRACCO, V. (DeNil, L.F., Beal, D.S., Lafaille, S.J., Kroll, R.M., Crawley, A.P., & Gracco, V.L.) (2008). The effects of simulated stuttering and prolonged speech on neural activation patterns of stuttering and nonstuttering speakers. Brain and Language, 107(2), 114-123.

Abstract: Functional magnetic resonance imaging was used to investigate the neural correlates of passive listening, habitual speech and two modified speech patterns (simulated stuttering and prolonged speech) in stuttering and nonstuttering adults. Within-group comparisons revealed increased right hemisphere biased activation of speech-related regions during the simulated stuttered and prolonged speech tasks, relative to the habitual speech task, in the stuttering group. No significant activation differences were observed within the nonstuttering participants during these speech conditions. Between-group comparisons revealed less left superior temporal gyrus activation in stutterers during habitual speech and increased right inferior frontal gyrus activation during simulated stuttering relative to nonstutterers. Stutterers were also found to have increased activation in the left middle and superior temporal gyri and right insula, primary motor cortex and supplementary motor cortex during the passive listening condition relative to nonstutterers. The results provide further evidence for the presence of functional deficiencies underlying auditory processing, motor planning and execution in people who stutter, with these differences being affected by speech manner.

Link to article

-- (Tremblay, P., Shiller, D., & Gracco, V.L.) (2008). On the time-course and frequency selectivity of the EEG for different modes of response selection: evidence from speech production and keyboard pressing. Clinical Neurophysiology, 119, 88-99.

Abstract:
OBJECTIVE: To compare brain activity in the alpha and beta bands in relation to different modes of response selection, and to assess the domain generality of the response selection mechanism using verbal and non-verbal tasks.

METHODS: We examined alpha and beta event-related desynchronization (ERD) to analyze brain reactivity during the selection of verbal (word production) and non-verbal motor actions (keyboard pressing) under two different response modes: externally selected and self-selected.

RESULTS: An alpha and beta ERD was observed for both the verbal and non-verbal tasks in both the externally and the self-selected modes. For both tasks, the beta ERD started earlier and was longer in the self-selected mode than in the externally selected mode. The overall pattern of results between the verbal and non-verbal motor behaviors was similar.

CONCLUSIONS: The pattern of alpha and beta ERD is affected by the mode of response selection, suggesting that activity in both frequency bands contributes to the process of selecting actions. We suggest that activity in the alpha band may reflect attentional processes, while activity in the beta band may be more closely related to the selection and execution process.

SIGNIFICANCE: These results suggest that a domain-general process contributes to the planning of speech and other motor actions. This finding has potential clinical implications for the use of diverse motor tasks to treat disorders of motor planning.

Link to article

-- (Gracco, V., Klepousniotou, E., Itzhak, I., & Baum, S.) (2008). Sensorimotor and motorsensory interactions in speech. In Sock, Fuchs, & Laprie (Eds.), Proceedings of the 8th International Seminar on Speech Production. Strasbourg, France: INRIA.

Abstract: A long-standing issue in psycholinguistics is whether language production and language comprehension share a common neural substrate. Recent neuroimaging studies of speech appear to support overlap of brain regions for both production and perception. However, what is not known is how to interpret the perceptual activation of motor regions. In the following, the brain regions associated with producing heard speech are described to identify the sensorimotor components of the speech motor network. The brain regions associated with speech production are then examined for their activation during passive perception of lexical items presented as heard words, pictures and printed text. A number of overlapping cortical and subcortical areas were activated during both perception and production. Interestingly, all brain areas associated with passive perception increased their activation for speech production. The increased activation in the classical sensory/perceptual areas for production suggests an interactive process in which motor areas project back to sensory/perceptual areas reflecting a binding of perception (sensory) and production (motor) regions within the network.

Link to article

-- (Sato, M., Troille, E., Ménard, L., Cathiard, M.A., & Gracco, V.L.) (2008). Listening while speaking: new behavioral evidence for articulatory-to-auditory feedback projections. Proceedings of the International Conference on Auditory-Visual Speech Processing. Tangalooma, Australia.

Abstract: The existence of feedback control mechanisms from motor to sensory systems is a central idea in speech production research. Consistent with the view that articulation modulates the activity of the auditory cortex, it has been shown that silent articulation improved identification of concordant speech sounds [1]. In the present study, we replicated and extended this finding by demonstrating that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable may speed the perceptual processing of auditory and auditory visual speech stimuli. These results provide new behavioral evidence for the existence of motor-to-sensory discharge in speech production and suggest a functional connection between action and perception systems.

Link to article

Dr. Aparna Nadig
NADIG, A. (Vivanti, G., Nadig, A., Ozonoff, S., & Rogers, S.J.) (2008). What do children with autism attend to during imitation tasks? Journal of Experimental Child Psychology, Special issue on Imitation in Autism, 101, 186-205.

Abstract: Individuals with autism show a complex profile of differences in imitative ability, including a general deficit in precision of imitating another's actions and special difficulty in imitating nonmeaningful gestures relative to meaningful actions on objects. Given that they also show atypical patterns of visual attention when observing social stimuli, we investigated whether possible differences in visual attention when observing an action to be imitated may contribute to imitative difficulties in autism in both nonmeaningful gestures and meaningful actions on objects. Results indicated that (a) a group of 18 high-functioning 8- to 15-year-olds with autistic disorder, in comparison with a matched group of 13 typically developing children, showed similar patterns of visual attention to the demonstrator's action but decreased attention to his face when observing a model to be imitated; (b) nonmeaningful gestures and meaningful actions on objects triggered distinct visual attention patterns that did not differ between groups; (c) the autism group demonstrated reduced imitative precision for both types of imitation; and (d) duration of visual attention to the demonstrator's action was related to imitation precision for nonmeaningful gestures in the autism group.

Link to article

Dr. Marc Pell
PELL, M. (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2008). Functional contributions of the basal ganglia to emotional prosody: evidence from ERPs. Brain Research, 1217, 171-178.

Abstract: The basal ganglia (BG) have been functionally linked to emotional processing [Pell, M.D., Leonard, C.L., 2003. Processing emotional tone from speech in Parkinson's Disease: a role for the basal ganglia. Cogn. Affect. Behav. Neurosci. 3, 275-288; Pell, M.D., 2006. Cerebral mechanisms for understanding emotional prosody in speech. Brain Lang. 97 (2), 221-234]. However, few studies have tried to specify the precise role of the BG during emotional prosodic processing. Therefore, the current study examined deviance detection in healthy listeners and patients with left focal BG lesions during implicit emotional prosodic processing in an event-related brain potential (ERP) experiment. In order to compare these ERP responses with explicit judgments of emotional prosody, the same participants were tested in a follow-up recognition task. As previously reported [Kotz, S.A., Paulmann, S., 2007. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Res. 1151, 107-118; Paulmann, S. & Kotz, S.A., 2008. An ERP investigation on the temporal dynamics of emotional prosody and emotional semantics in pseudo- and lexical sentence context. Brain Lang. 105, 59-69], deviance from prosodic expectancy elicits a right-lateralized positive ERP component in healthy listeners. Here we report a similar positive ERP correlate in BG patients and healthy controls. In contrast, BG patients are significantly impaired in explicit recognition of emotional prosody when compared to healthy controls. The current data serve as first evidence that focal lesions in the left BG do not necessarily affect implicit emotional prosodic processing, but rather the evaluative emotional prosodic processes demonstrated in the recognition task. The results suggest that the BG may not play a mandatory role in implicit emotional prosodic processing. Rather, executive processes underlying the recognition task may be dysfunctional during emotional prosodic processing.

Link to article

-- (Monetta, L., Grindrod, C.M., & Pell, M.D.) (2008). Effects of working memory capacity on inference generation during story comprehension in adults with Parkinson’s disease. Journal of Neurolinguistics, 21, 400-417.

Abstract: A group of non-demented adults with Parkinson's disease (PD) were studied to investigate how PD affects pragmatic-language processing, and, specifically, to test the hypothesis that the ability to draw inferences from discourse in PD is critically tied to the underlying working memory (WM) capacity of individual patients [Monetta, L., & Pell, M. D. (2007). Effects of verbal working memory deficits on metaphor comprehension in patients with Parkinson's disease. Brain and Language, 101, 80–89]. Thirteen PD patients and a matched group of 16 healthy control (HC) participants performed the Discourse Comprehension Test [Brookshire, R. H., & Nicholas, L. E. (1993). Discourse comprehension test. Tucson, AZ: Communication Skill Builders], a standardized test which evaluates the ability to generate inferences based on explicit or implied information relating to main ideas or details presented in short stories. Initial analyses revealed that the PD group as a whole was significantly less accurate than the HC group when comprehension questions pertained to implied as opposed to explicit information in the stories, consistent with previous findings [Murray, L. L., & Stout, J. C. (1999). Discourse comprehension in Huntington's and Parkinson's diseases. American Journal of Speech–Language Pathology, 8, 137–148]. However, subsequent analyses showed that only a subgroup of PD patients with WM deficits, and not PD patients with WM capacity within the control group range, were significantly impaired for drawing inferences (especially predictive inferences about implied details in the stories) when compared to the control group. These results build on a growing body of literature, which demonstrates that compromise of frontal–striatal systems and subsequent reductions in processing/WM capacity in PD are a major source of pragmatic-language deficits in many PD patients.

Link to article

-- (Pell, M.D. & Monetta, L.) (2008). How Parkinson’s disease affects nonverbal communication and language processing. Language and Linguistics Compass, 2(5), 739-759.

Abstract: In addition to difficulties that affect movement, many adults with Parkinson's disease (PD) experience changes that negatively impact on receptive aspects of their communication. For example, some PD patients have difficulties processing non-verbal expressions (facial expressions, voice tone) and many are less sensitive to ‘non-literal’ or pragmatic meanings of language, at least under certain conditions. This chapter outlines how PD can affect the comprehension of language and non-verbal expressions and considers how these changes are related to concurrent alterations in cognition (e.g., executive functions, working memory) and motor signs associated with the disease. Our summary underscores that the progressive course of PD can interrupt a number of functional systems that support cognition and receptive language, and in different ways, leading to both primary and secondary impairments of the systems that support linguistic and non-verbal communication.

Link to article

-- (Monetta, L., Cheang, H.S., & Pell, M.D.) (2008). Understanding speaker attitudes from prosody by adults with Parkinson’s disease. Journal of Neuropsychology, 2(2), 415-430.

Abstract: The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

Link to article

-- (Pell, M.D. & Skorup, V.) (2008). Implicit processing of emotional prosody in a foreign versus native language. Speech Communication, 50(6), 519-530.

Abstract: To test ideas about the universality and time course of vocal emotion processing, 50 English listeners performed an emotional priming task to determine whether they implicitly recognize emotional meanings of prosody when exposed to a foreign language. Arabic pseudo-utterances produced in a happy, sad, or neutral prosody acted as primes for a happy, sad, or ‘false’ (i.e., non-emotional) face target, and participants judged whether the facial expression represents an emotion. The prosody-face relationship (congruent, incongruent) and the prosody duration (600 or 1000 ms) were independently manipulated in the same experiment. Results indicated that English listeners automatically detect the emotional significance of prosody when expressed in a foreign language, although activating emotional meanings in a foreign language may require greater exposure to prosodic information than listening to the native language does.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2008). The sound of sarcasm. Speech Communication, 50 (5), 366-381.

Abstract: The present study was conducted to identify possible acoustic cues of sarcasm. Native English speakers produced a variety of simple utterances to convey four different attitudes: sarcasm, humour, sincerity, and neutrality. Following validation by a separate naïve group of native English speakers, the recorded speech was subjected to acoustic analyses for the following features: mean fundamental frequency (F0), F0 standard deviation, F0 range, mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio (HNR, to probe for voice quality changes), and one-third octave spectral values (to probe resonance changes). The results of analyses indicated that sarcasm was reliably characterized by a number of prosodic cues, although one acoustic feature appeared particularly robust in sarcastic utterances: overall reductions in mean F0 relative to all other target attitudes. Sarcasm was also reliably distinguished from sincerity by overall reductions in HNR and in F0 standard deviation. In certain linguistic contexts, sarcasm could be differentiated from sincerity and humour through changes in resonance and reductions in both speech rate and F0 range. Results also suggested a role of language used by speakers in conveying sarcasm and sincerity. It was concluded that sarcasm in speech can be characterized by a specific pattern of prosodic cues in addition to textual cues, and that these acoustic characteristics can be influenced by language used by the speaker.

Link to article

-- (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2008). How aging affects the recognition of emotional speech. Brain and Language, 104, 262-269.

Abstract: To successfully infer a speaker's emotional state, diverse sources of emotional information need to be decoded. The present study explored to what extent emotional speech recognition of 'basic' emotions (anger, disgust, fear, happiness, pleasant surprise, sadness) differs between different sex (male/female) and age (young/middle-aged) groups in a behavioural experiment. Participants were asked to identify the emotional prosody of a sentence as accurately as possible. As a secondary goal, the perceptual findings were examined in relation to acoustic properties of the sentences presented. Findings indicate that emotion recognition rates differ between the different categories tested and that these patterns varied significantly as a function of age, but not of sex.

Link to article

-- (Dara, C., Monetta, L., & Pell, M.D.) (2008). Vocal emotion processing in Parkinson’s disease: reduced sensitivity to negative emotions. Brain Research, 1188, 100-111.

Abstract: To document the impact of Parkinson's disease (PD) on communication and to further clarify the role of the basal ganglia in the processing of emotional speech prosody, this investigation compared how PD patients identify basic emotions from prosody and judge specific affective properties of the same vocal stimuli, such as valence or intensity. Sixteen non-demented adults with PD and 17 healthy control (HC) participants listened to semantically-anomalous pseudo-utterances spoken in seven emotional intonations (anger, disgust, fear, sadness, happiness, pleasant surprise, neutral) and two distinct levels of perceived emotional intensity (high, low). On three separate occasions, participants classified the emotional meaning of the prosody for each utterance (identification task), rated how positive or negative the stimulus sounded (valence rating task), or rated how intense the emotion was expressed by the speaker (intensity rating task). Results indicated that the PD group was significantly impaired relative to the HC group for categorizing emotional prosody and showed a reduced sensitivity to valence, but not intensity, attributes of emotional expressions conveying anger, disgust, and fear. The findings are discussed in light of the possible role of the basal ganglia in the processing of discrete emotions, particularly those associated with negative vigilance, and of how PD may impact on the sequential processing of prosodic expressions.

Link to article

-- (Paulmann, S., Schmidt, P., Pell, M.D., & Kotz, S.A.) (2008). Rapid processing of emotional and voice information as evidenced by ERPs. Proceedings of the 4th International Conference on Speech Prosody (pp. 205-209). Campinas, Brazil.

Abstract: Next to linguistic content, the human voice carries speaker identity information (e.g. female/male, young/old) and can also carry emotional information. Although various studies have started to specify the brain regions that underlie the different functions of human voice processing, few studies have aimed to specify the time course underlying these processes. By means of event-related potentials (ERPs) we aimed to determine the time-course of neural responses to emotional speech, speaker identification, and their interplay. While engaged in an implicit voice processing task (probe verification) participants listened to emotional sentences spoken by two female and two male speakers of two different ages (young and middle-aged). For all four speakers rapid emotional decoding was observed as emotional sentences could be differentiated from neutral sentences already within 200 ms after sentence onset (P200). However, results also imply that individual capacity to encode emotional expressions may have an influence on this early emotion detection as the P200 differentiation pattern (neutral vs. emotion) differed for each individual speaker.

Link to article

Dr. Linda Polka
POLKA, L. (Shahnaz, N., Miranda, T., & Polka, L.) (2008). Multi-frequency tympanometry in neonatal intensive care unit & well babies. Journal of the American Academy of Audiology, 19(5), 392-418.

Abstract:
Conventional low probe tone frequency tympanometry has not been successful in identifying middle ear effusion in newborn infants due to differences in the physiological properties of the middle ear between newborn infants and adults. With the rapid increase in newborn hearing screening programs, there is a need for a reliable test of middle ear function for this population. In recent years, new evidence has shown that tympanometry performed at higher probe tone frequencies may be more sensitive to middle ear disease in newborn infants than conventional low probe tone frequency tympanometry.

PURPOSE: The main goal of this study was to explore the characteristics of the normal middle ear in the NICU (neonatal intensive care unit) and well babies using conventional and multifrequency tympanometry (MFT). It was also within the scope of this study to compare conventional and MFT patterns in NICU and well babies to already established patterns in adults to identify ways to improve hearing assessment in newborns and young infants.

METHODS: Three experiments were conducted using standard tympanometry and MFT involving healthy babies and NICU babies. NICU babies (n = 33), healthy three-week-old babies (n = 16), and neonates on a high-priority hearing registry (HPHR) (n = 42) were tested. Thirty-two ears of 16 healthy Caucasian adults (compared with the well babies) and 47 ears of 26 healthy Caucasian adults (compared with the NICU babies) were also included in this study.

RESULTS: The distribution of the Vanhuyse patterns, as well as the variation of admittance phase and peak compensated susceptance and conductance at different probe tone frequencies, was explored. In general, in both well babies and NICU babies, 226 Hz tympanograms were typically multipeaked in ears that passed or were referred on transient evoked otoacoustic emission (TEOAE) testing, limiting the specificity and sensitivity of this measure for differentiating normal and abnormal middle ear conditions. Tympanograms obtained at 1 kHz are potentially more sensitive and specific to presumably abnormal and normal middle ear conditions. Tympanometry at 1 kHz is also a good predictor of the presence or absence of TEOAEs.

Link to article

-- (Rvachew, S., Alhaidary, A., Mattock, K., & Polka, L.) (2008). Emergence of corner vowels in the babble produced by infants exposed to Canadian English or Canadian French. Journal of Phonetics, 36, 564-577.

Abstract: This paper examined the emergence of corner vowels ([i], [u], [æ] and [a]) in infant vowel spaces and the influence of the ambient language on babbling, in particular on the frequency of occurrence of the corner vowels. Speech samples were recorded from 51 Canadian infants from 8 to 18 months of age: English-learning infants (n = 24) and French-learning infants (n = 27). The acoustic parameters (F1 and F2) of each codable infant vowel were analyzed and then used to plot all the vowels along the diffuse–compact (F2−F1) and grave–acute ([F1+F2]/2) dimensions. Listener judgments of vowel category were obtained for the most extreme vowels in each infant's vowel space, i.e., the 10% of vowels with minimum or maximum diffuse–compact and grave–acute values. The judgments of adult listeners, both anglophone (n = 5) and francophone (n = 5), confirmed the peripheral expansion of the infant vowel space toward the diffuse and grave corners with age. Furthermore, English-learning infants were judged by both English- and French-speaking listeners to produce a greater frequency of [u] in the grave corner than French-learning infants. The higher proportion of [u] in the English sample was observed throughout the age range, suggesting an influence of the ambient language at a young age.
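For readers wishing to replicate the plotting dimensions described above, both values follow directly from the first two formant frequencies. A minimal sketch (the function name and the formant values below are illustrative, not taken from the study):

```python
def vowel_dimensions(f1, f2):
    """Compute the two acoustic dimensions used to plot infant vowels.

    diffuse-compact: F2 - F1 (large for diffuse vowels such as [i])
    grave-acute: (F1 + F2) / 2 (small for grave vowels such as [u])
    Inputs are formant frequencies in Hz.
    """
    return f2 - f1, (f1 + f2) / 2.0

# Illustrative adult-like formant values (Hz) for an [i]-like vowel
dc, ga = vowel_dimensions(f1=300.0, f2=2300.0)
print(dc, ga)  # 2000.0 1300.0
```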

Link to article

-- (Polka, L., Rvachew, S. & Molnar, M.) (2008). Speech perception by 6- to 8-month-olds in the presence of distracting sound. Infancy, 13(5), 421-439.

Abstract: The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a nonspeech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no overlapping frequencies with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

Link to article

-- (Sundara, M., Polka, L., & Molnar, M.) (2008). Development of coronal stop perception: Bilingual infants keep pace with their monolingual peers. Cognition, 108, 232-242.

Abstract: Previous studies indicate that the discrimination of native phonetic contrasts in infants exposed to two languages from birth follows a different developmental time course from that observed in monolingual infants. We compared infant discrimination of dental (French) and alveolar (English) place variants of /d/ in three groups differing in language experience. At 6–8 months, infants in all three language groups succeeded; at 10–12 months, monolingual English and bilingual but not monolingual French infants distinguished this contrast. Thus, for highly frequent, similar phones, despite overlap in cross-linguistic distributions, bilingual infants performed on par with their English monolingual peers and better than their French monolingual peers.

Link to article

-- (Sundara, M. & Polka, L.) (2008). Discrimination of coronal stops by bilingual adults: The timing and nature of language interaction. Cognition, 106, 234-258.

Abstract: The current study was designed to investigate the timing and nature of interaction between the two languages of bilinguals. For this purpose, we compared discrimination of Canadian French and Canadian English coronal stops by simultaneous bilingual, monolingual and advanced early L2 learners of French and English. French /d/ is phonetically described as dental whereas English /d/ is described as alveolar. Using a categorial AXB task, the performance of all four groups was compared to chance and to the performance of native Hindi listeners. Hindi listeners performed well above chance in discriminating French and English /d/-initial syllables. The discrimination performance of advanced early L2 learners, but not simultaneous bilinguals, was consistent with one merged category for coronal stops in the two languages. The data provide evidence for interaction in L2 learners as well as simultaneous bilinguals; however, the nature of the interaction is different in the two groups.

Link to article

-- (Mattock, K., Molnar, M., Polka, L. & Burnham, D.) (2008). The developmental time course of lexical tone perception in the first year of life. Cognition, 106, 1367-1381.

Abstract: Perceptual reorganisation of infants’ speech perception has been found from 6 months for consonants and earlier for vowels. Recently, similar reorganisation has been found for lexical tone between 6 and 9 months of age. Given that there is a close relationship between vowels and tones, this study investigates whether the perceptual reorganisation for tone begins earlier than 6 months. Non-tone language English and French infants were tested with the Thai low vs. rising lexical tone contrast, using the stimulus alternating preference procedure. Four- and 6-month-old infants discriminated the lexical tones, and there was no decline in discrimination performance across these ages. However, 9-month-olds failed to discriminate the lexical tones. This particular pattern of decline in nonnative tone discrimination over age indicates that perceptual reorganisation for tone does not parallel the developmentally prior decline observed in vowel perception. The findings converge with previous developmental cross-language findings on tone perception in English-language infants [Mattock, K., & Burnham, D. (2006). Chinese and English infants’ tone perception: Evidence for perceptual reorganization. Infancy, 10(3)], and extend them by showing similar perceptual reorganisation for non-tone language infants learning rhythmically different non-tone languages (English and French).

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Mortimer, J., & Rvachew, S.) (2008). Morphosyntax and phonological awareness in children with speech sound disorders. Annals of the New York Academy of Sciences, 1145, 275-282.

Abstract: The goals of the current study were to examine concurrent and longitudinal relationships of expressive morphosyntax and phonological awareness in a group of children with speech sound disorders. Tests of phonological awareness were administered to 38 children at the end of their prekindergarten and kindergarten years. Speech samples were elicited and analyzed to obtain a set of expressive morphosyntax variables. Finite verb morphology and inflectional suffix use by prekindergarten children were found to predict significant unique variance in change in phonological awareness a year later. These results are consistent with previous research showing finite verb morphology to be a sensitive indicator of language impairment in English.

Link to article

-- (Rvachew, S., Alhaidary, A., Mattock, K., & Polka, L.) (2008). Emergence of corner vowels in the babble produced by infants exposed to Canadian English or Canadian French. Journal of Phonetics, 36, 564-577.

Abstract: This paper examined the emergence of corner vowels ([i], [u], [æ] and [a]) in infant vowel spaces and the influence of the ambient language on babbling, in particular on the frequency of occurrence of the corner vowels. Speech samples were recorded from 51 Canadian infants from 8 to 18 months of age: English-learning infants (n = 24) and French-learning infants (n = 27). The acoustic parameters (F1 and F2) of each codable infant vowel were analyzed and then used to plot all the vowels along the diffuse–compact (F2−F1) and grave–acute ([F1+F2]/2) dimensions. Listener judgments of vowel category were obtained for the most extreme vowels in each infant's vowel space, i.e., the 10% of vowels with minimum or maximum diffuse–compact and grave–acute values. The judgments of adult listeners, both anglophone (n = 5) and francophone (n = 5), confirmed the peripheral expansion of the infant vowel space toward the diffuse and grave corners with age. Furthermore, English-learning infants were judged by both English- and French-speaking listeners to produce a greater frequency of [u] in the grave corner than French-learning infants. The higher proportion of [u] in the English sample was observed throughout the age range, suggesting an influence of the ambient language at a young age.

Link to article

-- (Rvachew, S., & Grawburg, M.) (2008). Reflections on phonological working memory, letter knowledge and phonological awareness: A reply to Hartmann (2008). Journal of Speech, Language, and Hearing Research, 51, 1219-1226.

Abstract:
Purpose: S. Rvachew and M. Grawburg (2006) found that speech perception and vocabulary skills jointly predicted the phonological awareness skills of children with a speech sound disorder. E. Hartmann (2008) suggested that the Rvachew and Grawburg model would be improved by the addition of phonological working memory. Hartmann further suggested that the link between phoneme awareness and letter knowledge should be modeled as a reciprocal relationship. In this letter, Rvachew and Grawburg respond to Hartmann's suggestions for modification of the model.

Method: The literature on the role of phonological working memory in the development of vocabulary knowledge and phonological awareness was reviewed. Data presented previously by Rvachew and Grawburg (2006) and Rvachew (2006) were reanalyzed.

Results: The reanalysis of previously reported longitudinal data revealed that the relationship between letter knowledge and specific aspects of phonological awareness was not reciprocal for kindergarten-age children with a speech sound disorder.

Conclusions: Phonological working memory, if measured so that relative performance levels do not reflect differences in articulatory accuracy, may not alter the model because of its close correspondence with speech perception skills. However, further study of the hypothesized causal relationships modeled by Rvachew and Grawburg (2006) would be valuable, especially if experimental research designs were used.

Link to article

-- (Polka, L., Rvachew, S. & Molnar, M.) (2008). Speech perception by 6- to 8-month-olds in the presence of distracting sound. Infancy, 13(5), 421-439.

Abstract: The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a nonspeech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no overlapping frequencies with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

Link to article

-- (MacLeod, A., Brosseau-Lapré, F., & Rvachew, S.) (2008). Explorer la relation entre la production et la perception de la parole. Spectrum, 1, 10-18.

Abstract: The goal of this critical review is to explore the relationship between speech production and speech perception in children with typical development and in children with phonological disorders. First, we describe the three main theories of speech production and perception: motor theories, gestural theories, and integrative theories. Second, we describe the findings of current research examining the links between speech production and perception. Third, we evaluate the hypotheses proposed by the three main theories and the research findings in order to suggest future theoretical and clinical developments. Current research findings support the hypothesis of a continuous link between speech production and perception, as proposed by the integrative theories, which posit an ongoing role for speech perception in the planning and production of speech.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (Steinhauer, K. & Connolly, J.F.) (2008). Event-related potentials in the study of language. In B. Stemmer & H. Whitaker (Eds.), Handbook of the Neuroscience of Language (pp. 91-104). New York: Elsevier.

Book description:
In the last ten years the neuroscience of language has matured as a field. Ten years ago, neuroimaging was just being explored for neurolinguistic questions, whereas today it constitutes a routine component. At the same time there have been significant developments in linguistic and psychological theory that speak to the neuroscience of language. This book consolidates those advances into a single reference.

The Handbook of the Neuroscience of Language provides a comprehensive overview of this field and is divided into five sections. Section one discusses methods and techniques, including clinical assessment approaches, methods of mapping the human brain, and a theoretical framework for interpreting the multiple levels of neural organization that contribute to language comprehension. Section two discusses the impact imaging techniques (PET, fMRI, ERPs, electrical stimulation of language cortex, TMS) have made on language research. Section three discusses experimental approaches to the field, including disorders at different language levels in reading as well as writing and number processing. Additionally, chapters here present computational models, discuss the role of mirror systems for language, and cover brain lateralization with respect to language. Section four focuses on language in special populations, in various disease processes, and in developmental disorders. The book ends with a listing of resources in the neuroscience of language and a glossary of terms and concepts to help the novice become acquainted with the field.

Book information:
ISBN: 9780080453521

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2008). L’évaluation du langage des enfants bilingues. Fréquences : revue de l’ordre des orthophonistes et audiologistes du Québec.

-- (Webster, R., Erdos, C., Evans, K., Majnemer, A., Saigal, G., Kehayia, E., Thordardottir, E., & Shevell, M.) (2008). Neurological and magnetic resonance imaging findings in children with developmental language impairment. Journal of Child Neurology, 23 (8), 870-877.

Abstract: Neurologic and radiologic findings in children with well-defined developmental language impairment have rarely been systematically assessed. Children aged 7 to 13 years with developmental language impairment or normal language (controls) underwent language, nonverbal cognitive, motor and neurological assessments, standardized assessment for subtle neurological signs, and magnetic resonance imaging. Nine children with developmental language impairment and 12 controls participated. No focal abnormalities were identified on standard neurological examination. Age and developmental language impairment were independent predictors of neurological subtle signs scores (r(2) = 0.52). Imaging abnormalities were identified in two boys with developmental language impairment and no controls (P = .17). Lesions identified were predicted neither by history nor by neurological examination. Previously unsuspected lesions were identified in almost 25% of children with developmental language impairment. Constraints regarding cooperation and sedation requirements may limit the clinical application of imaging modalities in this population.

Link to article

-- (Thordardottir, E.) (2008). Language specific effects of task demands on the manifestation of specific language impairment: A comparison of English and Icelandic. Journal of Speech, Language and Hearing Research, 51, 922-937.

Abstract:
Purpose: Previous research has indicated that the manifestation of specific language impairment (SLI) varies according to factors such as language, age, and task. This study examined the effect of task demands on language production in children with SLI cross-linguistically.

Method: Icelandic- and English-speaking school-age children with SLI and normal language (NL) peers (n = 42) were administered measures of verbal working memory. Spontaneous language samples were collected in contexts that vary in task demands: conversation, narration, and expository discourse. The effect of the context-related task demands on the accuracy of grammatical inflections was examined.

Results: Children with SLI in both language groups scored significantly lower than their NL peers in verbal working memory. Nonword repetition scores correlated with morphological accuracy. In both languages, mean length of utterance (MLU) varied systematically across sampling contexts. Context exerted a significant effect on the accuracy of grammatical inflection in English only. Error rates were higher overall in English than in Icelandic, but whether the difference was significant depended on the sampling context. Errors in Icelandic involved verb and noun phrase inflection to a similar extent.

Conclusions: The production of grammatical morphology appears to be more taxing for children with SLI who speak English than for those who speak Icelandic. Thus, whereas children with SLI in both language groups evidence deficits in language processing, cross-linguistic differences are seen in which linguistic structures are vulnerable when processing load is increased. Future research should carefully consider the effect of context on children's language performance.

Link to article

-- (Royle, P. & Thordardottir, E.) (2008). Elicitation of the passé composé in French preschoolers with and without SLI. Applied Psycholinguistics, 29, 341-365.

Abstract: This study examines inflectional abilities in French-speaking children with specific language impairment (SLI) using a verb elicitation task. Eleven children with SLI and age-matched controls (37–52 months) participated in the experiment. We elicited the passé composé using eight regular and eight irregular high-frequency verbs matched for age of acquisition. Children with SLI showed productive verb inflection abilities that differed from those of control children (even when comparing participants with similar verb vocabularies and mean lengths of utterance in words). Control children showed evidence of overregularization and sensitivity to morphological structure, whereas no such effects were observed in the SLI group. Error patterns observed in the SLI group demonstrate that, at this age, these children cannot produce passé composé forms in elicitation tasks, even though some participants used them spontaneously. Either context by itself might therefore be insufficient to fully evaluate productive linguistic abilities in children with SLI.

Link to article

2007

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Klepousniotou, E., & Baum, S.) (2007). Disambiguating the ambiguity advantage effect in word recognition: An advantage for polysemous but not homonymous words. Journal of Neurolinguistics, 20, 1-24.

Abstract: Previous lexical decision studies reported a processing advantage for words with multiple meanings (i.e., the “ambiguity advantage” effect). The present study further specifies the source of this advantage by showing that it is based on the extent of meaning relatedness of ambiguous words. Four types of ambiguous words, balanced homonymous (e.g., “panel”), unbalanced homonymous (e.g., “port”), metaphorically polysemous (e.g., “lip”), and metonymically polysemous (e.g., “rabbit”), were used in auditory and visual simple lexical decision experiments. It was found that ambiguous words with multiple related senses (i.e., polysemous words) are processed faster than frequency-matched unambiguous control words, whereas ambiguous words with multiple unrelated meanings (i.e., homonymous words) do not show such an advantage. In addition, a distinction within polysemy (into metaphor and metonymy) is demonstrated experimentally. These results call for a re-evaluation of models of word recognition, so that the advantage found for polysemous, but not homonymous, words can be accommodated.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L.M. (Gonnerman, L.M., Seidenberg, M.S., & Andersen, E.S.) (2007). Graded semantic and phonological similarity effects in priming: Evidence for a distributed connectionist approach to morphology. Journal of Experimental Psychology: General, 136, 323-345.

Abstract: A considerable body of empirical and theoretical research suggests that morphological structure governs the representation of words in memory and that many words are decomposed into morphological components in processing. The authors investigated an alternative approach in which morphology arises from the interaction of semantic and phonological codes. A series of cross-modal lexical decision experiments shows that the magnitude of priming reflects the degree of semantic and phonological overlap between words. Crucially, moderately similar items produce intermediate facilitation (e.g., lately-late). This pattern is observed for word pairs exhibiting different types of morphological relationships, including suffixed-stem (e.g., teacher-teach), suffixed-suffixed (e.g., saintly-sainthood), and prefixed-stem pairs (preheat-heat). The results can be understood in terms of connectionist models that use distributed representations rather than discrete morphemes.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Beal, D.S., Gracco, V. L., Lafaille, S. J., & DeNil, L. F.) (2007). Voxel-based morphometry of auditory and speech-related cortex in stutterers. NeuroReport, 18 (12), 1102-1110.

Abstract: Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Ozonoff, S., Young, G., Rozga, A., Sigman, M., & Rogers, S. J.) (2007). A prospective study of response-to-name in infants at risk for autism. Archives of Pediatrics and Adolescent Medicine, Theme issue on Autism, 161(4), 378-383.

Abstract:
OBJECTIVE: To assess the sensitivity and specificity of decreased response to name at age 12 months as a screen for autism spectrum disorders (ASD) and other developmental delays.

DESIGN: Prospective, longitudinal design studying infants at risk for ASD.

SETTING: Research laboratory at university medical center.

PARTICIPANTS: Infants at risk for autism (55 six-month-olds, 101 twelve-month-olds) and a control group at no known risk (43 six-month-olds, 46 twelve-month-olds). To date, 46 at-risk infants and 25 control infants have been followed up to 24 months.

INTERVENTION: Experimental task eliciting response-to-name behavior.

MAIN OUTCOME MEASURES: Autism Diagnostic Observation Schedule, Mullen Scales of Early Learning.

RESULTS: At age 6 months, there was a nonsignificant trend for control infants to require fewer name calls to respond than infants at risk for autism. At age 12 months, 100% of infants in the control group "passed," responding on the first or second name call, while 86% in the at-risk group did. Three-fourths of children who failed the task were identified with developmental problems at age 24 months. Specificity of failing to respond to name was 0.89 for ASD and 0.94 for any developmental delay. Sensitivity was 0.50 for ASD and 0.39 for any developmental delay.

CONCLUSIONS: Failure to respond to name by age 12 months is highly suggestive of developmental abnormality but does not identify all children at risk for developmental problems. Lack of responding to name is not universal among infants later diagnosed with ASD and/or other developmental delays. Poor response to name may be a trait of the broader autism phenotype in infancy.

Link to article

-- (Nadig, A., Ozonoff, S., Singh, L., Young, G., & Rogers, S. J.) (2007). Do 6-month-old infants at risk for autism display an infant-directed speech preference? Proceedings of the 31st annual Boston University Conference on Language Development. Somerville: Cascadilla Press.

Dr. Marc Pell
PELL, M. (Berney, A., Panisset, M., Sadikot, A.F., Ptito, A., Dagher, A., Fraraccio, M., Savard, G., Pell, M.D. & Benkelfat, C.) (2007). Mood stability during acute stimulator challenge in Parkinson’s disease patients under long-term treatment with subthalamic deep brain stimulation. Movement Disorders, 22 (8), 1093-1096.

Abstract: Acute and chronic behavioral effects of subthalamic stimulation (STN-DBS) for Parkinson's disease (PD) are reported in the literature. As the technique is relatively new, few systematic studies of the behavioral effects in long-term treated patients are available. To further study the putative effects of STN-DBS on mood and emotional processing, 15 consecutive PD patients under STN-DBS for at least 1 year were tested ON and OFF stimulation, while on or off medication, with instruments sensitive to short-term changes in mood and in emotional discrimination. After acute changes in experimental conditions, core mood dimensions (depression, elation, anxiety) and emotion discrimination processing remained remarkably stable in the face of significant motor changes. Acute stimulator challenge in long-term STN-DBS-treated PD patients does not appear to provoke clinically relevant mood effects.

Link to article

-- (Pell, M.D.) (2007). Reduced sensitivity to prosodic attitudes in adults with focal right hemisphere brain damage. Brain and Language, 101, 64-79.

Abstract: Although there is a strong link between the right hemisphere and understanding emotional prosody in speech, there are few data on how the right hemisphere is implicated for understanding the emotive "attitudes" of a speaker from prosody. This report describes two experiments which compared how listeners with and without focal right hemisphere damage (RHD) rate speaker attitudes of "confidence" and "politeness" which are signalled in large part by prosodic features of an utterance. The RHD listeners displayed abnormal sensitivity to both the expressed confidence and politeness of speakers, underscoring a major role for the right hemisphere in the processing of emotions and speaker attitudes from prosody, although the source of these deficits may sometimes vary.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2007). An acoustic investigation of Parkinsonian speech in linguistic and emotional contexts. Journal of Neurolinguistics, 20, 221-241.

Abstract: The speech prosody of a group of patients in the early stages of Parkinson's disease (PD) was compared to that of a group of healthy age- and education-matched controls to quantify possible acoustic changes in speech production secondary to PD. Both groups produced standardized speech samples across a number of prosody conditions: phonemic stress, contrastive stress, and emotional prosody. The amplitude, fundamental frequency, and duration of all tokens were measured. PD speakers produced speech that was of lower amplitude than the tokens of healthy speakers in many conditions across all production tasks. Fundamental frequency distinguished the two speaker groups for contrastive stress and emotional prosody production, and duration differentiated the groups for phonemic stress production. It was concluded that motor impairments in PD lead to adverse and varied acoustic changes which affect a number of prosodic contrasts in speech and that these alterations appear to occur in earlier stages of disease progression than is often presumed by many investigators.

Link to article

-- (Monetta, L. & Pell, M.D.) (2007). Effects of verbal working memory deficits on metaphor comprehension in patients with Parkinson's disease. Brain and Language, 101, 80-89.

Abstract: This research studied one aspect of pragmatic language processing, the ability to understand metaphorical language, to determine whether patients with Parkinson's disease (PD) are impaired in these abilities, and whether cognitive resource limitations/fronto-striatal dysfunction contributes to these deficits. Seventeen PD participants and healthy controls (HC) completed a series of neuropsychological tests and performed a metaphor comprehension task following the methods of Gernsbacher and colleagues [Gernsbacher, M. A., Keysar, B., Robertson, R. R. W., & Werner, N. K. (2001). The role of suppression and enhancement in understanding metaphors. Journal of Memory and Language, 45, 433-450]. When participants in the PD group were identified as "impaired" or "unimpaired" relative to the control group on a measure of verbal working memory span, we found that only PD participants with impaired working memory were simultaneously impaired in the processing of metaphorical language. Based on our findings, we argue that certain "complex" forms of language processing, such as metaphor interpretation, are highly dependent on intact fronto-striatal systems for working memory, which are frequently, although not always, compromised during the early course of PD.

Link to article

-- (Dara, C. & Pell, M.D.) (2007, Spring). Intonation in tone languages. ASHA Kiran: Newsletter of the Asian Indian Caucus, 8.

Dr. Linda Polka
POLKA, L. (Polka, L. Rvachew, S. & Mattock, K.) (2007). Experiential influences on speech perception and production during infancy. In E. Hoff & M. Shatz (Eds), Handbook of Child Language. Oxford: Blackwell.

Abstract: Mature language users are highly specialized, expert, and efficient perceivers and producers of their native language. This expertise begins to develop in infancy, a time when the infant acquires language-specific perception of native language phonetic categories and learns to produce speech-like syllables in the form of canonical babble. The emergence of these skills is well described by past research, but the precise mechanisms by which these foundational abilities develop have not been identified. This chapter provides an overview of what is currently known about the impact of language experience on the development of speech perception and production during infancy. Throughout, we affirm that experiential influences on phonetic development cannot be understood without considering the interaction between the constraints that the child brings to the task and the nature of the environmental input. In the perception and production domains, our current understanding of this interaction is incomplete and tends to focus on the child as a passive receiver of input. In our review, we signal a recent shift in research attention to the infant’s role in actively selecting and learning from the input. We begin this chapter by describing what is currently known about the determinants of speech perception and speech production development during infancy while highlighting important gaps to be filled within each domain. We close by emphasizing the need to integrate research across the perception and production domains.

Link to book

Dr. Susan Rvachew
RVACHEW, S. (Chiang, P. & Rvachew, S.) (2007). English-French bilingual children’s phonological awareness and vocabulary skills. Canadian Journal of Applied Linguistics, 10, 293-308.

Abstract: This study examined the relationship between English-speaking children’s vocabulary skills in English and in French and their phonological awareness skills in both languages. Forty-four kindergarten-aged children attending French immersion programs were administered a receptive vocabulary test, an expressive vocabulary test and a phonological awareness test in English and French. Results showed that French phonological awareness was largely explained by English phonological awareness, consistent with previous findings that phonological awareness skills transfer across languages. However, there was a small unique contribution from French expressive vocabulary size to French phonological awareness. The importance of vocabulary skills to the development of phonological awareness is discussed.

Link to article

-- (Rvachew, S.) (2007). Phonological processing and reading in children with speech sound disorders. American Journal of Speech-Language Pathology, 16, 260-270.

Abstract:
Purpose: To examine the relationship between phonological processing skills prior to kindergarten entry and reading skills at the end of 1st grade, in children with speech sound disorders (SSD).

Method: The participants were 17 children with SSD and poor phonological processing skills (SSD-low PP), 16 children with SSD and good phonological processing skills (SSD-high PP), and 35 children with typical speech who were first assessed during their prekindergarten year using measures of phonological processing (i.e., speech perception, rime awareness, and onset awareness tests), speech production, receptive and expressive language, and phonological awareness skills. This assessment was repeated when the children were completing 1st grade. The Test of Word Reading Efficiency was also conducted at that time. First-grade sight word and nonword reading performance was compared across these groups.

Results: At the end of 1st grade, the SSD-low PP group achieved significantly lower nonword decoding scores than the SSD-high PP and typical speech groups. The 2 SSD groups demonstrated similarly good receptive language skills and similarly poor articulation skills at that time, however. No between-group differences in sight word reading were observed. All but 1 child (in the SSD-low PP group) obtained reading scores that were within normal limits.

Conclusion: Weaknesses in phonological processing were stable for the SSD-low PP subgroup over a 2-year period.

Link to article

-- (Grawburg, M. & Rvachew, S.) (2007). Phonological awareness intervention for children with speech sound disorders. Journal of Speech-Language Pathology and Audiology, 31, 19-26.

Abstract: Phonological awareness (PA) development is related to the development of decoding and reading skills. PA can be measured in young children before the commencement of school and formal reading instruction. Compared to normally developing children, children with speech sound disorders (SSD) are at increased risk for delayed PA. Children with poor PA, who are at risk for developing poor decoding skills, can be identified and treated before poor PA negatively impacts their future literacy development. This intervention program was developed as a form of early intervention for preschool-aged children with delayed PA. Ten 4-year-old children with poor PA and SSD participated in the study. The program consisted of eight sessions, which included both a PA and a speech perception component. The PA portion focused on matching words that shared either the same onset or rime. The speech perception portion focused on the identification of correctly articulated or misarticulated words containing the target onset. Participants made significant improvements in their PA, raising their post-treatment test scores to the level of normally developing children. The unique and important role of speech-language pathologists in the stimulation of PA in children prior to the commencement of formal schooling is highlighted.

Link to article

-- (Rvachew, S., Chiang, P., & Evans, N.) (2007). Characteristics of speech errors produced by children with and without delayed phonological awareness skills. Language, Speech, and Hearing Services in Schools, 38, 1-12.

Abstract:
PURPOSE: The purpose of this study was to examine the relationship between the types of speech errors that are produced by children with speech-sound disorders and the children's phonological awareness skills during their prekindergarten and kindergarten years.

METHOD: Fifty-eight children with speech-sound disorders were assessed during the spring of their prekindergarten year and then again at the end of their kindergarten year. The children's responses on the Goldman–Fristoe Test of Articulation (R. Goldman & M. Fristoe, 2000) were described in terms of match ratios for the features of each target sound and the type of error produced. Match ratios and error type frequencies were then examined as a function of the child's performance on a test of phonological awareness.

RESULTS: Lower match ratios for +distributed and higher frequencies of typical syllable structure errors and atypical segment errors were associated with poorer phonological awareness test performance. However, no aspect of the children's error patterns proved to be a reliable indicator of which individual child would pass or fail the test. The best predictor of test performance at the end of the kindergarten year was test performance 1 year earlier. Children who achieved age-appropriate articulation skills by the end of kindergarten also achieved age-appropriate phonological awareness skills.

CONCLUSION: Children who enter kindergarten with delayed articulation skills should be monitored to ensure age-appropriate acquisition of phonological awareness and literacy skills.

Link to article

-- (Rvachew, S.) (2007). Perceptual foundations of speech acquisition. In S. McLeod (Ed.), International Guide to Speech Acquisition (pp. 26-30). Clifton Park, NY: Thomson Delmar Learning.

Book description: The International Guide to Speech Acquisition is a comprehensive guide that is ideal for speech-language pathologists working with children from a wide variety of language backgrounds. Offering coverage on 12 English-speaking dialects and 24 languages other than English, you will find the information you need to identify children who are having speech difficulties and provide age-appropriate prevention and intervention targets.

Book information:
ISBN 13: 9781418053604
ISBN 10: 1418053600

Link to book

-- (Polka, L., Rvachew, S., & Mattock, K.) (2007). Experiential influences on speech perception and production in infancy. In E. Hoff & M. Shatz (Eds.), Blackwell Handbook of Language Development (pp. 153-172). Malden, MA: Blackwell Publishing.

Book description: The Blackwell Handbook of Language Development provides a comprehensive treatment of the major topics and current concerns in the field; exploring the progress of 21st century research, its precursors, and promising research topics for the future.

    • Provides comprehensive treatments of the major topics and current concerns in the field of language development
    • Explores foundational and theoretical approaches
    • Focuses on the 21st century's research into the areas of brain development, computational skills, bilingualism, education, and cross-cultural comparison
    • Looks at language development in infancy through early childhood, as well as atypical development
    • Considers past work, present research, and promising topics for the future
    • Broad coverage makes this an excellent resource for graduate students in a variety of disciplines

Book information:
ISBN 13: 978-1405132534
ISBN 10: 1405132531

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E.) (2007). Móðurmál og tvítyngi (Mother tongue and bilingualism). In H. Ragnarsdóttir, E. Sigríður Jónsdóttir & M. Þorkell Bernharðsson (Eds.), Fjölmenning á Íslandi (Multiculturalism in Iceland) (pp. 101-128). Reykjavik, Iceland: Rannsóknastofa í fjölmenningarfræðum KHÍ & Háskólaútgáfan (College of Education Research Center on Multiculturalism, and University of Iceland Press).

-- (Thordardottir, E. & Namazi, M.) (2007). Specific language impairment in French-speaking children: Beyond grammatical morphology. Journal of Speech, Language, and Hearing Research, 50, 698-715.

Abstract:
Purpose: Studies on specific language impairment (SLI) in French have identified specific aspects of morphosyntax as particularly vulnerable. However, a cohesive picture of relative strengths and weaknesses characterizing SLI in French has not been established. In light of normative data showing low morphological error rates in the spontaneous language of French-speaking preschoolers, the relative prominence of such errors in SLI in young children was questioned.

Method: Spontaneous language samples were collected from 12 French-speaking preschool-age children with SLI, as well as 12 children with normal language development matched on age and 12 children with normal language development matched on mean length of utterance. Language samples were analyzed for length of utterance; lexical diversity and composition; diversity of grammatical morphology and morphological errors, including verb finiteness; subject omission; and object clitics.

Results: Children with SLI scored lower than age-matched children on all of these measures but similarly to the mean length of utterance–matched controls. Errors in grammatical morphology were very infrequent in all groups, with no significant group differences.

Conclusion: The results indicate that the spontaneous language of French-speaking children with SLI in the preschool age range is characterized primarily by a generalized language impairment and that morphological deficits do not stand out as an area of particular vulnerability, in contrast with the pattern found in English for this age group.

Link to article

-- (Thordardottir, E.) (2007). Effective intervention for specific language impairment. In E. Thordardottir (Ed.), Encyclopedia of Language and Literacy Development (pp. 1-8). London, ON: Canadian Language and Literacy Research Network. http://www.literacyencyclopedia.ca

2006

Shari Baum, Ph.D., Professor
Vincent Gracco, Ph.D., Associate Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Aasland, W., Baum, S., & McFarland, D.) (2006). Electropalatographic, acoustic, and perceptual data on adaptation to a palatal perturbation. Journal of the Acoustical Society of America, 119, 2372-2381.

Abstract: Exploring the compensatory responses of the speech production system to perturbation has provided valuable insights into speech motor control. The present experiment was conducted to examine compensation for one such perturbation: a palatal perturbation in the production of the fricative /s/. Subjects wore a specially designed electropalatographic (EPG) appliance with a buildup of acrylic over the alveolar ridge as well as a normal EPG palate. In this way, compensatory tongue positioning could be assessed during a period of target-specific and intense practice and compared to nonperturbed conditions. Electropalatographic, acoustic, and perceptual analyses of productions of /asa/ elicited from nine speakers over the course of a one-hour practice period were conducted. Acoustic and perceptual results confirmed earlier findings, which showed improvement in production with a thick artificial palate in place over the practice period; the EPG data showed overall increased maximum contact as well as increased medial and posterior contact for speakers with the thick palate in place, but little change over time. Negative aftereffects were observed in the productions with the thin palate, indicating recalibration of sensorimotor processes in the face of the oral-articulatory perturbation. Findings are discussed with regard to the nature of adaptive articulatory skills.

Link to article

-- (Dwivedi, V., Philips, N., Lague-Beauvais, M., & Baum, S.) (2006). An electrophysiological study of mood, modal context, and anaphora. Brain Research, 1117, 135-153.

Abstract: We investigated whether modal information elicited empirical effects with regard to discourse processing. That is, like tense information, one of the linguistic factors shown to be relevant in organizing a discourse representation is modality, where the mood of an utterance indicates whether or not it is asserted. Event-related potentials (ERPs) were used in order to address the question of the qualitative nature of discourse processing, as well as the time course of this process. This experiment investigated pronoun resolution in two-sentence discourses, where context sentences either contained a hypothetical or actual Noun Phrase antecedent. The other factor in this 2 × 2 experiment was type of continuation sentence, which included or excluded a modal auxiliary (e.g., must, should) and contained a pronoun. Intuitions suggest that hypothetical antecedents followed by pronouns asserted to exist present ungrammaticality, unlike actual antecedents followed by such pronouns. Results confirmed the grammatical intuition that the former discourse displays anomaly, unlike the latter (control) discourse. That is, at the Verb position in continuation sentences, we found frontal positivity, consistent with the family of P600 components, and not an N400 effect, which suggests that the anomalous target sentences caused a revision in discourse structure. Furthermore, sentences exhibiting modal information resulted in negative-going waveforms at other points in the continuation sentence, indicating that modality affects the overall structural complexity of discourse representation.

Link to article

-- (Shah, A. & Baum, S.) (2006). Perception of lexical stress by brain-damaged individuals: Effects on lexical-semantic activation. Applied Psycholinguistics, 27, 143-156.

Abstract: A semantic priming, lexical-decision study was conducted to examine the ability of left- and right-brain damaged individuals to perceive lexical-stress cues and map them onto lexical–semantic representations. Correctly and incorrectly stressed primes were paired with related and unrelated target words to tap implicit processing of lexical prosody. Results conformed with previous studies involving implicit perception of lexical stress, in that the left-hemisphere damaged individuals showed preserved sensitivity to lexical stress patterns as indicated by priming patterns mirroring those of the normal controls. An increased sensitivity to the varying stress patterns of the primes was demonstrated by the right-hemisphere damaged patient group, however. Results are discussed in relation to current theories of prosodic lateralization, with a particular focus on the nature of task demands in lexical stress perception studies.

Link to article

-- (Shah, A., Baum, S., & Dwivedi, V.) (2006). Neural substrates of linguistic prosody: Evidence from syntactic disambiguation in the productions of brain-damaged patients. Brain & Language, 96, 78-89.

Abstract: The present investigation focussed on the neural substrates underlying linguistic distinctions that are signalled by prosodic cues. A production experiment was conducted to examine the ability of left- (LHD) and right- (RHD) hemisphere-damaged patients and normal controls to use temporal and fundamental frequency cues to disambiguate sentences which include one or more Intonational Phrase level prosodic boundaries. Acoustic analyses of subjects' productions of three sentence types (parentheticals, appositives, and tags) showed that LHD speakers, compared to RHD and normal controls, exhibited impairments in the control of temporal parameters signalling phrase boundaries, including inconsistent patterns of pre-boundary lengthening and longer-than-normal pause durations in non-boundary positions. Somewhat surprisingly, a perception test presented to a group of normal native listeners showed listeners experienced greatest difficulty in identifying the presence or absence of boundaries in the productions of the RHD speakers. The findings support a cue lateralization hypothesis in which prosodic domain plays an important role.

Link to article

-- (Sundara, M., Polka, L., & Baum, S.) (2006). Production of coronal stops by simultaneous bilingual adults. Bilingualism: Language & Cognition, 9, 97-114.

Abstract: This study investigated the acoustic phonetics of coronal stop production by adult simultaneous bilingual and monolingual speakers of Canadian English (CE) and Canadian French (CF). Differences in the phonetics of CF and CE include voicing and place of articulation distinctions. CE has a two-way voicing distinction (in syllable-initial position) contrasting short- and long-lag VOT; coronal stops in CE are described as alveolar. CF also has a two-way voicing distinction, but contrasting lead and short-lag VOT; coronal stops in CF are described as dental. Acoustic analyses of stop consonants for both VOT and dental/alveolar place of articulation are reported. Results indicate that simultaneous bilingual as well as monolingual adults produce language-specific differences, albeit not in the same way, across CF and CE for voicing and place. Similarities and differences between simultaneous bilingual and monolingual adults are discussed to address phonological organization in simultaneous bilingual adults.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Tremblay, P. & Gracco, V. L.) (2006). Contribution of the frontal lobe to externally and internally specified verbal responses: fMRI evidence. NeuroImage, 33, 947-957.

Abstract: It has been suggested that within the frontal cortex there is a lateral to medial shift in the control of action, with the lateral premotor area (PMA) involved in externally specified actions and the medial supplementary motor areas (SMA) involved in internally specified actions. Recent brain imaging studies demonstrate, however, that the control of externally and internally specified actions may involve more complex and overlapping networks involving not only the PMA and the SMA, but also the pre-SMA and the lateral prefrontal cortex (PFC). The aim of the present study was to determine whether these frontal regions are differentially involved in the production of verbal responses, when they are externally specified and when they are internally specified. Participants engaged in three overt speaking tasks in which the degree of response specification differed. The tasks involved reading aloud words (externally specified), or generating words aloud from narrow or broad semantic categories (internally specified). Using fMRI, the location and magnitude of the BOLD activity for these tasks were measured in a group of ten participants. Compared with rest, all tasks activated the primary motor area and the SMA-proper, reflecting their common role in speech production. The magnitude of the activity in the PFC (Brodmann area 45), the left PMAv and the pre-SMA increased for word generation, suggesting that each of these three regions plays a role in internally specified action selection. This confirms previous reports concerning the participation of the pre-SMA in verbal response selection. The pattern of activity in PMAv suggests participation in both externally and internally specified verbal actions.

Link to article

Dr. Marc Pell
PELL, M. (Cheang, H.S. & Pell, M.D.) (2006). A study of humour and communicative intention following right hemisphere stroke. Clinical Linguistics & Phonetics, 20 (6), 447-462.

Abstract: This research provides further data regarding non-literal language comprehension following right hemisphere damage (RHD). To assess the impact of RHD on the processing of non-literal language, ten participants presenting with RHD and ten matched healthy control participants were administered tasks tapping humour appreciation and pragmatic interpretation of non-literal language. Although the RHD participants exhibited a relatively intact ability to interpret humour from jokes, their use of pragmatic knowledge about interpersonal relationships in discourse was significantly reduced, leading to abnormalities in their understanding of communicative intentions (CI). Results imply that explicitly detailing CI in discourse facilitates RHD participants' comprehension of non-literal language.

Link to article

-- (Pell, M.D.) (2006). Judging emotion and attitudes from prosody following brain damage. Progress in Brain Research, 156, 307-321.

Abstract: Research has long indicated a role for the right hemisphere in the decoding of basic emotions from speech prosody, although there are few data on how the right hemisphere is implicated in processes for understanding the emotive "attitudes" of a speaker from prosody. We describe recent clinical studies that compared how well listeners with and without focal right hemisphere damage (RHD) understand speaker attitudes such as "confidence" or "politeness," which are signaled in large part by prosodic features of an utterance. We found that RHD listeners as a group were abnormally sensitive to both the expressed confidence and expressed politeness of speakers, and that these difficulties often correlated with impairments for understanding basic emotions from prosody in many RHD individuals. Our data emphasize a central role for the right hemisphere in the ability to appreciate emotions and speaker attitudes from prosody, although the precise source of these social-pragmatic deficits may arise in different ways in the context of right hemisphere compromise.

Link to article

-- (Pell, M.D., Cheang, H.S., & Leonard, C.L.) (2006). The impact of Parkinson’s disease on vocal prosodic communication from the perspective of listeners. Brain and Language, 97 (2), 123-134.

Abstract: An expressive disturbance of speech prosody has long been associated with idiopathic Parkinson's disease (PD), but little is known about the impact of dysprosody on vocal-prosodic communication from the perspective of listeners. Recordings of healthy adults (n=12) and adults with mild to moderate PD (n=21) were elicited in four speech contexts in which prosody serves a primary function in linguistic or emotive communication (phonemic stress, contrastive stress, sentence mode, and emotional prosody). Twenty independent listeners naive to the disease status of individual speakers then judged the intended meanings conveyed by prosody for tokens recorded in each condition. Findings indicated that PD speakers were less successful at communicating stress distinctions, especially words produced with contrastive stress, which were identifiable to listeners. Listeners were also significantly less able to detect intended emotional qualities of Parkinsonian speech, especially for anger and disgust. Emotional expressions that were correctly recognized by listeners were consistently rated as less intense for the PD group. Utterances produced by PD speakers were frequently characterized as sounding sad or devoid of emotion entirely (neutral). Results argue that motor limitations on the vocal apparatus in PD produce serious and early negative repercussions on communication through prosody, which diminish the social-linguistic competence of Parkinsonian adults as judged by listeners.

Link to article

-- (Pell, M.D.) (2006). Implicit recognition of vocal emotions in native and non-native speech. In R. Hoffman and H. Mixdorff (Eds.), Speech Prosody 3rd International Conference Proceedings (pp. 62-64).

Abstract: There is evidence for both cultural specificity and 'universality' in how listeners recognize vocal expressions of emotion from speech. This paper summarizes some of the early findings using the Facial Affect Decision Task which speak to the implicit processing of vocal emotions as inferred from "emotion priming" effects on a conjoined facial expression. We provide evidence that English listeners register the emotional meanings of prosody when processing sentences spoken by native (English) as well as non-native (Arabic) speakers who encoded vocal emotions in a culturally appropriate manner. As well, we discuss the time course for activating emotion-related knowledge in a native and non-native language, which may differ due to cultural influences on vocal emotion expression.

Link to article

-- (Pell, M.D.) (2006). Cerebral mechanisms for understanding emotional prosody in speech. Brain and Language, 96 (2), 221-234.

Abstract: Hemispheric contributions to the processing of emotional speech prosody were investigated by comparing adults with a focal lesion involving the right (n = 9) or left (n = 11) hemisphere and adults without brain damage (n = 12). Participants listened to semantically anomalous utterances in three conditions (discrimination, identification, and rating) which assessed their recognition of five prosodic emotions under the influence of different task- and response-selection demands. Findings revealed that right- and left-hemispheric lesions were associated with impaired comprehension of prosody, although possibly for distinct reasons: right-hemisphere compromise produced a more pervasive insensitivity to emotive features of prosodic stimuli, whereas left-hemisphere damage yielded greater difficulties interpreting prosodic representations as a code embedded with language content.

Link to article

-- (Monetta, L. & Pell, M.D.) (2006). La maladie de Parkinson et les déficits pragmatiques et prosodiques du langage (Parkinson's disease and pragmatic and prosodic language deficits). Fréquences: revue de l'ordre des orthophonistes et audiologistes du Québec, 18, 27-29.

Dr. Linda Polka
POLKA, L. (Rvachew, S., Mattock, K., Polka, L. & Menard, L.) (2006). Developmental and cross-linguistic variation in the infant vowel space: The case of Canadian English and Canadian French. Journal of the Acoustical Society of America, 120, 2250-2259.

Abstract: This article describes the results of two experiments. Experiment 1 was a cross-sectional study designed to explore developmental and cross-linguistic variation in the vowel space of 10- to 18-month-old infants, exposed to either Canadian English or Canadian French. Acoustic parameters of the infant vowel space were described (specifically the mean and standard deviation of the first and second formant frequencies) and then used to derive the grave, acute, compact, and diffuse features of the vowel space across age. A decline in mean F1 with age for French-learning infants and a decline in mean F2 with age for English-learning infants was observed. A developmental expansion of the vowel space into the high-front and high-back regions was also evident. In experiment 2, the Variable Linear Articulatory Model was used to model the infant vowel space taking into consideration vocal tract size and morphology. Two simulations were performed, one with full range of movement for all articulatory parameters, and the other for movement of jaw and lip parameters only. These simulated vowel spaces were used to aid in the interpretation of the developmental changes and cross-linguistic influences on vowel production in experiment 1.

Link to article

-- (Sundara, M., Polka, L., & Baum, S.) (2006). Production of coronal stops by simultaneous bilingual adults. Bilingualism: Language & Cognition, 9, 97-114.

Abstract: This study investigated the acoustic phonetics of coronal stop production by adult simultaneous bilingual and monolingual speakers of Canadian English (CE) and Canadian French (CF). Differences in the phonetics of CF and CE include voicing and place of articulation distinctions. CE has a two-way voicing distinction (in syllable-initial position) contrasting short- and long-lag VOT; coronal stops in CE are described as alveolar. CF also has a two-way voicing distinction, but contrasting lead and short-lag VOT; coronal stops in CF are described as dental. Acoustic analyses of stop consonants for both VOT and dental/alveolar place of articulation are reported. Results indicate that simultaneous bilingual as well as monolingual adults produce language-specific differences, albeit not in the same way, across CF and CE for voicing and place. Similarities and differences between simultaneous bilingual and monolingual adults are discussed to address phonological organization in simultaneous bilingual adults.

Link to article

-- (Ilari, B., & Polka, L.) (2006). Music cognition in early infancy: Infants’ preferences and long-term memory for Ravel. International Journal of Music Education, 24, 7-20.

Abstract: Listening preferences for two pieces, Prelude and Forlane from Le tombeau de Couperin by Maurice Ravel (1875-1937), were assessed in two experiments conducted with 8-month-old infants, using the Headturn Preference Procedure (HPP). Experiment 1 showed that infants who had never heard the pieces could clearly distinguish between the Prelude and Forlane when the pieces were played in multiple (i.e., orchestral) but not single (i.e., piano) timbres. In Experiment 2, infants were exposed repeatedly to one of the two piano pieces over a 10-day period. Consistent with previous studies, results suggested that babies can recognize a familiar piece after a 2-week delay. Implications for early childhood music education are outlined at the end of the article.

Link to article

-- (Sundara, M., Polka, L., & Genesee, F.) (2006). Language experience facilitates discrimination of /d-ð/ in monolingual and bilingual acquisition of English. Cognition, 100, 369-388.

Abstract: To trace how age and language experience shape the discrimination of native and non-native phonetic contrasts, we compared 4-year-olds learning either English or French or both and simultaneous bilingual adults on their ability to discriminate the English /d-ð/ contrast. Findings show that the ability to discriminate the native English contrast improved with age. However, in the absence of experience with this contrast, discrimination by French-speaking children and adults remained unchanged during development. Furthermore, although simultaneous bilingual and monolingual English adults were comparable, children exposed to both English and French were poorer at discriminating this contrast when compared to monolingual English-learning 4-year-olds. Thus, language experience facilitates perception of the English /d-ð/ contrast, and this facilitation occurs later in development when English and French are acquired simultaneously. The difference between bilingual and monolingual acquisition has implications for language organization in children with simultaneous exposure.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Mattock, K., Polka, L., & Menard, L.) (2006). Developmental and cross-linguistic variation in the infant vowel space: The case of Canadian English and Canadian French. Journal of the Acoustical Society of America, 120 (4), 2250-2259.

Abstract: This article describes the results of two experiments. Experiment 1 was a cross-sectional study designed to explore developmental and cross-linguistic variation in the vowel space of 10- to 18-month-old infants, exposed to either Canadian English or Canadian French. Acoustic parameters of the infant vowel space were described (specifically the mean and standard deviation of the first and second formant frequencies) and then used to derive the grave, acute, compact, and diffuse features of the vowel space across age. A decline in mean F1 with age for French-learning infants and a decline in mean F2 with age for English-learning infants was observed. A developmental expansion of the vowel space into the high-front and high-back regions was also evident. In experiment 2, the Variable Linear Articulatory Model was used to model the infant vowel space taking into consideration vocal tract size and morphology. Two simulations were performed, one with full range of movement for all articulatory parameters, and the other for movement of jaw and lip parameters only. These simulated vowel spaces were used to aid in the interpretation of the developmental changes and cross-linguistic influences on vowel production in experiment 1.

Link to article

-- (Rvachew, S. & Savage, R.) (2006). Preschool foundations of early reading acquisition. Pediatrics and Child Health, 11, 589-593.

Abstract: The present paper describes research on the skills and processes associated with word and text reading acquisition in preschool children and during the first years of school. The aim is to provide an overview that gives a sense of the important milestones in language and literacy acquisition. A comparison of children's performances against these milestones may thus guide effective intervention for health professionals, parents and other professionals. Also summarized and explored are the role of speech perception and production, grammatical and syntactic skills, and metacognitive skills, including phonological awareness.

Link to article

-- (Rvachew, S.) (2006). Longitudinal predictors of implicit phonological awareness skills. American Journal of Speech-Language Pathology, 15, 165-176.

Abstract:
PURPOSE: The purpose of this study was to examine the longitudinal predictive relationships among variables that may contribute to poor phonological awareness skills in preschool-age children with speech-sound disorders.

METHOD: Forty-seven children with speech-sound disorders were assessed during the spring of their prekindergarten year and again at the end of their kindergarten year. Hierarchical multiple regression analysis was used to examine relationships among the children's prekindergarten and kindergarten performance on measures of speech perception, vocabulary, articulation, and phonological awareness skills in order to verify a proposed developmental ordering of these variables during this 1-year period.

RESULTS: Prekindergarten speech perception skills and receptive vocabulary size each explained unique variance in phonological awareness at the end of kindergarten. Prekindergarten articulation abilities did not predict unique variance in phonological awareness a year later. Prekindergarten speech perception skills also explained unique variance in articulation skills at the end of kindergarten.

CONCLUSIONS: Maximizing children's vocabulary and speech perception skills before they begin school may be an important strategy for ensuring that children with speech-sound disorders begin school with age-appropriate speech and phonological awareness abilities.

Link to article

-- (Savage, R., Blair, R., & Rvachew, S.) (2006). Rimes are not necessarily favored by prereaders: Evidence from meta- and epilinguistic phonological tasks. Journal of Experimental Child Psychology, 94, 183-205.

Abstract: This article explores young children’s facility in phonological awareness tasks requiring either the detection or the articulation of head, coda, onset, and rime subsyllabic units shared in word pairs. Data are reported from 70 nonreading children and 21 precocious readers attending preschools. Prereading children were able to articulate shared heads, codas, and onsets, although rimes rarely were articulated. Precocious readers were able to articulate shared rimes, but articulation performance was still most accurate for onsets and codas. Rimes and heads were equally accessible in the detection task and were identified more often than onsets and codas (nonreaders) and codas (readers). It is concluded that the articulation advantage for nonrime units cannot simply reflect early reading instruction. This disjoint pattern of phonological awareness in detection and production tasks does not support Goswami’s phonological status hypothesis. Results may instead reflect quite distinct influences on epilinguistic and metalinguistic phonological development.

Link to article

-- (Rvachew, S., Ohberg, A., & Savage, R.) (2006). Young children’s responses to maximum performance tasks: Preliminary data and recommendations. Journal of Speech-Language Pathology and Audiology. 30 (1), 6-13.

Abstract: The purpose of this study was to examine the ability of 4- to 6-year-old children with typical speech to perform certain maximum performance tasks, with a view to developing diagnostic criteria for identifying dyspraxia and dysarthria in this age group. Twenty children were asked to prolong [a], [mama], [f], [s], and [z] for as long as they could. They were also asked to repeat the syllables [pa], [ta], and [ka] and the trisyllabic sequence [pataka] as fast as they could. The children's responses to the prolongation tasks were highly variable within and across children. Using traditional elicitation methods, these measurements do not appear to be good potential indicators of dysarthria or dyspraxia in this age group. In contrast, repetition rates were much more stable within and across children. All but one child repeated monosyllables at a rate of at least 3.4 syllables per second. Every child achieved a correct repetition of [pataka] at a rate of at least 3.4 syllables per second. Recommendations for interpreting young children's performance on these tasks are provided.

Link to article

-- (Rvachew, S. & Grawburg, M.) (2006). Correlates of phonological awareness in preschoolers with speech sound disorders. Journal of Speech, Language, and Hearing Research. 49, 74-87.

Abstract:
PURPOSE: The purpose of this study was to examine the relationships among variables that may contribute to poor phonological awareness (PA) skills in preschool-aged children with speech sound disorders (SSD).

METHOD: Ninety-five 4- and 5-year-old children with SSD were assessed during the spring of their prekindergarten year. Linear structural equation modeling was used to compare the fit of 2 models of the possible relationships among PA, speech perception, articulation, receptive vocabulary, and emergent literacy skills.

RESULTS: Half the children had significant difficulty with speech perception and PA despite demonstrating receptive language skills within or above the average range. The model that showed the best fit to the data indicated that speech perception is a pivotal variable that has a direct effect on PA and an indirect effect that is mediated by vocabulary skills. Articulation accuracy did not have a direct impact on PA. Emergent literacy skills were predicted by PA abilities.

CONCLUSIONS: Children with SSD are at greatest risk of delayed PA skills if they have poor speech perception abilities and/or relatively poor receptive vocabulary skills. Children with SSD should receive assessments of their speech perception, receptive vocabulary, PA, and emergent literacy skills.

Link to article

-- (Rvachew, S.) (2006). Effective interventions for the treatment of speech sound disorders. In Language and Literacy Encyclopedia. Canadian Language and Literacy Research Network. http://www.softwaregroup.ca/encyclopedia/
Dr. Karsten Steinhauer
STEINHAUER, K. (Steinhauer, K.) (2006) How dynamic is second language acquisition? Applied Psycholinguistics, 27 (1), 92-95.

Abstract: Clahsen and Felser (CF) present a thought-provoking article that is likely to have a strong impact on the field, in particular, on developmental psycholinguistics and second language (L2) acquisition research. Unlike the majority of previous work on language acquisition that focused on “competence,” that is, the knowledge basis underlying grammar, CF emphasize the need to approach language acquisition with psycholinguistic measures of processing. Based primarily on behavioral and electrophysiological on-line data, they argue that language acquisition in early first language (L1) and late L2 follows different patterns.

Link to article

-- (Mah, J., Steinhauer, K., & Goad, H.) (2006). The Trouble with /h/: Evidence from ERPs. In M. Grantham O'Brien, C. Shea, and J. Archibald (Eds.), Proceedings of the 8th Generative Approaches to Second Language Acquisition Conference (pp. 80-87). Somerville, MA: Cascadilla Proceedings Project.
Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E., Rothenberg, A., Rivard, M.-E., & Naves, R.) (2006). Bilingual assessment: Can overall proficiency be estimated from separate measurement of two languages? Journal of Multilingual Communication Disorders, 4 (1), 1-21.

Abstract: It is generally recommended that bilingual children be assessed in both of their languages. However, specific procedures for such bilingual assessment and for interpretation of the results are lacking. Normally developing French-English bilingual preschool-age children were compared to monolingual children (n = 28) on expressive and receptive measures of vocabulary and syntax. Results indicated that when measured in one language only, as well as when measured by combination measures such as conceptual vocabulary, which attempt to include both languages, bilingual children may score significantly lower than monolingual peers in various aspects of language. However, the extent of the difference may depend on a number of factors, including amount of bilingual exposure, relative proficiency in the two languages, as well as language specific factors, or the specific language combination being learned by the children.

Link to article

-- (Thordardottir, E.) (2006). Language intervention from a bilingual mindset. The ASHA Leader, 11 (10), 6-7, 20-21.
-- (Webster, R., Erdos, C., Evans, K., Majnemer, A., Kehayia, E., Elin Thordardottir, Evans, A., & Shevell, M.) (2006). The clinical spectrum of developmental language impairment in school-age children: Language, cognitive and motor findings. Pediatrics, 118 (5), 1541-1549.

Abstract:
OBJECTIVE: Our goal was to evaluate detailed school-age language, nonverbal cognitive, and motor development in children with developmental language impairment compared with age-matched controls.

METHODS: Children with developmental language impairment or normal language development (controls) aged 7 to 13 years were recruited. Children underwent language assessment (Clinical Evaluation of Language Fundamentals-4, Peabody Picture Vocabulary-3, Goldman-Fristoe Test of Articulation-2), nonverbal cognitive assessment (Wechsler Intelligence Scale for Children-IV), and motor assessment (Movement Assessment Battery for Children). Exclusion criteria were nonverbal IQ below the 5th percentile or an acquired language, hearing, autistic spectrum, or neurologic disorder.

RESULTS: Eleven children with developmental language impairment (7:4 boys/girls; mean age: 10.1 +/- 0.8 years) and 12 controls (5:7 boys/girls; mean age: 9.5 +/- 1.8 years) were recruited. Children with developmental language impairment showed lower mean scores on language (Clinical Evaluation of Language Fundamentals-4--developmental language impairment: 79.7 +/- 16.5; controls: 109.2 +/- 9.6; Goldman-Fristoe Test of Articulation-2--developmental language impairment: 94.1 +/- 10.6; controls: 104.0 +/- 2.8; Peabody Picture Vocabulary-3--developmental language impairment: 90.5 +/- 13.8; controls: 100.1 +/- 11.6), cognitive (Wechsler Intelligence Scale for Children-IV--developmental language impairment: 99.5 +/- 15.5; controls: 113.5 +/- 11.9), and motor measures (Movement Assessment Battery for Children percentile--developmental language impairment: 12.7 +/- 16.7; controls: 66.1 +/- 30.6) and greater discrepancies between cognitive and language scores (Wechsler Intelligence Scale for Children-IV/Clinical Evaluation of Language Fundamentals-4--developmental language impairment: 17.8 +/- 17.8; controls: 1.2 +/- 12.7). Motor impairment was more common in children with developmental language impairment (70%) than controls (8%).

CONCLUSIONS: Developmental language impairment is characterized by a broad spectrum of developmental impairments. Children identified on the basis of language impairment show significant motor comorbidity. Motor assessment should form part of the evaluation and follow-up of children with developmental language impairment.

Link to article

2005

Shari Baum, Ph.D., Professor
Vincent Gracco, Ph.D., Associate Professor
Rachel Mayberry, Ph.D., Associate Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Grindrod, C. & Baum, S.) (2005). Hemispheric contributions to lexical ambiguity resolution in a discourse context: Evidence from individuals with unilateral left and right hemisphere lesions. Brain & Cognition, 57, 70-83.

Abstract: In the present study, a cross-modal semantic priming task was used to investigate the ability of left-hemisphere-damaged (LHD) nonfluent aphasic, right-hemisphere-damaged (RHD) and non-brain-damaged (NBD) control subjects to use a discourse context to resolve lexically ambiguous words. Subjects first heard four-sentence discourse passages ending in ambiguous words and after an inter-stimulus interval (ISI) of either 0 or 750 ms, made lexical decisions on first- or second-meaning related visual targets. NBD control subjects, at the 0 ms ISI, only activated contextually appropriate meanings, though significant effects, as a group, were only seen in second-meaning biased contexts. Surprisingly, at the 750 ms ISI, these subjects activated both appropriate and inappropriate meanings in first-meaning biased contexts. With respect to the LHD nonfluent aphasic patients, the majority activated first meanings regardless of context at the 0 ms ISI, though effects for the group were not significant. At the 750 ms ISI, these patients again activated first meanings regardless of context, with significant effects for the group only seen in first-meaning biased contexts. With regard to the RHD patients, the majority activated second meanings regardless of context at the 0 ms ISI and first meanings regardless of context at the 750 ms ISI, though, as a group, the effects were not significant. In light of our previous findings (Grindrod & Baum, 2003, submitted), the present data are interpreted as supporting the notion that damage to the left hemisphere disrupts either lexical access processes or the time course of lexical activation, whereas damage to the right hemisphere impairs the use of context and leads to activation of ambiguous word meanings based on meaning frequency.

Link to article

-- (Klepousniotou, E. & Baum, S.) (2005). Processing homonymy and polysemy: Effects of sentential context and time-course following unilateral brain damage. Brain & Language, 95, 365-382.

Abstract: The present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD), and normal control individuals to access, in sentential biasing contexts, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., "punch"), metonymies (e.g., "rabbit"), and metaphors (e.g., "star"). Furthermore, the predictions of the "suppression deficit" and "coarse semantic coding" hypotheses, which have been proposed to account for RH language function/dysfunction, were tested. Using an auditory semantic priming paradigm, ambiguous words were incorporated in dominant- or subordinate-biasing sentence-primes followed after a short (100 ms) or long (1,000 ms) interstimulus interval (ISI) by dominant-meaning-related, subordinate-meaning-related or unrelated target words. For all three types of ambiguous words, both the effects of context and ISI were obvious in the performance of normal control subjects, who showed multiple meaning activation at the short ISI, but eventually, at the long ISI, contextually appropriate meaning selection. Largely similar performance was exhibited by the LHD non-fluent aphasic patients as well. In contrast, RHD patients showed limited effects of context, and no effects of the time-course of processing. In addition, although homonymous and metonymous words showed similar patterns of activation (i.e., both meanings were activated at both ISIs), RHD patients had difficulties activating the subordinate meanings of metaphors, suggesting a selective problem with figurative meanings. Although the present findings do not provide strong support for either the "coarse semantic coding" or the "suppression deficit" hypotheses, they are viewed as being more consistent with the latter, according to which RH damage leads to deficits suppressing alternative meanings of ambiguous words that become incompatible with the context.

Link to article

-- (Klepousniotou, E. & Baum, S.) (2005). Unilateral brain damage effects on processing homonymous and polysemous words. Brain & Language, 93, 308-326.

Abstract: Using an auditory semantic priming paradigm, the present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD) and normal control individuals to access, out of context, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., "punch"), metonymies (e.g., "rabbit"), and metaphors (e.g., "star"). In addition, the study tested certain predictions of the "suppression deficit" and "coarse semantic coding" hypotheses that have been proposed to account for the linguistic deficits typically observed after RH damage. Homonymous, metonymous, and metaphorical words were used as primes followed after a short (100 ms) or a long (1000 ms) inter-stimulus interval (ISI) by dominant-meaning-related, subordinate-meaning-related or unrelated target words. No significant group effects were found, and for both ISIs, dominant- and subordinate-related targets were facilitated relative to unrelated control targets for the homonymy and metonymy conditions. In contrast, for the metaphor condition, only targets related to the dominant meaning were facilitated. These findings provide only partial support for the "suppression deficit" hypothesis and no support for the "coarse semantic coding" hypothesis (as interpreted herein) indicating that patients with focal LH or RH damage can access the multiple meanings of ambiguous words and exhibit processing abilities comparable to those of older normal control subjects, at least at the single-word level.

Link to article

-- (Leonard, C. & Baum, S.) (2005). The ability of individuals with right-hemisphere damage to use context under conditions of focused and divided attention. Journal of Neurolinguistics, 18, 427-441.

Abstract: A word-monitoring task was conducted with a group of right-hemisphere-damaged (RHD) patients and a group of nonbrain-damaged control (NC) participants under three attention conditions—isolation, focused attention, and divided attention—to address the hypothesis that individuals with RHD experience difficulty in the use of contextual information under conditions that tax processing resources. Following Leonard et al. [Leonard, C. L., Baum, S. R., & Pell, M. D. (2001). The effect of compressed speech on the ability of right-hemisphere-damaged patients to use context. Cortex 37, 327–344], monitoring targets were embedded in three types of sentence contexts: normal, semantically anomalous, and random word order. Results revealed that, under all three attention conditions, monitoring latencies for the RHD patients paralleled those of the NC participants, revealing sensitivity to contextual manipulations. These findings support those of Leonard et al. and suggest that individuals with RHD are, indeed, able to use certain types of contextual information in language processing even under conditions of reduced processing resources. The results are discussed in relation to potential processing distinctions between structural and nonstructural contexts.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Max, L., & Gracco, V. L.) (2005). Coordination of oral and laryngeal movements in the perceptually fluent speech of adults who stutter. Journal of Speech, Language, and Hearing Research, 48, 524-542.

Abstract: This work investigated whether stuttering and nonstuttering adults differ in the coordination of oral and laryngeal movements during the production of perceptually fluent speech. This question was addressed by completing correlation analyses that extended previous acoustic studies by others as well as inferential analyses based on the within-subject central tendency and variability of acoustic and physiological indices of oral-laryngeal control and coordination. Stuttering and nonstuttering adults produced the target /p/ as the medial consonant in C1V1#C2V2C3 sequences (C=consonant; V=vowel or diphthong; #=word boundary) embedded in utterances differing in length and location of the target movements. No between-groups differences were found for across- or within-subject correlations between acoustic measures of stop gap and voice onset time (VOT). However, the acoustic data did show longer durations for devoicing interval and VOT in the stuttering versus nonstuttering individuals, in the absence of a difference for a proportional measure specifically reflecting oral-laryngeal relative timing. Analyses of combined kinematic and electroglottographic data revealed that the stuttering individuals' speech was also characterized by (a) longer durations from bilabial closing movement onset and peak velocity to V1 vocal fold vibration offset and (b) greater within-subject variability for dependent variables that were physiological indices of devoicing interval and VOT, but again no between-groups differences were found for specific indices of oral-laryngeal relative timing. Overall, findings suggest that, for the production of voiceless bilabial stops in perceptually fluent speech, stuttering and nonstuttering adults differ in the duration of intervals defined by events within as well as across the oral and laryngeal subsystems, but the groups show similar patterns of relative timing for the involved oral and laryngeal movements.

Link to article

-- (Gracco, V.L., Tremblay, P, & Pike, G. B.) (2005). Imaging speech production using fMRI. NeuroImage, 26, 294-301.

Abstract: Human speech is a well-learned, sensorimotor, and ecological behavior ideal for the study of neural processes and brain-behavior relations. With the advent of modern neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the potential for investigating neural mechanisms of speech motor control, speech motor disorders, and speech motor development has increased. However, a practical issue has limited the application of fMRI to issues in spoken language production and other related behaviors (singing, swallowing). Producing these behaviors during volume acquisition introduces motion-induced signal changes that confound the activation signals of interest. A number of approaches, ranging from signal processing to using silent or covert speech, have attempted to remove or prevent the effects of motion-induced artefact. However, these approaches are flawed for a variety of reasons. An alternative approach, that has only recently been applied to study single-word production, uses pauses in volume acquisition during the production of natural speech motion. Here we present some representative data illustrating the problems associated with motion artefacts and some qualitative results acquired from subjects producing short sentences and orofacial nonspeech movements in the scanner. Using pauses or silent intervals in volume acquisition and block designs, results from individual subjects result in robust activation without motion-induced signal artefact. This approach is an efficient method for studying the neural basis of spoken language production and the effects of speech and language disorders using fMRI.

Link to article

Dr. Marc Pell
PELL, M. (Pell, M.D.) (2005). Prosody-face interactions in emotional processing as revealed by the facial affect decision task. Journal of Nonverbal Behavior, 29, (4), 193-215.

Abstract: Previous research employing the facial affect decision task (FADT) indicates that when listeners are exposed to semantically anomalous utterances produced in different emotional tones (prosody), the emotional meaning of the prosody primes decisions about an emotionally congruent rather than incongruent facial expression (Pell, M. D., Journal of Nonverbal Behavior, 29, 45–73). This study undertook further development of the FADT by investigating the approximate timecourse of prosody–face interactions in nonverbal emotion processing. Participants executed facial affect decisions about happy and sad face targets after listening to utterance fragments produced in an emotionally related, unrelated, or neutral prosody, cut to 300, 600, or 1000 ms in duration. Results underscored that prosodic information enduring at least 600 ms was necessary to presumably activate shared emotion knowledge responsible for prosody–face congruity effects.

Link to article

-- (Pell, M.D.) (2005). Effects of cortical and subcortical brain damage on the processing of emotional prosody. Interspeech 2005 Proceedings, Lisbon, Portugal, 1777-1780.

Abstract: Cortical and subcortical contributions to the processing of emotional speech prosody were evaluated by testing adults with single focal lesions involving the right hemisphere (n=9), adults with basal ganglia damage in idiopathic Parkinson's disease (n=21), and healthy aging adults (n=33). Participants listened to semantically-anomalous utterances in two conditions (identification, rating) which assessed their recognition of five prosodic emotions. Findings confirmed that both right hemisphere and basal ganglia pathology were associated with impaired comprehension of prosody, although possibly for distinct reasons: right hemisphere compromise produced a more pervasive insensitivity to emotive features of prosodic stimuli, whereas basal ganglia disease produced a milder and more quantitative impairment on these tasks. The implications of these findings for differentiating cortical and subcortical mechanisms involved in prosody processing are considered.

Link to article

-- (Pell, M.D. & Leonard, C.L.) (2005). Facial expression decoding in early Parkinson’s disease. Cognitive Brain Research, 23 (2-3), 327-340.

Abstract: The ability to derive emotional and non-emotional information from unfamiliar, static faces was evaluated in 21 adults with idiopathic Parkinson's disease (PD) and 21 healthy control subjects. Participants' sensitivity to emotional expressions was comprehensively assessed in tasks of discrimination, identification, and rating of five basic emotions: happiness, (pleasant) surprise, anger, disgust, and sadness. Subjects also discriminated and identified faces according to underlying phonemic ("facial speech") cues and completed a neuropsychological test battery. Results uncovered limited evidence that the processing of emotional faces differed between the two groups in our various conditions, adding to recent arguments that these skills are frequently intact in non-demented adults with PD [R. Adolphs, R. Schul, D. Tranel, Intact recognition of facial emotion in Parkinson's disease, Neuropsychology 12 (1998) 253–258]. Patients could also accurately interpret facial speech cues and discriminate the identity of unfamiliar faces in a normal manner. There were some indications that basal ganglia pathology in PD contributed to selective difficulties recognizing facial expressions of disgust, consistent with a growing literature on this topic. Collectively, findings argue that abnormalities for face processing are not a consistent or generalized feature of medicated adults with mild-moderate PD, prompting discussion of issues that may be contributing to heterogeneity within this literature. Our results imply a more limited role for the basal ganglia in the processing of emotion from static faces relative to speech prosody, for which the same PD patients exhibited pronounced deficits in a parallel set of tasks [M.D. Pell, C. Leonard, Processing emotional tone from speech in Parkinson's disease: a role for the basal ganglia, Cogn. Affect. Behav. Neurosci. 3 (2003) 275–288]. These diverging patterns allow for the possibility that basal ganglia mechanisms are more engaged by temporally-encoded social information derived from cue sequences over time.

Link to article

-- (Pell, M.D.) (2005). Nonverbal emotion priming: evidence from the 'facial affect decision task'. Journal of Nonverbal Behavior, 29 (1), 45-73.

Abstract: Affective associations between a speaker's voice (emotional prosody) and a facial expression were investigated using a new on-line procedure, the Facial Affect Decision Task (FADT). Faces depicting one of four 'basic' emotions were paired with utterances conveying an emotionally-related or unrelated prosody, followed by a yes/no judgement of the face as a 'true' exemplar of emotion. Results established that prosodic characteristics facilitate the accuracy and speed of decisions about an emotionally congruent target face, supplying empirical support for the idea that information about discrete emotions is shared across major nonverbal channels. The FADT represents a promising tool for future on-line studies of nonverbal processing in both healthy and disordered individuals.

Link to article

Dr. Linda Polka
POLKA, L. (Polka, L. & Rvachew, S.) (2005). The impact of otitis media with effusion on infant phonetic perception. Infancy, 8, 101-117.

Abstract: The effect of prior otitis media with effusion (OME) or current middle ear effusion (MEE) on phonetic perception was examined by testing infants’ discrimination of boo and goo syllables in 2 test sessions. Middle ear function was assessed following each perception test using tympanometry. Perceptual performance was compared across 3 infant groups: (a) history-negative, infants with normal middle ear function who had never received medical treatment for OME; (b) history-positive, infants with normal middle ear function who received medical treatment for prior episodes of OME; and (c) MEE, infants presenting tympanograms indicating middle ear effusion on the day of testing. History-negative infants performed significantly better than MEE infants in both test sessions. History-negative infants also performed significantly better than history-positive infants in the 2nd test session. Findings suggest that OME has a negative impact on infant phonetic discrimination that may persist even after middle ear function has returned to normal.

Link to article

-- (Sundara, M., Polka, L., & Genesee, F.) (2005). Language experience facilitates discrimination of /d-ð/ in monolingual and bilingual acquisition of English. Cognition, 1-20.

Abstract: To trace how age and language experience shape the discrimination of native and non-native phonetic contrasts, we compared 4-year-olds learning either English or French or both and simultaneous bilingual adults on their ability to discriminate the English /d-ð/ contrast. Findings show that the ability to discriminate the native English contrast improved with age. However, in the absence of experience with this contrast, discrimination of French children and adults remained unchanged during development. Furthermore, although simultaneous bilingual and monolingual English adults were comparable, children exposed to both English and French were poorer at discriminating this contrast when compared to monolingual English-learning 4-year-olds. Thus, language experience facilitates perception of the English /d-ð/ contrast and this facilitation occurs later in development when English and French are acquired simultaneously. The difference between bilingual and monolingual acquisition has implications for language organization in children with simultaneous exposure.

Link to article

-- (Ilari, B., & Polka, L.) (2005). Infants' preferences for musical timbre and texture: A report from two experiments. Early Childhood Connections, 11 (1), 29-30.
-- (Mattock, K., Rvachew, S. & Polka, L.) (2005) Cross-linguistic influences on infant babbling. Canadian Acoustics, 33, 78-79.
Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Hodge, M., & Ohberg, A.) (2005) Obtaining and interpreting maximum performance tasks from children: A tutorial. Journal of Speech-Language Pathology and Audiology, 29, 146-156.

Abstract: The diagnosis of motor speech disorders in children can be aided by the use and interpretation of measures of maximum performance tasks. These tasks include measuring how long a vowel can be sustained or how fast syllables can be repeated. This tutorial provides a rationale for including these measures in assessment protocols for children with speech sound disorders. Software developed to motivate children to cooperate with these procedures and to expedite recording of sound prolongations and syllable repetitions is described. Procedures for obtaining maximum performance measures from digital sound file recordings are illustrated followed by a discussion of how these measures may aid in clinical diagnosis.

Link to article

-- (Rvachew, S.) (2005) Stimulability and treatment success. Topics in Language Disorders, 25, 207-219.

Abstract: This article addresses 2 questions of importance to the treatment of speech sound disorders: (1) When selecting treatment targets, is it best to begin with the most or the least stimulable potential phoneme targets? (2) When treating unstimulable phonemes, which treatment procedures will result in the best outcome? A summary of the findings from 3 randomized controlled trials is provided. In these studies, outcomes were generally better when stimulable targets were treated; however, outcomes for unstimulable targets were improved by including phonemic perception training alongside phonetic placement procedures in the treatment program. The clinician must take final responsibility for judging the applicability of these research findings to each individual case. Clinical decisions should be made after discussing the known benefits and risks of any given treatment practice with the client and/or the client's family.

Link to article

-- (Polka, L. & Rvachew, S.) (2005) The impact of otitis media with effusion on infant phonetic perception. Infancy, 8, 101-117.

Abstract: The effect of prior otitis media with effusion (OME) or current middle ear effusion (MEE) on phonetic perception was examined by testing infants’ discrimination of boo and goo syllables in 2 test sessions. Middle ear function was assessed following each perception test using tympanometry. Perceptual performance was compared across 3 infant groups: (a) history-negative, infants with normal middle ear function who had never received medical treatment for OME; (b) history-positive, infants with normal middle ear function who received medical treatment for prior episodes of OME; and (c) MEE, infants presenting tympanograms indicating middle ear effusion on the day of testing. History-negative infants performed significantly better than MEE infants in both test sessions. History-negative infants also performed significantly better than history-positive infants in the 2nd test session. Findings suggest that OME has a negative impact on infant phonetic discrimination that may persist even after middle ear function has returned to normal.

Link to article

-- (Rvachew, S., Creighton, D., Feldman, N., & Sauve, R.) (2005). Vocal development of infants with very low birth weight. Clinical Linguistics & Phonetics, 19, 275-294.

Abstract: This study describes the vocal development of infants born with very low birth weights (VLBW). Samples of vocalizations were recorded from three groups of infants when they were 8, 12 and 18 months of age: preterm VLBW infants with bronchopulmonary dysplasia (BPD), preterm VLBW infants without BPD, and healthy full-term infants. Infants with BPD produced significantly smaller canonical syllable ratios than the full-term infants throughout the period of study. Premature VLBW infants who did not suffer from BPD produced relatively little canonical babble at 8 months of age, but were performing within the range of the full-term infants at 18 months of age. At 18 months of age, the infants with BPD were reported to have significantly smaller expressive vocabulary sizes than the healthier preterm and full-term infants.

Link to article

-- (Rvachew, S., Gaines, B., Cloutier, G., & Blanchet, N.) (2005) Productive morphology skills of children with speech delay. Journal of Speech-Language Pathology and Audiology, 29, 83-89.

Abstract: Children’s use of the plural, possessive, and regular third person singular morphemes was investigated in relation to their ability to produce the /s/ and /z/ phonemes. Twenty-three 4-year-old children with delayed expressive phonological abilities but average receptive vocabulary skills were asked to retell stories. All but 3 of the children omitted these morphemes more frequently than would be expected given their chronological age. Omission of the /s/ and /z/ phonemes occurred more frequently in inflected than uninflected words. Inclusion of the plural and third person singular morpheme was significantly correlated with mean length of utterance in words but was not significantly correlated with production accuracy for the /s/ and /z/ phonemes in uninflected words.

Link to article

-- (Rvachew, S.) (2005) The importance of phonetic factors in phonological intervention. In A. G. Kamhi, & K. E. Pollock, (Eds.), Phonological Disorders in Children: Assessment and Intervention (pp. 175-188). Baltimore, Maryland: Paul Brookes Publishers.

Book description: This one-of-a-kind resource presents a wide range of expert opinions about phonological disorders in children, allowing readers to understand and compare diverse approaches to assessment and intervention, choose the ones that will work best, and use their new knowledge to make decisions during clinical interventions.

Book information:
ISBN 1-55766-784-5

Link to book

-- (Mattock, K., Rvachew, S. & Polka, L.) (2005) Cross-linguistic influences on infant babbling. Canadian Acoustics, 33, 78-79.
Dr. Elin Thordardottir
THORDARDOTTIR, E. (Lattermann, C., Shenker, R. & Thordardottir, E.) (2005). Progression of language complexity during treatment with the Lidcombe program for early stuttering intervention. American Journal of Speech-Language Pathology, 14, 242-253.

Abstract: The Lidcombe Program is an operant treatment for early stuttering. Outcomes indicate that the program is effective; however, the underlying mechanisms leading to a successful reduction of stuttering remain unknown. The purpose of this study was to determine whether fluency achieved with the Lidcombe Program was accompanied by concomitant reduction of utterance length and decreases in linguistic complexity. Standardized language tests were administered pretreatment to 4 male preschool children. Spontaneous language samples were taken 2 weeks prior to treatment, at Weeks 1, 4, 8, and 12 during treatment, and 6 months after the onset of treatment. Samples were analyzed for mean length of utterance (MLU), percentage of simple and complex sentences, number of different words (NDW), and percentage of syllables stuttered. Analysis revealed that all participants presented with language skills in the average and above average range. The children achieved an increase in stutter-free speech accompanied by increases in MLU, percentage of complex sentences, and NDW. For these preschool children who stutter, improved stutter-free speech during treatment with the program appeared to be achieved without a decrease in linguistic complexity. Theoretical and clinical implications are discussed.

Link to article

-- (Kay-Raining Bird, E., Cleave, P., Trudeau, N., Thordardottir, E., Sutton, A. & Thorpe, A.) (2005). The language abilities of bilingual children with Down Syndrome. American Journal of Speech-Language Pathology, 14, 187-199.

Abstract: Children with Down syndrome (DS) have cognitive disabilities resulting from trisomy 21. Language-learning difficulties, especially expressive language problems, are an important component of the phenotype of this population. Many individuals with DS are born into bilingual environments. To date, however, there is almost no information available regarding the capacity of these individuals to acquire more than 1 language. The present study compared the language abilities of 8 children with DS being raised bilingually with those of 3 control groups matched on developmental level: monolingual children with DS (n=14), monolingual typically developing (TD) children (n=18), and bilingual TD children (n=11). All children had at least 100 words in their productive vocabularies but a mean length of utterance of less than 3.5. The bilingual children spoke English and 1 other language and were either balanced bilinguals or English-dominant. English testing was completed for all children using the following: the Preschool Language Scale, Third Edition; language sampling; and the MacArthur Communicative Development Inventories (CDI). Bilingual children were also tested in the second language using a vocabulary comprehension test, the CDI, and language sampling. Results provided evidence of a similar profile of language abilities in bilingual children as has been documented for monolingual children with DS. There was no evidence of a detrimental effect of bilingualism. That is, the bilingual children with DS scored at least as well on all English tests as their monolingual DS counterparts. Nonetheless, there was considerable diversity in the second-language abilities demonstrated by these individuals with DS. Clinical implications are addressed.

Link to article

-- (Thordardottir, E.) (2005). Language intervention from a bilingual mindset. Perspectives on Language Learning and Education, 12 (2), 17-22.
-- (Thordardottir, E.) (2005). Early lexical and syntactic development in Quebec French and English: Implications for cross-linguistic and bilingual assessment. International Journal of Language and Communication Disorders, 40, 243-278.

Abstract:
BACKGROUND: Although a number of studies have been conducted on normal acquisition in French, systematic methods for analysis of French and normative group data have been lacking.

AIMS: To develop a systematic method for the analysis of language samples in Quebec French, and to provide preliminary normative data on early lexical and syntactic development in French with a comparison with English.

METHODS & PROCEDURES: Language samples were collected for groups of monolingual French- and English-speaking children (n=39, age range 21-47 months) with normal language development. Coding conventions for French were developed based on similar principles as English SALT conventions. However, due to structural differences between the languages, coding of inflectional morphology was considerably more complex in French than in English.

OUTCOMES & RESULTS: The French procedure provided developmentally sensitive measures of lexical and syntactic development, including mean length of utterance in morphemes and in words, and number of different words, and should be an important addition to the assessment procedures available for French. Cross-linguistic similarities and differences were noted in the language sample measures. Although the same elicitation context was used in the English and the French language samples, and the analysis methods were designed to rest on similar principles across languages, systematic differences emerged such that the French-speaking children exhibited a higher mean length of utterance, but smaller vocabulary sizes. Differences were also noted in error patterns, with much lower error rates occurring in samples of the French-speaking children.

CONCLUSIONS: The findings have important implications for language assessment involving cross-linguistic comparisons, such as occurs in the assessment of bilingual children, and in the matching of participants in cross-linguistic studies. Given differences in the mean length of utterance and vocabulary scores across the languages, the finding of the same mean length of utterance or vocabulary obtained in the two languages for a given bilingual child or for monolingual speakers of the two languages does not imply equivalent levels of language development in the two languages.

Link to article

2004

Shari Baum, Ph.D., Professor
Jeanne Claessen, M.A.
Martha Crago, Ph.D, Professor
Vincent Gracco, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (2004). Prosodic deficits. In Kent (Ed.), MIT Encyclopedia of Communication Sciences & Disorders. Cambridge, MA: MIT Press.
Jeanne Claessen
CLAESSEN, J. (2004). A 2:1 clinical practicum, incorporating reciprocal peer coaching, clinical reasoning, and self- and peer evaluation. Journal of Speech-Language Pathology and Audiology, 28 (4), 156-165.

Abstract: The paper reports on the development and implementation of an innovative approach utilized in a clinical practicum involving speech-language pathology graduate students. A 2:1 student-to-clinical educator ratio supervision model was employed, in which one clinical educator supervises two students simultaneously. The reciprocal peer coaching approach to peer learning was applied. This clinical practicum model further incorporated principles from research on clinical reasoning, with concomitant emphasis on the development of self- and peer-evaluation skills, which the author had already promoted in the clinical education of speech-language pathology students. The paper then describes how this framework was applied to the clinical practicum that two students undertook jointly in two pediatric settings, with a different clinical educator in each setting. This particular 2:1 student-to-clinical educator ratio supervision model is recommended to clinical educators interested in implementing innovative teaching strategies; they may consequently obtain a higher degree of satisfaction when supervising students. University programs may adopt this model in their in-house clinics or encourage clinical educators external to the program to use it in their settings.

Link to article

Dr. Martha Crago
CRAGO, M. (Paradis, J. & Crago, M.) (2004). Comparing L2 and SLI grammars in child French: Focus on DP. In P. Prevost & J. Paradis (Eds.), The acquisition of French in different contexts: Focus on functional categories (pp 89-108). Amsterdam, NL: John Benjamins.

Book description: This volume is a collection of studies by some of the foremost researchers of French acquisition in the generative framework. It provides a unique perspective on cross-learner comparative research in that each chapter examines the development of one component of the grammar (functional categories) across different contexts in French learners: i.e. first language acquisition, second language acquisition, bilingual first language acquisition and specifically-language impaired acquisition. This permits readers to see how similar issues and morphosyntactic properties can be investigated in a range of various acquisition situations, and in turn, how each context can contribute to our general understanding of how these morphosyntactic properties are acquired in all learners of the same language. This state-of-the-art collection is enhanced by an introductory chapter that provides background on current formal generative theory, as well as a summary and synthesis of the major trends emerging from the individual studies regarding the acquisition of different functional categories across different learner contexts in French.

Book information:
ISBN 978 90 272 5291 3
ISBN 978 1 58811 455 6

Link to book

-- (Genesee, F., Paradis, J., & Crago, M.) (2004). Dual language development and disorders: A handbook on bilingualism and second language learning. Baltimore, MD: Brookes.

Book description: This book provides a "comprehensive and up-to-date synthesis" of the current knowledge about normal and impaired bilingual and second language acquisition. Typical dual language development varies greatly from monolingual development, and professionals must understand these differences to successfully diagnose and treat dual language learners with language delays and disorders. The book divides dual language learners into two types: bilingual children, who have learned two languages from infancy, and second language learners, children who learn a second language after significant progress has been made in the first language. The book is divided into three sections: Foundations, which includes definitions, the influence of culture, and the cognitive aspects of dual language learning; Understanding Bilingual and Second Language Acquisition, which examines research and theory, discusses second language acquisition, and explores school issues; and Clinical Implications, which discusses assessment and intervention issues and a synthesis. Eight case studies, with children representing the various types of dual language learners, are introduced in Chapter 1 and recur throughout the book.

Book information:
ISBN-10: 1557666865
ISBN-13: 978-1557666864

Link to book

Dr. Vincent Gracco
GRACCO, V. (Max, L., Guenther, F. H., Gracco, V. L., Ghosh, S. S., & Wallace, M. E.). (2004) Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science and Disorders, 31, 105-122.

Abstract: This article presents a theoretical perspective on stuttering based on numerous findings regarding speech and nonspeech neuromotor control in individuals who stutter in combination with recent empirical data and theoretical models from the literature on the neuroscience of motor control. Specifically, this perspective on stuttering relies heavily on recent work regarding feedforward and feedback control schemes; the formation, consolidation, and updating of inverse and forward internal models of the motor systems; and cortical, subcortical, and cerebellar activation patterns during speech and nonspeech motor tasks. Against this background, we propose that stuttering may result when producing speech (a) with unstable or insufficiently activated internal models or (b) with a motor strategy that is weighted too much toward afferent feedback control. We discuss how these two hypotheses can account for the specific dysfluencies that form the primary characteristics of stuttering, and we suggest that the hypotheses are compatible with several of the phenomena associated with the disorder (e.g., age of onset, fluency-enhancing conditions, treatment effects). For one of the hypotheses, we also describe a computer simulation implemented in the DIVA (directions into velocities of articulators) model-a neural network model of the central control of speech movements.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Nowak, M., & Cloutier, G.) (2004) Effect of phonemic perception training on the speech production and phonological awareness skills of children with expressive phonological delay. American Journal of Speech-Language Pathology, 13, 250-263.

Abstract: Children with expressive phonological delays often possess poor underlying perceptual knowledge of the sound system and show delayed development of segmental organization of that system. The purpose of this study was to investigate the benefits of a perceptual approach to the treatment of expressive phonological delay. Thirty-four preschoolers with moderate or severe expressive phonological delays received 16 treatment sessions in addition to their regular speech-language therapy. The experimental group received training in phonemic perception, letter recognition, letter-sound association, and onset-rime matching. The control group listened to computerized books. The experimental group showed greater improvements in phonemic perception and articulatory accuracy but not in phonological awareness in comparison with the control group.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2004). Tvítyngi er ekkert til að óttast [Bilingualism is nothing to fear]. Talfræðingurinn, 18, 5-7.
Dr. Karsten Steinhauer
STEINHAUER, K. (Meyer, M., Steinhauer, K., Alter, K., Friederici, A.D. & von Cramon, D.Y.) (2004). Brain activity varies with modulation of dynamic pitch variance in sentence melody. Brain and Language, 89(2), 277-289.

Abstract: Fourteen native speakers of German heard normal sentences, sentences which were either lacking dynamic pitch variation (flattened speech), or comprised of intonation contour exclusively (degraded speech). Participants were to listen carefully to the sentences and to perform a rehearsal task. Passive listening to flattened speech compared to normal speech produced strong brain responses in right cortical areas, particularly in the posterior superior temporal gyrus (pSTG). Passive listening to degraded speech compared to either normal or flattened speech particularly involved fronto-opercular and subcortical (Putamen, Caudate Nucleus) regions bilaterally. Additionally the Rolandic operculum (premotor cortex) in the right hemisphere subserved processing of neat sentence intonation. As a function of explicit rehearsing sentence intonation we found several activation foci in the left inferior frontal gyrus (Broca’s area), the left inferior precentral sulcus, and the left Rolandic fissure. The data allow several suggestions: First, both flattened and degraded speech evoked differential brain responses in the pSTG, particularly in the planum temporale (PT) bilaterally indicating that this region mediates integration of slowly and rapidly changing acoustic cues during comprehension of spoken language. Second, the bilateral circuit active whilst participants receive degraded speech reflects general effort allocation. Third, the differential finding for passive perception and explicit rehearsal of intonation contour suggests a right fronto-lateral network for processing and a left fronto-lateral network for producing prosodic information. Finally, it appears that brain areas which subserve speech (frontal operculum) and premotor functions (Rolandic operculum) coincidently support the processing of intonation contour in spoken sentence comprehension.

Link to article

2003

Shari Baum, Ph.D., Professor
Martha Crago, Ph.D, Professor
Vincent Gracco, Ph.D., Associate Professor
Rachel Mayberry, Ph.D., Associate Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Assistant Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Assistant Professor

Dr. Shari Baum
BAUM, S. (Aasland, W. & Baum, S.) (2003). “Temporal parameters as cues to phrasal boundaries: A comparison of processing by left-hemisphere-damaged and right-hemisphere-damaged individuals,” Brain & Language, 87, 385-399.

Abstract: Two experiments were conducted to examine the ability of left- (LHD) and right-hemisphere-damaged (RHD) patients and normal controls to use temporal cues in rendering phrase grouping decisions. The phrase "pink and black and green" was manipulated to signal a boundary after "pink" or after "black" by altering pre-boundary word durations and pause durations at the boundary in a stepwise fashion. Stimuli were presented to listeners auditorily along with a card with three alternative groupings of colored squares from which to select the presented alternative. Results revealed that normal controls were able to use both temporal cues to identify the intended grouping. In contrast, LHD patients required longer than normal pause durations to consistently identify the intended grouping, suggesting a higher than normal threshold for perception of temporal prosodic cues. Surprisingly, the RHD patients exhibited great difficulty with the task, perhaps due to the limited acoustic cues available in the stimuli.

Link to article

-- (Baum, S.) (2003). “Age differences in the influence of metrical structure on phonetic identification,” Speech Communication, 39, 231-242.

Abstract: Two phonetic identification experiments were conducted with two groups of participants: a young adult group and an older adult group. In Experiment 1, subjects were required to make voiced–voiceless decisions for initial alveolar stop consonants to stimuli along two voice onset time (VOT) continua—one ranging from “di’gress” to “ti’gress” and the other from “’digress” to “’tigress” (i.e., in one continuum, the voiced endpoint was consistent with the word’s stress pattern while in the other continuum, the voiceless endpoint was consistent with the word’s stress pattern). Results revealed that both groups of participants were influenced by the stress pattern of the stimuli, but stress seemed to override VOT cues for a large number of the older individuals. To confirm that the effect was not simply due to a lexical influence, a follow-up experiment utilized two word–nonword continua (“diamond–tiamond” and “diming–timing”) to examine the magnitude of lexical effects in these subject groups. Typical lexical status effects emerged for both young and older adults which were smaller than the effects of stress pattern found in Experiment 1. The findings are discussed with respect to the role of prosodic context in language processing in aging.

Link to article

-- (Baum, S. & Blumstein, S.) (2003). “Psycholinguistics: approaches to neurolinguistics.” In Frawley (Ed.), International Encyclopedia of Linguistics (Second Edition). Oxford: Oxford Univ. Press.
-- (Baum, S. & Dwivedi, V.) (2003). “Sensitivity to prosodic structure in left- and right-hemisphere-damaged individuals,” Brain & Language, 87, 278-289.

Abstract: An experiment was conducted in order to determine whether left- (LHD) and right-hemisphere-damaged (RHD) patients exhibit sensitivity to prosodic information that is used in syntactic disambiguation. Following the work of Marslen-Wilson, Tyler, Warren, Grenier, and Lee (1992), a cross-modal lexical decision task was performed by LHD and RHD subjects, as well as by adults without brain pathology (NC). Subjects listened to sentences with attachment ambiguities with either congruent or incongruent prosody, while performing a visual lexical decision task. Results showed that each of the unilaterally damaged populations differed from each other, as well as from the NCs in terms of sensitivity regarding prosodic cues. Specifically, the RHD group was insensitive to sentence prosody as a whole. This was in contrast to the LHD patients, who responded to the prosodic manipulation, but in the unexpected direction. Results are discussed in terms of current hypotheses regarding the hemispheric lateralization of prosodic cues.

Link to article

-- (Grindrod, C. & Baum, S.) (2003). “Sensitivity to local sentence context information in lexical ambiguity resolution: Evidence from left- and right-hemisphere-damaged individuals,” Brain & Language, 85, 502-523.

Abstract: Using a cross-modal semantic priming paradigm, the present study investigated the ability of left-hemisphere-damaged (LHD) nonfluent aphasic, right-hemisphere-damaged (RHD) and non-brain-damaged (NBD) control subjects to use local sentence context information to resolve lexically ambiguous words. Critical sentences were manipulated such that they were either unbiased, or biased toward one of two meanings of sentence-final equibiased ambiguous words. Sentence primes were presented auditorily, followed after a short (0 ms) or long (750 ms) interstimulus interval (ISI) by the presentation of a first- or second-meaning related visual target, on which subjects made a lexical decision. At the short ISI, neither patient group appeared to be influenced by context, in sharp contrast to the performance of the NBD control subjects. LHD nonfluent aphasic subjects activated both meanings of ambiguous words regardless of context, whereas RHD subjects activated only the first meaning in unbiased and second-meaning biased contexts. At the long ISI, LHD nonfluent aphasic subjects failed to show evidence of activation of either meaning, while RHD individuals activated first meanings in unbiased contexts and contextually appropriate meanings in second-meaning biased contexts. These findings suggest that both left (LH) and right hemisphere (RH) damage lead to deficits in using local contextual information to complete the process of ambiguity resolution. LH damage seems to spare initial access to word meanings, but initially impairs the ability to use context and results in a faster than normal decay of lexical activation. RH damage appears to initially disrupt access to context, resulting in an over-reliance on frequency in the activation of ambiguous word meanings.

Link to article

-- (Nicholson, K., Baum, S., Kilgour, A., Koh, C., Munhall, K., & Cuddy, L.) (2003). “Impaired processing of prosodic and musical patterns after right hemisphere damage,” Brain & Cognition, 52, 382-389.

Abstract: The distinction between the processing of musical information and segmental speech information (i.e., consonants and vowels) has been much explored. In contrast, the relationship between the processing of music and prosodic speech information (e.g., intonation) has been largely ignored. We report an assessment of prosodic perception for an amateur musician, KB, who became amusic following a right-hemisphere stroke. Relative to matched controls, KB’s segmental speech perception was preserved. However, KB was unable to discriminate pitch or rhythm patterns in linguistic or musical stimuli. He was also impaired on prosodic perception tasks (e.g., discriminating statements from questions). Results are discussed in terms of common neural mechanisms that may underlie the processing of some aspects of both music and speech prosody.

Link to article

Dr. Martha Crago
CRAGO, M. (Brophy, A.E. & Crago, M.) (2003). Variation in instructional discourse features: Cultural or Linguistic? Evidence from Inuit and non-Inuit Teachers of Nunavik. Anthropology and Education Quarterly, 34 (4), 1-25.

Abstract: This article examines discourse features in the instructional interactions of eight Inuit and six non-Inuit teachers of Inuit children in northern Québec. Significant differences existed between these two groups of teachers in their use of Initiation-Response-Evaluation (IRE) routines, nomination format, and teacher response to student initiations. The research distinguishes cultural factors from factors related to second language teaching. Findings suggest the cultural variability of discourse features that have significant ramifications for teacher judgments regarding students' academic and communicative competence.

Link to article

-- (Paradis, J., Crago, M., Genesee, F., & Rice, M.) (2003). French-English bilingual children with specific language impairment: How do they compare with their monolingual peers? Journal of Speech Language and Hearing Research, 46, 113-127.

Abstract: The goal of this study was to determine whether bilingual children with specific language impairment (SLI) are similar to monolingual age mates with SLI, in each language. Eight French-English bilingual children with SLI were compared to age-matched monolingual children with SLI, both English and French speaking, with respect to their use of morphosyntax in language production. Specifically, using the extended optional infinitive (EOI) framework, the authors examined the children's use of tense-bearing and non-tense-bearing morphemes in obligatory context in spontaneous speech. Analyses revealed that the patterns predicted by the EOI framework were borne out for both the monolingual and bilingual children with SLI: The bilingual and monolingual children with SLI showed greater accuracy with non-tense than with tense morphemes. Furthermore, the bilingual and monolingual children with SLI had similar mean accuracy scores for tense morphemes, indicating that the bilingual children did not exhibit more profound deficits in the use of these grammatical morphemes than their monolingual peers. In sum, the bilingual children with SLI in this study appeared similar to their monolingual peers for the aspects of grammatical morphology examined in each language. These bilingual-monolingual similarities point to the possibility that SLI may not be an impediment to learning two languages, at least in the domain of grammatical morphology.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Max, L., Caruso, A. J., & Gracco, V. L.) (2003). Kinematic analyses of speech, orofacial nonspeech, and finger movements suggest generalized differences in neuromotor control between stuttering and nonstuttering individuals. Journal of Speech Language and Hearing Research, 46, 215-232.

Abstract: This work investigated the hypothesis that neuromotor differences between individuals who stutter and individuals who do not stutter are not limited to the movements involved in speech production. Kinematic data were obtained from gender- and age-matched stuttering (n = 10) and nonstuttering (n = 10) adults during speech movements, orofacial nonspeech movements, and finger movements. All movements were performed in 4 conditions differing in sequence length and location of the target movement within the sequence. Results revealed statistically significant differences between the stuttering and nonstuttering individuals on several measures of lip and jaw closing (but not opening) movements during perceptually fluent speech. The magnitude of these differences varied across different levels of utterance length (larger differences during shorter utterances) and across different locations of the target movement within an utterance (larger differences close to the beginning). Results further revealed statistically significant differences between the stuttering and nonstuttering groups in finger flexion (but not extension) movement duration and peak velocity latency. Overall, findings suggest that differences between stuttering and nonstuttering individuals are not confined to the sensorimotor processes underlying speech production or even movements of the orofacial system in general. Rather, it appears that the groups show generalized differences in the duration of certain goal-directed movements across unrelated motor systems.

Link to article

-- (Max, L., Gracco, V. L., Guenther, F., Vincent, I., & Wallace, M.) (2003). A sensorimotor model of stuttering: Insights from the neuroscience of motor control. In A. Packman, A. Meltzer & H.M.F. Peters (Eds.), Proceedings of the 4th World Congress on Fluency Disorders. Nijmegen, The Netherlands: University of Nijmegen Press.
-- (Max, L., Gracco, V., & Caruso, A.) (2003). Kinematic event sequencing in stuttering adults: Speech, orofacial, and finger movements. In A. Packman, A. Meltzer & H.M.F. Peters (Eds.), Proceedings of the 4th World Congress on Fluency Disorders. Nijmegen, The Netherlands: University of Nijmegen Press.
Dr. Rachel Mayberry
MAYBERRY, R. (Mayberry, R. I. & Lock, E.) (2003). Age constraints on first versus second language acquisition: Evidence for linguistic plasticity and epigenesis. Brain and Language, 87, 369-383.

Abstract: Does age constrain the outcome of all language acquisition equally regardless of whether the language is a first or second one? To test this hypothesis, the English grammatical abilities of deaf and hearing adults who either did or did not have linguistic experience (spoken or signed) during early childhood were investigated with two tasks, timed grammatical judgement and untimed sentence to picture matching. Findings showed that adults who acquired a language in early life performed at near-native levels on a second language regardless of whether they were hearing or deaf or whether the early language was spoken or signed. By contrast, deaf adults who experienced little or no accessible language in early life performed poorly. These results indicate that the onset of language acquisition in early human development dramatically alters the capacity to learn language throughout life, independent of the sensory-motor form of the early experience.

Link to article

-- Mayberry, R. I. (2003). Beyond babble: Early linguistic experience and language learning ability. In G. Spaai, H. van der Stege & H. de Ridder-Sluiter (Eds.), Vijftig jaar NSDSK: met een knipoog naar de toekomst (pp. 39-46). Utrecht: Lemma.
Dr. Marc Pell
PELL, M. (Pell, M.D. & Leonard, C.L.) (2003). Processing emotional tone from speech in Parkinson’s disease: a role for the basal ganglia. Cognitive, Affective, & Behavioral Neuroscience, 3 (4), 275-288.

Abstract: In this study, individuals with Parkinson's disease were tested as a model for basal ganglia dysfunction to infer how these structures contribute to the processing of emotional speech tone (emotional prosody). Nondemented individuals with and without Parkinson's disease (n = 21/group) completed neuropsychological tests and tasks that required them to process the meaning of emotional prosody in various ways (discrimination, identification, emotional feature rating). Individuals with basal ganglia disease exhibited abnormally reduced sensitivity to the emotional significance of prosody in a range of contexts, a deficit that could not be attributed to changes in mood, emotional-symbolic processing, or estimated frontal lobe cognitive resource limitations in most conditions. On the basis of these and broader findings in the literature, it is argued that the basal ganglia provide a critical mechanism for reinforcing the behavioral significance of prosodic patterns and other temporal representations derived from cue sequences (Lieberman, 2000), facilitating cortical elaboration of these events.

Link to article

Dr. Linda Polka
POLKA, L. (Polka, L. & Bohn, O-S.) (2003). Asymmetries in vowel perception. Speech Communication, 41, 221-231.

Abstract: Asymmetries in vowel perception occur such that discrimination of a vowel change presented in one direction is easier compared to the same change presented in the reverse direction. Although such effects have been repeatedly reported in the literature there has been little effort to explain when or why they occur. We review studies that report asymmetries in vowel perception in infants and propose that these data indicate that babies are predisposed to respond differently to vowels that occupy different positions in the articulatory/acoustic vowel space (defined by F1-F2) such that the more peripheral vowel within a contrast serves as a reference or perceptual anchor. As such, these asymmetries reveal a language-universal perceptual bias that infants bring to the task of vowel discrimination. We present some new data that support our peripherality hypothesis and then compare the data on asymmetries in human infants with findings obtained with birds and cats. This comparison suggests that asymmetries evident in humans are unlikely to reflect general auditory mechanisms. Several important directions for further research are outlined and some potential implications of these asymmetries for understanding speech development are discussed.

Link to article

-- (Polka, L. & Sundara, M.) (2003). Word segmentation in monolingual and bilingual infant learners of English and French. Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain, 1021-1024.

Abstract: Word segmentation skills emerge during infancy, but it is unclear to what extent this ability is shaped by experience listening to a specific language or language type. This issue was explored by comparing segmentation of bi-syllabic words in monolingual and bilingual 7.5-month-old learners of French and English. In a native-language condition, monolingual infants segmented bi-syllabic words with the predominant stress pattern of their native language. Monolingual French infants also segmented in a different dialect of French, whereas both monolingual groups failed in a cross-language test, i.e. English infants failed to segment in French and vice versa. These findings support the hypothesis that word segmentation is shaped by infant sensitivity to the rhythmic structure of their native language. Our finding that bilingual infants segment bi-syllabic words in two native languages at the same age as their monolingual peers shows that dual language exposure does not delay the emergence of this skill.

Link to article

-- (Escudero, P. & Polka, L.) (2003). A cross-language study of vowel categorization and vowel acoustics: Canadian English versus Canadian French. Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, Spain, 861-864.

Abstract: We show the perception of Canadian French (CF) vowels by Canadian English (CE) listeners and test a cue-weighting hypothesis to explain the attested assimilation patterns. Five CF vowels, /i, y, u, e, æ/, and three allophonic variants, [I, Y, U], were examined. The listeners completed a native-language identification task with goodness of fit judgments. We found that most French vowels were identified as more than one vowel category in English. Acoustic analyses of the vowel tokens revealed that the multiple mappings occur because the English listeners paid attention to both spectral and durational cues when identifying the French vowels. We claim that English listeners use the cue-weighting strategies of their first language when approaching a foreign-vowel identification task. We predict the specific problems that CE speakers will face when learning to categorize CF vowels and the possible solutions that they should entertain.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Ohberg, A., Grawburg, M., & Heyding, J.) (2003). Phonological awareness and phonemic perception in 4-year-old children with delayed expressive phonology skills. American Journal of Speech-Language Pathology, 12, 463-471.

Abstract: The purpose of this study was to compare the phonological awareness abilities of 2 groups of 4-year-old children: one with normally developing speech and language skills and the other with moderately or severely delayed expressive phonological skills but age-appropriate receptive vocabulary skills. Each group received tests of articulation, receptive vocabulary, phonemic perception, early literacy, and phonological awareness skills. The groups were matched for receptive language skills, age, socioeconomic status, and emergent literacy knowledge. The children with expressive phonological delays demonstrated significantly poorer phonemic perception and phonological awareness skills than their normally developing peers. The results suggest that preschool children with delayed expressive phonological abilities should be screened for their phonological awareness skills even when their language skills are otherwise normally developing.

Link to article

-- (Rvachew, S. & Nowak, M.) (2003). Clinical outcomes as a function of target selection strategy: A response to Morrisette and Gierut. Journal of Speech-Language and Hearing Research, 46, 386-389.
-- (Rvachew, S.) (2003). Computer applications and treatment outcomes. Perspectives on Language Learning and Education, 10 (1), 17-20.
-- (Rvachew, S.) (2003). Factors Related to the Development of Phonological Awareness Skills. In B. Beachley, A. Brown & F. Conlin (Eds.), Proceedings of the 27th annual Boston University Conference on Language Development (pp. 686-691). Boston University Press.
Dr. Karsten Steinhauer
STEINHAUER, K. (Steinhauer, K.) (2003). Electrophysiological correlates of prosody and punctuation. Brain and Language, 86(1), 142-164.

Abstract: Psycholinguistic models of sentence parsing are primarily based on reading rather than auditory processing data. Moreover, both prosodic information and its potential orthographic equivalent, i.e., punctuation, have been largely ignored until recently. The unavailability of experimental online methods is one likely reason for this neglect. Here I give an overview of six event-related brain potential (ERP) studies demonstrating that the processing of both prosodic boundaries in natural speech and commas during silent reading can determine syntax parsing immediately. In ERPs, speech boundaries and commas reliably elicit a similar online brain response, termed the Closure Positive Shift (CPS). This finding points to a common mechanism, suggesting that commas serve as visual triggers for covert phonological phrasing. Alternative CPS accounts are tested and the relationship between the CPS and other ERP components, including the P600/SPS, is addressed.

Link to article

2002

Shari Baum, PhD, Professor
Martha Crago, PhD, Associate Professor
Vince Gracco, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Susan Rvachew, PhD, Assistant Professor
Elin Thordardottir, PhD, Assistant Professor

Dr. Shari Baum
BAUM, S. (Baum, S.) Sensitivity to sub-syllabic constituents in brain-damaged patients: Evidence from word games. Brain and Language, v. 83, 2002, pp. 237-248.

Abstract: Two experiments were conducted to examine whether left- (LHD) and right-hemisphere-damaged (RHD) patients exhibit sensitivity to sub-syllabic constituents (i.e., onsets and codas) in the generation of nonwords, using a word games paradigm adapted from prior work. Four groups of individuals (including LHD fluent and nonfluent aphasic patients, RHD patients and normal controls) were trained to add syllables to monosyllabic CVC nonwords either after the initial consonant (Experiment 1) or prior to the final consonant (Experiment 2) to create bisyllabic nonwords. Experimental stimuli consisting of CCVC or CVCC nonwords tested whether participants would preserve or split the onset and coda constituents in producing the novel bisyllabic nonwords. Results revealed that the majority of subjects demonstrated sensitivity to the sub-syllabic constituents, preserving the onsets and codas. The fluent aphasic patients exhibited a greater than normal tendency to split the onset and coda constituents; however, the small number of individuals in that group whose data met inclusion criteria limits the conclusions that may be drawn from these findings. The results are discussed in relation to theories of phonological deficits in aphasia.

Link to article

-- (Baum, S.) Word recognition in individuals with left and right hemisphere damage: The role of lexical stress. Applied Psycholinguistics, v.23, 2002, pp. 233-246.

Abstract: Lexical stress patterns appear to be important in word recognition processes in normal individuals. The present investigation employed a lexical decision task to assess whether left (LHD) and right hemisphere damaged (RHD) patients are similarly sensitive to stress patterns in lexical access. The results confirmed that individuals without brain damage are influenced by stress patterns, as indicated by increased lexical decision latencies to incorrectly stressed word and nonword stimuli. The data for the LHD patients revealed an effect of stress for real word targets only, whereas the reaction time data for the RHD patients as a group showed no significant influence of stress pattern. However, there was a great deal of individual variability in performance. The latency and error rate findings suggest that LHD patients and non-brain-damaged individuals are both sensitive to lexical stress in word recognition, but the LHD patients are more likely to treat incorrectly stressed items as nonwords. The results are discussed in relation to theories of the hemispheric lateralization of prosodic processing and the role of lexical stress in word recognition.

Link to article

-- (Baum, S.) Consonant and vowel discrimination by brain-damaged individuals: Effects of phonological segmentation. Journal of Neurolinguistics, v.15, 2002, pp. 447-461.

Abstract: Two ‘same–different’ discrimination tasks were conducted to explore consonant voicing and vowel discrimination abilities in groups of left- and right-hemisphere-damaged individuals and normal controls. Stimuli were manipulated such that for half of each set, segmentation of the syllable was required for a discrimination decision; for the other half of the stimuli, no phonological segmentation was required. Results revealed impaired consonant voicing and vowel discrimination in a group of left-hemisphere-damaged non-fluent aphasic participants. Discrimination accuracy for groups of fluent aphasic participants and right-hemisphere-damaged participants fell between those of the non-fluent aphasic participants and the normal controls on both tasks. The findings are suggestive of a role for left frontal lobe regions in phonological segmentation, but remain inconclusive on this issue. The results are considered in relation to models of the neural bases and cerebral lateralization of speech perception processes.

Link to article

-- (Nicholson, K., Baum, S., Cuddy, L., Munhall, K.) A case of impaired auditory and visual speech prosody perception after right hemisphere damage. Neurocase, v.8, 2002, pp. 314-322.

Abstract: It is well established that vision plays a role in segmental speech perception, but the role of vision in prosodic speech perception is less clear. We report on the difficulties in prosodic speech perception encountered by KB after a right hemisphere stroke. In addition to musical deficits, KB was suspected of having impaired auditory prosody perception. As expected, KB was impaired on two prosody perception tasks in an auditory-only condition. We also examined whether the addition of visual prosody cues would facilitate his performance on these tasks. Unexpectedly, KB was also impaired on both tasks under visual-only and audio-visual conditions. Thus, there was no evidence that KB could integrate auditory and visual prosody information or that he could use visual cues to compensate for his deficit in the auditory domain. In contrast, KB was able to identify segmental speech information using visual cues and to use these visual cues to improve his performance when auditory segmental cues were impoverished. KB was also able to integrate audio-visual segmental information in the McGurk effect. Thus, KB's visual deficit was specific to prosodic speech perception and, to our knowledge, this is the first reported case of such a deficit.

Link to article

Dr. Martha Crago
CRAGO, M. (Crago, M., Paradis, J.) Two of a Kind? Commonalities and Variation in Languages and Language Learners. In Y. Levy & J. Schaeffer (Eds.), Language competence across populations: Towards a definition of Specific Language Impairment (pp. 97-110). Mahwah, NJ: Lawrence Erlbaum Associates, 2002.
Dr. Vincent Gracco
GRACCO, V.L. (Löfqvist, A., Gracco, V.L.) Control of oral closure in lingual stop consonant production. Journal of the Acoustical Society of America, v.111(6), 2002, pp. 2811-2827.

Abstract: Previous work has shown that the lips are moving at a high velocity when the oral closure occurs for bilabial stop consonants, resulting in tissue compression and mechanical interactions between the lips. The present experiment recorded tongue movements in four subjects during the production of velar and alveolar stop consonants to examine kinematic events before, during, and after the stop closure. The results show that, similar to the lips, the tongue is often moving at a high velocity at the onset of closure. The tongue movements were more complex, with both horizontal and vertical components. Movement velocity at closure and release were influenced by both the preceding and the following vowel. During the period of oral closure, the tongue moved through a trajectory of usually less than 1 cm; again, the magnitude of the movement was context dependent. Overall, the tongue moved in forward–backward curved paths. The results are compatible with the idea that the tongue is free to move during the closure as long as an airtight seal is maintained. A new interpretation of the curved movement paths of the tongue in speech is also proposed. This interpretation is based on the principle of cost minimization that has been successfully applied in the study of hand movements in reaching.

Link to article

-- (Shaiman, S., Gracco, V.L.) Task-specific sensorimotor interactions in speech production. Experimental Brain Research, v.146, 2002, pp. 411-418.

Abstract: Speaking involves the activity of multiple muscles moving many parts (articulators) of the vocal tract. In previous studies, it has been shown that mechanical perturbation delivered to one moving speech articulator, such as the lower lip or jaw, results in compensatory responses in the perturbed and other non-perturbed articulators, but not in articulators that are uninvolved in the specific speech sound being produced. These observations suggest that the speech motor control system may be organized in a task-specific manner. However, previous studies have not used the appropriate controls to address the mechanism by which this task-specific organization is achieved. A lack of response in a non-perturbed articulator may simply reflect the fact that the muscles examined were not active. Alternatively, there may be a specific gating of somatic sensory signals due to task requirements. The present study was designed to address the nature of the underlying sensorimotor organization. Unanticipated mechanical loads were applied to the upper lip during the "p" in "apa" and "f" in "afa" in six subjects. Both lips are used to produce "p", while only the lower lip is used for "f". For "apa", both upper lip and lower lip responses were observed following upper lip perturbation. For "afa", no upper lip or lower lip responses were observed following the upper lip perturbation. The differential response of the lower lip, which was phasically active during both speech tasks, indicates that the neural organization of these two speech tasks differs not only in terms of the different muscles used to produce the different movements, but also in terms of the sensorimotor interactions within and across the two lips.

Link to article

Dr. Rachel Mayberry
MAYBERRY, R.I. (Mayberry, R.I., Lock, E., Kazmi, H.) Linguistic ability and early language exposure. Nature, v.417, 2002, p. 38.

Abstract: For more than 100 years, the scientific and educational communities have thought that age is critical to the outcome of language learning, but whether the onset and type of language experienced during early life affects the ability to learn language is unknown. Here we show that deaf and hearing individuals exposed to language in infancy perform comparably well in learning a new language later in life, whereas deaf individuals with little language experience in early life perform poorly, regardless of whether the early language was signed or spoken and whether the later language was spoken or signed. These findings show that language-learning ability is determined by the onset of language experience during early brain development, independent of the specific form of the experience.

Link to article

-- (Mayberry, R.I.) Cognitive development of deaf children: The interface of language and perception in neuropsychology. In Child Neuropsychology, Volume 8, Part II of Handbook of Neuropsychology, S.J. Segalowitz & I. Rapin, eds., Amsterdam: Elsevier, 2002, pp. 71-107.
-- (Ducharme, D. & Mayberry, R.I.) Learning to read French: When does phonological decoding matter? Proceedings of the Boston University Conference on Language Development, Boston, MA, Vol. 1, pp. 187-198.
Dr. Marc Pell
PELL, M.D. (Pell, M.D.) Surveying emotional prosody in the brain. In B. Bel and I. Marlien, eds., Proceedings of Speech Prosody 2002 Conference, 11-13 April 2002. Aix-en-Provence: Laboratoire Parole et Langage, pp. 77-82.

Abstract: Research has long supported a pivotal right hemisphere contribution to the decoding of emotional prosody, although a broader network of cortical and subcortical structures is now thought to support different components of this functional system during input processing. This paper highlights important work implicating the basal ganglia in emotional prosody decoding, especially in reinforcing key affective stimulus properties necessary for higher-order interpretative processes. The role of the right hemisphere in elaborating emotional-prosodic stimuli is then considered in reference to presumed ‘functional’ and ‘auditory-perceptual’ capacities of constituent regions. A broader description of the right hemisphere’s jurisdiction in social-emotive behaviour is advocated to advance future work in this area, and a new paradigm to tap on-line comprehension of emotional prosody in clinical populations is described.

Link to article

-- (Pell, M.D.) Evaluation of nonverbal emotion in face and voice: some preliminary findings of a new battery of tests. Brain and Cognition, v.48, 2002, pp. 499-504.

Abstract: This report describes some preliminary attributes of stimuli developed for future evaluation of nonverbal emotion in neurological populations with acquired communication impairments. Facial and vocal exemplars of six target emotions were elicited from four male and four female encoders and then prejudged by 10 young decoders to establish the category membership of each item at an acceptable consensus level. Representative stimuli were then presented to 16 additional decoders to gather indices of how category membership and encoder gender influenced recognition accuracy of emotional meanings in each nonverbal channel. Initial findings pointed to greater facility in recognizing target emotions from facial than vocal stimuli overall and revealed significant accuracy differences among the six emotions in both the vocal and facial channels. The gender of the encoder portraying emotional expressions was also a significant factor in how well decoders recognized specific emotions (disgust, neutral), but only in the facial condition.

Link to article

Dr. Linda Polka
POLKA, L. (Shahnaz, N. & Polka, L.) Distinguishing healthy from otosclerotic ears: Effects of probe tone frequency on static immittance. Journal of the American Academy of Audiology, v.13, 2002, pp. 345-355.
Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Creighton, D., Feldman, N. & Sauve, R.) Acoustic-phonetic description of infant speech samples: coding reliability and related methodological issues. Acoustics Research Letters On-Line, v.3(1), 2002, pp. 24-28.

Abstract: Two samples of speech-like vocalizations were recorded from each of 18 infants who were 8, 12, or 18 months of age at the time of recording. The two samples were recorded on different days, with less than one week between recordings. Each utterance was coded as belonging to one of several possible infraphonological categories, and canonical syllable ratios were determined for each sample. Syllables produced with abnormal phonation were identified. These coding procedures were completed independently by two raters. One sample from each infant was coded twice by the same rater. This sequence of multiple recordings and repeat analyses allowed for the determination of interrater, intrarater, and test-retest coding reliability. Kappa and intraclass correlation analyses revealed excellent reliability for all measures.

Link to article

-- (Rvachew, S. & Andrews, E.) The influence of syllable position on children's production of consonants. Clinical Linguistics and Phonetics, v. 16, 2002, pp. 183-198.

Abstract: Two studies examined consonant production by 13 children with delayed phonological skills. Study 1 examined patterns of substitution errors in word-initial, word-final and intervocalic positions of two-syllable words with a strong-weak stress pattern. For phonemes that were misarticulated in at least one word position, intervocalic consonant production was most likely to be the same as the word-final consonant production, but different from the word-initial consonant production. Study 2 examined proportions of matches and mismatches for features in five positions of multisyllabic words: (1) syllable-initial, word-initial, (2) syllable-initial, within-word, (3) intervocalic before an unstressed syllable, (4) syllable-final, within-word, and (5) syllable-final, word-final. Significant variations in match ratios were observed as a function of syllable position. A number of different patterns of position-dependent errors were observed.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E. & Ellis Weismer, S.) Verb argument structure weakness in specific language impairment in relation to age and utterance length. Clinical Linguistics and Phonetics, v.16(4), 2002, pp. 233-250.

Abstract: In spite of the complexity of verb argument structure, argument structure errors are infrequent in the speech of children with specific language impairment (SLI). The study examined the spontaneous argument structure use of school-age children with SLI and with normal language (NL) (n = 100). The groups did not differ substantially in frequency of argument structure errors, particularly when pragmatic context was considered. However, children with SLI used significantly fewer argument types, argument structure types and verb alternations than age-matched children with NL. Further, significant differences between children with SLI and mean length of utterance-matched controls were found involving the use of three-place argument structures. The results show that children with SLI demonstrate mostly correct, but less sophisticated, verb argument structure use than NL peers, and that the difference is not merely attributable to production limitations such as utterance length. The possibility of incomplete argument structure representation is suggested.

Link to article

-- (Thordardottir, E., Chapman, R. & Wagner, L.) Complex sentence production by adolescents with Down syndrome. Applied Psycholinguistics, v.23, 2002, pp. 163-183.
-- (Thordardottir, E., Ellis Weismer, S., Evans, J.) Continuity in lexical and morphological development in Icelandic and English-speaking 2-year-olds. First Language, v.22, 2002, pp. 3-28.

Abstract: Accounts of language development vary in whether they view lexical and grammatical development as being mediated by a single or by separate mechanisms. In a single mechanism account, only one system is required for learning words and extracting grammatical regularity based on similarities among stored items. A strong non-linear relationship between early lexical and grammatical development has been demonstrated in English and, more recently, in Italian supporting a single mechanism view (Caselli, Casadio & Bates 1999, Marchman & Bates 1994). The present study showed a comparable non-linear relationship between vocabulary size and the emergence of verb inflection and sentence complexity in two-year-old speakers of English and Icelandic, a highly inflected language. The study included 96 children within a narrow age range, but varying extensively in language proficiency, demonstrating continuity in lexical and grammatical development among children with typical language development as well as very precocious children and children with expressive language delay. Cross-linguistic differences were noted as well, suggesting that the Icelandic-speaking children required a larger critical mass of vocabulary items before grammatical regularity was detected. This is probably a result of the more complex inflectional system of the Icelandic language compared with English.

Link to article

-- (Thordardottir, E., Ellis Weismer, S.) Content mazes and filled pauses in the spontaneous speech of school age children with specific language impairment. Brain and Cognition, v.48, 2002, pp. 587-592.

Abstract: Linguistic nonfluencies known as mazes (filled pauses, repetitions, revisions, and abandoned utterances) have been used to draw inferences about processing difficulties associated with the production of language. In children with normal language development (NL), maze frequency in general increases with linguistic complexity, being greater in narrative than conversational contexts and in longer utterances. The same tendency has been found for children with specific language impairment (SLI). However, the frequency of mazes produced by children with NL and SLI has not been compared directly at equivalent utterance lengths in narration. This study compared the frequency of filled pauses and content mazes in narrative language samples of school-age children with SLI and children with NL. The children with SLI used significantly more content mazes than the children with NL, but fewer filled pauses. Unlike content mazes, the frequency of filled pauses remained stable across samples of different utterance lengths among children with SLI. This indicates that filled pauses and content mazes have different origins and should not be analyzed or interpreted in the same way.

Link to article

-- (Ellis Weismer, S. & Thordardottir, E.) Cognition and language. In P. Accardo, B. Rogers, & A. Capute (Eds.), Disorders of Language Development (Ch. 2, pp. 21-37). Timonium, MD: York Press, Inc.

Book information:
ISBN-10: 0912752718
ISBN-13: 978-0912752716

Link to book

2001

Shari Baum, PhD, Professor
Martha Crago, PhD, Associate Professor
Vince Gracco, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Susan Rvachew, PhD, Assistant Professor
Elin Thordardottir, PhD, Assistant Professor

Dr. Shari Baum
BAUM, S. (Baum, S.) Contextual influences on phonetic identification in aphasia: The effects of speaking rate and semantic bias. Brain & Language, v.76, 2001, pp. 266-281.

Abstract: Two experiments examined the influence of context on stop-consonant voicing identification in fluent and nonfluent aphasic patients and normal controls. Listeners were required to label the initial stop in a target word varying along a voice onset time (VOT) continuum as either voiced or voiceless ([b]/[p] or [d]/[t]). Target stimuli were presented in sentence contexts in which the rate of speech of the sentence context (Experiment 1) or the semantic bias of the context (Experiment 2) was manipulated. The results revealed that all subject groups were sensitive to the contextual influences, although the extent of the context effects varied somewhat across groups and across experiments. In addition, a number of patients in both the fluent and nonfluent aphasic groups could not consistently identify even endpoint stimuli, confirming phonetic categorization impairments previously shown in such individuals. Results are discussed with respect to the potential reliance by aphasic patients on higher level context to compensate for phonetic perception deficits.

Link to article

-- (Baum, S., Pell, M., Leonard, C., & Gordon, J.) Using prosody to resolve temporary syntactic ambiguities in speech production: Preliminary data on brain-damaged speakers. Clinical Linguistics & Phonetics, v.15, 2001, pp. 441-456.

Abstract: Left hemisphere brain lesions resulting in aphasia frequently produce impairments in speech production, including the ability to appropriately transmit linguistic distinctions through sentence prosody. The present investigation gathered preliminary data on how focal brain lesions influence one important aspect of prosody that has been largely ignored in the literature - the production of sentence-level syntactic distinctions that rely on prosodic alterations to disambiguate alternate meanings of a sentence. Utterances characterizing three distinct types of syntactic ambiguities (scope, prepositional phrase attachment, and noun phrase/sentential complement attachment) were elicited from individuals with unilateral left hemisphere damage (LHD), right hemisphere damage (RHD), and adults without brain pathology (NC). A written vignette preceding each ambiguous sentence target biased how the utterance was interpreted and produced. Recorded productions were analysed acoustically to examine parameters of duration (word length, pause) and fundamental frequency (F0) for key constituents specific to each of the ambiguity conditions. Results of the duration analyses demonstrated a preservation of many of the temporal cues to syntactic boundaries in both LHD and RHD patients. The two interpretations of sentences containing 'scope' and 'prepositional phrase attachment' ambiguities were differentiated by all speakers (including LHD and RHD patients) through the production of at least one critical temporal parameter that was consistent across the three groups. Temporal markers of sentences containing 'noun phrase/sentential complement attachment' ambiguities were not found to be encoded consistently within any speaker group and may be less amenable to experimental manipulation in this manner. Results of F0 analyses were far less revealing in characterizing different syntactic assignments of the stimuli, and coupled with other findings in the literature, may carry less weight than temporal parameters in this process. Together, results indicate that the ability to disambiguate sentences using prosodic variables is relatively spared subsequent to both LHD and RHD, although it is noteworthy that LHD patients did exhibit deficits regulating other temporal properties of the utterances, consistent with left hemisphere control of speech timing.

Link to article

-- (Gandour, J. & Baum, S.) Production of stress retraction by left- and right-hemisphere-damaged patients. Brain & Language, v.79, 2001, pp. 482-494.

Abstract: An acoustic-perceptual investigation of a phonological phenomenon in which stress is retracted in double-stressed words (e.g., thirTEEN vs THIRteen MEN) was undertaken to identify the locus of functional impairments in speech prosody. Subjects included left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) patients and nonneurological controls. They were instructed to read sentences containing double-stressed target words in the presence of a clause boundary or its absence. Whereas all three groups of subjects were capable of manipulating the acoustic parameters that signal a shift in stress, there were some differences between the performance of the patient groups and that of the normal controls. Further, stress production deficits were more severe in LHD aphasic patients than in RHD patients. LHD speakers exhibited deficits in the control of both temporal and F0 cues. Their F0 disturbance appears to be secondary to a primary deficit in temporal control at the phrase or sentence level, as an increased number of continuation rises found for the LHD patients seemed to arise from lengthy pauses within sentences. Findings are highlighted to address the nature of breakdown in speech prosody and the competing views of prosodic lateralization.

Link to article

-- (Leonard, C., Baum, S., & Pell, M.) The effect of compressed speech on the ability of right-hemisphere-damaged individuals to use context. Cortex, v.37, 2001, pp. 327-344.

Abstract: The ability of RHD patients to use context under conditions of increased processing demands was examined. Subjects monitored for words in auditorily presented sentences of three context types (normal, semantically anomalous, and random) at three rates of speech: normal, 70% compressed (Experiment 1), and 60% compressed (Experiment 2). Effects of semantics and syntax were found for the RHD and normal groups under the normal rate of speech condition. Using compressed rates of speech, the effect of syntax disappeared, but the effect of semantics remained. Importantly, and contrary to expectations, the RHD group was similar to normals in continuing to demonstrate an effect of semantic context under conditions of increased processing demands. Results are discussed relative to contemporary theories of laterality, based on studies with normals, that suggest that the involvement of the left versus right hemisphere in context use may depend upon the type of contextual information being processed.

Link to article

Dr. Martha Crago
CRAGO, M. (Taylor, D.M., McAlpine, L., Crago, M.) Toward full empowerment in Native education: Unanticipated challenges. The Canadian Journal of Native Education, 21(1), 2001, pp. 75-83.

Abstract: With the growing empowerment in Native education, certain unanticipated consequences may arise which can threaten the full potential for Native visions of education. The more the heritage culture is emphasized in Native education, the more distanced from mainstream education it becomes. The result is a series of unanticipated consequences that need to be addressed. First, the question of differing standards becomes more salient and potentially difficult to resolve. Second, Native educators may come to lose sight of the unique aspects of their programs that they have fought so hard to achieve. We hope by raising these issues to facilitate the march toward a genuine Native vision of education.

Link to article

-- (Crago, M. & Allen, S.) Early finiteness in Inuktitut: The role of language structure and input. Language Acquisition, 9 (1), 2001, pp. 59-111.

Abstract: A stage of optional infinitive (OI) production has been identified in typically developing (TD) children learning languages that do not permit null subjects (Wexler (1994; 1998; 1999)), and this stage has been shown to be extended in at least English- and German-speaking children with specific language impairment (SLI; Rice, Noll, and Grimm (1997), Rice, Wexler, and Cleave (1995)). Although TD children learning null subject languages do not go through an OI stage (Bar-Shalom and Snyder (1997), Guasti (1993)), reports differ concerning whether children with SLI learning these languages go through this stage (Bortolini, Caselli, and Leonard (1997), Bottari, Cipriani, and Chilosi (1996)). In this article, we present evidence from Inuktitut, a null subject language not yet investigated with respect to OIs. We show that although TD children learning Inuktitut do not go through an OI stage, one child with SLI does go through an OI stage. In addition, the percentage of finite verb forms marked with an overt verbal inflection in Inuktitut child-directed speech (CDS) is strikingly high compared with that in English CDS. We discuss the implications of these results for theories of continuity, the initial stage of child grammar, and the effect of language structure and input on language acquisition.

Link to article

Dr. Rachel Mayberry
MAYBERRY, R.I. (Goldin-Meadow, S. & Mayberry, R. I.) How do profoundly deaf children learn to read? Learning Disabilities Research and Practice, v. 16, 2001, pp. 221-228.

Abstract: Reading requires two related, but separable, capabilities: (1) familiarity with a language, and (2) understanding the mapping between that language and the printed word (Chamberlain & Mayberry, 2000; Hoover & Gough, 1990). Children who are profoundly deaf are disadvantaged on both counts. Not surprisingly, then, reading is difficult for profoundly deaf children. But some deaf children do manage to read fluently. How? Are they simply the smartest of the crop, or do they have some strategy, or circumstance, that facilitates linking the written code with language? A priori one might guess that knowing American Sign Language (ASL) would interfere with learning to read English simply because ASL does not map in any systematic way onto English. However, recent research has suggested that individuals with good signing skills are not worse, and may even be better, readers than individuals with poor signing skills (Chamberlain & Mayberry, 2000). Thus, knowing a language (even if it is not the language captured in print) appears to facilitate learning to read. Nonetheless, skill in signing does not guarantee skill in reading—reading must be taught. The next frontier for reading research in deaf education is to understand how deaf readers map their knowledge of sign language onto print, and how instruction can best be used to turn signers into readers.

Link to article

Dr. Marc Pell
PELL, M.D. (Pell, M.D.) Influence of emotion and focus location on prosody in matched statements and questions. Journal of the Acoustical Society of America, 109 (4), 2001, 1668-1680.

Abstract: Preliminary data were collected on how emotional qualities of the voice (sad, happy, angry) influence the acoustic underpinnings of neutral sentences varying in location of intra-sentential focus (initial, final, no) and utterance "modality" (statement, question). Short (six syllable) and long (ten syllable) utterances exhibiting varying combinations of emotion, focus, and modality characteristics were analyzed for eight elderly speakers following administration of a controlled elicitation paradigm (story completion) and a speaker evaluation procedure. Duration and fundamental frequency (f0) parameters of recordings were scrutinized for "keyword" vowels within each token and for whole utterances. Results generally re-affirmed past accounts of how duration and f0 are encoded on key content words to mark linguistic focus in affectively neutral statements and questions for English. Acoustic data on three "global" parameters of the stimuli (speech rate, mean f0, f0 range) were also largely supportive of previous descriptions of how happy, sad, angry, and neutral utterances are differentiated in the speech signal. Important interactions between emotional and linguistic properties of the utterances emerged which were predominantly (although not exclusively) tied to the modulation of f0; speakers were notably constrained in conditions which required them to manipulate f0 parameters to express emotional and nonemotional intentions conjointly. Sentence length also had a meaningful impact on some of the measures gathered.

Link to article

Dr. Linda Polka
POLKA, L. (Bohn, O-S. & Polka, L.) Target spectral, dynamic spectral and temporal cues in infant perception of German vowels. Journal of the Acoustical Society of America, v.110, 2001, pp. 504-515.

Abstract: Previous studies of vowel perception have shown that adult speakers of American English and of North German identify native vowels by exploiting at least three types of acoustic information contained in consonant-vowel-consonant (CVC) syllables: target spectral information reflecting the articulatory target of the vowel, dynamic spectral information reflecting CV- and -VC coarticulation, and duration information. The present study examined the contribution of each of these three types of information to vowel perception in prelingual infants and adults using a discrimination task. Experiment 1 examined German adults' discrimination of four German vowel contrasts (see text), originally produced in /dVt/ syllables, in eight experimental conditions in which the type of vowel information was manipulated. Experiment 2 examined German-learning infants' discrimination of the same vowel contrasts using a comparable procedure. The results show that German adults and German-learning infants appear able to use either dynamic spectral information or target spectral information to discriminate contrasting vowels. With respect to duration information, the removal of this cue selectively affected the discriminability of two of the vowel contrasts for adults. However, for infants, removal of contrastive duration information had a larger effect on the discrimination of all contrasts tested.

Link to article

-- (Polka, L., Colantonio, C. & Sundara, M.) A cross-language comparison of /d/-/ð/ perception: Evidence for a new developmental pattern. Journal of the Acoustical Society of America, v.109, 2001, pp. 2190-2201.

Abstract: Previous studies have shown that infants perceptually differentiate certain non-native contrasts at 6–8 months but not at 10–12 months of age, whereas differentiation is evident at both ages in infants for whom the test contrasts are native. These findings reveal a language-specific bias to be emerging during the first year of life. A developmental decline is not observed for all non-native contrasts, but it has been consistently reported for every contrast in which language effects are observed in adults. In the present study differentiation of English /d–ð/ by English- and French-speaking adults and English- and French-learning infants at two ages (6–8 and 10–12 months) was compared using the conditioned headturn procedure. Two findings emerged. First, perceptual differentiation was unaffected by language experience in the first year of life, despite robust evidence of language effects in adulthood. Second, language experience had a facilitative effect on performance after 12 months, whereas performance remained unchanged in the absence of specific language experience. These data are clearly inconsistent with previous studies as well as predictions based on a conceptual framework proposed by Burnham [Appl. Psycholing. 7, 201–240 (1986)]. Factors contributing to these developmental patterns include the acoustic properties of /d–ð/, the phonotactic uniqueness of English /ð/, and the influence of lexical knowledge on phonetic processing. © 2001 Acoustical Society of America.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S. & Nowak, M.) The effect of target selection strategy on sound production learning. Journal of Speech, Language, and Hearing Research, v. 44, 2001, pp. 610-623.

Abstract: In this study, 48 children with moderate or severe delays in phonological ability received treatment for four phonemes, selected in accordance with either traditional or nontraditional target-selection criteria. Children who received treatment for phonemes that are early developing and associated with greater productive phonological knowledge showed greater progress toward acquisition of the target sounds than did children who received treatment for late-developing phonemes that were associated with little or no productive phonological knowledge. Between-group differences in generalization learning were not observed. Child enjoyment of therapy did not differ between groups, but parental satisfaction with treatment progress was greater for children in the traditional group than for children in the nontraditional group.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E., Ellis Weismer, S.) High-frequency verbs and verb diversity in the spontaneous speech of school-age children with specific language impairment. International Journal of Language and Communication Disorders, v. 36, 2001, pp. 221-244.

Abstract: Low verb diversity and heavy reliance on a small set of high-frequency 'general all purpose (GAP)' verbs have been reported to characterize specific language impairment (SLI) in preschool children. However, discrepancies exist about the severity of this deficit, particularly in whether these children's verb diversity is commensurate with their MLU level and whether verb diversity is more severely affected than general lexical diversity. Conflicting findings have been reported regarding the use of GAP verbs. This relatively large (n = 100) study extended the investigation of lexical diversity and high-frequency verb use to school-age children with SLI and NL peers and examined a particular hypothesis concerning the role of high-frequency verbs in language development. No differences were found between groups in general lexical diversity or verb diversity in samples of a set number of tokens. The results did not suggest that verb diversity constitutes an area of specific deficit in spontaneous production for children with SLI. SLI and NL groups were indistinguishable in high-frequency verb use. Extensive use of high-frequency verbs by both groups indicates that their use is part of normal development. Results are reported that support the hypothesis that high-frequency verbs act as prototypes for major meaning categories, permitting semantic and syntactic simplification with minimal losses in information value.

Link to article

2000

Shari Baum, PhD, Professor
Martha Crago, PhD, Associate Professor
Vince Gracco, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
James McNutt, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Susan Rvachew, PhD, Assistant Professor
Elin Thordardottir, PhD, Assistant Professor

Dr. Shari Baum
BAUM, S. (Béland, R., Peretz, I., Baum, S., & Valdois, S.) La sphère auditivo-vocale. In Seron & Van der Linden (Eds.), Traité de neuropsychologie clinique. Tome 1: L'Evaluation en neuropsychologie, 2000, Marseille, France: Solal.

Book information:
ISBN: 2-905580-90-9

Link to book

-- (Baum, S. & Leonard, C.) The role of sound and spelling in auditory word recognition: Further evidence from brain-damaged patients. Aphasiology, v.14, 2000, pp. 1055-1063.

Abstract: This follow-up investigation explored the effects of phonological and orthographic relatedness on auditory lexical access in left- and right-hemisphere-damaged individuals. Participants listened to prime-target pairs that shared word-initial phonology (e.g., definite-deaf), initial orthography (e.g., logic-log), both initial phonology and orthography (e.g., message-mess), or were unrelated (e.g., castle-green), presented at two different inter-stimulus intervals. All groups of subjects demonstrated facilitation of lexical decision latencies due to the combined influence of both orthography and phonology, confirming earlier findings concerning rime relations. The findings are briefly discussed in relation to the neural representation of formal lexical codes.

Link to article

-- (Baum, S. & McFarland, D.) Individual differences in speech adaptation to an artificial palate. Journal of the Acoustical Society of America, v.107, 2000, pp. 3572-3575.

Abstract: This preliminary investigation examined the ability of individual speakers to adapt to a structural perturbation to the oral environment in the production of [s]. In particular, the experiment explored whether previous evidence of relatively quick adaptation subsequent to intensive practice would be replicated, whether vowel environment would influence the degree of adaptation, whether adaptive strategies would carry over to normal productions and/or similar sounds (i.e., cause negative aftereffects), and whether adaptive strategies developed during the practice phase could be recalled 1 h later. Results of acoustic and perceptual analyses generally revealed improvement after practice, few consistent effects of vowel context, few negative aftereffects, and an absence of quick recall of adaptive strategies. Moreover, extensive individual differences were found in both the degree of initial perturbation and the extent of adaptation. Implications of the results for issues in speech adaptation are briefly discussed. © 2000 Acoustical Society of America.

Link to article

Dr. Martha Crago
CRAGO, M. (Paradis, J. & Crago, M.) Tense and Temporality: A Comparison Between Children Learning a Second Language and Children With SLI. Journal of Speech, Language, and Hearing Research, v. 43, 2000, pp. 834-848.

Abstract: This study compares the morphosyntax of children with SLI to the morphosyntax of children acquiring a second language (L2) to determine whether the optional infinitive phenomenon (M. Rice, K. Wexler, & P. Cleave, 1995; K. Wexler, 1994) is evident in both learner groups and to what extent cross-learner similarities exist. We analyzed spontaneous production data from French-speaking children with SLI, English-speaking L2 learners of French, and French-speaking controls, all approximately 7 years old. We examined the children's use of tense morphology, temporal adverbials, agreement morphology, and distributional contingencies associated with finiteness. Our findings indicate that the use of morphosyntax by children with SLI and by L2 children has significant similarities, although certain specific differences exist. Both the children with SLI and the L2 children demonstrate optional infinitive effects in their language use. These results have theoretical and clinical relevance. First, they suggest that the characterization of the optional infinitive phenomenon in normal development as a consequence of very early neurological change may be too restrictive. Our data appear to indicate that the mechanism underlying the optional infinitive phenomenon extends to normal (second) language learning after the primary acquisition years. Second, they indicate that tense-marking difficulty may not be an adequate clinical marker of SLI when comparing children with impairment to both monolingual and bilingual peers. A more specific clinical marker would be more effective in diagnosing disordered populations in a multilingual context.

Link to article

Dr. Rachel Mayberry
MAYBERRY, R. I. (Chamberlain, C., Morford, J. & Mayberry, R. I.) (eds.) Language Acquisition by Eye, Mahwah, NJ, Lawrence Erlbaum and Associates, 2000, pp. xiii, 276.

Book information:
ISBN-10: 0805829377
ISBN-13: 978-0805829372

Link to book

-- (Marentette, P. & Mayberry, R. I.) Principles for an Emerging Phonological System: A Case Study of Acquisition of ASL, in Language Acquisition by Eye, C. Chamberlain, J. Morford & R. I. Mayberry, eds., Mahwah, NJ, Lawrence Erlbaum and Associates, 2000, pp. 71-90.

Book information:
ISBN-10: 0805829377
ISBN-13: 978-0805829372

Link to book

-- (Morford, J. P. & Mayberry, R. I.), A Reexamination of "Early Exposure" and Its Implications for Language Acquisition by Eye, in Language Acquisition by Eye, C. Chamberlain, J. Morford & R. I. Mayberry, eds., Mahwah, NJ, Lawrence Erlbaum and Associates, 2000, pp. 111-128.

Book information:
ISBN-10: 0805829377
ISBN-13: 978-0805829372

Link to book

-- (Chamberlain, C. & Mayberry, R. I.), Theorizing about the Relationship between ASL and Reading, in Language Acquisition by Eye, C. Chamberlain, J. Morford & R. I. Mayberry, eds., Mahwah, NJ, Lawrence Erlbaum and Associates, 2000, pp. 221-260.

Book information:
ISBN-10: 0805829377
ISBN-13: 978-0805829372

Link to book

-- (Mayberry, R. I. & Jaques, J.), Gesture Production During Stuttered Speech: Insights into the Nature of Gesture-Speech Integration, in, Language and Gesture, D. McNeill, ed., Cambridge, Cambridge University Press, 2000, pp. 199-213.

Book information:
Online ISBN: 9780511620850
Hardback ISBN: 9780521771665
Paperback ISBN: 9780521777612

Link to chapter

-- (Mayberry, R. I. & Nicoladis, E.) Gesture reflects language development: Evidence from bilingual children. Current Directions in Psychological Science, v. 9, 2000, pp. 192-196.

Abstract: There is a growing awareness that language and gesture are deeply intertwined in the spontaneous expression of adults. Although some research suggests that children use gesture independently of speech, there is scant research on how language and gesture develop in children older than 2 years. We report here on a longitudinal investigation of the relation between gesture and language development in French-English bilingual children from 2 to 3 1/2 years old. The specific gesture types of iconics and beats correlated with the development of the children's two languages, whereas pointing types of gestures generally did not. The onset of iconic and beat gestures coincided with the onset of sentencelike utterances separately in each of the children's two languages. The findings show that gesture is related to language development rather than being independent from it. Contrasting theories about how gesture is related to language development are discussed.

Link to article

Dr. Marc Pell
PELL, M.D. (Leonard, C., Baum, S.R., & Pell, M.D.) Context use by right-hemisphere-damaged individuals under a compressed speech condition. Brain and Cognition, v. 43, 2000, pp. 315-319.

Abstract: The effect of increased processing demands on context use by RHD individuals was examined using a word-monitoring task. Subjects were required to monitor for a target word in sentences that were either normal, semantically anomalous, or both syntactically and semantically anomalous. Stimuli were presented at two rates of speech: normal and compressed to 70% of normal. Contrary to expectations, the RHD group performed similarly to normals in demonstrating an effect of context at both rates of speech. Results are discussed relative to recent studies of normal brain functioning that suggest that the involvement of the LH versus the RH in context use depends upon the type of contextual information being processed.

Link to article

1999

Shari Baum, PhD, Associate Professor
Martha Crago, PhD, Associate Professor
Vince Gracco, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
James McNutt, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Elin Thordardottir, PhD, Assistant Professor

Dr. Shari Baum
Baum, S. Compensation for jaw fixation by aphasic patients under conditions of increased articulatory demands: A follow-up study. Aphasiology, v. 13, 1999, pp. 513-527.

Abstract: This investigation explored the ability of eight non-fluent aphasic patients and 10 normal control speakers to compensate for fixation of the jaw by a bite block in the production of vowels and fricative consonants. The articulatory demands were increased relative to production of isolated syllables by eliciting stimuli five times in succession at a rapid rate of speech. Acoustic analyses of the vowels and fricatives revealed comparable patterns in both speaker groups, demonstrating (incomplete) compensation for the bite-block perturbation. The results confirm earlier findings of relatively normal articulatory compensation in non-fluent aphasic patients, extending them to conditions of increased articulatory demands. Implications for the role of left hemisphere cortical structures in speech adaptation are briefly considered.

Link to article

-- (Baum, S. & Boyczuk, J.) Speech timing subsequent to brain damage: Effects of utterance length and complexity. Brain and Language, v. 67, 1999, pp. 30-45.

Abstract: Acoustic analyses of syllable durations were conducted in order to address several hypotheses concerning deficits in the control of speech timing subsequent to focal brain damage. Groups of nonfluent and fluent aphasics, right-hemisphere-damaged patients, and normal controls produced monosyllabic root syllables in medial and final position in the context of short and long sentences and syntactically simple and complex sentences. Durations of the target syllable as a proportion of the utterance were compared across contexts and groups. Somewhat surprisingly, the results revealed relatively normal temporal patterns in all subject groups, with the main exception emerging for the nonfluent aphasic patients who failed to demonstrate normal phrase-final lengthening effects. Implications of the findings for theories of temporal control in brain-damaged patients are considered.

Link to article

-- (Baum, S. & Leonard, C.) Automatic versus strategic effects of phonology and orthography on auditory lexical access in brain-damaged patients as a function of interstimulus interval. Cortex, v. 35, 1999, pp. 647-660.

Abstract: The influence of both phonological and orthographic information on auditory lexical access was examined in left- and right-hemisphere-damaged individuals using a lexical decision paradigm. Subjects were presented with prime-target pairs that were either phonologically related (tooth-youth), orthographically related (touch-couch), both phonologically and orthographically related (blood-flood), or unrelated (bill-tent), at two inter-stimulus intervals (ISI) – 100 ms and 750 ms – to tap more automatic versus more strategic processing. All groups demonstrated effects of orthography at both ISIs (facilitory at 100 ms ISI and inhibitory at 750 ms ISI), supporting the findings by Leonard and Baum (1997) that effects of orthography emerge independent of site of brain damage and suggesting that orthographic effects in auditory word recognition tend to be largely strategic. A facilitory effect of phonology was also found for all groups at both ISIs. The findings are discussed in relation to theories of lexical activation in brain-damaged individuals.

Link to article

-- (Baum, S. & Pell, M.) The neural bases of prosody: Insights from lesion studies and neuroimaging. Aphasiology, v.13, 1999, pp. 581-608.

Link to article

-- (Boyczuk, J. & Baum, S.) The influence of neighborhood density on phonetic categorization in aphasia. Brain and Language, v.67, 1999, pp. 46-70.

Abstract: The present study examined the contribution of lexically based sources of information to acoustic–phonetic processing in fluent and nonfluent aphasic subjects and age-matched normals. To this end, two phonetic identification experiments were conducted which required subjects to label syllable-initial bilabial stop consonants varying along a VOT continuum as either /b/ or /p/. Factors that were controlled included the lexical status (word/nonword) and neighborhood density values corresponding to the two possible syllable interpretations in each set of stimuli. Findings indicated that all subject groups were influenced by both lexical status and neighborhood density in making phonetic categorizations. Results are discussed with respect to theories of acoustic–phonetic perception and lexical access in normal and aphasic populations.

Link to article

Dr. Vince Gracco
Gracco, V.L., & Munhall, K.G. Neurophysiology of Speech Production. In Fabbro, F. (Ed.) Concise Encyclopedia of Language Pathology. Amsterdam, The Netherlands: Elsevier Science Publishers, 1999.

Book information:
ISBN: 9780080431512

Link to book

-- (Löfqvist, A., & Gracco, V.L.) Interarticulator programming in VCV sequences: lip and tongue movements. Journal of the Acoustical Society of America, v. 105 (3), 1999, pp. 1864-1876.

Abstract: This study examined the temporal phasing of tongue and lip movements in vowel–consonant–vowel sequences where the consonant is a bilabial stop consonant /p, b/ and the vowels one of /i, a, u/; only asymmetrical vowel contexts were included in the analysis. Four subjects participated. Articulatory movements were recorded using a magnetometer system. The onset of the tongue movement from the first to the second vowel almost always occurred before the oral closure. Most of the tongue movement trajectory from the first to the second vowel took place during the oral closure for the stop. For all subjects, the onset of the tongue movement occurred earlier with respect to the onset of the lip closing movement as the tongue movement trajectory increased. The influence of consonant voicing and vowel context on interarticulator timing and tongue movement kinematics varied across subjects. Overall, the results are compatible with the hypothesis that there is a temporal window before the oral closure for the stop during which the tongue movement can start. A very early onset of the tongue movement relative to the stop closure together with an extensive movement before the closure would most likely produce an extra vowel sound before the closure.

Link to article

Dr. Rachel Mayberry
Mayberry, R.I., (Nicoladis, E., Mayberry, R. & Genesee, F.) Gesture and early bilingual development. Developmental Psychology, v. 35, 1999, pp. 514-526.

Abstract: The relationship between speech and gestural proficiency was investigated longitudinally (from 2 years to 3 years 6 months, at 6-month intervals) in 5 French-English bilingual boys with varying proficiency in their 2 languages. Because of their different levels of proficiency in the 2 languages at the same age, these children's data were used to examine the relative contribution of language and cognitive development to gestural development. In terms of rate of gesture production, rate of gesture production with speech, and meaning of gesture and speech, the children used gestures much like adults from 2 years on. In contrast, the use of iconic and beat gestures showed differential development in the children's 2 languages as a function of mean length of utterance. These data suggest that the development of these kinds of gestures may be more closely linked to language development than other kinds (such as points). Reasons why this might be so are discussed.

Link to article

Dr. Marc Pell
Pell, M.D. The temporal organization of affective and non-affective speech in patients with right-hemisphere infarcts. Cortex, v. 35 (4), 1999, pp. 455-477.

Abstract: To evaluate the right hemisphere's role in encoding speech prosody, an acoustic investigation of timing characteristics was undertaken in speakers with and without focal right-hemisphere damage (RHD) following cerebrovascular accident. Utterances varying along different prosodic dimensions (emphasis, emotion) were elicited from each speaker using a story completion paradigm, and measures of utterance rate and vowel duration were computed. Results demonstrated parallelism in how RHD and healthy individuals encoded the temporal correlates of emphasis in most experimental conditions. Differences in how RHD speakers employed temporal cues to specify some aspects of prosodic meaning (especially emotional content) were observed and corresponded to a reduction in the perceptibility of prosodic meanings when conveyed by the RHD speakers. Findings indicate that RHD individuals are most disturbed when expressing prosodic representations that vary in a graded (rather than categorical) manner in the speech signal (Blonder, Pickering, Heath et al., 1995; Pell, 1999a).

Link to article

-- (Pell, M.D.) Fundamental frequency encoding of linguistic and emotional prosody by right hemisphere-damaged speakers. Brain and Language, v. 69 (2), 1999, pp. 161-192.

Abstract: To illuminate the nature of the right hemisphere's involvement in expressive prosodic functions, a story completion task was administered to matched groups of right hemisphere-damaged (RHD) and nonneurological control subjects. Utterances which simultaneously specified three prosodic distinctions (emphatic stress, sentence modality, emotional tone) were elicited from each subject group and then subjected to acoustic analysis to examine various fundamental frequency (F(0)) attributes of the stimuli. Results indicated that RHD speakers tended to produce F(0) patterns that resembled normal productions in overall shape, but with significantly less F(0) variation. The RHD patients were also less reliable than normal speakers at transmitting emphasis or emotional contrasts when judged from the listener's perspective. Examination of the results across a wide variety of stimulus types pointed to a deficit in successfully implementing continuous aspects of F(0) patterns following right hemisphere insult.

Link to article

-- (Baum, S. & Pell, M.) The neural bases of prosody: Insights from lesion studies and neuroimaging. Aphasiology, v.13, 1999, pp. 581-608.

Abstract: Temporal discrimination thresholds (TDT) for recognition of paired sensory (tactile, auditory and visual) stimuli given over a wide range of time intervals were assessed in 44 patients with Parkinson's disease (PD) and 20 age-matched normal subjects. A significant increment in TDT for all three sensory modalities was found in PD patients compared with controls. This abnormality was greatly attenuated for about 2 h by a single levodopa/carbidopa (250/25 mg) tablet. A significant correlation was found between disease severity as assessed clinically and TDT. Patients with more severe PD had higher TDT values. The study of the peripheral median nerve and cortical somatosensory evoked potential recovery curves following double electrical stimulation of the index finger showed no differences between patients and control subjects, nor changes from off to on motor state which could explain the findings. These results indicate the existence of an abnormality of timing mechanisms in PD.

Link to article

-- (Pell, M.D.) Some acoustic correlates of perceptually 'flat affect' in right-hemisphere-damaged speakers. Brain and Cognition, v. 40 (1), 1999, pp. 219-223. (Short Paper).

Abstract: Data reported in an earlier study of prosody production (Pell, 1997) in 10 right-hemisphere-damaged (RHD) individuals (aged 31-83 yrs) were analyzed further to explore potential acoustic differences between 10 normal speakers and those perceived to be emotionally "flat." Results indicate that RHD patients were significantly less reliable at transmitting emotional meanings through prosody to a group of normal listeners than age-matched normal speakers. Furthermore, when Ss obtaining low emotional ratings were considered separately, perceptually "flat" speakers were shown to employ substantially fewer acoustic cues to emotional contrasts than normal speakers, a pattern consistent with the impressionistic data.

Link to article

1998

Shari Baum, PhD, Associate Professor
Martha Crago, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
James McNutt, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Elin Thordardottir, PhD, Assistant Professor

Dr. Shari Baum
BAUM, S. Anticipatory coarticulation in aphasia: effects of utterance complexity, Brain & Language, v. 63, 1998, pp. 357-380.

Abstract: The magnitude and extent of anticipatory coarticulation were examined in groups of fluent and nonfluent aphasic patients and normal control subjects. One- and two-syllable target utterances were elicited at slow and fast rates of speech with or without a consonant intervening between the target consonant and vowel, and with or without a preceding schwa, to manipulate utterance complexity. Acoustic analyses (F2 and centroid frequencies) revealed that both groups of aphasic patients exhibited relatively normal patterns of anticipatory coarticulation. However, small but significant differences among the groups emerged in certain conditions. Surprisingly, increased utterance complexity was not found to reduce coarticulatory effects to a greater degree in the nonfluent relative to the fluent aphasic group. Perceptual tests largely confirmed the acoustic analyses.

Link to article

-- The role of fundamental frequency and duration in the perception of linguistic stress by individuals with brain damage, Journal of Speech, Language & Hearing Research, v. 41, 1998, pp. 31-40.

Abstract: Two tests of the ability of individuals with left-hemisphere damage (LHD) and right-hemisphere damage (RHD) and non-brain-damaged participants to identify phonemic and emphatic stress contrasts were undertaken. From a set of naturally produced base stimuli, two additional stimulus sets were derived. In one, fundamental frequency (F0) cues to stress were neutralized, whereas in the other duration cues were effectively neutralized. Results demonstrated that individuals with LHD were unable to identify phonemic stress contrasts with better-than-chance accuracy; individuals with RHD performed worse than normal participants but significantly better than the patients with LHD--particularly with the original full-cue stimuli. All groups performed better on the emphatic stress subtest, with the scores of only the patients with LHD at chance level for the F0-neutralized stimuli. The findings are considered in relation to hypotheses concerning the hemispheric lateralization of prosodic processing, particularly with respect to a hypothesis that posits differential lateralization for specific acoustic parameters.

Link to article

-- (Leonard, C. & Baum, S.) On-line evidence for context use by right-brain-damaged patients, Journal of Cognitive Neuroscience, v. 10, 1998, pp. 499-508.

Abstract: The ability of right-brain-damaged (RBD) patients to use on-line contextual information in a word-monitoring task was examined. Subjects were required to monitor for target words in the contexts of both normal and semantically anomalous sentences. Similar to previous studies with normals (e.g., Marslen-Wilson & Tyler, 1980), the semantic integrity of the context was influential in the word-recognition process. Importantly, the RBD patients performed similarly to normals in showing context effects. These results were interpreted as substantiating the findings of Leonard, Waters, and Caplan (1997a, 1997b) that RBD patients do not present with a specific deficit in the use of contextual information. The results are discussed in terms of proposals that suggest that an impaired ability to use contextual information by RBD patients may be a function of increased processing demands.

Link to article

Dr. Martha Crago
CRAGO, M. (Hough-Eyamie, W. and Crago, M. B.) Three interactional portraits from Mohawk, Inuit, and White Canadian cultures. Selected papers from the VII International Congress for the Study of Child Language, A. Aksu-Koc, E. Erguvanli-Taylan, A. Sumru Ozsoy, and A. Küntay, eds., Istanbul, Turkey: University of Bogazici Press, 1998, pp. 124-139.
-- (Crago, M., Chen, C., and Genesee, F.) Power and deference: Decision-making in bilingual Inuit homes. Journal of Just and Caring Education, v. 4, no. 1, 1998, pp. 78-95.

Abstract: Parents in communities experiencing rapid language and culture change face particular discourse issues as they construct their homes' language and culture. This article discusses particular language decisions and influences faced by families from two Inuit communities in Arctic Quebec. In most homes, there were fluid boundaries with no conscious strategies for language use aimed at children expected to learn two languages.

Link to article

-- (Crago, M. and Allen, S.) Issues of complexity in Inuktitut and English child-directed speech. Proceedings of the 29th Annual Child Language Research Forum, In E. Clark (Ed.), Stanford, CA: CSLI, 1998, pp. 37-46.
Dr. Rachel Mayberry
MAYBERRY, R. The critical period for language acquisition and the deaf child's language comprehension: A psycholinguistic approach. Bulletin d'Audiophonologie: Annales Scientifiques de L'Universite de Franche-Comte, v. 15, 1998, pp. 349-358.

Abstract: The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.

Link to article

-- (Mayberry, R., Jaques, J. and DeDe, G.) What stuttering reveals about the development of the gesture-speech relationship. New Directions for Child Development, v. 79, 1998, pp. 77-87.

Abstract: (from the chapter) Examined 2 hypotheses regarding the nature of the gesture-speech relationship. One hypothesis is that gesture and speech are separate communication systems and that the links that exist between the 2 modes are governed by the requirements of speech expression. Secondly, the independent systems hypothesis holds that gesture is an auxiliary system with respect to speech and that it functions as an aid to speech during temporary or sporadic failures. An alternative hypothesis is that gesture and speech form an integrated system that functions as a single communication stream. Two studies are presented. In Exp 1, 6 adult chronic stutterers and 6 non-stutterers narrated events depicted in an animated cartoon to an unfamiliar and neutral listener. Upon analyses of speech disfluencies, differences were found in the frequency with which Ss produced stuttered disfluencies. Ss who stuttered produced half the number of gestures produced by controls. A second study was conducted to replicate and extend these findings in 2 11-yr-old males diagnosed with a severe level of chronic stuttering and 2 age- and sex-matched controls. While all the children gestured significantly less frequently than the adult Ss, control Ss produced more total gestures than the children who stuttered.

Link to article

Dr. Marc Pell
PELL, M.D. Recognition of prosody following unilateral brain lesion: influence of functional and structural attributes of prosodic contours. Neuropsychologia, v. 36 (8), 1998, pp. 701-715.

Abstract: The perception of prosodic distinctions by adults with unilateral right- (RHD) and left-hemisphere (LHD) damage and subjects without brain injury was assessed through six tasks that varied both functional (i.e. linguistic/emotional) and structural (i.e. acoustic) attributes of a common set of base stimuli. Three tasks explored the subjects' ability to perceive local prosodic markers associated with emphatic stress (Focus Perception condition) and three tasks examined the comprehension of emotional-prosodic meanings by the same listeners (Emotion Perception condition). Within each condition, an initial task measured the subjects' ability to recognize each "type" of prosody when all potential acoustic features (but no semantic features) signalled the target response (Baseline). Two additional tasks investigated the extent to which each group's performance on the Baseline task was influenced by duration (D-Neutral) or fundamental frequency (F-Neutral) parameters of the stimuli within each condition. Results revealed that both RHD and LHD patients were impaired, relative to healthy control subjects, in interpreting the emotional meaning of prosodic contours, but that only LHD patients displayed subnormal capacity to perceive linguistic (emphatic) specifications via prosodic cues. The performance of the RHD and LHD patients was also selectively disturbed when certain acoustic properties of the stimuli were manipulated, suggesting that both functional and structural attributes of prosodic patterns may be determinants of prosody lateralization.

Link to article

Dr. Linda Polka
POLKA, L. (Werker, J. F., Shi, R., Desjardins, R., Pegg, J. E., Polka, L., and Patterson, M.) Three methods for testing infant speech perception, in Perceptual Development: Visual, Auditory, and Speech Perception in Infancy, A. M. Slater (Ed.), London: UCL Press, 1998, pp. 389-420.

Book information:
ISBN-10: 0863778518
ISBN-13: 978-0863778513

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Elin T. Thordardottir & Ellis Weismer, S.) Mean length of utterance and other language sample measures in early Icelandic. First Language, v. 18, 1998, pp. 1-32.

Abstract: Adaptations of the widely used MLU measure have been developed in several languages. Such adaptations require numerous modifications, especially in languages that are highly inflected. This study involved the development of a systematic procedure for coding language samples from Icelandic toddlers. Results are reported in terms of mean length of utterance in morphemes (MLU), total vocabulary, total number of different words and type-token ratio (TTR). These measures are analogous to their English counterparts, though not directly comparable. The Icelandic MLU measure was found to be developmentally sensitive in the age range of the study, which included a cross-sectional sample of 36 children aged 15 to 36 months. MLU correlated more strongly with sentence complexity than did age. Consistent with studies in Dutch and Irish, MLU in morphemes was very highly correlated with MLU in words in normally developing children. This relationship remains to be tested in children with language impairments. A secondary goal of this study was to provide descriptive data on the early acquisition of inflectional morphology in Icelandic, derived from language sample analysis.

Link to article

1997

Shari Baum, PhD, Associate Professor
Martha Crago, PhD, Associate Professor
Tanya Gallagher, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
James McNutt, PhD, Associate Professor
Marc Pell, PhD, Assistant Professor
Linda Polka, PhD, Associate Professor
Gloria Waters, PhD, Associate Professor
Kenneth Watkin, PhD, Associate Professor

Dr. Shari Baum
BAUM, S. (Baum, S.) Phonological, semantic, and mediated priming in aphasia. Brain & Language, v. 60, 1997, pp. 347-359.

Abstract: An auditory lexical decision task was conducted to examine rhyme, semantic, and mediated priming in nonfluent and fluent aphasic patients and normal controls. Overall, monosyllabic word targets were responded to faster when preceded by rhyming word and nonword primes than unrelated primes. Similarly, semantically related primes facilitated lexical decisions to word targets. No evidence of mediated priming emerged. Results for individual subjects suggest differences in patterns across the subject groups. Implications of the findings for the integrity of lexical access in aphasic patients are considered.

Link to article

-- (Baum, S., Kim, J., & Katz, W.) Compensation for jaw fixation by aphasic patients. Brain & Language, v. 56, 1997, pp. 354-376.

Abstract: The ability to compensate for fixation of the jaw by a bite block was investigated in 6 nonfluent aphasics, 6 fluent aphasics, and 10 normal control subjects. Acoustic analyses of the vowels [i u a ae] and fricatives [s ʃ] revealed substantial but incomplete compensation for the perturbation in all three subject groups. Perceptual identification scores and quality ratings by naive and phonetically trained listeners indicated poorer identification of the high vowels [i u] under compensatory conditions relative to normal production. Of particular interest was the fact that all three groups of subjects exhibited similar patterns of results. The findings suggest that any deficit in speech motor programming demonstrated by the nonfluent aphasic patients did not affect compensatory abilities. Results are discussed with respect to normal speech adaptation skills and the nature of articulatory breakdown in nonfluent aphasia.

Link to article

-- (Baum, S. & McFarland, D.) The development of speech adaptation to an artificial palate. Journal of the Acoustical Society of America, v. 102, 1997, pp. 2353-2359.

Abstract: An investigation of adaptation to palatal modification in [s] production was conducted using acoustic and perceptual analyses. The experiment assessed whether adaptation would occur subsequent to a brief period of intensive, target-specific practice. Productions of [sa] were elicited at five time intervals, 15 min apart, with an artificial palate in place. Between measurement intervals, subjects read [s]-laden passages to promote adaptation. Results revealed improvement in both acoustic and perceptual measures at the final time interval relative to the initial measurement period. Interestingly, the data also suggested changes to normal (unperturbed) articulation patterns during the same interval. Results are discussed in relation to the development of speech adaptation to a structural modification of the oral cavity.

Link to article

-- (Baum, S. & Pell, M.) Production of affective and linguistic prosody by brain-damaged patients. Aphasiology, v. 11, 1997, pp. 177-198.

Abstract: To test a number of hypotheses concerning the functional lateralization of speech prosody, the ability of unilaterally right-hemisphere-damaged (RHD), unilaterally left-hemisphere-damaged (LHD), and age-matched control subjects (NC) to produce linguistic and affective prosodic contrasts at the sentence level was assessed via acoustic analysis. Multiple aspects of suprasegmental processing were explored, including a manipulation of the type of elicitation task employed (repetition vs reading) and the amount of linguistic structure provided in experimental stimuli (stimuli were either speech-filtered, nonsensical, or semantically well formed). In general, the results demonstrated that both RHD and LHD patients were able to appropriately utilize the acoustic parameters examined (duration, fundamental frequency (F0), amplitude) to differentiate both linguistic and affective sentence types in a manner comparable to NC speakers. Some irregularities in the global modulation of F0 and amplitude by RHD speakers were noted, however. Overall, the present findings do not provide support for previous claims that the right hemisphere is specifically engaged in the production of affective prosody. Alternative models of prosodic processing are noted.

Link to article

-- (Baum, S., Pell, M., Leonard, C., & Gordon, J.) The ability of right- and left-hemisphere-damaged individuals to produce and interpret prosodic cues marking phrasal boundaries. Language and Speech, v. 40, 1997, pp. 313-330.

Abstract: Two experiments were conducted with the purpose of investigating the ability of right- and left-hemisphere-damaged individuals to produce and perceive the acoustic correlates to phrase boundaries. In the production experiment, the utterance pink and black and green was elicited in three different conditions corresponding to different arrangements of colored squares. Acoustic analyses revealed that both left- and right-hemisphere-damaged patients exhibited fewer of the expected acoustic patterns in their productions than did normal control subjects. The reduction in acoustic cues to phrase boundaries in the utterances of both patient groups was perceptually salient to three trained listeners. The perception experiment demonstrated a significant impairment in the ability of both left-hemisphere-damaged and right-hemisphere-damaged individuals to perceive phrasal groupings. Results are discussed in relation to current hypotheses concerning the cerebral lateralization of speech prosody.

Link to article

-- (Leonard, C. & Baum, S.) The influence of phonological and orthographic information on auditory lexical access in brain-damaged patients: A preliminary investigation. Aphasiology, v. 11, 1997, pp. 1031-1041.

Abstract: The effects of phonology and orthography on auditory lexical access were examined in fluent and non-fluent aphasics and right brain-damaged patients using an auditory lexical decision task. An effect of orthography independent of brain damage was suggested by the findings that, overall, responses were faster to words preceded by primes that were both phonologically and orthographically related to the target than to those that were unrelated, whereas phonologically related primes alone did not facilitate reaction times. Responses were also slower relative to the unrelated condition to targets that were orthographically but not phonologically related to their primes. These results were interpreted as counter-evidence to the claim that orthographic effects are lateralized to the left hemisphere (Zecker et al. 1986). The results concerning the effect of phonology were equivocal.

Link to article

-- (Pell, M. & Baum, S.) Unilateral brain damage and the acoustic cues to prosody: Are prosodic comprehension deficits perceptually based? Brain & Language, v. 57, 1997, pp. 195-214.

Abstract: Stimuli from two previously presented comprehension tasks of affective and linguistic prosody (Pell & Baum, 1997) were analyzed acoustically and subjected to several discriminant function analyses, following Van Lancker and Sidtis (1992). An analysis of the errors made on these tasks by left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) subjects examined whether each clinical group relied on specific (and potentially different) acoustic features in comprehending prosodic stimuli (Van Lancker & Sidtis, 1992). Analyses also indicated whether the brain-damaged patients tested in Pell and Baum (1997) exhibited perceptual impairments in the processing of intonation. Acoustic analyses of the utterances reaffirmed the importance of F0 cues in signaling affective and linguistic prosody. Analyses of subjects' affective misclassifications did not suggest that LHD and RHD patients were biased by different sets of the acoustic features to prosody in judging their meaning, in contrast to Van Lancker and Sidtis (1992). However, qualitative differences were noted in the ability of LHD and RHD patients to identify linguistic prosody, indicating that LHD subjects may be specifically impaired in decoding linguistically defined categorical features of prosodic patterns.

Link to article

-- (Pell, M. & Baum, S.) The ability to perceive and comprehend intonation in linguistic and affective contexts by brain-damaged adults. Brain & Language, v. 57, 1997, pp. 80-99.

Abstract: Receptive tasks of linguistic and affective prosody were administered to 9 right-hemisphere-damaged (RHD), 10 left-hemisphere-damaged (LHD), and 10 age-matched control (NC) subjects. Two tasks measured subjects' ability to discriminate utterances based solely on prosodic cues, and six tasks required subjects to identify linguistic or affective intonational meanings. Identification tasks manipulated the degree to which the auditory stimuli were structured linguistically, presenting speech-filtered, nonsensical, and semantically well-formed utterances in different tasks. Neither patient group was impaired relative to normals in discriminating prosodic patterns or recognizing affective tone conveyed suprasegmentally, suggesting that neither the LHD nor the RHD patients displayed a receptive disturbance for emotional prosody. The LHD group, however, was differentially impaired on linguistic rather than emotional tasks and performed significantly worse than the NC group on linguistic tasks even when semantic information biased the target response.

Link to article

Dr. Martha Crago
CRAGO, M. (Crago, M. & Westernoff, F.) CASLPA position paper on speech-language pathology and audiology in the multicultural, multilingual context. Journal of Speech-Language Pathology and Audiology, v. 21(3), 1997, pp. 223-224.
-- (Crago, M., Eriks-Brophy, A., Pesco, D., & McAlpine, L.) Culturally-based miscommunication in classroom interaction. Language, Speech, Hearing Services in the Schools, v. 28 (3), 1997, pp. 245-254.

Abstract: This article identifies a number of ways teachers and students can misunderstand and confuse each other with their language-based communications in the classroom. Cultural variations in the formats for teacher-led lessons and child-generated personal experience narratives are described, using research findings from Canadian Inuit and Algonquin communities. The importance of practitioners learning from miscommunications is stressed.

Link to article

-- (Crago, M., Allen, S., & Hough-Eyamie, W.) Exploring innateness through cultural and linguistic variation: An Inuit example. In M. Gopnik (Ed.), The biological basis of language, Oxford, UK: Oxford University Press, 1997, pp. 70-90.
-- (McAlpine, L., & Crago, M.) Who's important...here anyway? Co-constructing research across cultures. In H. Christiansen, L. Goulet, C. Krentz, & M. Maeers (Eds.), Recreating relationships: Collaborative learning and educational reform, Buffalo, NY: State University of New York Press, 1997, pp. 105-115.

Book description:
ISBN10: 0-7914-3304-8
ISBN13: 978-0-7914-3304-1

Link to book

-- (Crago, M., & Allen, S.) Issues of complexity in Inuktitut and English child directed speech. In E. Clark (Ed.) Proceedings of the Stanford Child Language Research Forum, Stanford, CA: CSLI, 1997, pp. 37-46.
-- (Crago, M., & Allen, S.) Linguistic and cultural aspects of simplicity and complexity in Inuktitut child-directed speech. Proceedings of the Boston University Conference on Language Development, Somerville, MA: Cascadilla Press, v. 1, 1997, pp. 91-102.
Dr. Tanya Gallagher
GALLAGHER, T. (Gallagher, T.) National Initiatives in Treatment Outcomes Measurement. In C. Frattali (Ed.) Outcome Measurement in Speech-Language Pathology, New York, NY: Thieme Medical Publishers, 1997, pp. 527-557.

Book information:
ISBN (Americas): 9780865777187
ISBN (EUR, Asia, Africa, AUS): 9783131097316

Link to book

-- (Swigert, N., Baum, H. & Gallagher, T.) Outcomes and speech/language therapy. Rehab Management: The Interdisciplinary Journal of Rehabilitation, 1997, pp. 130-133.
-- (Baum, H., Swigert, N. & Gallagher, T.) Treatment outcomes data for adults in health care environments. ASHA, v. 39 (1), 1997, pp. 26-31.

Link to article

-- (Gallagher, T. & Watkin, K.) 3-D Ultrasonic fetal neuroimaging and familial language disorders: In utero brain development. Journal of Neurolinguistics, v. 10 (2), 1997, pp. 187-201.

Abstract: In vivo brain development of four fetuses from 24 to 32 weeks gestational age (GA) were compared. One fetus had a positive history of familial language impairment (+FLI) and three had negative histories of FLI (-FLI). All fetuses were boys and their mothers had low risk pregnancies. RHRHD ultrasonographic imaging was used to collect data at 24, 28 and 32 weeks GA. Volumes and growth rates for the left and right cerebral hemispheres and five subdivisions within each hemisphere were computed and compared. Results indicated that total brain volumes for all fetuses were within normal limits, but that patterns of growth among subdivisions of the inferior anterior and inferior medial regions of the hemispheres differed from 24 to 28 weeks GA. Limited growth was observed in the +FLI fetus compared with the -FLI fetuses in these regions of the left hemisphere during this period. Growth throughout development was more symmetrical between hemispheres for the -FLI fetuses. Results are consistent with hypotheses that +FLI fetuses experience an intrauterine environment that results in developmental differences among brain regions frequently associated with language performance. These results suggest that +FLI may involve genetic developmental timing code differences that place children at risk for later language learning problems.

Link to article

Dr. Rachel Mayberry
MAYBERRY, R.I. (Mayberry, R.I. and Shenker, R.C.) Gesture mirrors speech motor control in stutterers, in Speech Motor Production and Fluency Disorders, W. Hulstijn, H. Peters & P. van Lieshout (Eds.), Elsevier Science, 1997, pp. 183-190.

Book information:
ISBN-10: 044482460X
ISBN-13: 978-0444824608

Link to book

Dr. Marc Pell
PELL, M. (Baum, S., Pell, M., Leonard, C., & Gordon, J.) The ability of right- and left-hemisphere-damaged individuals to produce and interpret prosodic cues marking phrasal boundaries. Language and Speech, v. 40, 1997, pp. 313-330.

Abstract: Two experiments were conducted with the purpose of investigating the ability of right- and left-hemisphere-damaged individuals to produce and perceive the acoustic correlates to phrase boundaries. In the production experiment, the utterance pink and black and green was elicited in three different conditions corresponding to different arrangements of colored squares. Acoustic analyses revealed that both left- and right-hemisphere-damaged patients exhibited fewer of the expected acoustic patterns in their productions than did normal control subjects. The reduction in acoustic cues to phrase boundaries in the utterances of both patient groups was perceptually salient to three trained listeners. The perception experiment demonstrated a significant impairment in the ability of both left-hemisphere-damaged and right-hemisphere-damaged individuals to perceive phrasal groupings. Results are discussed in relation to current hypotheses concerning the cerebral lateralization of speech prosody.

Link to article

-- (Pell, M. & Baum, S.) Unilateral brain damage and the acoustic cues to prosody: Are prosodic comprehension deficits perceptually based? Brain & Language, v. 57, 1997, pp. 195-214.

Abstract: Stimuli from two previously presented comprehension tasks of affective and linguistic prosody (Pell & Baum, 1997) were analyzed acoustically and subjected to several discriminant function analyses, following Van Lancker and Sidtis (1992). An analysis of the errors made on these tasks by left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) subjects examined whether each clinical group relied on specific (and potentially different) acoustic features in comprehending prosodic stimuli (Van Lancker & Sidtis, 1992). Analyses also indicated whether the brain-damaged patients tested in Pell and Baum (1997) exhibited perceptual impairments in the processing of intonation. Acoustic analyses of the utterances reaffirmed the importance of F0 cues in signaling affective and linguistic prosody. Analyses of subjects' affective misclassifications did not suggest that LHD and RHD patients were biased by different sets of the acoustic features to prosody in judging their meaning, in contrast to Van Lancker and Sidtis (1992). However, qualitative differences were noted in the ability of LHD and RHD patients to identify linguistic prosody, indicating that LHD subjects may be specifically impaired in decoding linguistically defined categorical features of prosodic patterns.

Link to article

-- (Pell, M. & Baum, S.) The ability to perceive and comprehend intonation in linguistic and affective contexts by brain-damaged adults. Brain & Language, v. 57, 1997, pp. 80-99.

Abstract: Receptive tasks of linguistic and affective prosody were administered to 9 right-hemisphere-damaged (RHD), 10 left-hemisphere-damaged (LHD), and 10 age-matched control (NC) subjects. Two tasks measured subjects' ability to discriminate utterances based solely on prosodic cues, and six tasks required subjects to identify linguistic or affective intonational meanings. Identification tasks manipulated the degree to which the auditory stimuli were structured linguistically, presenting speech-filtered, nonsensical, and semantically well-formed utterances in different tasks. Neither patient group was impaired relative to normals in discriminating prosodic patterns or recognizing affective tone conveyed suprasegmentally, suggesting that neither the LHD nor the RHD patients displayed a receptive disturbance for emotional prosody. The LHD group, however, was differentially impaired on linguistic rather than emotional tasks and performed significantly worse than the NC group on linguistic tasks even when semantic information biased the target response.

Link to article

-- (Baum, S. & Pell, M.) Production of affective and linguistic prosody by brain-damaged patients. Aphasiology, v. 11, 1997, pp. 177-198.

Abstract: To test a number of hypotheses concerning the functional lateralization of speech prosody, the ability of unilaterally right-hemisphere-damaged (RHD), unilaterally left-hemisphere-damaged (LHD), and age-matched control subjects (NC) to produce linguistic and affective prosodic contrasts at the sentence level was assessed via acoustic analysis. Multiple aspects of suprasegmental processing were explored, including a manipulation of the type of elicitation task employed (repetition vs reading) and the amount of linguistic structure provided in experimental stimuli (stimuli were either speech-filtered, nonsensical, or semantically well formed). In general, the results demonstrated that both RHD and LHD patients were able to appropriately utilize the acoustic parameters examined (duration, fundamental frequency (F0), amplitude) to differentiate both linguistic and affective sentence types in a manner comparable to NC speakers. Some irregularities in the global modulation of F0 and amplitude by RHD speakers were noted, however. Overall, the present findings do not provide support for previous claims that the right hemisphere is specifically engaged in the production of affective prosody. Alternative models of prosodic processing are noted.

Link to article

Dr. Linda Polka
POLKA, L. (Werker, J.F., Polka, L. & Pegg, J.) "The conditioned headturn procedure as a method for testing infant speech perception" Early Development and Parenting, v. 6, 1997, pp. 171-178.

Abstract: The purpose of this paper is to present and describe the Conditioned Head Turn procedure, with primary focus on its use as a method for testing infant speech perception. The paper begins with a brief history of the Conditioned Head Turn Procedure followed by a fairly detailed description of how the procedure is currently implemented. We then briefly outline the methods of analysis that are best suited for data obtained with the Conditioned Head Turn procedure. Next discussed are variations in the Conditioned Head Turn procedure when it is used with subjects of different ages. Then, some of the kinds of findings that have been revealed in the area of infant speech perception are presented to give the reader a sense of the range of questions that can be answered using this procedure. Following this, the strengths and limitations of the procedure are discussed frankly. We end with a presentation of new variations to the procedure that have been developed in recent years, and note how these new variations are expanding the range of questions the procedure can address. ©1997 John Wiley & Sons, Ltd.

Link to article

-- (Shahnaz, N. & Polka, L.) "Standard and multifrequency tympanometry in normal and otosclerotic ears" Ear and Hearing, v. 18, 1997, pp. 326-341.

Abstract:
OBJECTIVES: The primary goal of this study was to evaluate alternative tympanometric parameters for distinguishing normal middle ears from ears with otosclerosis. A secondary goal was to provide guidelines and normative data for interpreting multifrequency tympanometry obtained using the Virtual 310 immittance system.

DESIGN: Nine tympanometric measures were examined in 68 normal ears and 14 ears with surgically confirmed otosclerosis. No subjects in either group had a history of head trauma or otoscopic evidence of eardrum abnormalities. Two parameters, static admittance and tympanometric width, were derived from standard low-frequency tympanometry and two parameters, resonant frequency and frequency corresponding to admittance phase angle of 45 degrees (F45 degrees), were derived from multifrequency tympanometry.

RESULTS: Differences between normal and otosclerotic ears were statistically significant only for resonant frequency and F45 degrees. Group differences in resonant frequency were larger when estimated using positive tail, rather than negative tail, compensation. Group differences in both resonant frequency and F45 degrees were larger when estimated from sweep frequency (SF), rather than sweep pressure, tympanograms. Test performance analysis and patterns of individual test performance point to two independent signs of otosclerosis in the patient group: (1) an increase in the stiffness of the middle ear, best indexed by F45 degrees derived from SF recordings, and (2) a change in the dynamic response of the tympanic membrane/middle ear system to changes in ear canal pressure, best indexed by tympanometric width. Most patients were correctly identified by only one of these two signs. Thus, optimal test performance was achieved by combining F45 degrees derived from SF recordings and tympanometric width.

CONCLUSIONS: The findings confirm the advantage of multifrequency tympanometry over standard low-frequency tympanometry in differentiating otosclerotic and normal ears. Recommendations for interpreting resonant frequency and F45 degrees measures obtained using the Virtual Immittance system are also provided. In addition, the relationship among different tympanometric measures suggests a general strategy for combining tympanometric measures to improve the identification of otosclerosis.

Link to article

-- (Polka, L.) Review: Phonological Development: The Origins of Language in the Child by Marilyn May Vihman. Journal of Phonetics, v. 25 (1), 1997, pp. 93-96.

Dr. Gloria Waters
WATERS, G.S. (Caplan, D., Waters, G.S., & Hildebrandt, N.) Syntactic determinants of sentence comprehension in aphasic patients in sentence-picture matching tests. Journal of Speech and Hearing Research, v. 40, 1997, pp. 542-555.

Abstract: The results of two studies of sentence comprehension in aphasic patients using sentence-picture matching tests are presented. In the first study, 52 aphasic patients were tested on 10 sentence types. Analysis of the number of correct responses per sentence type showed effects of syntactic complexity and number of propositions. Factor analysis yielded first factors that accounted for two-thirds of the variance in performance to which all sentence types contributed. Clustering analysis yielded groups of patients whose performances progressively deteriorated and in which performance was more affected by sentence types that were harder for the group overall. These results were very similar to those previously obtained using an enactment task. In the second study, 17 aphasic patients were tested on the same 10 sentence types using both sentence-picture matching and enactment tasks. Correlational analyses showed that performance on the two tests was significantly correlated across both subjects and sentences. The results provide data relevant to the determinants of the complexity of a sentence in auditory comprehension.

Link to article

-- (Waters, G.S. & Caplan, D.) Working memory and on-line sentence comprehension in patients with Alzheimer's disease. Journal of Psycholinguistic Research, v. 26, 1997, pp. 377-400.

Abstract: We examined the ability of patients with dementia of the Alzheimer's type (DAT) and normal controls to perform a sentence acceptability judgment task that required determining the referent for a reflexive pronoun. Performance on three different sentence types that differed in terms of syntactic complexity was assessed. Subjects performed the task alone and under two different dual-task conditions which required continuous, externally paced responses. DAT patients were more affected than controls by the dual-task conditions, but were not disproportionately impaired on the more complex sentence types. The failure of DAT patients to be disproportionately affected on the most complex sentence types in the dual-task conditions provides evidence for the separation of the processing resources that are used in sentence comprehension from those involved in other tasks.

Link to article

-- (Leonard, C., Waters, G.S. & Caplan, D.) The influence of contextual information on the resolution of ambiguous pronouns by younger and older adults. Applied Psycholinguistics, v. 18, 1997, pp. 293-319.

Abstract: Two experiments were conducted with the purpose of investigating possible age effects on the abilities of older and younger adults to use contextual information to resolve ambiguous pronouns. In both experiments, subjects were presented with pairs of sentences (a leading sentence followed by a pronominal sentence) and were required to indicate the referent of the ambiguous pronoun. In both experiments, the older adults responded more slowly and were less accurate than the younger adults. However, both groups of subjects were equally influenced by the contextual information available, which was located in the leading sentence to aid in the resolution of the pronouns. Older adults did not demonstrate a specific impairment in the ability to use contextual information to resolve ambiguous pronouns. Nevertheless, age-related difficulties in resolving pronouns may emerge, possibly as a function of an underspecified discourse model.

Link to article

-- (Leonard, C., Waters, G.S., & Caplan, D.) The use of contextual information related to general world knowledge by right brain-damaged individuals in pronoun resolution. Brain and Language, v. 57, 1997, pp. 343-359.

Abstract: This study investigated the ability of right brain-damaged individuals (RBD) to use contextual information to resolve ambiguous pronouns. Subjects were presented with sentence pairs and required to resolve the ambiguous pronoun in the second sentence. Contrary to the prevailing view that RBD patients have difficulty using contextual information to integrate language, the RBD group demonstrated a normal pattern of response, demonstrating a sensitivity to the pragmatic information contained in the leading sentence. They responded more quickly to sentences with a pragmatically constrained preferred referent than to those sentences for which there was no preferred referent. As well, they chose the preferred referent significantly more often than the non-preferred referent. These results suggest that RBD patients can use contextual information at the level of a minimal discourse (i.e., two sentences).

Link to article

-- (Leonard, C., Waters, G.S., & Caplan, D.) The use of contextual information by right brain-damaged individuals in the resolution of ambiguous pronouns. Brain and Language, v. 57, 1997, pp. 309-342.

Abstract: Two experiments were conducted with the primary purpose of investigating the ability of right brain-damaged (RBD) individuals to use contextual information--at the level of the single sentence, in terms of the integration of information between clauses, and at the level of a minimal discourse (i.e., two sentences)--in the resolution of ambiguous pronouns. The investigation was extended to a group of left brain-damaged (LBD) and non-brain-damaged (NBD) individuals. Contrary to the prevailing view that RBD patients have difficulty in the use of contextual information to process language, both experiments were consistent in demonstrating that the RBD group was influenced by contextual information in a manner similar to that demonstrated by both the LBD and NBD groups.

Link to article

Dr. Kenneth Watkin
WATKIN, K.L. (Watkin, K.L. & Tan, S.L.) Basic Aspects of 3D Imaging. In Vitro Fertilization and Assisted Reproduction; Proceedings, 1997, pp. 285-291.
-- (Lu, E., Watkin, K.L.) Optimal Automatic Two Dimensional Segmentation of Ultrasonic Fetal Head, Canadian Medical and Biological Engineering Society; Proceedings, 1997, pp. 72-73.
-- (Tulandi, T., Watkin, K.L., Tan, S.L.) Reproductive Performance and Three Dimensional Ultrasound Volume Determination of Polycystic Ovaries Following Laparoscopic Ovarian Drilling, International Journal of Fertility & Menopausal Studies, v. 42 (6), 1997, pp. 436-440.

Abstract:
OBJECTIVE: To evaluate the changes in ovarian volume and the reproductive outcome after laparoscopic treatment of polycystic ovaries (PCOS) in clomiphene-resistant anovulatory women.

DESIGN: A prospective study of women undergoing laparoscopic treatment of polycystic ovaries. Ultrasound examination for three-dimensional (3D) volume determination was performed before and after surgery.

SETTING: University teaching hospital.

PATIENTS: Thirty-four women with polycystic ovarian syndrome who failed to ovulate with clomiphene citrate and who subsequently underwent laparoscopic ovarian drilling.

INTERVENTIONS: Laparoscopic ovarian drilling and three-dimensional ultrasound examination.

MAIN OUTCOME MEASURES: Cumulative probability of conception and changes in ovarian volume.

RESULTS: Ovulation rate after the procedure was 30/34 (88.2%). Using Life Table Analysis, the cumulative probability of conception at 12 months follow-up was 70% (median, 8.1 months). The preoperative ovarian volume was 12.2 +/- 1.8 cm3 and 1 week after surgery it was 13.6 +/- 1.5 cm3. The ovarian volume 3 weeks after surgery, 6.9 +/- 1.3 cm3, was significantly smaller than that before surgery.

CONCLUSIONS: Laparoscopic treatment of polycystic ovaries in women with clomiphene-resistant PCOS is associated with an ovulation rate of 88.2% and a cumulative pregnancy rate of 70% at 12 months. It appears also that laparoscopic ovarian drilling may result in a transient increase, with a subsequent significant reduction, in ovarian volume.

Link to article

-- (Miller, J.L., Watkin, K.L.) Lateral Pharyngeal Wall Motion during Swallowing Using Real Time Ultrasound, Dysphagia, v. 12, 1997, pp. 125-132.

Abstract: B-mode ultrasound imaging has been used primarily to detect temporal and spatial movements of the tongue during the oral preparatory and oral stages of swallowing. The purpose of this study was to investigate the application of M-mode (motion mode) ultrasound imaging as a method to quantify the duration and displacement of single regions along the lateral pharyngeal wall during swallows of two bolus volumes and during three swallow maneuvers (supraglottic, super-supraglottic and Mendelsohn maneuver). In 5 normal subjects, simultaneous B/M-mode images were captured at two regions along the lateral pharyngeal wall. Computer-assisted video analysis of each swallow sequence provided spatial coordinates and durational measures. Results indicated no significant differences in displacements of the lateral pharyngeal wall across bolus volumes, swallow maneuvers, or recording sites. Significant differences (p < 0.001) in lateral pharyngeal wall duration occurred as a function of volitional swallow maneuvers. Greater durations (p < 0.05) were found for the Mendelsohn and super-supraglottic swallow maneuvers. The data demonstrate that B/M-mode ultrasound imaging provides a simple, noninvasive method to visually examine movements of the lateral pharyngeal wall and may provide a clinical method for assessing the effects of direct swallowing therapies at the level of the mid-oropharynx.

Link to article

-- (Elahi, M.M., Lessard, M.L., Hakim, S., Watkin, K.L., Sampalis, J.) Ultrasound in the Assessment of Cranial Bone Thickness, Journal of Craniofacial Surgery, v. 8 (03), 1997, pp. 213-221.

Abstract: Preoperative knowledge of skull thickness before harvesting cranial bone grafts would be ideal to help minimize intracranial complications. Previous research has demonstrated regional variations in calvaria; however, accurate preoperative and intraoperative methods of skull thickness measurement are not available. The aim of this research represents the first attempt to examine the reliability of ultrasound to determine cranial bone thickness. Four previously studied calvarial sites were marked in 10 adult male cadaveric skulls. The individual points were insonified using an A-mode ultrasonic transducer operating in pulse-echo mode. The times of flight of the waves propagating in the bone samples were compared with caliper measurements. The mean difference in cranial bone thickness was 0.16 mm, with a standard deviation of 0.09 mm. Student's t-test failed to reveal any statistically significant differences between caliper and ultrasonic measurements (p = 0.569) and Pearson's correlation coefficient supported an extremely strong and positive relationship between the two modalities (r > 0.992). Multiple linear regression models predicted that calvarial thickness could be accurately predicted by ultrasound without consideration of cadaveric specimen or sampling point location (R2 = 0.988). The convergent values between ultrasonic and caliper measurements suggest that this modality can accurately and reliably determine skull thickness. A-mode ultrasound can have significant implications in guiding the harvest of in situ split cranial bone grafts, the placement of osseointegrated implants, skull anthropometrics, and related craniomaxillofacial applications.

Link to article

-- (Watkin, K.L., Miller, J.L.) Instrumental Procedures. In B. Sonies (Ed.), Dysphagia: A Continuum of Care, Gaithersburg, MD: Aspen Publishers, 1997, pp. 171-196.

Book information:
ISBN-10: 0834207850
ISBN-13: 9780834207851

Link to book

-- (Gallagher, T.M., Watkin, K.L.) 3D Ultrasonic Fetal Neuroimaging and Familial Language Disorders: In Utero Brain Development, Journal of Neurolinguistics, v. 10 (2/3), 1997, pp. 187-201.

Abstract: In vivo brain development of four fetuses from 24 to 32 weeks gestational age (GA) were compared. One fetus had a positive history of familial language impairment (+FLI) and three had negative histories of FLI (-FLI). All fetuses were boys and their mothers had low risk pregnancies. RHRHD ultrasonographic imaging was used to collect data at 24, 28 and 32 weeks GA. Volumes and growth rates for the left and right cerebral hemispheres and five subdivisions within each hemisphere were computed and compared. Results indicated that total brain volumes for all fetuses were within normal limits, but that patterns of growth among subdivisions of the inferior anterior and inferior medial regions of the hemispheres differed from 24 to 28 weeks GA. Limited growth was observed in the +FLI fetus compared with the -FLI fetuses in these regions of the left hemisphere during this period. Growth throughout development was more symmetrical between hemispheres for the -FLI fetuses. Results are consistent with hypotheses that +FLI fetuses experience an intrauterine environment that results in developmental differences among brain regions frequently associated with language performance. These results suggest that +FLI may involve genetic developmental timing code differences that place children at risk for later language learning problems.

Link to article

1996

Shari Baum, PhD, Associate Professor
Martha Crago, PhD, Associate Professor
Tanya Gallagher, PhD, Associate Professor
Rachel I. Mayberry, PhD, Associate Professor
James McNutt, PhD, Associate Professor
Linda Polka, PhD, Assistant Professor
Gloria Waters, PhD, Associate Professor
Kenneth Watkin, PhD, Associate Professor

Dr. Shari Baum
BAUM, S. The processing of morphology and syntax in agrammatic aphasia: a test of the fast decay and slow activation hypotheses. Aphasiology, v. 10, 1996, pp. 783-800.

Abstract: Two tasks were designed to test the hypothesis that the syntactic processing deficit of non-fluent agrammatic aphasic patients may be due to either the fast decay or slow activation of syntactic information. Eight non-fluent aphasics, 11 fluent aphasics, and 15 age-matched normal control subjects participated in two auditory lexical decision tasks as well as a grammaticality judgement task. In three types of sentence structures the sentence-final word created either a grammatical sentence or a violation of a particular syntactic rule or constraint. To examine possible deficits in computational speed, the interval between the sentence frame and the sentence-final target word was set at either 100 ms (short ISI) or 1000 ms (long ISI) in the lexical decision tasks. Increased reaction time to targets in ungrammatical sentences is indicative of sensitivity to syntactic violations. With fast decay of syntactic information, sensitivity would be predicted at short but not long ISIs. With slow activation, sensitivity would be expected at long but not short ISIs. Surprisingly, results indicated that all three groups of subjects demonstrated comparable patterns of sensitivity to grammaticality as reflected in increased latencies to target words in ungrammatical contexts. The findings do not provide support for either the fast decay or the slow activation hypothesis. Possible reasons for the unexpected findings are considered.

Link to article

-- Fricative production in aphasia: effect of speaking rate. Brain & Language, v. 52, 1996, pp. 328-341.

Abstract: The perceptual adequacy of vowels, stop consonants, and fricatives produced under conditions of articulatory perturbation was explored. In a previous study [McFarland and Baum, J. Acoust. Soc. Am. 97, 1865-1873 (1995)], acoustic analyses of segments produced in two subtests (immediate compensation and postconversation) revealed small but significant changes in spectral characteristics of vowels and consonants under bite-block as compared to normal conditions. For the vowels only, adaptation increased subsequent to a period of conversation with the bite block in place, suggesting that compensation may develop over time and that consonants may require a longer period of adaptation. The present follow-up investigation examined whether the acoustic differences across conditions were perceptually salient. Ten listeners performed an identification and a quality rating task for stimuli from the earlier acoustic study. Results revealed reductions in identification scores and quality ratings for a subset of the vowels and consonants in the bite-block conditions relative to the normal condition in the immediate compensation subtest. In the postconversation subtest, quality ratings for the fricatives in the bite-block condition remained low as compared to those in the normal condition. Perceptual results are compared to the previous acoustic data gathered on these stimuli.

Link to article

-- (McFarland, D., Baum, S., & Chabot, C.) Speech compensation to structural modifications of the oral cavity. Journal of the Acoustical Society of America, v. 100, 1996, pp. 1093-1104.

Abstract: Acoustic and perceptual analyses of vowels, stops, and fricatives produced with and without an artificial palate were conducted. Recordings were made both immediately upon insertion of the palate and following a 15-min adaptation period. Results of the acoustic analyses revealed significant alterations in the fricative spectra under conditions of perturbation with fewer, if any, changes in the vowels and stop consonants. Perceptual data confirmed these patterns and provided evidence of possible improvements in compensation over time. The data are compared to our previous studies of speech sound articulation under bite-block conditions. Differences between adaptation to modifications of oral structure (artificial palate) and oral function (jaw fixation by a bite block) are considered.

Link to article

Dr. Martha Crago
CRAGO, M. (Allen, S., & Crago, M.) Early passive acquisition in Inuktitut. Journal of Child Language, v. 23 (1), 1996, pp. 13-28.

Abstract: Passive structures are typically assumed to be one of the later acquired constructions in child language. English-speaking children have been shown to produce and comprehend their first simple passive structures productively by about age four and to master more complex structures by about age nine. Recent crosslinguistic data have shown that this pattern may not hold across languages of varying structures. This paper presents data from four Inuit children aged 2;0 to 3;6 that shows relatively early acquisition of both simple and complex forms of the passive. Within this age range children are productively producing truncated, full, action and experiential passives. Some possible reasons for this precociousness are explored including adult input and language structure.

Link to article

-- (Crago, M., & Allen, S.) Building the case for familial impairment in linguistic representation. In M. Rice (Ed.), Toward the genetics of language. Mahwah, N.J.: Lawrence Erlbaum Associates, 1996.

Book information:
ISBN-10: 0805816771
ISBN-13: 9780805816778

Link to book

-- Commentary: What genetics can and cannot learn from PET studies of phonology. In M. Rice (Ed.), Toward the genetics of language. Mahwah, N.J.: Lawrence Erlbaum Associates, 1996.

Book information:
ISBN-10: 0805816771
ISBN-13: 9780805816778

Link to book

-- (McAlpine, L., Eriks-Brophy, A., & Crago, M.) Teaching beliefs in Mohawk classrooms: Issues of language and culture. Anthropology and Education Quarterly, v. 27 (3), 1996, pp. 390-413.

Abstract: This study describes the teaching beliefs of three primary-level teachers (two Mohawk and one nonaboriginal) teaching in the same Mohawk community and analyzes the ways in which cultural identity and language impact on these beliefs. It is evident from this study that depicting teachers as belonging to specific cultural groups may inadequately represent the complexity and diversity of teachers in aboriginal classrooms. Individual personal histories nested in the sociohistorical issues of particular communities play an important role in creating teachers' identities within, as well as across, cultural groups. We need further careful examination of the diversity of teacher beliefs and biographies if we are not to trivialize such a complex issue.

Link to article

Dr. Tanya Gallagher
GALLAGHER, T. Social-interactional approaches to child language intervention. In J. Beitchman & M. Konstantareas (Eds.) Language Learning and Behavior Disorders: Emerging Perspectives, Cambridge: Cambridge University Press, 1996, pp. 418-435.

Book information:
ISBN-10: 0521472296
ISBN-13: 9780521472296

Link to book

Dr. Rachel Mayberry
MAYBERRY, R.I. (Yoshinaga-Itano, C., Snyder, L. & Mayberry, R.). How deaf and normally hearing students convey meaning within and between written sentences. Volta Review, v. 98, 1996, pp. 3-38.

Abstract: The compositions of 49 students (ages 10-14) with deafness or hearing impairments and 49 typical students were compared to investigate the frequency and proportional distribution of written-language variables. Differences were found between the strategies chosen by the students with deafness or hearing impairments in both syntax and semantics and those of their normally hearing peers.

Link to article

-- (Yoshinaga-Itano, C., Snyder, L. & Mayberry, R.). Can lexical/semantic skills differentiate deaf or hard-of-hearing readers and nonreaders? Volta Review, v. 98, 1996, pp. 39-62.

Abstract: Discusses three studies: (1) the effectiveness of semantic analyses of written narratives of students with hearing losses in determining language ability; (2) written-language characteristics of writers matched by reading abilities alone and matched by reading ability and age; and (3) written-language characteristics of writers with hearing impairments by method of communication.

Link to article

Dr. Linda Polka
POLKA, L. (Polka, L. & Bohn, O-S.) Across-language comparison of vowel perception in English-learning and German-learning infants. Journal of the Acoustical Society of America, v. 100, 1996, pp. 577-592.

Abstract: Studies of cross-language consonant discrimination have shown a shift from a language-general to a language-specific pattern during the first year of life. Recently, the same pattern of change was observed for English-speaking infants' discrimination of two non-native vowel contrasts (Polka and Werker, 1994). The present study was designed to provide a more direct assessment of language-specific influences on infant vowel contrast perception. In experiment 1 adults were tested on a German (non-English) contrast, /dut/ versus /dyt/, and an English (non-German) contrast, /dɛt/ versus /dæt/. English and German adults discriminated both contrasts with high levels of accuracy in a categorial AXB task. However, results of an identification and rating task showed that, within each non-native vowel contrast, one vowel perceptually matched a native vowel category better than the other. In experiment 2 discrimination of /dut/ versus /dyt/ and /dɛt/ versus /dæt/ was examined in English- and German-learning infants in two age groups (6-8 months and 10-12 months) using the conditioned headturn procedure. English and German infants did not differ in their discrimination of either contrast and there were no age differences in discrimination of either contrast for the German or for the English infants. However, in both language groups at both ages, there were clear differences in performance related to the direction in which the vowel change was presented to the infants. For the German contrast, discrimination was significantly poorer when the contrast changed from /dut/ to /dyt/. For the English contrast, discrimination was significantly poorer when the contrast changed from /dæt/ to /dɛt/. The directional asymmetries observed here and in other infant vowel studies point to a language-universal perceptual pattern which suggests that vowels produced with extreme articulatory postures serve as perceptual attractors in infant vowel perception.

Link to article

Dr. Gloria Waters
WATERS, G. (Waters, G.S. & Caplan, D.) The capacity theory of sentence comprehension: Critique of Just and Carpenter (1992). Psychological Review, v. 103, 1996, pp. 761-772.

Abstract: The authors review M.A. Just and P.A. Carpenter's (1992) "capacity" theory of sentence comprehension and argue that the data cited by Just and Carpenter in support of the theory are unconvincing and that the theory is insufficiently developed to explain or predict observed patterns of results. The article outlines an alternative to the capacity theory, according to which the unconscious, obligatory operations involved in assigning the syntactic structure of a sentence do not use the same working memory resource as that required for conscious, controlled verbally mediated processes.

Link to article

-- (Caplan, D. & Waters, G.S.) Syntactic processing in sentence comprehension under dual-task conditions in aphasic patients. Language and Cognitive Processes, v. 11, 1996, pp. 525-551.

Abstract: The functional architecture of the verbal processing resource system was studied by testing aphasic patients for their abilities to use syntactic structure in sentence comprehension in isolation and under dual-task conditions. Patients who showed evidence for a reduction in the resources available for syntactic processing in sentence comprehension were tested on a sentence-picture matching task that required syntactic processing in a no-interference condition and while recalling a series of digits that was one less than or equal to their span. Patients showed equivalent effects of syntactic complexity in comprehension in the three conditions; that is, the effect of syntactic complexity did not increase under digit load conditions. This result supports the conclusion that the processing resource system that underlies syntactic processing is substantially separate from the one that is used for some other verbally mediated functions.

Link to article

-- (Waters, G.S. & Caplan, D.) Processing resource capacity and the comprehension of garden path sentences. Memory and Cognition, v. 24, 1996, pp. 342-355.

Abstract: Three experiments explored the relationship between verbal working memory capacity and the comprehension of garden path sentences. In Experiment 1, subjects with high, medium, and low working memory spans made acceptability judgments about garden path and control sentences under whole sentence and rapid serial visual presentation (RSVP) conditions. There were no significant differences between subjects with different working memory spans in the comprehension of garden path sentences in either condition. In Experiments 2A and 2B, subjects with high and low working memory spans were tested on the same materials at three RSVP rates. There were no significant differences between subjects with different working memory spans in the magnitude of the effect of garden path sentences at any presentation rate. The results suggest that working memory capacity, as measured by the Daneman and Carpenter (1980) reading span task, is not a major determinant of individual differences in the processing of garden path sentences.

Link to article

-- (Waters, G.S. & Caplan, D.) The measurement of verbal working memory capacity and its relation to reading comprehension. Quarterly Journal of Experimental Psychology, v. 49A, 1996, pp. 51-79.

Abstract: Ninety-four subjects were tested on the Daneman and Carpenter (1980) reading span task, four versions of a related sentence span task in which reaction times and accuracy on sentence processing were measured along with sentence-final word recall, two number generation tasks designed to test working memory, digit span, and two shape-generation tasks designed to measure visual-spatial working memory. Forty-four subjects were retested on a subset of these measures at a 3-month interval. All subjects were tested on standard vocabulary and reading tests. Correlational analyses showed better internal consistency and test-retest reliability of the sentence span tasks than of the Daneman-Carpenter reading span task. Factor analysis showed no factor that could be related to a central verbal working memory; rotated factors suggested groupings of tests into factors that correspond to digit-related tasks, spatial tasks, sentence processing in sentence span tasks, and recall in sentence span tasks. Correlational analyses and regression analyses showed that the sentence processing component of the sentence span tasks was the best predictor of performance on the reading test, with a small independent contribution of the recall component. The results suggest that sentence span tasks are unreliable unless measurements are made of both their sentence processing and recall components, and that the predictive value of these tasks for reading comprehension abilities lies in the overlap of operations rather than in limitations in verbal working memory that apply to both.

Link to article

Dr. Kenneth Watkin
WATKIN, K. (Miller, J.L. & Watkin, K.L.) The influence of bolus volume and viscosity on anterior lingual force during the oral stage of swallowing. Dysphagia, v. 11, 1996, pp. 117-124.

Abstract: The influence of bolus volume and viscosity on the distribution of anterior lingual force during the oral stage of swallowing was investigated using a new force transducer technology. The maximum force amplitudes from 5 normal adults were measured simultaneously at the mid-anterior, right, and left lateral tongue margins during 10 volitional swallows of 5-, 10-, and 20-ml volumes of water, applesauce, and pudding. Results indicated significant increases in peak force amplitude as viscosity increased. Volume did not significantly influence maximum lingual force amplitudes. Individual subjects demonstrated consistent patterns of asymmetrical force distribution across the lingual margins tested. The results suggest that bolus-specific properties influence the mechanics of oral stage lingual swallowing. This finding has important clinical implications in the assessment and treatment of dysphagic individuals.

Link to article

-- (Watkin, K.L. & Miller, J.L.) Instrumental procedures. In B. Sonies (Ed.), Perspective on Dysphagia Treatment and Management Strategies. Gaithersburg, MD: Aspen Publishers, 1996.