Publications

Please click on a year to view faculty research publications and click on an article name to view its abstract.

2017

Noémie Auclair-Ouellet, Ph.D., Assistant Professor
Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Associate Professor
Nicole Li-Jessen, Ph.D., Assistant Professor
Aparna Nadig, Ph.D., Associate Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Professor
Susan Rvachew, Ph.D., Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Professor

Dr. Noémie Auclair-Ouellet

Auclair-Ouellet, N., Lieberman, P., & Monchi, O. (2017). Contribution of language studies to the understanding of cognitive impairment and its progression over time in Parkinson’s disease. Neuroscience & Biobehavioral Reviews, 80, 657–672. doi: 10.1016/j.neubiorev.2017.07.014

Dr. Shari Baum

Barbeau, E., Chai, X., Chen, J-K., Soles, J., Berken, J., Baum, S., Watkins, K., & Klein, C. (2017). The role of the left inferior parietal lobule in second language learning: An intensive language training fMRI study. Neuropsychologia, 98, 169-176.

Itzhak, I., Vingron, N., Baum, S., & Titone, D. (2017). Bilingualism in the real world: How proficiency, emotion and personality in a second language impact communication in clinical and legal settings. Translational Issues in Psychological Science, 3, 48-65.

Dr. Meghan Clayards

Kim, D., Clayards, M., Goad, H. (2017) Individual differences in second language speech perception across tasks and contrasts. Linguistics Vanguard, 3:1, doi:10.1515/lingvan-2016-0025

Giannakopoulou, A., Brown, H., Clayards, M., Wonnacott, E. (2017) High or Low? Comparing high- and low-variability phonetic training in adult and child second language learners. PeerJ 5:e3209 doi:10.7717/peerj.3209

Bang, H.Y., Clayards, M., Goad, H. (2017) Compensatory strategies in the developmental patterns of English /s/: Gender and vowel context effects. Journal of Speech, Language and Hearing Research. Vol. 60, 571-591. doi:10.1044/2016_JSLHR-L-15-0381

Dr. Laura Gonnerman

Rvachew, S., Royle, P., Gonnerman, L., Stanké, B., Marquis, A., & Herbay, A. (2017). Development of a tool to screen risk of literacy delays in French-speaking children. Canadian Journal of Speech-Language Pathology and Audiology, 41, 321-340.

Dr. Nicole Li-Jessen

Seekhao, N., JaJa, J., Mongeau, L., & Li-Jessen, N. Y. (2017). In Situ Visualization for 3D Agent-Based Vocal Fold Inflammation and Repair Simulation. Supercomputing Frontiers and Innovations, 4(3), 68-79.

Li-Jessen, N. Y. K., Powell, M., Choi, A. J., Lee, B. J., & Thibeault, S. L. (2017). Cellular source and proinflammatory roles of high-mobility group box 1 in surgically injured rat vocal folds. Laryngoscope, 127(6), E193-E200.

Imaizumi, M., Li-Jessen, N. Y. K., Sato, Y., Yang, D. Y. & Thibeault, S. L. (2017). Retention of human-induced pluripotent stem cells (hiPS) with injectable HA-hydrogels for vocal fold tissue

Dr. Aparna Nadig

Rvachew, S., Rees, K., Carolan, E. & Nadig, A. (2017). Improving emergent literacy with school-based shared reading: Paper versus eBooks. International Journal of Child-Computer Interaction, 12, 24-29.

Rees, K., Rvachew, S. & Nadig, A. (2017). Story-related discourse by parent-child dyads: A comparison of typically developing children and children with language impairments. International Journal of Child-Computer Interaction, 12, 16-23.

Nadig, A. & Mulligan, A. (2017). Intact non-word repetition and similar error patterns in language-matched children with autism spectrum disorders: A pilot study. Journal of Communication Disorders, 66, 13-21. https://doi.org/10.1016/j.jcomdis.2017.03.003

Gonzalez-Barrero, A. & Nadig, A. (2017). Verbal fluency in bilingual children with Autism Spectrum Disorders. Linguistic Approaches to Bilingualism, 7(3-4), 460-475. doi: 10.1075/lab.15023.gon

Dr. Marc Pell

Jiang, X., Sanford, R. & Pell, M.D. (2017). Neural systems for evaluating speaker (un)believability. Human Brain Mapping, 38, 3732-3749.

Liu, P., Rigoulot, S., & Pell, M.D. (2017). Cultural immersion alters emotion perception: Neurophysiological evidence from Chinese immigrants to Canada. Social Neuroscience, 12 (6), 685-700. doi: 10.1080/17470919.2016.1231713.

Jiang, X. & Pell, M.D. (2017). The sound of confidence and doubt. Speech Communication, 88, 106-126.

Schwartz, R. & Pell, M.D. (2017). When emotion and expression diverge: the social costs of Parkinson’s disease. Journal of Clinical and Experimental Neuropsychology, 39(3), 211-230. doi: 10.1080/13803395.2016.1216090.

Dr. Linda Polka

Masapollo, M., Polka, L., Molnar, M. & Ménard, L. (2017). Directional asymmetries reveal a universal bias in adult vowel perception. Journal of the Acoustical Society of America, 141(4), 2857-2869.

Masapollo, M., Polka, L. & Ménard, L. (2017). A universal bias in adult vowel perception – by ear or by eye. Cognition, 166, 358-370.

Dr. Susan Rvachew

Brosseau-Lapré, F., & Rvachew, S. (2017). Underlying manifestations of developmental phonological disorders in French-speaking pre-schoolers. Journal of Child Language, 44, 1337-1361.

Rvachew, S., Royle, P., Gonnerman, L., Stanké, B., Marquis, A., & Herbay, A. (2017). Development of a tool to screen risk of literacy delays in French-speaking children. Canadian Journal of Speech-Language Pathology and Audiology, 41, 321-340.

Rvachew, S. & Matthews, S. (2017). Demonstrating treatment efficacy using the single subject randomization design: Tutorial and demonstration. Journal of Communication Disorders, 67, 1-13.

McLeod, S., Verdon, S., & International Expert Panel on Multilingual Children's Speech (2017). Tutorial: Speech assessment for multilingual children who do not speak the same language(s) as the speech-language pathologist. American Journal of Speech-Language Pathology, 26, 691-708.

Rvachew, S. & Matthews, T. (2017). Using the Syllable Repetition Task to reveal underlying speech processes in Childhood Apraxia of Speech: A tutorial. Canadian Journal of Speech-Language Pathology and Audiology, 41(1), 106-126.

Rees, K., Nadig, A., & Rvachew, S. (2017). Story-related discourse by parent-child dyads: A comparison of typically developing children and children with language impairments reading print books and e-books. International Journal of Child-Computer Interaction, 12, 16-23.

Rvachew, S., Rees, K., Carolan, E., & Nadig, A. (2017). Improving emergent literacy with school-based shared reading: paper versus ebooks. International Journal of Child-Computer Interaction, 12, 24-29.

Kucirkova, N. & Rvachew, S. (2017). Editorial. International Journal of Child-Computer Interaction, 12, 1-2.

Dr. Karsten Steinhauer

DePriest, J., Glushko, A., Steinhauer, K. & Koelsch, S. (2017). Language and music phrase boundary processing in Autism Spectrum Disorder: An ERP study. Scientific Reports, 7, Article number: 14465. doi: 10.1038/s41598-017-14538-y

Kasparian, K., Vespignani, F., & Steinhauer, K. (2017). First language attrition induces changes in online morphosyntactic processing and reanalysis: An ERP study of number agreement in complex Italian sentences. Cognitive Science, 41(7), 1760‐1803. DOI: 10.1111/cogs.12450

Steinhauer, K., Drury, J.E., Royle, P. & Fromont, L.A. (2017). The priming of priming: Evidence that the N400 reflects context‐dependent post‐retrieval word integration in working memory. Neuroscience Letters, 651, 192‐197. DOI: https://doi.org/10.1016/j.neulet.2017.05.007

Kasparian, K. & Steinhauer, K. (2017). When the second language takes the lead: Neurocognitive processing changes in the first language of adult attriters. Frontiers in Psychology, 8, Article 389. DOI: https://doi.org/10.3389/fpsyg.2017.00389

White, E.J., Genesee, F., Titone, D. & Steinhauer, K. (2017). Phonological Processing in Late Second Language Learners: The Effects of Proficiency and Task. Bilingualism: Language and Cognition, 20(1), 162–183. DOI: http://dx.doi.org/10.1017/S1366728915000620

Dr. Elin Thordardottir

Elin Thordardottir (2017). Implementing Evidence Based Practice with limited evidence: The case of language intervention with bilingual children. Revista de Logopedia, Foniatría y Audiología, 34(4), 164-171.

 

2016

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Associate Professor
Vincent Gracco, Ph.D., Professor
Nicole Li-Jessen, Ph.D., Assistant Professor
Aparna Nadig, Ph.D., Associate Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Professor
Susan Rvachew, Ph.D., Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Professor

Dr. Shari Baum
Barbeau, E., Chai, X., Chen, J-K., Soles, J., Berken, J., Baum, S., Watkins, K., & Klein, C. (2016) “The role of the left inferior parietal lobule in second language learning: an intensive language training fMRI study,” Neuropsychologia. (doi: 10.1016/j.neuropsychologia.2016.10.003)
Blumstein, S. & Baum, S. (2016) “Neurobiology of speech production.” In Hickok & Small (Eds), Neurobiology of language. London, UK: Elsevier.
Bourguignon, N., Baum, S., & Shiller, D. (2016) “Please say what this word is – talker normalization alters the sensorimotor control of speech,” Journal of Experimental Psychology: Human Perception and Performance, 42, 1039–1047. (doi: 10.1037/xhp0000209)
Drury, J., Baum, S., Valeriote, H., & Steinhauer, K. (2016) “Punctuation and implicit prosody in silent reading: An ERP study investigating English garden-path sentences,” Frontiers in Psychology: Language Sciences, 7, 1375. doi: 10.3389/fpsyg.2016.01375
Haeuser, K., Titone, D., & Baum, S. (2016) “The role of the ventro-lateral prefrontal cortex in idiom comprehension: An rTMS study,” Neuropsychologia, 91, 360-370.
Mollaei, F., Shiller, D., Baum, S., & Gracco, V. (2016) “Sensorimotor control of vocal pitch and formant frequencies in Parkinson's disease,” Brain Research, 1646, 269–277.
Dr. Meghan Clayards
No publications in 2016
Dr. Laura Gonnerman
Kolne, K., Gonnerman, L., Marquis, A., Royle, P. & Rvachew, S. (2016) “Teacher predictions of children’s spelling ability: What are they based on and how good are they?,” Language and Literacy, 18(1), 71-98.
Mollaei, F., Shiller, D., Baum, S., & Gracco, V. (2016) “Sensorimotor control of vocal pitch and formant frequencies in Parkinson's disease,” Brain Research, 1646, 269–277.
ROI 16128 Prédiction des habiletés orthographiques par des habiletés de langage oral (PHOPHLO) (2016)
Dr. Vincent Gracco (on leave)
Mollaei, F., Shiller, D., Baum, S., & Gracco, V. (2016) “Sensorimotor control of vocal pitch and formant frequencies in Parkinson's disease,” Brain Research, 1646, 269–277.
Dr. Nicole Li-Jessen
Latifi, N., Heris, H. K., Thomson, S., Kazemirad, S., Taher, R., Sheibani, S., Li-Jessen, N. Y. K., Vali, H. & Mongeau, L. (2016). A flow perfusion bioreactor system for vocal fold tissue engineering applications. Tissue Engineering, 22(9), 823-38.
Seekhao, N., Shung, C., JaJa, J., Mongeau, L. & Li-Jessen, N. Y. K. (2016). Real-time agent-based modeling simulation with in-situ visualization of complex biological systems - a case study on vocal fold inflammation and healing. IEEE International Workshop on High Performance Computational Biology, 463-472.
Yiu, E. E. M., Chan, K. M. K., Kwong, E., Li, N. Y. K., Ma, E. P. M., Tse, F. W., Lin, Z. X., Verdolini Abbott, K., & Tsang, R. (2016). Is acupuncture efficacious for treating phonotraumatic vocal pathologies? A randomized control trial. Journal of Voice. 30(5): 611-20.
Yiu, E. E. M., Chan, K. M. K., Li, N. Y. K., Tsang, R., Verdolini Abbott, K., Kwong, E., Ma, E. P. M., Tse, F. W., & Lin, Z. X. (2016). Wound healing effect of acupuncture for treating phonotraumatic vocal pathologies: cytokine study. Laryngoscope, 126(1), E18-22.
Dr. Aparna Nadig
*Gonzalez-Barrero, A. & Nadig, A. (2016) “Verbal fluency in bilingual children with Autism Spectrum Disorders,” Linguistic Approaches to Bilingualism, doi: 10.1075/lab.15023.gon
*Eberhardt, M. & Nadig, A. (2016) “Reduced sensitivity to context in language comprehension: A characteristic of Autism Spectrum Disorders or of poor structural language abilities?,” Research in Developmental Disabilities, Special Issue Autism Plus vs. Only. http://dx.doi.org/10.1016/j.ridd.2016.01.017
Nadig, A. & Bang, J. (2016) “Caregiver input: how does the linguistic environment of children with ASD compare to that of language-matched typically-developing children?,” In L. Naigles (Ed.), Innovative Investigations of Language in Autism Spectrum Disorder, APA/Walter de Gruyter.
* indicates trainee as first author
Dr. Marc Pell
Schwartz, R. & Pell, M.D. (2016) “When emotion and expression diverge: the social costs of Parkinson’s disease,” Journal of Clinical and Experimental Neuropsychology. doi: 10.1080/13803395.2016.1216090
*Jiang, X. & Pell, M.D. (2016) “The feeling of another’s knowing: how “mixed messages” in speech are reconciled,” Journal of Experimental Psychology: Human Perception and Performance, 42 (9), 1412-1428.
*Garrido-Vásquez, P., Pell, M.D., Paulmann, S., Sehm, B., & Kotz, S.A. (2016) “Impaired neural processing of dynamic faces in left-onset Parkinson's disease,” Neuropsychologia, 82, 123-133.
*Jiang, X. & Pell, M.D. (2016) “Neural responses towards a speaker’s feeling of (un)knowing,” Neuropsychologia, 81, 79-93. doi: 10.1016/j.neuropsychologia.2015.12.008.
* indicates trainee as first author
Dr. Linda Polka
*Nam, Y. & Polka, L. (2016) “The phonetic landscape in infant consonant perception is an uneven terrain,” Cognition, 155, 56-66. [http://dx.doi.org/10.1016/j.cognition.2016.06.005]
Polka, L., *Orena, A., Sundara, M. & *Worrall, J. (2016) “Segmenting words from fluent speech during infancy - challenges and opportunities in a bilingual context,” Developmental Science, 1-14. [doi: 10.1111/desc.12419]
*Kadam, M.A., *Orena, A.J., Theodore, R.M. & Polka, L. (2016) “Reading ability influences native and non-native voice recognition, even for unimpaired readers,” Journal of the Acoustical Society of America, 139(1), EL6-12. [http://dx.doi.org/10.1121/1.4937488]
* indicates trainee as first author
Dr. Susan Rvachew
Rees, K., Nadig, A., & Rvachew, S. (2016) “Story-related discourse by parent-child dyads: A comparison of typically developing children and children with language impairments reading print books and e-books,” International Journal of Child-Computer Interaction, http://dx.doi.org/10.1016/j.ijcci.2017.01.001
Rvachew, S., Rees, K., Carolan, E., & Nadig, A. (2016) “Improving emergent literacy with school-based shared reading: paper versus ebooks,” International Journal of Child-Computer Interaction, http://dx.doi.org/10.1016/j.ijcci.2017.01.002
Brosseau-Lapré, F., & Rvachew, S. (2016) “Underlying manifestations of developmental phonological disorders in French-speaking preschoolers,” Journal of Child Language, 1-25. doi:10.1017/S0305000916000556
Kolne, K., Gonnerman, L., Marquis, A., Royle, P. & Rvachew, S. (2016) “Teacher predictions of children’s spelling ability: What are they based on and how good are they?,” Language and Literacy, 18(1), 71-98.
Rees, K., Rvachew, S. & Nadig, A. (2016) “eBooks transform shared reading interactions between adults and children,” (pp. 141-155). In N. Kucirkova and G. Falloon (Eds.), Apps, Technology and Younger Learners, Taylor and Francis.
Rvachew, S. (2016) “Technology in early childhood education: overall commentary,” In S. Rvachew (Ed.), Technology in Early Childhood Education, Encyclopedia on Early Childhood Development, CEECD.
Rvachew, S. (2016) “Technology in early childhood education: synthesis,” In S. Rvachew (Ed.). Technology in Early Childhood Education, Encyclopedia on Early Childhood Development, CEECD.
Dr. Karsten Steinhauer
Kasparian, K. & Steinhauer, K. (2016) “Confusing similar words: ERP correlates of lexical-semantic processing in first language attrition and late second-language acquisition,” Neuropsychologia, 93, 200-217. DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2016.10.007
Kasparian, K., Vespignani, F., & Steinhauer, K. (2016) “First‐language attrition induces changes in online morphosyntactic processing and re‐analysis: An ERP study of number agreement in complex Italian sentences,” Cognitive Science, 1-44. DOI: 10.1111/cogs.12450
Mah, J., Goad, H. & Steinhauer, K. (2016) “Using event-related brain potentials to assess perceptibility: The case of French speakers and English [h],” Frontiers in Psychology – Language Sciences (7), 1469. DOI: 10.3389/fpsyg.2016.01469
Drury, J.E., Valeriote, H., Baum, S.R. & Steinhauer, K. (2016) “Punctuation and implicit prosody in silent reading: An ERP study investigating English garden‐path sentences,” Frontiers in Psychology – Language Sciences (7), 1375. [Special Topic on punctuation and covert prosody.] DOI: http://dx.doi.org/10.3389/fpsyg.2016.01375
Glushko, A., Steinhauer, K., DePriest, J. & Koelsch, S. (2016) “Neurophysiological correlates of musical and intonational phrasing: shared processing and effects of musical expertise,” PLoS ONE, 11(5): e0155300.
Fromont, L. A., Royle, P., Perlitch, I., & Steinhauer, K. (2016). “Re-evaluating the dynamics of phrase-structure processing using Event Related Potentials: the case of syntactic categories in French,” International Journal of Psychophysiology, 108, 86.
Royle, P., Drury, J. E., Perlitch, I., Fromont, L., & Steinhauer, K. (2016) “Stimulus lists can modulate semantic priming effects on the N400,” International Journal of Psychophysiology, 108, 89.
Dr. Elin Thordardottir
Elin Thordardottir (2016) “Long versus short language samples: A clinical procedure for French language samples,” Canadian Journal of Speech-Language Pathology and Audiology, 40, 176-197.
Elin Thordardottir (2016) “Morphological errors are not a sensitive marker of language impairment in Icelandic children age 4 to 14 years,” Journal of Communication Disorders, 62, 82-100.
Elin Thordardottir (2016) “Typical language development and Primary Language Impairment in French-speaking children,” In J. Patterson and B. Rodriguez (Eds.), Multilingual perspectives on child language disorders. Bristol, UK: Multilingual Matters.

2015

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Associate Professor
Vincent Gracco, Ph.D., Professor
Nicole Li-Jessen, Ph.D., Assistant Professor
Aparna Nadig, Ph.D., Associate Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Professor
Susan Rvachew, Ph.D., Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Professor

Dr. Shari Baum
Berken, J., Chen, J-K., Callahan, M., Gracco, V., Watkins, K., Baum, S., & Klein, D. (2015). Neural activation in speech production and reading aloud in native and non-native languages. Neuroimage, 112, 208-217.
Columbus, G., Sheikh, N., Côté-Lecaldare, M., Haeuser, K., Baum, S., & Titone, D. (2015). Individual differences in executive control relate to familiar and unfamiliar metaphor processing: An eye movement study of sentence reading. Frontiers in Human Neuroscience, 8 (1057), 1-12.
Deschamps, I., Baum, S., & Gracco, V. (2015). Phonological processing in speech perception and speech production: what do sonority differences tell us? Brain and Language, 149, 77–83.
Itzhak, I. & Baum, S. (2015) Misleading bias-driven expectations in referential processing and the facilitative role of contrastive accent. Journal of Psycholinguistic Research, 44, 623–650. doi: 10.1007/s10936-014-9306-6
Dr. Meghan Clayards
Clayards, M., Niebuhr, O., Gaskell, M.G. (2015). The time-course of auditory and language-specific mechanisms in compensation for sibilant assimilation. Attention, Perception & Psychophysics 77:1, 311-328. doi: 10.3758/s13414-014-0750-z.
Dr. Vincent Gracco
Berken J.A., Gracco V.L., et al. (2015). Neural activation in speech production and reading aloud in native and non-native languages. NeuroImage, 112, 208-217. PMCID: PMC Journal – In Process.
Beal, D., Lerch, J., Cameron, B., Henderson, R., Gracco, V.L., De Nil, L.F. (2015). The trajectory of gray matter development in Broca’s area is abnormal in people who stutter. Frontiers in Human Neuroscience, 9, 89-94. PMCID: PMC4347452.
Deschamps, I., Baum, S.R., Gracco V.L. (2015). Phonological processing in speech perception: what do sonority differences tell us? Brain & Language, 149, 77-83.
Berken, J.A., Gracco, V.L., Chen, J-K., Klein, D. (2015). The timing of language learning and its effects on brain structure. Brain Structure and Function. doi: 10.1007/s00429-015-1121-9
Ito, T., Ostry, D.J., Gracco, V.L. (2015). Somatosensory event-related potentials from orofacial skin stretch stimulation. Journal of Visualized Experiments, 106. e53621. doi: 10.3791/53621
Dr. Nicole Li-Jessen
Heris, H. K., Miri, A. K., Ghattamaneni, N. R., Li, N. Y. K., Thibeault, S. L., Wiseman, P. W., & Mongeau, L. (2015). Microstructural and mechanical characterization of scarred vocal folds. Journal of Biomechanics, 48(4), 708-711. PMID: 25648495.
Miri, A. K., Li, N. Y. K., Avazmohammadi, R., Thibeault, S. L., Mongrain, R., & Mongeau, L. (2015). Study of extracellular matrix in vocal fold biomechanics using a two-phase model. Biomechanics and Modeling in Mechanobiology, 14, 49-57.
Dr. Aparna Nadig
Bang, J. & NADIG, A. (2015). Learning language in autism: Maternal linguistic input contributes to later vocabulary. Autism Research, 8(2), 214-223.

Abstract: It is well established that children with typical development (TYP) exposed to more maternal linguistic input develop larger vocabularies. We know relatively little about the linguistic environment available to children with autism spectrum disorders (ASD), and whether input contributes to their later vocabulary. Children with ASD or TYP and their mothers from English and French-speaking families engaged in a 10 min free-play interaction. To compare input, children were matched on language ability, sex, and maternal education (ASD n = 20, TYP n  = 20). Input was transcribed, and the number of word tokens and types, lexical diversity (D), mean length of utterances (MLU), and number of utterances were calculated. We then examined the relationship between input and children's spoken vocabulary 6 months later in a larger sample (ASD: n = 19, 50–85 months; TYP: n = 44, 25–58 months). No significant group differences were found on the five input features. A hierarchical multiple regression model demonstrated input MLU significantly and positively contributed to spoken vocabulary 6 months later in both groups, over and above initial language levels. No significant difference was found between groups in the slope between input MLU and later vocabulary. Our findings reveal children with ASD and TYP of similar language levels are exposed to similar maternal linguistic environments regarding number of word tokens and types, D, MLU, and number of utterances. Importantly, linguistic input accounted for later vocabulary growth in children with ASD.

Nadig, A., Seth, S. & Sasson, M. (2015). Global Similarities and Multifaceted Differences in the Production of Partner-Specific Referential Pacts by Adults with Autism Spectrum Disorders. Frontiers in Psychology, 6:1888.

Abstract: Over repeated reference conversational partners tend to converge on preferred terms or referential pacts. Autism spectrum disorders (ASD) are characterized by pragmatic difficulties that are best captured by unstructured tasks. To this end we tested adults with ASD who did not have language or intellectual impairments, and neurotypical comparison participants in a referential communication task. Participants were directors, describing unlexicalized, complex novel stimuli over repeated rounds of interaction. Group comparisons with respect to referential efficiency showed that directors with ASD demonstrated typical lexical entrainment: they became faster over repeated rounds and used shortened referential forms. ASD and neurotypical groups did not differ with respect to the number of descriptors they provided or the number of exchanges needed for matchers to identify figures. Despite these similarities the ASD group was slightly slower overall. We examined partner-specific effects by manipulating the common ground shared with the matcher. As expected, neurotypical directors maintained referential precedents when speaking to the same matcher but not with a new matcher. Directors with ASD were qualitatively similar but displayed a less pronounced distinction between matchers. However, significant differences emerged over time; neurotypical directors incorporated the new matcher’s contributions into descriptions, whereas directors with ASD were less likely to do so.

Nadig, A., & Shaw, H. (2015). Acoustic marking of prominence: How do preadolescent speakers with and without high-functioning autism mark contrast in an interactive task? Language, Cognition and Neuroscience, 30 (1-2), 32-47.

Abstract: The acoustic correlates of discourse prominence have garnered much interest in recent adult psycholinguistics work, and the relative contributions of amplitude, duration and pitch to prominence have also been explored in research with young children. In this study, we bridge these two age groups by examining whether specific acoustic features are related to the discourse function of marking contrastive stress by preadolescent speakers, via speech obtained in a referential communication task that presented situations of explicit referential contrast. In addition, we broach the question of listener-oriented versus speaker-internal factors in the production of contrastive stress by examining both speakers who are developing typically and those with high-functioning autism (HFA). Diverging from conventional expectations and early reports, we found that speakers with HFA, like their typically developing peers (TYP), appropriately marked prominence in the expected location, on the pre-nominal adjective, in instructions such as “Pick up the BIG cup”. With respect to the use of specific acoustic features, both groups of speakers employed amplitude and duration to mark the contrastive element, whereas pitch was not produced selectively to mark contrast by either group. However, the groups also differed in their relative reliance on acoustic features, with HFA speakers relying less consistently on amplitude than TYP speakers, and TYP speakers relying less consistently on duration than HFA speakers. In summary, the production of contrastive stress was found to be globally similar across groups, with fine-grained differences in the acoustic features employed to do so. These findings are discussed within a developmental framework of the production of acoustic features for marking discourse prominence, and with respect to the variations among speakers with autism spectrum disorders that may lead to appropriate production of contrastive stress.

Dr. Marc Pell
Pell, M.D., Rothermich, K., Liu, P., Paulmann, S., Sethi, S., & Rigoulot, S. (2015). Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. Biological Psychology, 111, 14-25. doi: 10.1016/j.biopsycho.2015.08.008
Rothermich, K., & Pell, M.D. (2015). Introducing RISC: A new video inventory for testing social perception. PLoS ONE, 10(7): e0133902. doi: 10.1371/journal.pone.0133902
Liu, P., Rigoulot, S., & Pell, M.D. (2015). Cultural differences in on-line sensitivity to emotional voices: Comparing East and West. Frontiers in Human Neuroscience. http://dx.doi.org/10.3389/fnhum.2015.00311
Jiang, X. & Pell, M.D. (2015). On how the brain decodes vocal cues about speaker confidence. Cortex, 66, 9-34.
Jiang, X., Paulmann, S., Robin, J., & Pell, M.D. (2015). More than accuracy: Nonverbal dialects modulate the time course of vocal emotion recognition across cultures. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 597-612. doi: 10.1037/xhp0000043.
Rigoulot, S., Pell, M.D., & Armony, J.L. (2015). Time course of the influence of musical expertise on the processing of vocal and musical sounds. Neuroscience, 290, 175-184.
Liu, P., Rigoulot, S., & Pell, M.D. (2015). Culture modulates the brain response to human expressions of emotion: electrophysiological evidence. Neuropsychologia, 67, 1-13.
Jiang, X. & Pell, M.D. (2015). Neural responses towards a speaker’s feeling of (un)knowing. Neuropsychologia. http://dx.doi.org/10.1016/j.neuropsychologia.2015.12.008.
Dr. Linda Polka
POLKA, L., Bohn, O-S. & Weiss, D. J. (2015). Commentary – Revisiting vocal perception in non-human animals: A review of vowel discrimination, speaker voice recognition and speaker normalization. Frontiers in Psychology, Language Sciences, 6, Article 941. doi: 10.3389

Abstract: Comparative research provides a unique window into our understanding of human vocal perception. We commend Kriengwatana, Escudero, and ten Cate (KEtC) for providing a much-needed review of this diverse literature. Their appraisal of three research areas highlights conceptual and empirical gaps, while also pointing to fruitful directions for future research. This commentary addresses the literature on asymmetries in vowel perception. In their review of this topic KEtC focus on vowel contrasts that have revealed directional asymmetries in infants and non-human animals. We offer some clarification with respect to these stimulus issues and highlight another aspect of this research landscape—the role of task demands—that must also guide future comparative investigations.

Orena, A.J., Theodore, R. & POLKA, L. (2015). Language exposure facilitates talker learning prior to language comprehension, even in adults. Cognition, 143, 36-40.

Abstract: Adults show a native language advantage for talker identification, which has been interpreted as evidence that phonological knowledge mediates talker learning. However, infants also show a native language benefit for talker discrimination, suggesting that sensitivity to linguistic structure due to systematic language exposure promotes talker learning, even in the absence of functional phonological knowledge or language comprehension. We tested this hypothesis by comparing two groups of monolingual-English adults on their ability to learn English and French voices. One group resided in Montréal with regular exposure to spoken French; the other resided in Storrs, Connecticut and did not have French exposure. Montréal residents showed faster learning and better retention for the French voices compared to their Storrs-residing peers. These findings demonstrate that systematic exposure to a foreign language bolsters talker learning in that language, expanding the gradient effect of language experience on talker learning to perceptual learning that precedes sentence comprehension.

Masapollo, M., POLKA, L. & Ménard, L. (2015). When infants talk, infants listen: Pre-babbling infants prefer listening to speech with infant vocal properties. Developmental Science, 1-11. DOI: 10.1111/desc.12298

Abstract: To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant-directed speech (IDS). In addition, infants’ nascent articulatory abilities may induce a bias favoring infant speech given that 4- to 6-month-olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production-based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant-like vocal properties, supporting the view that infants’ production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

Link to article

Masapollo, M., Polka, L., & Ménard, L. (2015). Asymmetries in vowel perception: Effects of formant convergence and category "goodness". Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow, August 2015.
Orena, A.J., Theodore, R., & Polka, L. (2015). Language exposure benefit to talker learning in an unfamiliar language. Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow, August 2015.
Kadam, M., Orena, A.J., Theodore, R., & Polka, L. (2015). Gradient effects of reading ability on native and non-native talker identification. Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow, August 2015.
Dr. Susan Rvachew
Rvachew, S., & Brosseau-Lapré, F. (2015). A randomized trial of twelve-week interventions for the treatment of developmental phonological disorder in Francophone children. American Journal of Speech-Language Pathology, 24, 637-658.
Thordardottir, E., Cloutier, G., Ménard, S., Pelland-Blais, E., & Rvachew, S. (2015). Monolingual or bilingual intervention for primary language impairment? A randomized control trial. Journal of Speech, Language, and Hearing Research, 58, 287-300.
Rvachew, S. (2015). Developmental phonological disorders. In L. Cummings (Ed.), Handbook of Communication Disorders (pp. 61-72). Cambridge, UK: Cambridge University Press.
Dr. Karsten Steinhauer
White, E.J., Genesee, F., Titone, D., & Steinhauer, K. (2015). Phonological processing in late second language learners: The effects of proficiency and task. Bilingualism: Language and Cognition. http://dx.doi.org/10.1017/S1366728915000620
Courteau, E., Steinhauer, K., & Royle, P. (2015). L'acquisition du groupe nominal en français et de ses aspects morphosyntaxiques et sémantiques : une étude de potentiels évoqués [The acquisition of the French noun phrase and its morphosyntactic and semantic aspects: An event-related potentials study]. Glossa, 117, 77-93.
Dr. Elin Thordardottir
Łuniewska, M., Haman, E., Armon-Lotem, S., Thordardottir, E., et al. (2015). Ratings of age of acquisition of 299 words across 25 languages: Is there a cross-linguistic order of words? Behavior Research Methods, advance online publication, August 2015.
Brandeker, M., & Thordardottir, E. (2015). Language exposure in bilingual toddlers: Performance on nonword repetition and lexical tasks. American Journal of Speech-Language Pathology, 24, 126-138.
Thordardottir, E., Cloutier, G., Ménard, S., Pelland-Blais, E., & Rvachew, S. (2015). Monolingual or bilingual intervention for primary language impairment? A randomized control trial. Journal of Speech, Language, and Hearing Research, 58(2), 287-300.
Thordardottir, E. (2015). The relationship between bilingual exposure and morphosyntactic development. International Journal of Speech Language Pathology, 17(2), 97-114.
Thordardottir, E. (2015). Proposed diagnostic procedures and criteria for COST Action studies on bilingual SLI. In S. Armon-Lotem, J. de Jong, & N. Meir (Eds.), Methods for assessing multilingual children: Disentangling bilingualism from language impairment. Bristol, UK: Multilingual Matters.

2014

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Associate Professor
Vincent Gracco, Ph.D., Professor
Nicole Li-Jessen, Ph.D., Assistant Professor
Aparna Nadig, Ph.D., Associate Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Professor
Susan Rvachew, Ph.D., Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Professor

Dr. Shari Baum
BAUM, S. (Baum, S. & Titone, D.) (2014). “Moving toward a neuroplasticity approach to bilingualism, executive control and aging,” Applied Psycholinguistics, 35, 857-894. (Keynote article with commentary)

Abstract: Normal aging is an inevitable race between increasing knowledge and decreasing cognitive capacity. Crucial to understanding and promoting successful aging is determining which of these factors dominates for particular neurocognitive functions. Here, we focus on the human capacity for language, for which healthy older adults are simultaneously advantaged and disadvantaged. In recent years, a more hopeful view of cognitive aging has emerged from work suggesting that age-related declines in executive control functions are buffered by life-long bilingualism. In this paper, we selectively review what is currently known and unknown about bilingualism, executive control, and aging. Our ultimate goal is to advance the views that these issues should be reframed as a specific instance of neuroplasticity more generally and, in particular, that researchers should embrace the individual variability among bilinguals by adopting experimental and statistical approaches that respect the complexity of the questions addressed. In what follows, we set out the theoretical assumptions and empirical support of the bilingual advantages perspective, review what we know about language, cognitive control, and aging generally, and then highlight several of the relatively few studies that have investigated bilingual language processing in older adults, either on their own or in comparison with monolingual older adults. We conclude with several recommendations for how the field ought to proceed to achieve a more multifactorial view of bilingualism that emphasizes the notion of neuroplasticity over that of simple bilingual versus monolingual group comparisons.

Link to article

--(Bourguignon, N., Baum, S., & Shiller, D.) (2014). “Lexical-perceptual integration influences sensorimotor adaptation in speech,” Frontiers in Human Neuroscience, 8, 1-9.

Abstract: A combination of lexical bias and altered auditory feedback was used to investigate the influence of higher-order linguistic knowledge on the perceptual aspects of speech motor control. Subjects produced monosyllabic real words or pseudo-words containing the vowel [ε] (as in “head”) under conditions of altered auditory feedback involving a decrease in vowel first formant (F1) frequency. This manipulation had the effect of making the vowel sound more similar to [I] (as in “hid”), affecting the lexical status of produced words in two Lexical-Change (LC) groups (either changing them from real words to pseudo-words: e.g., less—liss, or pseudo-words to real words: e.g., kess—kiss). Two Non-Lexical-Change (NLC) control groups underwent the same auditory feedback manipulation during the production of [ε] real- or pseudo-words, only without any resulting change in lexical status (real words to real words: e.g., mess—miss, or pseudo-words to pseudo-words: e.g., ness—niss). The results from the LC groups indicate that auditory-feedback-based speech motor learning is sensitive to the lexical status of the stimuli being produced, in that speakers tend to keep their acoustic speech outcomes within the auditory-perceptual space corresponding to the task-related side of the word/non-word boundary (real words or pseudo-words). For the NLC groups, however, no such effect of lexical status is observed.

Link to article

--(Deschamps, I., Baum, S., & Gracco, V.L.) (2014). “On the role of the supramarginal gyrus in phonological processing and verbal working memory: evidence from rTMS studies,” Neuropsychologia, 53, 39-46.

Abstract: The supramarginal gyrus (SMG) is activated for phonological processing during both language and verbal working memory tasks. Using rTMS, we investigated whether the contribution of the SMG to phonological processing is domain specific (specific to phonology) or more domain general (specific to verbal working memory). A measure of phonological complexity was developed based on sonority differences and subjects were tested after low frequency rTMS on a same/different judgment task and an n-back verbal memory task. It was reasoned that if the phonological processing in the SMG is more domain general, i.e., related to verbal working memory demands, performance would be more affected by the rTMS during the n-back task than during the same/different judgment task. Two auditory experiments were conducted. The first experiment demonstrated that under conditions where working memory demands are minimized (i.e. same/different judgment), repetitive stimulation had no effect on performance although performance varied as a function of phonological complexity. The second experiment demonstrated that during a verbal working memory task (n-back task), where phonological complexity was also manipulated, subjects were less accurate and slower at performing the task after stimulation but the effect of phonology was not affected. The results confirm that the SMG is involved in verbal working memory but not in the encoding of sonority differences.

Link to article

--(Molnar, M., Polka, L., Baum, S., Ménard, L., & Steinhauer, K.) (2014). “Learning two languages from birth shapes pre-attentive processing of vowel categories: Electrophysiological correlates of vowel discrimination in monolinguals and simultaneous bilinguals,” Bilingualism: Language & Cognition, 17, 526-541. doi:10.1017/S136672891300062X

Abstract: Using event-related brain potentials (ERPs), we measured pre-attentive processing involved in native vowel perception as reflected by the mismatch negativity (MMN) in monolingual and simultaneous bilingual (SB) users of Canadian English and Canadian French in response to various pairings of four vowels: English /u/, French /u/, French /y/, and a control /y/. The monolingual listeners exhibited a discrimination pattern that was shaped by their native language experience. The SB listeners, on the other hand, exhibited a MMN pattern that was distinct from both monolingual listener groups, suggesting that the SB pre-attentive system is tuned to access sub-phonemic detail with respect to both input languages, including detail that is not readily accessed by either of their monolingual peers. Additionally, simultaneous bilinguals exhibited sensitivity to language context generated by the standard vowel in the MMN paradigm. The automatic access to fine phonetic detail may aid SB listeners to rapidly adjust their perception to the variable listening conditions that they frequently encounter.

Link to article

--(Titone, D. & Baum, S.) (2014). “The future of bilingualism research: Insufferably optimistic and replete with new questions,” Applied Psycholinguistics, 35, 933-942. (Response to commentary on keynote article).

Abstract: The scientific process applied to any domain is a thing of power and beauty. It enables its practitioners to systematically and rigorously pursue questions of importance, in a manner that is, by necessity, adaptive and tenacious. As was his way with most matters of relevance to psychology, the words of William James (The Principles of Psychology, 1890) are illustrative here: Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely. (p. 7)

Link to article

Dr. Laura Gonnerman
GONNERMAN, L.M. (Kolne, K.L.D., & Gonnerman, L.M.) (2014). Improving children’s spelling ability with a morphology-based intervention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp.). Austin, TX: Cognitive Science Society.

Abstract: Children who have difficulty with literacy development often experience pervasive and enduring trouble with spelling, even after receiving remedial instruction. Our study tests a new approach to improving the spelling of these children. We designed an instructional program emphasizing the morphological structure of words, and directly contrast its benefits to instruction that focuses on word meanings, avoiding any discussion of morphology. The intervention was conducted with French-speaking children in Grades 3 and 5 with varying literacy abilities. The results reveal that our intervention improved the spelling of all children in the study, but it was especially effective for children who displayed low spelling performance. Moreover, low-performing spellers who received the morphology instruction showed a greater improvement in their spelling of suffixes than children who participated in the vocabulary instruction. Our findings suggest that spelling instruction concentrated on morphological structure may be a powerful tool for improving children’s spelling ability.

Link to article

Dr. Vincent Gracco
GRACCO, V.L. (Klepousniotou, E., Gracco, V.L., Pike, G.B.) (2014). Pathways to lexical ambiguity: fMRI evidence for bilateral fronto-parietal involvement in language processing. Brain & Language, 131:56-64.

Abstract: Numerous functional neuroimaging studies reported increased activity in the pars opercularis and the pars triangularis (Brodmann's areas 44 and 45) of the left hemisphere during the performance of linguistic tasks. The role of these areas in the right hemisphere in language processing is not understood and, although there is evidence from lesion studies that the right hemisphere is involved in the appreciation of semantic relations, no specific anatomical substrate has yet been identified. This event-related functional magnetic resonance imaging study compared brain activity during the performance of language processing trials in which either dominant or subordinate meaning activation of ambiguous words was required. The results show that the ventral part of the pars opercularis both in the left and the right hemisphere is centrally involved in language processing. In addition, they highlight the bilateral co-activation of this region with the supramarginal gyrus of the inferior parietal lobule during the processing of this type of linguistic material. This study, thus, provides the first evidence of co-activation of Broca's region and the inferior parietal lobule, succeeding in further specifying the relative contribution of these cortical areas to language processing.

Link to article

--(Deschamps, I., Baum, S., & Gracco, V.L.) (2014). On the role of the supramarginal gyrus in phonological processing and verbal working memory: Evidence from rTMS studies. Neuropsychologia, 53:39-46.

Abstract: The supramarginal gyrus (SMG) is activated for phonological processing during both language and verbal working memory tasks. Using rTMS, we investigated whether the contribution of the SMG to phonological processing is domain specific (specific to phonology) or more domain general (specific to verbal working memory). A measure of phonological complexity was developed based on sonority differences and subjects were tested after low frequency rTMS on a same/different judgment task and an n-back verbal memory task. It was reasoned that if the phonological processing in the SMG is more domain general, i.e., related to verbal working memory demands, performance would be more affected by the rTMS during the n-back task than during the same/different judgment task. Two auditory experiments were conducted. The first experiment demonstrated that under conditions where working memory demands are minimized (i.e. same/different judgment), repetitive stimulation had no effect on performance although performance varied as a function of phonological complexity. The second experiment demonstrated that during a verbal working memory task (n-back task), where phonological complexity was also manipulated, subjects were less accurate and slower at performing the task after stimulation but the effect of phonology was not affected. The results confirm that the SMG is involved in verbal working memory but not in the encoding of sonority differences.

Link to article

--(Ito, T., Gracco, V.L., & Ostry, D.J.) (2014). Temporal factors affecting somatosensory-auditory interactions in speech processing. Frontiers in Psychology. doi: 10.3389/fpsyg.2014.01198

Abstract: Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest the contribution of somatosensory information for speech processing process is dependent on the specific temporal order of sensory inputs in speech production.

Link to article

--(Smits-Bandstra, S., & Gracco, V.L.) (2014). Retention of implicit sequence learning in persons who stutter and persons with Parkinson’s disease. Journal of Motor Behavior. doi: 10.1080/00222895.2014.961890

Abstract: The authors investigated the retention of implicit sequence learning in 14 persons with Parkinson's disease (PPD), 14 persons who stutter (PWS), and 14 control participants. Participants completed a nonsense syllable serial reaction time task in a 120-min session. Participants named aloud 4 syllables in response to 4 visual stimuli. The syllables formed a repeating 8-item sequence not made known to participants. After 1 week, participants completed a 60-min retention session that included an explicit learning questionnaire and a sequence generation task. PPD showed retention of general learning equivalent to controls but PWS's reaction times were significantly slower on early trials of the retention test relative to other groups. Controls showed implicit learning during the initial session that was retained on the retention test. In contrast, PPD and PWS did not demonstrate significant implicit learning until the retention test suggesting intact, but delayed, learning and retention of implicit sequencing skills. All groups demonstrated similar limited explicit sequence knowledge. Performance differences between PWS and PPD relative to controls during the initial session and on early retention trials indicated possible dysfunction of the cortico-striato-thalamo-cortical loop. The etiological implications for stuttering, and clinical implications for both populations, of this dysfunction are discussed.

Link to article

Dr. Nicole Li-Jessen
Li, N.Y.K. (Coppoolse, J. M. S., Li, N. Y. K., Heris, H. K., Pitaro, J., Akinpelu, O., Thibeault, S. L., Daniel, S. J., Van Kooten, T. G., & Mongeau, L.) (2014). In vivo study in a rat animal model of composite microgels based on hyaluronic acid and gelatin for the reconstruction of surgically injured vocal folds. Journal of Speech, Language and Hearing Research, 57(2), S658-73.

Abstract:
OBJECTIVE: The objective of this study was to investigate local injection with a hierarchically microstructured hyaluronic acid-gelatin (HA-Ge) hydrogel for the treatment of acute vocal fold injury using a rat model.
METHOD: Vocal fold stripping was performed unilaterally in 108 Sprague-Dawley rats. A volume of 25 μl saline (placebo controls), HA-bulk, or HA-Ge hydrogel was injected into the lamina propria (LP) 5 days after surgery. The vocal folds were harvested at 3, 14, and 28 days after injection and analyzed using hematoxylin and eosin staining and immunohistochemistry staining for macrophages, myofibroblasts, elastin, collagen type I, and collagen type III.
RESULTS: The macrophage count was statistically significantly lower in the HA-Ge group than in the saline group (p < .05) at Day 28. Results suggested that the HA-Ge injection did not induce inflammatory or rejection response. Myofibroblast counts and elastin were statistically insignificant across treatment groups at all time points. Increased elastin deposition was qualitatively observed in both HA groups from Day 3 to Day 28, and not in the saline group. Significantly more elastin was observed in the HA-bulk group than in the uninjured group at Day 28. Significantly more collagen type I was observed in the HA-bulk and HA-Ge groups than in the saline group (p < .05) at Day 28. The collagen type I concentration in the HA-Ge and saline groups was found to be comparable to that in the uninjured controls at Day 28. The concentration of collagen type III in all treatment groups was similar to that in uninjured controls at Day 28.
CONCLUSION: Local HA-Ge and HA-bulk injections for acute injured vocal folds were biocompatible and did not induce adverse response.

Link to article

--(Ingle, J., Helou, L.B., Li, N. Y. K., Hebda, P. A., Rosen, C. R., & Verdolini-Abbott, K.) (2014). Role of steroids in acute phonotrauma: A basic science investigation. Laryngoscope, 124(4), 921-927. PMID: 24474147

Abstract:
OBJECTIVES/HYPOTHESIS: Steroids are used for the treatment of laryngitis in vocal performers and other individuals despite the absence of evidence demonstrating their impact on vocal fold inflammation. Our objective was to examine laryngeal secretion cytokine inflammatory profile changes associated with corticosteroid treatment in a human phonotrauma model.
STUDY DESIGN: Prospective, individual, randomized, double-blinded, controlled trial.
METHODS: Participants included 10 healthy females who were randomized to either treatment with oral hydrocortisone or placebo, each given in three doses over 20 hours after the experimental induction of acute phonotrauma. Cytokines associated with inflammation and healing (interleukin [IL]-1β, IL-6, IL-10) were measured in laryngeal secretions before and after vocal loading and at 4 and 20 hours after treatment.
RESULTS: Proinflammatory mediators IL-1β and IL-6 were doubled in the controls versus the steroid treatment group at 21 hours following induction of acute vocal fold inflammation. Anti-inflammatory cytokine IL-10 showed a 6.3-fold increase in the steroid treatment group versus the controls, indicating anti-inflammatory modulation by steroid treatment.
CONCLUSIONS: This study provides biologic evidence supporting the use of steroids for acute vocal fold inflammation associated with phonotrauma.

Link to article

--(Li, N. Y. K., Chen, F.*, Dikkers, F. G., & Thibeault, S. L.) (2014). Dose-dependent effect of mitomycin C on human vocal fold fibroblasts. Head & Neck, 36(3), 401-410. PMID: 23765508. * Featured article on the cover of the journal.

Abstract:
BACKGROUND: The purpose of this study was to evaluate in vitro cytotoxicity and antifibrotic effects of mitomycin C on normal and scarred human vocal fold fibroblasts.
METHODS: Fibroblasts were subjected to mitomycin C treatment at 0.2, 0.5, or 1 mg/mL, or serum control. Cytotoxicity, immunocytochemistry, and Western blot for collagen I/III were performed at days 0, 1, 3, and 5.
RESULTS: Significant decreases in live cells were measured for mitomycin C-treated cells on days 3 and 5 for all doses. Extracellular staining of collagen I/III was observed in mitomycin C-treated cells across all doses and times. Extracellular staining suggests apoptosis with necrosis, compromising the integrity of cell membranes and release of cytosolic proteins into the extracellular environment. Western blot indicates inhibition of collagen at all doses except 0.2 mg/mL at day 1.
CONCLUSION: A total of 0.2 mg/mL mitomycin C may provide initial and transient stimulation of collagen for necessary repair to damaged tissue without the long-term risk of fibrosis.

Link to article

--(Orbelo, D. M., Li, N. Y. K., & Verdolini, K.) (2014). Lessac-Madsen resonant voice therapy in the treatment of secondary muscle tension dysphonia. In J. C. Stemple & E. R. Hapner (Eds.), Voice therapy: Clinical Studies (4th ed.). San Diego: Plural Publishing.
Dr. Marc Pell
Pell, M.D. (Pell, M.D., Monetta, L., Rothermich, K., Kotz, S.A., Cheang, H.S., & McDonald, S.) (2014). Social perception in adults with Parkinson’s disease. Neuropsychology, 28(6), 905-916.

Abstract:
OBJECTIVE: Our study assessed how nondemented patients with Parkinson's disease (PD) interpret the affective and mental states of others from spoken language (adopt a "theory of mind") in ecologically valid social contexts. A secondary goal was to examine the relationship between emotion processing, mentalizing, and executive functions in PD during interpersonal communication.
METHOD: Fifteen adults with PD and 16 healthy adults completed The Awareness of Social Inference Test, a standardized tool comprised of videotaped vignettes of everyday social interactions (McDonald, Flanagan, Rollins, & Kinch, 2003). Individual subtests assessed participants' ability to recognize basic emotions and to infer speaker intentions (sincerity, lies, sarcasm) from verbal and nonverbal cues, and to judge speaker knowledge, beliefs, and feelings. A comprehensive neuropsychological evaluation was also conducted.
RESULTS: Patients with mild-moderate PD were impaired in the ability to infer "enriched" social intentions, such as sarcasm or lies, from nonliteral remarks; in contrast, adults with and without PD showed a similar capacity to recognize emotions and social intentions meant to be literal. In the PD group, difficulties using theory of mind to draw complex social inferences were significantly correlated with limitations in working memory and executive functioning.
CONCLUSIONS: In early PD, functional compromise of the frontal-striatal-dorsal system yields impairments in social perception and understanding nonliteral speaker intentions that draw upon cognitive theory of mind. Deficits in social perception in PD are exacerbated by a decline in executive resources, which could hamper the strategic deployment of attention to multiple information sources necessary to infer social intentions.

Link to article

--(Rigoulot, S. & Pell, M.D.) (2014). Emotion in the voice influences the way we scan emotional faces. Speech Communication, 65, 36-49.

Abstract: Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is congruent or incongruent with the prosody. Twenty-one participants viewed individual faces expressing fear, sadness, disgust, or happiness while listening to an emotionally-inflected pseudo-utterance spoken in a congruent or incongruent prosody. Participants judged whether the emotional meaning of the face and voice were the same or different (match/mismatch). Results confirm that there were significant effects of prosody congruency on eye movements when participants scanned a face, although these varied by emotion type; a matching prosody promoted more frequent looks to the upper part of fear and sad facial expressions, whereas visual attention to upper and lower regions of happy (and to some extent disgust) faces was more evenly distributed. These data suggest ways that vocal emotion cues guide how humans process facial expressions in a way that could facilitate recognition of salient visual cues, to arrive at a holistic impression of intended meanings during interpersonal events.

Link to article

--(Jiang, X. & Pell, M.D.) (2014). Encoding and decoding confidence information in speech. Speech Prosody 7th International Conference Proceedings, Dublin, Ireland.

Abstract: In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or ‘feeling of knowing’. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330–500 msec and 550–740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980–1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by revealing how a speaker's mental state (i.e., feeling of knowing) is simultaneously inferred from vocal expressions.

Link to article

--(Liu, P. & Pell, M.D.) (2014). Recognizing vocal emotions in Mandarin Chinese: A cross-language comparison. Speech Prosody 7th International Conference Proceedings, Dublin, Ireland.

Abstract: To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu [at] mail.mcgill.ca.

Link to article

--(Rigoulot, S., Fish, K., & Pell, M.D.) (2014). Neural correlates of inferring speaker sincerity from white lies: An event-related potential source localization study. Brain Research, 1565, 48-62.

Abstract: During social interactions, listeners weigh the importance of linguistic and extra-linguistic speech cues (prosody) to infer the true intentions of the speaker in reference to what is actually said. In this study, we investigated what brain processes allow listeners to detect when a spoken compliment is meant to be sincere (true compliment) or not (“white lie”). Electroencephalograms of 29 participants were recorded while they listened to Question–Response pairs, where the response was expressed in either a sincere or insincere tone (e.g., “So, what did you think of my presentation?”/“I found it really interesting.”). Participants judged whether the response was sincere or not. Behavioral results showed that prosody could be effectively used to discern the intended sincerity of compliments. Analysis of temporal and spatial characteristics of event-related potentials (P200, N400, P600) uncovered significant effects of prosody on P600 amplitudes, which were greater in response to sincere versus insincere compliments. Using low resolution brain electromagnetic tomography (LORETA), we determined that the anatomical sources of this activity were likely located in the (left) insula, consistent with previous reports of insular activity in the perception of lies and concealments. These data extend knowledge of the neurocognitive mechanisms that permit context-appropriate inferences about speaker feelings and intentions during interpersonal communication.

Link to article

Dr. Linda Polka
POLKA, L. (Polka, L., Masapollo, M., & Ménard) (2014). Who's talking now? Infant perception of vowels with infant vocal properties. Psychological Science, published online June 2, doi:10.1177/0956797614533571

Abstract: Little is known about infants’ abilities to perceive and categorize their own speech sounds or vocalizations produced by other infants. In the present study, prebabbling infants were habituated to /i/ (“ee”) or /a/ (“ah”) vowels synthesized to simulate men, women, and children, and then were presented with new instances of the habituation vowel and a contrasting vowel on different trials, with all vowels simulating infant talkers. Infants showed greater recovery of interest to the contrasting vowel than to the habituation vowel, which demonstrates recognition of the habituation-vowel category when it was produced by an infant. A second experiment showed that encoding the vowel category and detecting the novel vowel required additional processing when infant vowels were included in the habituation set. Despite these added cognitive demands, infants demonstrated the ability to track vowel categories in a multitalker array that included infant talkers. These findings raise the possibility that young infants can categorize their own vocalizations, which has important implications for early vocal learning.

Link to article

--(Bohn, O.-S., & Polka, L.) (2014). Fast phonetic learning in very young infants: what it shows and what it doesn't show. Frontiers in Psychology, Language Sciences, volume 5, 1-2.

Dr. Susan Rvachew
RVACHEW, S. (Thordardottir, E., Cloutier, G., Ménard, S., Pelland‐Blais, E., & Rvachew, S.) (2014). Monolingual or bilingual intervention for primary language impairment? A randomized control trial. Journal of Speech, Language, and Hearing Research. doi: 10.1044/2014_JSLHR‐L‐13‐0277

Abstract: This study investigated the clinical effectiveness of monolingual versus bilingual language intervention, the latter involving speech-language pathologist–parent collaboration. The study focuses on methods that are currently being recommended and that are feasible within current clinical contexts.

Link to article

--(Rvachew, S. & Rafaat, S.) (2014). Report on benchmark wait times for pediatric speech sound disorders. Canadian Journal of Speech‐Language Pathology and Audiology, 38, 82‐96.

Abstract: The Pan Canadian Alliance of Speech-Language Pathology and Audiology Organizations has developed wait times benchmarks for diagnostic groupings relevant to speech-language pathology and audiology. This report presents the outcome of this endeavor for the Speech Sound Disorder (SSD) diagnosis. The purpose of a wait time benchmark is to provide a credible evidence-based recommendation for a given service (in this case, speech-language pathology assessment and intervention for SSDs), and to clarify the risk factors associated with waiting past the time when the patient’s health is likely to be adversely affected according to clinical consensus and the best available scientific evidence. SSDs are characterized by a high frequency of speech sound errors relative to the child’s age peers, impacting the intelligibility of the child’s speech. SSD often co-occurs with oral and written language impairments. When the SSD persists past the age of school entry, long-term difficulties in the social, emotional, academic and vocational domains can become significant concerns. Fortunately, standard interventions have been shown to be effective when provided with sufficient intensity and duration. The Alliance’s Wait Times Project reviewed this literature and recommended wait times for assessment and intervention, with the most critical period for rapid service being the two-year window prior to school entry. This report provides an example of a collaborative enterprise between academia and clinical practitioners that serves to benefit both consumers and providers of speech, language, and hearing services across the country.

Link to article

--(Rvachew, S., Leroux, É., & Brosseau‐Lapré, F.) (2014). Production of word-initial consonant sequences by francophone preschoolers with a developmental phonological disorder. Canadian Journal of Speech‐Language Pathology and Audiology, 37, 252‐267.

Abstract:
Purpose: The purpose of this pilot study is to describe patterns of word initial consonant sequence errors as produced by 50 francophone children, age 46 to 69 months, who were receiving treatment for a developmental phonological disorder (DPD) in Québec.
Method: The children’s productions of consonant sequences on a single-word test of articulation were coded as correct or incorrect and each error type was classified in relation to the 17 types of error described by Chin and Dinnsen (1992) for English-speaking children. Errors were also described in relation to types of consonant sequences as represented in French phonology.
Results: The description of consonant sequence errors by francophone children revealed similarities and differences in comparison to English-speaking children. A high degree of variability was observed across words and participants.
Conclusion: The need to take into account language-specific developmental norms for phonemes and prosodic structures when planning phonology intervention is highlighted in this study.

Link to article

--(Brosseau‐Lapré, F. & Rvachew, S.)(2014). Cross‐linguistic comparison of speech errors produced by English‐ and French‐speaking preschool‐age children with developmental phonological disorders. International Journal of Speech‐Language Pathology, 16(2), 98‐108.

Abstract: Twenty-four French-speaking children with developmental phonological disorders (DPD) were matched on percentage of consonants correct (PCC)-conversation, age, and receptive vocabulary measures to English-speaking children with DPD in order to describe how speech errors are manifested differently in these two languages. The participants' productions of consonants on a single-word test of articulation were compared in terms of feature-match ratios for the production of target consonants, and type of errors produced. Results revealed that the French-speaking children had significantly lower match ratios for the major sound class features [+ consonantal] and [+ sonorant]. The French-speaking children also obtained significantly lower match ratios for [+ voice]. The most frequent type of errors produced by the French-speaking children was syllable structure errors, followed by segment errors, and a few distortion errors. On the other hand, the English-speaking children made more segment than syllable structure and distortion errors. The results of the study highlight the need to use test instruments with French-speaking children that reflect the phonological characteristics of French at multiple levels of the phonological hierarchy.

Link to article

--(Rvachew, S. & Brosseau‐Lapre, F.) (2014). Pre‐ and post‐treatment production of syllable initial /ʁ/‐clusters by French‐speaking children (pp. 117‐139). In M. Yavas (Ed.), Unusual productions in phonology: universals and language-specific considerations. Psychology Press/Taylor Francis.

--(Rvachew, S. & Tausch, C.) (2014). Otitis media and language development (http://dx.doi.org/10.4135/9781483346441.n135). In P. Brooks, V. Kempe & J.G. Golson (Eds.), Encyclopedia of Language Development. Thousand Oaks, CA: SAGE Reference.

Abstract: Otitis media (OM) includes all conditions involving fluid or inflammation in the middle-ear space. The fluid causes temporary hearing loss, and there is concern about subsequent language impairment because OM is most likely to occur early in life during critical periods for auditory system and speech perception development. Research indicates that the relationship between OM and language development is complex: OM impacts are mediated by the child's access to the language environment via pathways that are both direct (i.e., child hearing and auditory function factors) and indirect (i.e., child attentional and parental input factors). Furthermore, there may be interactions with genetic risk for OM itself as well as speech and language disorders. Research regarding these complex pathways is hampered by the difficulty describing a given child's history of OM exposure. When there is inflammation and infection, the child may have pain or fever and be otherwise unwell, prompting a request ...

Link to article

--(Rvachew, S.) (2014). Developmental phonological disorders (pp. 61‐72). In L. Cummings (Ed.), Handbook of Communication Disorders. Cambridge, UK: Cambridge University Press.

Dr. Karsten Steinhauer
STEINHAUER, K. (Steinhauer, K.) (2014). Event-related potentials (ERPs) in second language research: A brief introduction to the technique, a selected review, and an invitation to reconsider critical periods in L2. Applied Linguistics 35 (4), 393-417.

Abstract: This article provides a selective overview of recent event-related brain potential (ERP) studies in L2 morpho-syntax, demonstrating that the ERP evidence supporting the critical period hypothesis (CPH) may be less compelling than previously thought. The article starts with a general introduction to ERP methodology and language-related ERP profiles in native speakers. The second section presents early ERP studies supporting the CPH, discusses some of their methodological problems, and follows up with data from more recent studies avoiding these problems. It is concluded that well-controlled ERP studies support the convergence hypothesis, according to which L2 learners initially differ from native speakers and then converge on native-like neurocognitive processing mechanisms. The fact that ERPs in late L2 learners at high levels of proficiency are often indistinguishable from those of native speakers suggests that age-of-acquisition effects in SLA are not primarily driven by maturational constraints.

Link to article

--(Molnar, M., Polka, L., Baum, S., & Steinhauer, K.) (2014). Learning two languages from birth shapes the pre-attentive process of speech perception: Electrophysiological correlates of vowel discrimination in monolingual and simultaneous bilinguals. Bilingualism: Language and Cognition, 17, 526-541.

Abstract: Using event-related brain potentials (ERPs), we measured pre-attentive processing involved in native vowel perception as reflected by the mismatch negativity (MMN) in monolingual and simultaneous bilingual (SB) users of Canadian English and Canadian French in response to various pairings of four vowels: English /u/, French /u/, French /y/, and a control /y/. The monolingual listeners exhibited a discrimination pattern that was shaped by their native language experience. The SB listeners, on the other hand, exhibited an MMN pattern that was distinct from both monolingual listener groups, suggesting that the SB pre-attentive system is tuned to access sub-phonemic detail with respect to both input languages, including detail that is not readily accessed by either of their monolingual peers. Additionally, simultaneous bilinguals exhibited sensitivity to language context generated by the standard vowel in the MMN paradigm. The automatic access to fine phonetic detail may aid SB listeners to rapidly adjust their perception to the variable listening conditions that they frequently encounter.

Link to article

--(Kasparian, K., Vespignani, F., & Steinhauer, K.) (2014). The case of a non-native-like first language: ERP evidence of first language (L1) attrition in lexical and morphosyntactic processing. International Journal of Psychophysiology, 94 (2), 159-160.

Abstract: The notion of a critical period for second language learning is controversial; it is unresolved whether maturational constraints on neuroplasticity limit the “native-likeness” of neurocognitive mechanisms underlying L2 processing, or whether other factors (e.g., exposure or proficiency) have a greater impact than age of acquisition on language processing in the brain. First-generation immigrants who move to a new country in adulthood offer insight on this question, as they become highly proficient in the late-acquired L2, while experiencing difficulties or “attrition” in their native L1.

Link to article

--(Steinhauer, K.)(2014). Learning and forgetting languages—An introduction to ERP approaches. International Journal of Psychophysiology 94 (2), 157.

Abstract: Early studies demonstrating systematic ERP differences between first language (L1) and second language (L2) are often cited as ‘hard evidence’ for the ‘critical period hypothesis’ (CPH), postulating that loss of brain plasticity in childhood requires adult L2 learners to rely on different brain mechanisms (Weber-Fox and Neville, 1996). However, recent work has shown that these early studies confounded age of L2 acquisition and L2 proficiency, and that — with increasing L2 proficiency — even adult L2 learners can elicit ERP profiles typical of native speakers, thus casting doubt on the CPH (Steinhauer et al., 2009). While the transition from low to native-like L2 proficiency corresponds to systematic qualitative and quantitative changes in ERPs, the specific trajectories are modulated by multiple factors. These include influences of one's mother tongue (L1 transfer), training environment (classroom versus immersion; Morgan-Short et al., 2012), and variability in psychometric measures (e.g., motivation; Tanner et al., 2014). More recently, L2 research has been complemented by studies investigating the neurocognitive changes of L1 attrition (and the impact of L2 on L1) in immigrants who are beginning to lose their mother tongue. If proficiency is the main predictor of ERP profiles, this population may show ‘native-like’ ERPs in their L2, not their L1. In L1 acquisition, research questions concern the time course of emerging linguistic abilities as reflected by ERPs, including similarities and differences compared to the trajectories seen in L2 learners. One major challenge is to tease apart age effects due to brain maturation and changes reflecting linguistic development.

Link to article

--(Royle, P. & Steinhauer, K.) (2014). Component changes in ERP profiles during language acquisition. International Journal of Psychophysiology 94 (2), 158.

Abstract: First language (L1) acquisition has been characterized as ‘continuous’ and contrasted with ‘dis-continuous’ second language (L2) acquisition in late learners (Clahsen, 1988). Event-related potentials have proven to be an excellent tool to reveal the dynamic neurocognitive changes in language processing as a function of proficiency in L2 (e.g., Osterhout et al., 2006; Steinhauer et al., 2009). ERP evidence for systematic neurocognitive changes in L1 development is surprisingly sparse, but has led to some tentative ERP profiles (e.g., Friederici, 2005) that show similarities to those of L2 learners. An additional problem in interpreting L1 acquisition data is that language proficiency is highly correlated with brain maturation and age, all of which may contribute to changing ERP profiles.

Our audio-visual ERP study tested 52 children aged 5–9 years in conceptual-semantic and grammatical gender mismatch conditions (le/*la poisson vert/*verte 'the.m/f fish green.m/f') embedded in highly controlled and natural sounding spoken French sentences (Courteau et al., 2013). Even in the absence of grammaticality judgments, we observed phonological (frontal), semantic (N400) and morphosyntactic (LAN-P600) ERP components. Offline grammaticality judgment data for the same structures allow us to tease apart age and L1 proficiency effects. Comparisons with our data from adult speakers reveal the impact of factors such as age and proficiency on ERPs in both L1 and L2.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E.) (2014). Effects of exposure on vocabulary, morphosyntax and language processing in typical and impaired language development. In T. Grüter & J. Paradis (Eds.), Input and Experience in Bilingual Development. Philadelphia, PA: John Benjamins: TiLAR (Trends in Language Acquisition Research) series.
--(Thordardottir, E.) (2014). The relationship between bilingual exposure and morphosyntactic development. International Journal of Speech Language Pathology, Early on-line, DOI: 10.3109/17549507.2014.923509

Abstract:
Purpose: The study examined the effect of bilingual input on the grammatical development of bilingual children in comparison to monolingual peers.
Method: Spontaneous language samples were collected in English and French from typically-developing bilingual and monolingual pre-schoolers aged 3 years (n = 56) and 5 years (n = 83). Within each age group, children varied in bilingual exposure patterns but were matched on age, non-verbal cognition, maternal education and language status, speaking two majority languages. Measures included mean length of utterance (MLU) in words and morphemes, and accuracy and diversity of morphological use.
Result: Grammatical development in each language was strongly influenced by amount of same-language experience. Children with equal exposure to both languages scored comparably to monolingual children in both languages, whereas children with unequal exposure evidenced similarly unequal performance across languages and scored significantly lower than monolinguals in their weaker language. Scoring significantly lower than monolinguals in both languages may, therefore, be a sign of language impairment. Each language followed a strongly language-specific sequence of acquisition and error patterns. Five-year-old children with low exposure to English displayed an optional infinitive pattern, a strong clinical marker for Primary Language Impairment in monolingual English-speaking children.
Conclusion: Descriptive normative data are presented that permit more accurate interpretation of bilingual assessment data.

Link to article

--(Mayer-Crittenden, C., Thordardottir, E., Robillard, M., Minor-Corriveau, M., & Bélanger, R.) (2014). Données langagières franco-ontariennes : effets du contexte minoritaire et du bilinguisme. Revue Canadienne d’orthophonie et d’audiologie (Canadian Journal of Speech Language Pathology and Audiology), 38, 304-324.

Abstract: Language assessment of Franco-Ontarian children is a complex task for speech-language pathologists because of a lack of tools and regional norms. This study first replicated, with 26 monolingual Franco-Ontarian children, a Quebec study (Thordardottir, Keheyia, Lessard, Sutton, & Trudeau, 2010). The children in the present study were matched on age (n = 26, mean age = 60.38 months, SD = 5.99), socioeconomic status, and non-verbal cognition; they differed from the Franco-Québécois children in the amount of input received and in the linguistic status of their languages (minority/majority). The study then assessed the performance of same-aged French-dominant bilingual (French-English) children (n = 48, mean age = 59.60 months, SD = 5.73) on the same test battery. These two language groups were defined by level of exposure to each language. Descriptive analyses showed that, on the language measures, monolingual Franco-Ontarians performed less well than Franco-Québécois children, and French-dominant bilinguals performed less well still than monolinguals, calling into question the use of Quebec norms for Franco-Ontarian children. However, a post hoc comparison revealed no significant difference between the Franco-Québécois and monolingual Franco-Ontarian children. In addition, the Ontario and Quebec monolinguals outperformed the French-dominant bilinguals on the expressive and receptive language tasks. The present study represents a step forward, as very few studies in the literature address the assessment of Franco-Ontarian children's language skills; it thus provides preliminary data for these age groups.

Link to article

--(MacLeod, A., Sutton, A., Sylvestre, A., Thordardottir, E., & Trudeau, N.) (2014). Outil de dépistage des troubles du développement des sons de la parole: base théorique et données préliminaires. Canadian Journal of Speech Language Pathology and Audiology, 38, 40-56.

Abstract: The goal of the present study is to present a tool for screening speech sound disorders among French-speaking preschool-aged children. Presently, there are no tools supported by research and normative data available to evaluate consonant production in French-speaking preschool-aged children. The present screening tool consists of 40 words. The preliminary normative data are based on the productions of 243 children aged 20 to 53 months. In addition, a specificity and sensitivity analysis was conducted based on a group of 10 children who were identified as having a speech sound disorder. The results of the present study indicate that this is a promising tool for screening French-speaking children for speech sound disorders.

Link to article

2013

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM,S. (Ménard, L., Toupin, C., Baum, S., Drouin, S., Aubin, J., & Tiede, M.) (2013). Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 134, 2975-2987.

Abstract: In a previous paper [Ménard et al., J. Acoust. Soc. Am. 126, 1406–1414 (2009)], it was demonstrated that, despite enhanced auditory discrimination abilities for synthesized vowels, blind adult French speakers produced vowels that were closer together in the acoustic space than those produced by sighted adult French speakers, suggesting finer control of speech production in the sighted speakers. The goal of the present study is to further investigate the articulatory effects of visual deprivation on vowels produced by 11 blind and 11 sighted adult French speakers. Synchronous ultrasound, acoustic, and video recordings of the participants articulating the ten French oral vowels were made. Results show that sighted speakers produce vowels that are spaced significantly farther apart in the acoustic vowel space than blind speakers. Furthermore, blind speakers use smaller differences in lip protrusion but larger differences in tongue position and shape than their sighted peers to produce rounding and place of articulation contrasts. Trade-offs between lip and tongue positions were examined. Results are discussed in the light of the perception-for-action control theory.

Link to article

Dr. Meghan Clayards
CLAYARDS,M. (Brosseau-Lapré, F., Rvachew, S., Clayards, M., Dickson, D.) (2013). Stimulus variability and perceptual learning of non-native vowel categories. Applied Psycholinguistics. 34 (3), 419-441 doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/ə/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

Dr. Laura Gonnerman
GONNERMAN,L. (Blais, M-J., & Gonnerman, L.M.) (2013). Explicit and implicit semantic processing of verb-particle constructions by French-English bilinguals. Bilingualism: Language and Cognition, 16, 829-846.

Abstract: Verb–particle constructions are a notoriously difficult aspect of English to acquire for second-language (L2) learners. The present study investigated whether L2 English speakers are sensitive to gradations in semantic transparency of verb–particle constructions (e.g., finish up vs. chew out). French–English bilingual participants (first language: French, second language: English) completed an off-line similarity ratings survey, as well as an on-line masked priming task. Results of the survey showed that bilinguals’ similarity ratings became more native-like as their English proficiency levels increased. Results from the masked priming task showed that response latencies from high, but not low-proficiency bilinguals were similar to those of monolinguals, with mid- and high-similarity verb–particle/verb pairs (e.g., finish up/finish) producing greater priming than low-similarity pairs (e.g., chew out/chew). Taken together, the results suggest that L2 English speakers develop both explicit and implicit understanding of the semantic properties of verb–particle constructions, which approximates the sensitivity of native speakers as English proficiency increases.

Link to article

--(Rvachew, S., *Marquis, A., *Brosseau-Lapré, F., *Paul, M., Royle, P., Gonnerman, L.M.) (2013). Speech articulation performance of francophone children in the early school years: Norming of the Test de Dépistage Francophone de Phonologie. Clinical Linguistics and Phonetics, 27, 950-968.

Abstract: Good quality normative data are essential for clinical practice in speech-language pathology but are largely lacking for French-speaking children. We investigated speech production accuracy by French-speaking children attending kindergarten (maternelle) and first grade (première année). The study aimed to provide normative data for a new screening test – the Test de Dépistage Francophone de Phonologie. Sixty-one children named 30 pictures depicting words selected to be representative of the distribution of phonemes, syllable shapes and word lengths characteristic of Québec French. Percent consonants’ correct was approximately 90% and did not change significantly with age although younger children produced significantly more syllable structure errors than older children. Given that the word set reflects the segmental and prosodic characteristics of spoken Québec French, and that ceiling effects were not observed, these results further indicate that phonological development is not complete by the age of seven years in French-speaking children.

Link to article

--(Kolne, K.L.D., *Hill, K.J., & Gonnerman, L.M.) (2013). The role of morphology in spelling: Long-term effects of training. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society (pp. 2766-2771). Austin, TX: Cognitive Science Society.

Abstract: We directly compared the effectiveness of a spelling intervention focused on morphological structure with one that emphasized the meanings of complex words, to differentiate their relative contributions to spelling acquisition in grade 3 and grade 5. We found that the morphology intervention provided a greater improvement than the vocabulary intervention, especially for children in grade 5. To compare the long-term effects of the two interventions, we tested the children’s spelling ability six-months after the conclusion of the intervention program. Results show that both grades maintain an increase in spelling accuracy compared to their pre-intervention performance. Additionally, the children in grade 5 who received morphological instruction retained more spelling knowledge than those who received the vocabulary instruction. These results suggest that teaching children about the structure of complex words supports their spelling ability in the long-term, providing evidence for the important role of morphological knowledge in literacy development.

Link to article

Dr. Vincent Gracco
GRACCO,V.L. (Tremblay, P., Deschamps, I., & Gracco, V.L.) (2013). Regional heterogeneity in the processing and the production of speech in the human planum temporale. Cortex, 49: 143-157.

Abstract:
INTRODUCTION:
The role of the left planum temporale (PT) in auditory language processing has been a central theme in cognitive neuroscience since the first descriptions of its leftward neuroanatomical asymmetry. While it is clear that PT contributes to auditory language processing there is still some uncertainty about its role in spoken language production.

METHODS:
Here we examine activation patterns of the PT for speech production, speech perception and single word reading to address potential hemispheric and regional functional specialization in the human PT. To this aim, we manually segmented the left and right PT in three non-overlapping regions (medial, lateral and caudal PT) and examined, in two complementary experiments, the contribution of exogenous and endogenous auditory input on PT activation under different speech processing and production conditions.

RESULTS:
Our results demonstrate that different speech tasks are associated with different regional functional activation patterns of the medial, lateral and caudal PT. These patterns are similar across hemispheres, suggesting bilateral processing of the auditory signal for speech at the level of PT.

CONCLUSIONS:
Results of the present studies stress the importance of considering the anatomical complexity of the PT in interpreting fMRI data.

Link to article

--(Beal, D., Gracco, V.L., Brettschneider, J., Kroll, R.M., & De Nil, L.) (2013). A voxel-based morphometry (VBM) analysis of regional grey and white matter volume abnormalities within the speech production network of children who stutter. Cortex, 49: 2151-2161.

Abstract: It is well documented that neuroanatomical differences exist between adults who stutter and their fluently speaking peers. Specifically, adults who stutter have been found to have more grey matter volume (GMV) in speech relevant regions including inferior frontal gyrus, insula and superior temporal gyrus (Beal et al., 2007; Song et al., 2007). Despite stuttering having its onset in childhood only one study has investigated the neuroanatomical differences between children who do and do not stutter. Chang et al. (2008) reported children who stutter had less GMV in the bilateral inferior frontal gyri and middle temporal gyrus relative to fluently speaking children. Thus it appears that children who stutter present with unique neuroanatomical abnormalities as compared to those of adults who stutter. In order to better understand the neuroanatomical correlates of stuttering earlier in its development, near the time of onset, we used voxel-based morphometry to examine volumetric differences between 11 children who stutter and 11 fluent children. Children who stutter had less GMV in the bilateral inferior frontal gyri and left putamen but more GMV in right Rolandic operculum and superior temporal gyrus relative to fluent children. Children who stutter also had less white matter volume bilaterally in the forceps minor of the corpus callosum. We discuss our findings of widespread anatomic abnormalities throughout the cortical network for speech motor control within the context of the speech motor skill limitations identified in people who stutter (Namasivayam and van Lieshout, 2008; Smits-Bandstra et al., 2006).

Link to article

--(Sato, M., Troille, E., Menard, L., Cathiard, M-A., Gracco, V.L.) (2013). Silent articulation modulates auditory and audiovisual speech perception. Experimental Brain Research, doi: 10.1007/s00221-013-3510-8.

Abstract: The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds, when embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.

Link to article

--(Smits-Bandstra, S., Gracco, V.L.) (2013). Verbal Implicit Sequence Learning in Persons who Stutter and Persons with Parkinson’s Disease. Journal of Motor Behavior, 45(5): 381-393.

Abstract: The authors investigated the integrity of implicit learning systems in 14 persons with Parkinson's disease (PPD), 14 persons who stutter (PWS), and 14 control participants. In a 120-min session participants completed a verbal serial reaction time task, naming aloud 4 syllables in response to 4 visual stimuli. Unbeknownst to participants, the syllables formed a repeating 8-item sequence. PWS and PPD demonstrated slower reaction times for early but not late learning trials relative to controls reflecting delays but not deficiencies in general learning. PPD also demonstrated less accuracy in general learning relative to controls. All groups demonstrated similar limited explicit sequence knowledge. Both PWS and PPD demonstrated significantly less implicit sequence learning relative to controls, suggesting that stuttering may be associated with compromised functional integrity of the cortico-striato-thalamo-cortical loop.

Link to article

--(Grabski, K., Tremblay, P., Gracco, V.L., Girin, L., Granjon, L., Sato, M.) (2013). A mediating role of the auditory dorsal pathway in selective adaptation to speech: a state-dependent transcranial magnetic stimulation study. Brain Research, 1515: 55-65.

Abstract: In addition to sensory processing, recent neurobiological models of speech perception postulate the existence of a left auditory dorsal processing stream, linking auditory speech representations in the auditory cortex with articulatory representations in the motor system, through sensorimotor interaction interfaced in the supramarginal gyrus and/or the posterior part of the superior temporal gyrus. The present state-dependent transcranial magnetic stimulation study is aimed at determining whether speech recognition is indeed mediated by the auditory dorsal pathway, by examining the causal contribution of the left ventral premotor cortex, supramarginal gyrus and posterior part of the superior temporal gyrus during an auditory syllable identification/categorization task. To this aim, participants listened to a sequence of /ba/ syllables before undergoing a two forced-choice auditory syllable decision task on ambiguous syllables (ranging in the categorical boundary between /ba/ and /da/). Consistent with previous studies on selective adaptation to speech, following adaptation to /ba/, participants' responses were biased towards /da/. In contrast, in a control condition without prior auditory adaptation no such bias was observed. Crucially, compared to the results observed without stimulation, single-pulse transcranial magnetic stimulation delivered at the onset of each target stimulus interacted with the initial state of each of the stimulated brain areas by enhancing the adaptation effect. These results demonstrate that the auditory dorsal pathway contributes to auditory speech adaptation.

Link to article

--(Arnaud, L., Sato, M., Menard, L., Gracco, V.L.) (2013). Speech adaptation reveals enhanced neural processing in the associative occipital and parietal cortex of congenitally blind adults. PLoS ONE 8(5): e64553. doi:10.1371/journal.pone.0064553.

Abstract: In the congenitally blind (CB), sensory deprivation results in cross-modal plasticity, with visual cortical activity observed for various auditory tasks. This reorganization has been associated with enhanced auditory abilities and the recruitment of visual brain areas during sound and language processing. The questions we addressed are whether visual cortical activity might also be observed in CB during passive listening to auditory speech and whether cross-modal plasticity is associated with adaptive differences in neuronal populations compared to sighted individuals (SI). We focused on the neural substrate of vowel processing in CB and SI adults using a repetition suppression (RS) paradigm. RS has been associated with enhanced or accelerated neural processing efficiency and synchronous activity between interacting brain regions. We evaluated whether cortical areas in CB were sensitive to RS during repeated vowel processing and whether there were differences across the two groups. In accordance with previous studies, both groups displayed a RS effect in the posterior temporal cortex. In the blind, however, additional occipital, temporal and parietal cortical regions were associated with predictive processing of repeated vowel sounds. The findings suggest a more expanded role for cross-modal compensatory effects in blind persons during sound and speech processing and a functional transfer of specific adaptive properties across neural regions as a consequence of sensory deprivation at birth.

Link to article

--(Mollaei, F., Shiller, D., Gracco, V.L.) (2013). Sensorimotor adaptation of speech in Parkinson’s disease. Movement Disorders, doi: 10.1002/mds.25588.

Abstract: The basal ganglia are involved in establishing motor plans for a wide range of behaviors. Parkinson's disease (PD) is a manifestation of basal ganglia dysfunction associated with a deficit in sensorimotor integration and difficulty in acquiring new motor sequences, thereby affecting motor learning. Previous studies of sensorimotor integration and sensorimotor adaptation in PD have focused on limb movements using visual and force-field alterations. Here, we report the results from a sensorimotor adaptation experiment investigating the ability of PD patients to make speech motor adjustments to a constant and predictable auditory feedback manipulation. Participants produced speech while their auditory feedback was altered and maintained in a manner consistent with a change in tongue position. The degree of adaptation was associated with the severity of motor symptoms. The patients with PD exhibited adaptation to the induced sensory error; however, the degree of adaptation was reduced compared with healthy, age-matched control participants. The reduced capacity to adapt to a change in auditory feedback is consistent with reduced gain in the sensorimotor system for speech and with previous studies demonstrating limitations in the adaptation of limb movements after changes in visual feedback among patients with PD.

Link to article

--(Klepousniotou, E., Gracco, V.L., Pike, G.B.) (2013). Pathways to lexical ambiguity: fMRI evidence for bilateral fronto-parietal involvement in language processing. Brain & Language, 123:11-21.

Abstract: Numerous functional neuroimaging studies reported increased activity in the pars opercularis and the pars triangularis (Brodmann’s areas 44 and 45) of the left hemisphere during the performance of linguistic tasks. The role of these areas in the right hemisphere in language processing is not understood and, although there is evidence from lesion studies that the right hemisphere is involved in the appreciation of semantic relations, no specific anatomical substrate has yet been identified. This event-related functional magnetic resonance imaging study compared brain activity during the performance of language processing trials in which either dominant or subordinate meaning activation of ambiguous words was required. The results show that the ventral part of the pars opercularis both in the left and the right hemisphere is centrally involved in language processing. In addition, they highlight the bilateral co-activation of this region with the supramarginal gyrus of the inferior parietal lobule during the processing of this type of linguistic material. This study, thus, provides the first evidence of co-activation of Broca’s region and the inferior parietal lobule, succeeding in further specifying the relative contribution of these cortical areas to language processing.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., & Shaw, H.) (Published online 18 Dec 2012). Acoustic marking of prominence: How do preadolescent speakers with and without high-functioning autism mark contrast in an interactive task? Language and Cognitive Processes.

Abstract: The acoustic correlates of discourse prominence have garnered much interest in recent adult psycholinguistics work, and the relative contributions of amplitude, duration and pitch to prominence have also been explored in research with young children. In this study, we bridge these two age groups by examining whether specific acoustic features are related to the discourse function of marking contrastive stress by preadolescent speakers, via speech obtained in a referential communication task that presented situations of explicit referential contrast. In addition, we broach the question of listener-oriented versus speaker-internal factors in the production of contrastive stress by examining both speakers who are developing typically and those with high-functioning autism (HFA). Diverging from conventional expectations and early reports, we found that speakers with HFA, like their typically developing peers (TYP), appropriately marked prominence in the expected location, on the pre-nominal adjective, in instructions such as “Pick up the BIG cup”. With respect to the use of specific acoustic features, both groups of speakers employed amplitude and duration to mark the contrastive element, whereas pitch was not produced selectively to mark contrast by either group. However, the groups also differed in their relative reliance on acoustic features, with HFA speakers relying less consistently on amplitude than TYP speakers, and TYP speakers relying less consistently on duration than HFA speakers. In summary, the production of contrastive stress was found to be globally similar across groups, with fine-grained differences in the acoustic features employed to do so. These findings are discussed within a developmental framework of the production of acoustic features for marking discourse prominence, and with respect to the variations among speakers with autism spectrum disorders that may lead to appropriate production of contrastive stress.

Link to article

-- (Bang, J., Burns, J. & Nadig, A.) (2013). Conveying subjectivity in conversation: Mental state terms and personal narratives in typical development and children with high functioning autism. Journal of Autism and Developmental Disorders, 43 (7), 1732-1740.

Abstract: Mental state terms and personal narratives are conversational devices used to communicate subjective experience in conversation. Pre-adolescents with high-functioning autism (HFA, n = 20) were compared with language-matched typically-developing peers (TYP, n = 17) on production of mental state terms (i.e., perception, physiology, desire, emotion, cognition) and personal narratives (sequenced retelling of life events) during short conversations. HFA and TYP participants did not differ in global use of mental state terms, nor did they exhibit reduced production of cognitive terms in particular. Participants with HFA produced significantly fewer personal narratives. They also produced a smaller proportion of their mental state terms during personal narratives. These findings underscore the importance of assessing and developing qualitative aspects of conversation in highly verbal individuals with autism.

Link to article

--(Bani Hani, H., Gonzalez-Barrero, A. & Nadig, A.) (2013). Children’s referential understanding of novel words and parent labelling behaviours: similarities across children with and without autism spectrum disorders. Journal of Child Language, 40 (5), 971-1002.

Abstract: This study examined two facets of the use of social cues for early word learning in parent–child dyads, where children had an Autism Spectrum Disorder (ASD) or were typically developing. In Experiment 1, we investigated word learning and generalization by children with ASD (age range: 3;01–6;02) and typically developing children (age range: 1;02–4;09) who were matched on language ability. In Experiment 2, we examined verbal and non-verbal parental labeling behaviors. First, we found that both groups were similarly able to learn a novel label using social cues alone, and to generalize this label to other representations of the object. Children who utilized social cues for word learning had higher language levels. Second, we found that parental cues used to introduce object labels were strikingly similar across groups. Moreover, parents in both groups adapted labeling behavior to their child's language level, though this surfaced in different ways across groups.

Link to article

Dr. Marc Pell
PELL, M.D. (Garrido-Vásquez, P., Pell, M.D., Paulmann, S., Strecker, K., Schwarz, J., & Kotz, S.A.) (2013). An ERP study of vocal emotion processing in asymmetric Parkinson's disease. Social, Cognitive and Affective Neuroscience, 8 (8), 918-927.

Abstract: Parkinson's disease (PD) has been related to impaired processing of emotional speech intonation (emotional prosody). One distinctive feature of idiopathic PD is motor symptom asymmetry, with striatal dysfunction being strongest in the hemisphere contralateral to the most affected body side. It is still unclear whether this asymmetry may affect vocal emotion perception. Here, we tested 22 PD patients (10 with predominantly left-sided [LPD] and 12 with predominantly right-sided motor symptoms) and 22 healthy controls in an event-related potential study. Sentences conveying different emotional intonations were presented in lexical and pseudo-speech versions. Task varied between an explicit and an implicit instruction. Of specific interest was emotional salience detection from prosody, reflected in the P200 component. We predicted that patients with predominantly right-striatal dysfunction (LPD) would exhibit P200 alterations. Our results support this assumption. LPD patients showed enhanced P200 amplitudes, and specific deficits were observed for disgust prosody, explicit anger processing and implicit processing of happy prosody. Lexical speech was predominantly affected while the processing of pseudo-speech was largely intact. P200 amplitude in patients correlated significantly with left motor scores and asymmetry indices. The data suggest that emotional salience detection from prosody is affected by asymmetric neuronal degeneration in PD.

Link to article

-- (Rigoulot, S., *Wassiliwizky, E., & Pell, M.D.) (2013). Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition. Frontiers in Psychology, 4, 1-14. doi: 10.3389/fpsyg.2013.00367.

Abstract: Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400-1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.

Link to article

Dr. Linda Polka
POLKA, L. (Nazzi, T., Mersad, K., Sundara, M., Iakimova, G., & Polka, L.) (2013). Early word segmentation in infants acquiring Parisian French: task-dependent and dialect-specific aspects. Journal of Child Language, 1-24.

Abstract: Six experiments explored Parisian French-learning infants' ability to segment bisyllabic words from fluent speech. The first goal was to assess whether bisyllabic word segmentation emerges later in infants acquiring European French compared to other languages. The second goal was to determine whether infants learning different dialects of the same language have partly different segmentation abilities, and whether segmenting a non-native dialect has a cost. Infants were tested on standard European or Canadian French stimuli, in the word-passage or passage-word order. Our study first establishes an early onset of segmentation abilities: Parisian infants segment bisyllabic words at age 0;8 in the passage-word order only (revealing a robust order of presentation effect). Second, it shows that there are differences in segmentation abilities across Parisian and Canadian French infants, and that there is a cost for cross-dialect segmentation for Parisian infants. We discuss the implications of these findings for understanding word segmentation processes.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Marquis, A., Brosseau‐Lapré, F., Royle, P., Paul, M., & Gonnerman, L. M.) (2013). Speech articulation performance of francophone children in the early school years: Norming of the Test de Dépistage Francophone de Phonologie. Clinical Linguistics & Phonetics, Early Online, 1‐19. doi:10.3109/02699206.2013.830149.

Abstract: Good quality normative data are essential for clinical practice in speech-language pathology but are largely lacking for French-speaking children. We investigated speech production accuracy by French-speaking children attending kindergarten (maternelle) and first grade (première année). The study aimed to provide normative data for a new screening test – the Test de Dépistage Francophone de Phonologie. Sixty-one children named 30 pictures depicting words selected to be representative of the distribution of phonemes, syllable shapes and word lengths characteristic of Québec French. Percent consonants correct was approximately 90% and did not change significantly with age, although younger children produced significantly more syllable structure errors than older children. Given that the word set reflects the segmental and prosodic characteristics of spoken Québec French, and that ceiling effects were not observed, these results further indicate that phonological development is not complete by the age of seven years in French-speaking children.

Link to article

--(Brosseau‐Lapré, F., & Rvachew, S.) (2013). Cross‐linguistic comparison of speech errors produced by English‐ and French‐speaking preschool age children with developmental phonological disorders. International Journal of Speech‐Language Pathology, Early Online, 1‐11.

Abstract: Twenty-four French-speaking children with developmental phonological disorders (DPD) were matched on percentage of consonants correct (PCC)-conversation, age, and receptive vocabulary measures to English-speaking children with DPD in order to describe how speech errors are manifested differently in these two languages. The participants' productions of consonants on a single-word test of articulation were compared in terms of feature-match ratios for the production of target consonants, and type of errors produced. Results revealed that the French-speaking children had significantly lower match ratios for the major sound class features [+ consonantal] and [+ sonorant]. The French-speaking children also obtained significantly lower match ratios for [+ voice]. The most frequent type of errors produced by the French-speaking children was syllable structure errors, followed by segment errors, and a few distortion errors. On the other hand, the English-speaking children made more segment than syllable structure and distortion errors. The results of the study highlight the need to use test instruments with French-speaking children that reflect the phonological characteristics of French at multiple levels of the phonological hierarchy.

Link to article

--(Brosseau‐Lapré, F., Rvachew, S., Clayards, M. & Dickson, D.) (2013). Stimulus variability and perceptual learning of non‐native vowel categories. Applied Psycholinguistics, 34, 419‐441. doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/ə/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (Royle, P., Drury, J. E., & Steinhauer, K.) (2013). ERPs and task effects in the auditory processing of gender agreement and semantics in French. The Mental Lexicon, 8(2), 216‐244.

Abstract: We investigated task effects on violation ERP responses to Noun-Adjective gender mismatches and lexical/conceptual semantic mismatches in a combined auditory/visual paradigm in French. Participants listened to sentences while viewing pictures of objects. This paradigm was designed to investigate language processing in special populations (e.g., children) who may not be able to read or to provide stable behavioural judgment data. Our main goal was to determine how ERP responses to our target violations might differ depending on whether participants performed a judgment task (Task) versus listening for comprehension (No-Task). Characterizing the influence of the presence versus absence of judgment tasks on violation ERP responses allows us to meaningfully interpret data obtained using this paradigm without a behavioural task and relate them to judgment-based paradigms in the ERP literature. We replicated previously observed ERP patterns for semantic and gender mismatches, and found that the task especially affected the later P600 component.

Link to article

--(Nickels, S., Opitz, B., & Steinhauer, K.) (2013). ERPs show that classroom‐instructed late second language learners rely on the same prosodic cues in syntactic parsing as native speakers. Neuroscience Letters, 557, 107‐111.

Abstract: The loss of brain plasticity after a 'critical period' in childhood has often been argued to prevent late language learners from using the same neurocognitive mechanisms as native speakers and, therefore, from attaining a high level of second language (L2) proficiency [7,11]. However, more recent behavioral and electrophysiological research has challenged this 'Critical Period Hypothesis', demonstrating that even late L2 learners can display native-like performance and brain activation patterns [17], especially after longer periods of immersion in an L2 environment. Here we use event-related potentials (ERPs) to show that native-like processing can also be observed in the largely under-researched domain of speech prosody - even when L2 learners are exposed to their second language almost exclusively in a classroom setting. Participants listened to spoken sentences whose prosodic boundaries would either cooperate or conflict with the syntactic structure. Previous work had shown that this paradigm is difficult for elderly native speakers, however, German L2 learners of English showed very similar ERP components for on-line prosodic phrasing as well as for prosody-syntax mismatches (garden path effects) as the control group of native speakers. These data suggest that L2 immersion is not always necessary to master complex L2 speech processing in a native-like way.

Link to article

--(Bowden, H. W., Steinhauer, K., Sanz, C., & Ullman, M. T.) (2013). Native‐like brain processing of syntax can be attained by university foreign language learners. Neuropsychologia, 51(13), 2492‐2511.

Abstract: Using event-related potentials (ERPs), we examined the neurocognition of late-learned second language (L2) Spanish in two groups of typical university foreign-language learners (as compared to native (L1) speakers): one group with only one year of college classroom experience, and low-intermediate proficiency (L2 Low), and another group with over three years of college classroom experience as well as 1–2 semesters of immersion experience abroad, and advanced proficiency (L2 Advanced). Semantic violations elicited N400s in all three groups, whereas syntactic word-order violations elicited LAN/P600 responses in the L1 and L2 Advanced groups, but not the L2 Low group. Indeed, the LAN and P600 responses were statistically indistinguishable between the L1 and L2 Advanced groups. The results support and extend previous findings. Consistent with previous research, the results suggest that L2 semantic processing always depends on L1-like neurocognitive mechanisms, whereas L2 syntactic processing initially differs from L1, but can shift to native-like processes with sufficient proficiency or exposure, and perhaps with immersion experience in particular. The findings further demonstrate that substantial native-like brain processing of syntax can be achieved even by typical university foreign-language learners.

Link to article

--(Courteau, E., Royle, P., Gascon, A., Marquis, A., Drury, J.E., & Steinhauer, K.) (2013). Gender concord and semantic processing in French children: An auditory ERP study. In S. Baiz, N. Goldman & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (Vol. 1, pp. 87‐99). Boston: Cascadilla.

Abstract: The present study used event-related brain potentials (ERPs) to investigate language processing in young children, focusing on gender agreement (determiner-noun and noun-adjective) and conceptual semantics in French. Electrophysiological measurement techniques provide a valuable addition to our methodological toolkit for studying agreement processing in this population, in particular concerning noun-adjective agreement (concord), since other traditional sources of data have tended to be uninformative. Although children arguably exhibit systematic constraints on their linguistic behavior, this is not always evident in the laboratory (e.g., where task demands may mask the presence of linguistic knowledge) or in investigations of child language corpora. For example, although French-speaking children seem to master adjective and determiner concord early on, productive use of gender-marked adjectives is not clearly supported in the corpus, where determiner use predominates (Valois & Royle, 2009), or in elicitation, where idiosyncratic gender marking on adjectives may result in variable mastery of feminine forms (Royle & Valois, 2010). Here we report on an auditory/visual ERP study that shows that the processing of gender agreement can be reliably tapped in young French children.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E. & Brandeker, M.) (2013). The effect of bilingual exposure versus language impairment on nonword repetition and sentence imitation scores. Journal of Communication Disorders, 46, 1-16.

Abstract: Purpose:
Nonword repetition (NWR) and sentence imitation (SI) are increasingly used as diagnostic tools for the identification of Primary Language Impairment (PLI). They may be particularly promising diagnostic tools for bilingual children if performance on them is not highly affected by bilingual exposure. Two studies were conducted which examined (1) the effect of amount of bilingual exposure on performance on French and English nonword repetition and sentence imitation in 5-year-old French-English bilingual children and (2) the diagnostic accuracy of the French versions of these measures and of receptive vocabulary in 5-year-old monolingual French-speakers and bilingual speakers with and without PLI, carefully matched on language exposure.

Method:
Study 1 included 84 5-year-olds acquiring French and English simultaneously, differing in their amount of exposure to the two languages but equated on age, nonverbal cognition and socio-economic status. Children were administered French and English tests of NWR and SI. In Study 2, monolingual and bilingual children with and without PLI (four groups, n = 14 per group) were assessed for NWR, SI, and receptive vocabulary in French to determine diagnostic accuracy.

Results:
Study 1: Both processing measures, but in particular NWR, were less affected by previous exposure than vocabulary measures. Bilingual children with varying levels of exposure were unaffected by the length of nonwords. Study 2: In contrast to receptive vocabulary, NWR and SI correctly distinguished children with PLI from children with typical development (TD) regardless of bilingualism. Sensitivity levels were acceptable, but specificity was lower.

Conclusions:
Bilingual children perform differently than children with PLI on NWR and SI. In contrast to children with PLI, bilingual children with a large range of previous exposure levels achieve high NWR scores and are unaffected by the length of the nonwords.

Link to article

2012

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Bélanger, N., Mayberry, R., & Baum, S.) (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16, 263-285.

Abstract: Deaf people often achieve low levels of reading skills. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers is not yet fully supported in the literature. We investigated skilled and less skilled adult deaf readers’ use of orthographic and phonological codes in reading. Experiment 1 used a masked priming paradigm to investigate automatic use of these codes during visual word processing. Experiment 2 used a serial recall task to determine whether orthographic and phonological codes are used to maintain words in memory. Skilled hearing, skilled deaf, and less skilled deaf readers used orthographic codes during word recognition and recall, but only skilled hearing readers relied on phonological codes during these tasks. It is important to note that skilled and less skilled deaf readers performed similarly in both tasks, indicating that reading difficulties in deaf adults may not be linked to the activation of phonological codes during reading.

Link to article

-- (Zatorre, R. & Baum, S.) (2012). Musical melody and speech intonation: Singing a different tune? PLoS Biology, 10(7): e1001372. doi:10.1371/journal.pbio.1001372.

Abstract: Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L. (Gonnerman, L.M.) (2012). The roles of efficiency and complexity in the processing of verb particle constructions. Journal of Speech Sciences, 2, 3-31.

Abstract: Recent theories have proposed that processing difficulty affects both individuals’ choice of grammatical structures and the distribution of these structures across languages of the world (Hawkins, 2004). Researchers have proposed that performance constraints, such as efficiency, integration, and storage costs, drive languages to choose word orders that minimize processing demands for individual speakers (Hawkins, 1994; Gibson, 2000). This study investigates whether three performance factors, adjacency, dependency, and complexity, affect reading times of sentences with verb-particle constructions. Results indicate that it is more difficult to process dependent verb-particles in shifted sentences that contain more complex intervening noun phrases. These findings demonstrate how performance factors interact and how the relative weight of each affects processing. The results also support the notion that processing ease affects grammaticalization, such that those structures which are more easily processed by individuals (subject relatives and adjacent dependent constituents) are more common across languages (Keenan & Hawkins, 1987).

Link to article

-- (Blais, M-J., & Gonnerman, L.M.) (2012). The role of semantic transparency in the processing of verb particle constructions by French-English bilinguals. In N. Miyake, D. Peebles, & R.P. Cooper (Eds.), Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society (pp. 1338-1343). Austin, TX: Cognitive Science Society.

Abstract: Verb-particle constructions (phrasal verbs) are a notoriously difficult aspect of English to acquire for second-language (L2) learners. This study was conducted to assess whether L2 English speakers would show sensitivity to the subtle semantic properties of these constructions, namely the gradations in semantic transparency of different verb-particle constructions (e.g., finish up vs. chew out). L1 French, L2 English bilingual participants completed an off-line (explicit) survey of similarity ratings, as well as an on-line (implicit) masked priming task. Bilinguals showed less agreement in their off-line ratings of semantic similarity, but their ratings were generally similar to those of monolinguals. On the masked priming task, the more proficient bilinguals showed a pattern of effects parallel to monolinguals, indicating similar sensitivity to semantic similarity at an implicit level. These findings suggest that the properties of verb-particle constructions can be both implicitly and explicitly grasped by L2 speakers whose L1 lacks phrasal verbs.

Link to article

-- (Marquis, A., Royle, P., Gonnerman, L. & Rvachew, S.) (2012). La conjugaison du verbe en début de scolarisation. Travaux interdisciplinaires sur la parole et le langage, 28, 2-13.

Abstract: We evaluated 35 Québec French children on their ability to produce regular, sub-regular, and irregular passé composé verb forms (ending in -é, -i, -u or other). An elicitation task was administered to children attending preschool or first grade. Target verbs were presented, along with images representing them, in infinitive (e.g., Marie va cacher ses poupées ‘Mary aux.pres. hide-inf. her dolls’ = ‘Mary will hide her dolls’) and present tense (e.g., Marie cache toujours ses poupées ‘Mary hide-3s. always her dolls’ = ‘Mary always hides her dolls’) contexts, in order to prime the appropriate inflectional ending. Children were asked to produce target verb forms in the passé composé (perfect past) by answering the question ‘What did he/she do yesterday?’. Results show no reduction of erroneous productions or error types with age. Response patterns highlight morphological pattern frequency effects, in addition to productivity and reliability effects, on children’s mastery of French conjugation. These data have consequences for psycholinguistic models of regular and irregular morphology processing and acquisition.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Klepousniotou E, Pike GB, Steinhauer K, Gracco VL) (2012). Not all ambiguous words are created equal: An EEG investigation of homonymy and polysemy. Brain & Language, 123(1): 1-7.

Abstract: Event-related potentials (ERPs) were used to investigate the time-course of meaning activation of different types of ambiguous words. Unbalanced homonymous ("pen"), balanced homonymous ("panel"), metaphorically polysemous ("lip"), and metonymically polysemous words ("rabbit") were used in a visual single-word priming delayed lexical decision task. The theoretical distinction between homonymy and polysemy was reflected in the N400 component. Homonymous words (balanced and unbalanced) showed effects of dominance/frequency with reduced N400 effects predominantly observed for dominant meanings. Polysemous words (metaphors and metonymies) showed effects of core meaning representation with both dominant and subordinate meanings showing reduced N400 effects. Furthermore, the division within polysemy, into metaphor and metonymy, was supported. Differences emerged in meaning activation patterns with the subordinate meanings of metaphor inducing differentially reduced N400 effects moving from left hemisphere electrode sites to right hemisphere electrode sites, potentially suggesting increased involvement of the right hemisphere in the processing of figurative meaning.

Link to article

-- (Beal D, Gracco VL, Brettschneider J, Kroll RM, DeNil L) (2012). A voxel-based morphometry (VBM) analysis of regional grey and white matter volume abnormalities within the speech production network of children who stutter. Cortex. doi: 10.1016/j.cortex.2012.08.013

Abstract: It is well documented that neuroanatomical differences exist between adults who stutter and their fluently speaking peers. Specifically, adults who stutter have been found to have more grey matter volume (GMV) in speech-relevant regions including inferior frontal gyrus, insula and superior temporal gyrus (Beal et al., 2007; Song et al., 2007). Despite stuttering having its onset in childhood, only one study has investigated the neuroanatomical differences between children who do and do not stutter. Chang et al. (2008) reported children who stutter had less GMV in the bilateral inferior frontal gyri and middle temporal gyrus relative to fluently speaking children. Thus it appears that children who stutter present with unique neuroanatomical abnormalities as compared to those of adults who stutter. In order to better understand the neuroanatomical correlates of stuttering earlier in its development, near the time of onset, we used voxel-based morphometry to examine volumetric differences between 11 children who stutter and 11 fluent children. Children who stutter had less GMV in the bilateral inferior frontal gyri and left putamen but more GMV in right Rolandic operculum and superior temporal gyrus relative to fluent children. Children who stutter also had less white matter volume bilaterally in the forceps minor of the corpus callosum. We discuss our findings of widespread anatomic abnormalities throughout the cortical network for speech motor control within the context of the speech motor skill limitations identified in people who stutter (Namasivayam and van Lieshout, 2008; Smits-Bandstra et al., 2006).

Link to article

Dr. Aparna Nadig
NADIG, A. (Bourguignon, N., Nadig, A. & Valois, D.) (2012). The Biolinguistics of Autism: Emergent Perspectives. Biolinguistics, 6 (2), 124-165.

Abstract: This contribution attempts to import the study of autism into the biolinguistics program by reviewing the current state of knowledge on its neurobiology, physiology and verbal phenotypes from a comparative vantage point. A closer look at alternative approaches to the primacy of social cognition impairments in autism spectrum disorders suggests fundamental differences in every aspect of language comprehension and production, suggesting productive directions of research in auditory and visual speech processing as well as executive control. Strong emphasis is put on the great heterogeneity of autism phenotypes, raising important caveats towards an all-or-nothing classification of autism. The study of autism brings interesting clues about the nature and evolution of language, in particular its ontological connections with musical and visual perception as well as executive functions and generativity. Success in this endeavor hinges upon expanding beyond the received wisdom of autism as a purely social disorder and favoring a “cognitive style” approach increasingly called for both inside and outside the autistic community.

Link to article

-- (Nadig, A. & Shaw, H.) (2012). Expressive prosody in high-functioning autism: Increased pitch range and what it means to listeners. Journal of Autism and Developmental Disorders, 42 (4), 499-511.

Abstract: Are there consistent markers of atypical prosody in speakers with high functioning autism (HFA) compared to typically-developing speakers? We examined: (1) acoustic measurements of pitch range, mean pitch and speech rate in conversation, (2) perceptual ratings of conversation for these features and overall prosody, and (3) acoustic measurements of speech from a structured task. Increased pitch range was found in speakers with HFA during both conversation and structured communication. In global ratings listeners rated speakers with HFA as having atypical prosody. Although the HFA group demonstrated increased acoustic pitch range, listeners did not rate speakers with HFA as having increased pitch variation. We suggest that the quality of pitch variation used by speakers with HFA was non-conventional and thus not registered as such by listeners.

Link to article

Dr. Marc Pell
PELL, M.D. (Liu, P. & Pell, M.D.) (2012). Recognizing vocal emotions in Mandarin Chinese: A validated database of Chinese vocal emotional stimuli. Behavior Research Methods, 44, 1042-1051.

Abstract: To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) was produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu [at] mail.mcgill.ca.

Link to article

-- (Schwartz, R. & Pell, M.D.) (2012). Emotional speech processing at the intersection of prosody and semantics. PLoS ONE, 7 (10): e47279.

Abstract: The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the relative contributions of processing utterances with single-channel (prosody-only) versus multi-channel (prosody and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel as opposed to single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosody and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing.

Link to article

-- (Pell, M.D., Robin, J., & Paulmann, S.) (2012). How quickly do listeners recognize emotional prosody in their native versus a foreign language? Speech Prosody 6th International Conference Proceedings, Shanghai, China.

Abstract: This study investigated whether the recognition of emotions from speech prosody occurs in a similar manner and has a similar time course when adults listen to their native language versus a foreign language. Native English listeners were presented emotionally-inflected pseudo-utterances produced in English or Hindi which had been gated to different time durations (200, 400, 500, 600, 700 ms). Analyses examined how accurately participants recognized emotions in each language condition, whether particular emotions could be identified from shorter time segments, and whether this was influenced by language experience. Results demonstrated that listeners recognized emotions reliably in both their native and in a foreign language; however, they demonstrated an advantage in accuracy and speed to detect some, but not all, emotions in the native language condition.

Link to article

-- (Rigoulot, S. & Pell, M.D.) (2012). Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces. PLoS ONE, 7 (1): e30740.

Abstract: Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

Link to article

-- (Paulmann, S., Titone, D., & Pell, M.D.) (2012). How emotional prosody guides your way: evidence from eye movements. Speech Communication, 54, 92-107.

Abstract: This study investigated cross-modal effects of emotional voice tone (prosody) on face processing during instructed visual search. Specifically, we evaluated whether emotional prosodic cues in speech have a rapid, mandatory influence on eye movements to an emotionally-related face, and whether these effects persist as semantic information unfolds. Participants viewed an array of six emotional faces while listening to instructions spoken in an emotionally congruent or incongruent prosody (e.g., “Click on the happy face” spoken in a happy or angry voice). The duration and frequency of eye fixations were analyzed when only prosodic cues were emotionally meaningful (pre-emotional label window: “Click on the/…”), and after emotional semantic information was available (post-emotional label window: “…/happy face”). In the pre-emotional label window, results showed that participants made immediate use of emotional prosody, as reflected in significantly longer frequent fixations to emotionally congruent versus incongruent faces. However, when explicit semantic information in the instructions became available (post-emotional label window), the influence of prosody on measures of eye gaze was relatively minimal. Our data show that emotional prosody has a rapid impact on gaze behavior during social information processing, but that prosodic meanings can be overridden by semantic cues when linguistic information is task relevant.

Link to article

-- (Jaywant, A. & Pell, M.D.) (2012). Categorical processing of negative emotions from speech prosody. Speech Communication, 54, 1-10.

Abstract: Everyday communication involves processing nonverbal emotional cues from auditory and visual stimuli. To characterize whether emotional meanings are processed with category-specificity from speech prosody and facial expressions, we employed a cross-modal priming task (the Facial Affect Decision Task; Pell, 2005a) using emotional stimuli with the same valence but that differed by emotion category. After listening to angry, sad, disgusted, or neutral vocal primes, subjects rendered a facial affect decision about an emotionally congruent or incongruent face target. Our results revealed that participants made fewer errors when judging face targets that conveyed the same emotion as the vocal prime, and responded significantly faster for most emotions (anger and sadness). Surprisingly, participants responded slower when the prime and target both conveyed disgust, perhaps due to attention biases for disgust-related stimuli. Our findings suggest that vocal emotional expressions with similar valence are processed with category specificity, and that discrete emotion knowledge implicitly affects the processing of emotional faces between sensory modalities.

Link to article

Dr. Linda Polka
POLKA, L. (Nazzi, T., Goyet, L., Sundara, M. & Polka, L.) (2012). Différences linguistiques et dialectales dans la mise en place des procédures de segmentation de la parole. Enfance, 127-146.

Abstract: This paper presents a review of recent studies investigating the issue of the early segmentation of continuous speech into words, a step in language acquisition that is a prerequisite for lexical acquisition. After having underlined the importance of this issue, we present studies having explored young infants’ use of two major segmentation cues: distributional cues and rhythmic unit cues. The first cue is considered to be non-specific to the language spoken in the infant’s environment, while the second cue differs across languages. The first cue thus predicts similar developmental trajectories for segmentation across languages, while the second cue predicts different types of developmental trajectories according to the rhythmic type of the language in acquisition. It was found that segmentation abilities emerge around 8 months of age and develop during the months that follow, and that the weight of the different cues varies across languages, according to the developmental period, and probably as a function of dialectal differences within a given language. We then discuss the fact that word form segmentation in all likelihood requires the combined use of different segmentation cues from the youngest age. We conclude by delineating some pending issues to be addressed in future research.

Link to article

-- (Polka, L. & Sundara, M.) (2012). Word segmentation in monolingual infants acquiring Canadian English and Canadian French: Native language, cross-language, and cross-dialect comparisons. Infancy, 17(2), 198-232.

Abstract: In five experiments, we tested segmentation of word forms from natural speech materials by 8-month-old monolingual infants who are acquiring Canadian French or Canadian English. These two languages belong to different rhythm classes; Canadian French is syllable-timed and Canadian English is stress-timed. Findings of Experiments 1, 2, and 3 show that 8-month-olds acquiring either Canadian French or Canadian English can segment bisyllabic words in their native language. Thus, word segmentation is not inherently more difficult in a syllable-timed compared to a stress-timed language. Experiment 4 shows that Canadian French-learning infants can segment words in European French. Experiment 5 shows that neither Canadian French- nor Canadian English-learning infants can segment bisyllabic words in the other language. Thus, segmentation abilities of 8-month-olds acquiring either a stress-timed or syllable-timed language are language specific.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Marquis, A., Royle, P., Gonnerman, L. & Rvachew, S.) (2012). La conjugaison du verbe en début de scolarisation. Travaux interdisciplinaires sur la parole et le langage, 28, 2-13.

Abstract: We evaluated 35 Québec French children on their ability to produce regular, sub-regular, and irregular passé composé verb forms (ending in -é, -i, -u or other). An elicitation task was administered to children attending preschool or first grade. Target verbs were presented, along with images representing them, in infinitive (e.g., Marie va cacher ses poupées ‘Mary aux.pres. hide-inf. her dolls’ = ‘Mary will hide her dolls’) and present tense (e.g., Marie cache toujours ses poupées ‘Mary hide-3s. always her dolls’ = ‘Mary always hides her dolls’) contexts, in order to prime the appropriate inflectional ending. Children were asked to produce target verb forms in the passé composé (perfect past) by answering the question ‘What did he/she do yesterday?’. Results show no reduction of erroneous productions or error types with age. Response patterns highlight morphological pattern frequency effects, in addition to productivity and reliability effects, on children’s mastery of French conjugation. These data have consequences for psycholinguistic models of regular and irregular morphology processing and acquisition.

Link to article

-- (Rvachew, S. & Brosseau-Lapré, F.) (2012). An input-focused intervention for children with developmental phonological disorders. Perspectives on Language Learning and Education, 19, 31-35.

Abstract: In this article, we consider recent advances in theory and practice related to developmental phonological disorders (DPD). We consider the benefits of structured speech input to address DPD and provide a summary of a recent study designed to address phonological disorders in children using input-focused intervention. Results revealed that input-focused intervention produced gains similar to those of intervention focused on speech production practice. We then discuss clinical implications.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (White, E.J., Genesee, F., & Steinhauer, K.) (2012). Brain Responses Before and After Intensive Second Language Learning: Proficiency Based Changes and First Language Background Effects in Adult Learners. PLoS ONE, 7(12), e52318.

Abstract: This longitudinal study tracked the neuro-cognitive changes associated with second language (L2) grammar learning in adults in order to investigate how L2 processing is shaped by a learner’s first language (L1) background and L2 proficiency. Previous studies using event-related potentials (ERPs) have argued that late L2 learners cannot elicit a P600 in response to L2 grammatical structures that do not exist in the L1 or that are different in the L1 and L2. We tested whether the neuro-cognitive processes underlying this component become available after intensive L2 instruction. Korean and Chinese late L2 learners of English were tested at the beginning and end of a 9-week intensive English-L2 course. ERPs were recorded while participants read English sentences containing violations of regular past tense (a grammatical structure that operates differently in Korean and does not exist in Chinese). Whereas no P600 effects were present at the start of instruction, by the end of instruction, significant P600s were observed for both L1 groups. Latency differences in the P600 exhibited by Chinese and Korean speakers may be attributed to differences in L1–L2 reading strategies. Across all participants, larger P600 effects at session 2 were associated with (1) higher levels of behavioural performance on an online grammaticality judgment task and (2) correct, rather than incorrect, behavioural responses. These findings suggest that the neuro-cognitive processes underlying the P600 (e.g., “grammaticalization”) are modulated by individual levels of L2 behavioural performance and learning.

Link to article

-- (Royle, P., Drury, J.E., Bourguignon, N., & Steinhauer, K.) (2012). The temporal dynamics of inflected word recognition: A masked ERP priming study of French verbs. Neuropsychologia, 50, 3542–3553. Doi: 10.1016/j.neuropsychologia.2012.09.007

Abstract: Morphological aspects of human language processing have been suggested by some to be reducible to the combination of orthographic and semantic effects, while others propose that morphological structure is represented separately from semantics and orthography and involves distinct neuro-cognitive processing mechanisms. Here we used event-related brain potentials (ERPs) to investigate semantic, morphological and formal (orthographic) processing conjointly in a masked priming paradigm. We directly compared morphological to both semantic and formal/orthographic priming (shared letters) on verbs. Masked priming was used to reduce strategic effects related to prime perception and to suppress semantic priming effects. The three types of priming led to distinct ERP and behavioral patterns: semantic priming was not found, while formal and morphological priming resulted in diverging ERP patterns. These results are consistent with models of lexical processing that make reference to morphological structure. We discuss how they fit in with the existing literature and how unresolved issues could be addressed in further studies.

Link to article

-- (Klepousniotou E, Pike GB, Steinhauer K, Gracco VL) (2012). Not all ambiguous words are created equal: An EEG investigation of homonymy and polysemy. Brain & Language, 123(1): 1-7.

Abstract: Event-related potentials (ERPs) were used to investigate the time-course of meaning activation of different types of ambiguous words. Unbalanced homonymous ("pen"), balanced homonymous ("panel"), metaphorically polysemous ("lip"), and metonymically polysemous words ("rabbit") were used in a visual single-word priming delayed lexical decision task. The theoretical distinction between homonymy and polysemy was reflected in the N400 component. Homonymous words (balanced and unbalanced) showed effects of dominance/frequency with reduced N400 effects predominantly observed for dominant meanings. Polysemous words (metaphors and metonymies) showed effects of core meaning representation with both dominant and subordinate meanings showing reduced N400 effects. Furthermore, the division within polysemy, into metaphor and metonymy, was supported. Differences emerged in meaning activation patterns with the subordinate meanings of metaphor inducing differentially reduced N400 effects moving from left hemisphere electrode sites to right hemisphere electrode sites, potentially suggesting increased involvement of the right hemisphere in the processing of figurative meaning.

Link to article

-- (Bourguignon, N., Drury, J.E., Valois, D., & Steinhauer, K.) (2012). Decomposing animacy reversals between Agents and Experiencers: An ERP study. Brain and Language, 122, 179- 189.

Abstract: The present study aimed to refine current hypotheses regarding thematic reversal anomalies, which have been found to elicit either N400 or – more frequently – “semantic-P600” (sP600) effects. Our goal was to investigate whether distinct ERP profiles reflect aspectual-thematic differences between Agent-Subject Verbs (ASVs; e.g., ‘to eat’) and Experiencer-Subject Verbs (ESVs; e.g., ‘to love’) in English. Inanimate subject noun phrases created reversal anomalies on both ASV and ESV. Animacy-based prominence effects and semantic association were controlled to minimize their contribution to any ERP effects. An N400 was elicited by the target verb in the ESV but not the ASV anomalies, supporting the hypothesis of a distinctive aspectual-thematic structure between ESV and ASV. Moreover, the N400 finding for English ESV shows that, in contrast to previous claims, the presence versus absence of N400s for this kind of anomaly cannot be exclusively explained in terms of typological differences across languages.

Link to article

-- (Morgan-Short, K., Steinhauer, K., Sanz, C., & Ullman, M.T.) (2012). Explicit and implicit second language training differentially affect the achievement of native-language brain patterns. Journal of Cognitive Neuroscience, 24 (4), 933-947.

Abstract: It is widely believed that adults cannot learn a foreign language in the same way that children learn a first language. However, recent evidence suggests that adult learners of a foreign language can come to rely on native-like language brain mechanisms. Here, we show that the type of language training crucially impacts this outcome. We used an artificial language paradigm to examine longitudinally whether explicit training (that approximates traditional grammar-focused classroom settings) and implicit training (that approximates immersion settings) differentially affect neural (electrophysiological) and behavioral (performance) measures of syntactic processing. Results showed that performance of explicitly and implicitly trained groups did not differ at either low or high proficiency. In contrast, electrophysiological (ERP) measures revealed striking differences between the groups' neural activity at both proficiency levels in response to syntactic violations. Implicit training yielded an N400 at low proficiency, whereas at high proficiency, it elicited a pattern typical of native speakers: an anterior negativity followed by a P600 accompanied by a late anterior negativity. Explicit training, by contrast, yielded no significant effects at low proficiency and only an anterior positivity followed by a P600 at high proficiency. Although the P600 is reminiscent of native-like processing, this response pattern as a whole is not. Thus, only implicit training led to an electrophysiological signature typical of native speakers. Overall, the results suggest that adult foreign language learners can come to rely on native-like language brain mechanisms, but that the conditions under which the language is learned may be crucial in attaining this goal.

Link to article

-- (Steinhauer, K. & Drury, J.E.) (2012). On the early left-anterior negativity (ELAN) in syntax studies. Brain and Language, 120 (2), 135-162.

Abstract: Within the framework of Friederici's (2002) neurocognitive model of sentence processing, the early left anterior negativity (ELAN) in event-related potentials (ERPs) has been claimed to be a brain marker of syntactic first-pass parsing. As ELAN components seem to be exclusively elicited by word category violations (phrase structure violations), they have been taken as strong empirical support for syntax-first models of sentence processing and have gained considerable impact on psycholinguistic theory in a variety of domains. The present article reviews relevant ELAN studies and raises a number of serious issues concerning the reliability and validity of the findings. We also discuss how baseline problems and contextual factors can contribute to early ERP effects in studies examining word category violations. We conclude that, despite the apparent wealth of ELAN data, the functional significance of these findings remains largely unclear. The present paper does not claim to have falsified the existence of ELANs or syntax-related early frontal negativities. However, by separating facts from myths, the paper attempts to make a constructive contribution to how future ERP research in the area of syntax processing may better advance our understanding of online sentence comprehension.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Elin Thordardottir & Anna Gudrun Juliusdottir) (2012). Icelandic as a second language: A longitudinal study of language knowledge and processing by school-age children. International Journal of Bilingual Education and Bilingualism, 1-25. DOI:10.1080/13670050.2012.693062.

Abstract: School-age children (n=39) acquiring Icelandic as a second language were tested yearly over three years on Icelandic measures of language knowledge and language processing. Comparison with native speaker norms revealed large and significant differences for the great majority of the children. Those who scored within the normal monolingual range had a mean length of residence (LOR) of close to 8 years and had arrived in the country at an early age. Raw test scores revealed significant improvement across test times. However, learning did not proceed fast enough for the gap relative to native speakers to diminish over time. Effects of age at arrival and LOR were difficult to tease apart. However, children arriving in the country in adolescence performed consistently less well than children with the same LOR arriving in mid childhood. In spite of low scores on standardized tests of language knowledge, the L2 learners scored uniformly high on an Icelandic test of nonword repetition. The acquisition of Icelandic as an L2 appears to occur at a slower rate than the L2 acquisition of English. This may be related to the grammatical complexity of Icelandic as well as to its low global economic value.

Link to article

2011

Shari Baum, Ph.D., Professor
Meghan Clayards, Ph.D., Assistant Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Thibeault, M., Baum, S., Ménard, L., Richard, G., & McFarland, D.) (2011). Articulatory movements during speech adaptation to palatal perturbation. Journal of the Acoustical Society of America, 129, 2112-2120.

Abstract: Previous work has established that speakers have difficulty making rapid compensatory adjustments in consonant production (especially in fricatives) for structural perturbations of the vocal tract induced by artificial palates with thicker-than-normal alveolar regions. The present study used electromagnetic articulography and simultaneous acoustic recordings to estimate tongue configurations during production of [s š t k] in the presence of a thin and a thick palate, before and after a practice period. Ten native speakers of English participated in the study. In keeping with previous acoustic studies, fricatives were more affected by the palate than were the stops. The thick palate lowered the center of gravity; the jaw was lower, and the tongue moved further backwards and downwards. Center of gravity measures revealed complete adaptation after training, and with practice, subjects decreased interlabial distance. The fact that adaptation effects were found for [k], which is produced with an articulatory gesture not directly impeded by the palatal perturbation, suggests a more global sensorimotor recalibration that extends beyond the specific articulatory target.

Link to article

-- (Pauker, E., Itzhak, I., Baum, S. R., & Steinhauer, K.) (2011). Co-operating and conflicting prosody in spoken English garden path sentences: Evidence from event-related potentials. Journal of Cognitive Neuroscience, 23, 2731-2751.

Abstract: In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

Link to article

Dr. Meghan Clayards
CLAYARDS, M. (Niebuhr, O., Clayards, M., Meunier, C., & Lancia, L.) (2011) On place assimilation within sibilant sequences – comparing French and English. Journal of Phonetics, 39, 429-451.

Abstract: Two parallel acoustic analyses were performed for French and English sibilant sequences, based on comparably structured read-speech corpora. They comprised all sequences of voiced and voiceless alveolar and postalveolar sibilants that can occur across word boundaries in the two languages, as well as the individual alveolar and postalveolar sibilants, combined with preceding or following labial consonants across word boundaries. The individual sibilants provide references in order to determine type and degree of place assimilation in the sequences. Based on duration and centre-of-gravity measurements that were taken for each sibilant and sibilant sequence, we found clear evidence for place assimilation not only for English, but also for French. In both languages the assimilation manifested itself gradually in the time as well as in the frequency domain. However, while in English assimilation occurred strictly regressively and primarily towards postalveolar, French assimilation was solely towards postalveolar, but in both regressive and progressive directions. Apart from these basic differences, the degree of assimilation in French and English was independent of simultaneous voice assimilation but varied considerably between the individual speakers. Overall, the context-dependent and speaker-specific assimilation patterns match well with previous findings.

Link to article

-- (Bejjanki, V.R., Clayards, M., Knill, D.C., & Aslin, R.N.) (2011) Cue integration in categorical tasks: insights from audio-visual speech perception. PLoS ONE, 6(5): e19812. doi:10.1371/journal.pone.0019812

Abstract: Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.

Link to article

-- (Clayards, M., & Doty, E.) (2011) Automatic analysis of sibilant assimilation in English. Proceedings of Acoustics week in Canada. Canadian Acoustics 39(3), 194-195
 
Dr. Vincent Gracco
GRACCO, V. (Shum M., Shiller D., Baum S., & Gracco V.) (2011) Sensorimotor integration for speech motor learning involves the inferior parietal cortex. European Journal of Neuroscience, 34(11), 1817-1822.

Abstract: Sensorimotor integration is important for motor learning. The inferior parietal lobe, through its connections with the frontal lobe and cerebellum, has been associated with multisensory integration and sensorimotor adaptation for motor behaviors other than speech. In the present study, the contribution of the inferior parietal cortex to speech motor learning was evaluated using repetitive transcranial magnetic stimulation (rTMS) prior to a speech motor adaptation task. Subjects' auditory feedback was altered in a manner consistent with the auditory consequences of an unintended change in tongue position during speech production, and adaptation performance was used to evaluate sensorimotor plasticity and short-term learning. Prior to the feedback alteration, rTMS or sham stimulation was applied over the left supramarginal gyrus (SMG). Subjects who underwent the sham stimulation exhibited a robust adaptive response to the feedback alteration whereas subjects who underwent rTMS exhibited a diminished adaptive response. The results suggest that the inferior parietal region, in and around SMG, plays a role in sensorimotor adaptation for speech. The interconnections of the inferior parietal cortex with inferior frontal cortex, cerebellum and primary sensory areas suggest that this region may be an important component in learning and adapting sensorimotor patterns for speech.

Link to article

-- (Tremblay P., Deschamps I., & Gracco,V.L.) (2011) Regional heterogeneity in the processing and the production of speech in the human planum temporale. Cortex, doi: 10.1016/j.cortex.2011.09.004

Abstract:
INTRODUCTION: The role of the left planum temporale (PT) in auditory language processing has been a central theme in cognitive neuroscience since the first descriptions of its leftward neuroanatomical asymmetry. While it is clear that PT contributes to auditory language processing there is still some uncertainty about its role in spoken language production.

METHODS: Here we examine activation patterns of the PT for speech production, speech perception and single word reading to address potential hemispheric and regional functional specialization in the human PT. To this aim, we manually segmented the left and right PT in three non-overlapping regions (medial, lateral and caudal PT) and examined, in two complementary experiments, the contribution of exogenous and endogenous auditory input on PT activation under different speech processing and production conditions.

RESULTS: Our results demonstrate that different speech tasks are associated with different regional functional activation patterns of the medial, lateral and caudal PT. These patterns are similar across hemispheres, suggesting bilateral processing of the auditory signal for speech at the level of PT.

CONCLUSIONS: Results of the present studies stress the importance of considering the anatomical complexity of the PT in interpreting fMRI data.

Link to article

-- (Beal D., Quraan M., Cheyne D., Taylor M., Gracco V.L., & DeNil L.) (2011) Speech-induced suppression of evoked auditory fields in children who stutter. NeuroImage. 54(4), 2994-3003.

Abstract: Auditory responses to speech sounds that are self-initiated are suppressed compared to responses to the same speech sounds during passive listening. This phenomenon is referred to as speech-induced suppression, a potentially important feedback-mediated speech-motor control process. In an earlier study, we found that both adults who do and do not stutter demonstrated a reduced amplitude of the auditory M50 and M100 responses to speech during active production relative to passive listening. It is unknown if auditory responses to self-initiated speech-motor acts are suppressed in children or if the phenomenon differs between children who do and do not stutter. As stuttering is a developmental speech disorder, examining speech-induced suppression in children may identify possible neural differences underlying stuttering close to its time of onset. We used magnetoencephalography to determine the presence of speech-induced suppression in children and to characterize the properties of speech-induced suppression in children who stutter. We examined the auditory M50 as this was the earliest robust response reproducible across our child participants and the most likely to reflect a motor-to-auditory relation. Both children who do and do not stutter demonstrated speech-induced suppression of the auditory M50. However, children who stutter had a delayed auditory M50 peak latency to vowel sounds compared to children who do not stutter indicating a possible deficiency in their ability to efficiently integrate auditory speech information for the purpose of establishing neural representations of speech sounds.

Link to article

-- (Feng Y., Gracco V.L., & Max L.) (2011) Integration of auditory and somatosensory error signals in the neural control of speech movements. Journal of Neurophysiology, 106(2), 667-679.

Abstract: We investigated auditory and somatosensory feedback contributions to the neural control of speech. In Task I, sensorimotor adaptation was studied by perturbing one of these sensory modalities or both modalities simultaneously. The first formant frequency (F1) in the auditory feedback was shifted up by a real-time processor and/or the extent of jaw opening was increased or decreased with a force field applied by a robotic device. All eight subjects lowered F1 to compensate for the up-shifted F1 in the feedback signal regardless of whether or not the jaw was perturbed. Adaptive changes in subjects' acoustic output resulted from adjustments in articulatory movements of the jaw or tongue. Adaptation in jaw opening extent in response to the mechanical perturbation occurred only when no auditory feedback perturbation was applied or when the direction of adaptation to the force was compatible with the direction of adaptation to a simultaneous acoustic perturbation. In Tasks II and III, subjects' auditory and somatosensory precision and accuracy were estimated. Correlation analyses showed that the relationships (a) between F1 adaptation extent and auditory acuity for F1 and (b) between jaw position adaptation extent and somatosensory acuity for jaw position were weak and statistically not significant. Taken together, the combined findings from this work suggest that, in speech production, sensorimotor adaptation updates the underlying control mechanisms in such a way that the planning of vowel-related articulatory movements takes into account a complex integration of error signals from previous trials but likely with a dominant role for the auditory modality.

Link to article

Dr. Marc Pell
PELL, M.D. & Kotz, S.A. (2011). On the time course of vocal emotion recognition. PLoS ONE, 6 (11): e27256. doi: 10.1371/journal.pone.0027256.

Abstract: How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the "identification point" for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.

Link to article

-- (Jesso, S., Morlog, D., Ross, S., Pell, M.D., Pasternak, S., Mitchell, D., Kertesz, A., & Finger, E.) (2011). The effects of oxytocin on social cognition and behaviour in frontotemporal dementia. Brain, 134, 2493-2501.

Abstract: Patients with behavioural variant frontotemporal dementia demonstrate abnormalities in behaviour and social cognition, including deficits in emotion recognition. Recent studies suggest that the neuropeptide oxytocin is an important mediator of social behaviour, enhancing prosocial behaviours and some aspects of emotion recognition across species. The objective of this study was to assess the effects of a single dose of intranasal oxytocin on neuropsychiatric behaviours and emotion processing in patients with behavioural variant frontotemporal dementia. In a double-blind, placebo-controlled, randomized cross-over design, 20 patients with behavioural variant frontotemporal dementia received one dose of 24 IU of intranasal oxytocin or placebo and then completed emotion recognition tasks known to be affected by frontotemporal dementia and by oxytocin. Caregivers completed validated behavioural ratings at 8 h and 1 week following drug administrations. A significant improvement in scores on the Neuropsychiatric Inventory was observed on the evening of oxytocin administration compared with placebo and compared with baseline ratings. Oxytocin was also associated with reduced recognition of angry facial expressions by patients with behavioural variant frontotemporal dementia. Together these findings suggest that oxytocin is a potentially promising, novel symptomatic treatment candidate for patients with behavioural variant frontotemporal dementia and that further study of this neuropeptide in frontotemporal dementia is warranted.

Link to article

-- (Pell, M.D., Jaywant, A., Monetta, L., & Kotz, S.A.) (2011). Emotional speech processing: disentangling the effects of prosody and semantic cues. Cognition & Emotion, 25 (5), 834-853.

Abstract: To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody-semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

Link to article

-- (Paulmann, S. & Pell, M.D.) (2011). Is there an advantage for recognizing multi-modal emotional stimuli? Motivation and Emotion, 35 (2), 192-201.

Abstract: Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2011). Recognizing sarcasm without language: A cross-linguistic study of English and Cantonese. Pragmatics & Cognition, 19 (2), 203-223. (Special issue on “Prosody and Humour”.)

Abstract: The goal of the present research was to determine whether certain speaker intentions conveyed through prosody in an unfamiliar language can be accurately recognized. English and Cantonese utterances expressing sarcasm, sincerity, humorous irony, or neutrality through prosody were presented to English and Cantonese listeners unfamiliar with the other language. Listeners identified the communicative intent of utterances in both languages in a crossed design. Participants successfully identified sarcasm spoken in their native language but identified sarcasm at near-chance levels in the unfamiliar language. Both groups were relatively more successful at recognizing the other attitudes when listening to the unfamiliar language (in addition to the native language). Our data suggest that while sarcastic utterances in Cantonese and English share certain acoustic features, these cues are insufficient to recognize sarcasm between languages; rather, this ability depends on (native) language experience.

Link to article

Dr. Linda Polka
POLKA, L. (Best, C., Bradlow, A., Guion, S., & Polka, L.) (2011). Using the lens of phonetic experience to resolve phonological forms. Journal of Phonetics, 39, 453-455.

Abstract: This special issue of the Journal contains a selection of papers developed from original presentations at the 2nd ASA Special Workshop on Speech with the theme of Cross-Language Speech Perception and Variations in Linguistic Experience. The papers represent major theoretical and empirical contributions that converge upon the common theme of how our perception of phonological forms is guided and constrained by our experience with the phonetic details of the language(s) we have learned. Several of the papers presented here offer key theoretical advances and lay out novel or newly expanded frameworks that increase our understanding of speech perception as shaped by universal, first language acquisition abilities, general learning mechanisms, and language-specific perceptual tuning. Others offer careful empirical investigations of language learning by simultaneous bilinguals, as well as by later second language learners, and discuss their new findings in light of the theoretical proposals. The work presented here will provide a stimulating and thoughtful impetus toward further progress on the fundamentally significant issue of understanding how language experience shapes our perception of phonetic details and phonological structure in spoken language.

Link to article

-- (Polka, L., & Bohn, O-S.) (2011). Natural Referent Vowel (NRV) framework: An emerging view of early phonetic development, Journal of Phonetics, 39, 467-478.

Abstract: The aim of this paper is to provide an overview of an emerging new framework for understanding early phonetic development: the Natural Referent Vowel (NRV) framework. The initial support for this framework was the finding that directional asymmetries occur often in infant vowel discrimination. The asymmetries point to an underlying perceptual bias favoring vowels that fall closer to the periphery of the F1/F2 vowel space. In Polka and Bohn (2003) we reviewed the data on asymmetries in infant vowel perception and proposed that certain vowels act as natural referent vowels and play an important role in shaping vowel perception. In this paper we review findings from studies of infant and adult vowel perception that have emerged since Polka and Bohn (2003), from other labs and from our own work, and we formally introduce the NRV framework. We outline how this framework connects with linguistic typology and other models of speech perception and discuss the challenges and promise of NRV as a conceptual tool for advancing our understanding of phonetic development.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (MacLeod, A.A.N., Laukys, K., & Rvachew, S.) (2011). The impact of bilingual language learning on whole-word complexity and segmental accuracy among children aged 18 and 36 months. International Journal of Speech-Language Pathology, 13(6), 490-499.

Abstract: This study investigates the phonological acquisition of 19 monolingual English children and 21 English-French bilingual children at 18 and 36 months. It contributes to the understanding of age-related changes to phonological complexity and to differences due to bilingual language development. In addition, preliminary normative data is presented for English children and English-French bilingual children. Five measures were targeted to represent a range of indices of phonological development: the phonological mean length of utterance (pMLU) of the adult target, the pMLU produced by the child, the proportion of whole-word proximity (PWP), proportion of consonants correct (PCC), and proportion of whole words correct (PWC). The measures of children's productions showed improvements from 18 to 36 months; however, the rate of change varied across the measures, with PWP improving faster, then PCC, and finally PWC. The results indicated that bilingual children can keep pace with their monolingual peers at both 18 months and 36 months of age, at least in their dominant language. Based on these findings, discrepancies with monolingual phonological development that one might observe in a bilingual child's non-dominant language could be explained by reduced exposure to the language rather than a general slower acquisition of phonology.

Link to article

-- (Rvachew, S., Mattock, K., Clayards, M., Chiang, P., & Brosseau-Lapré, F.) (2011). Perceptual considerations in multilingual adult and child speech acquisition (pp. 58-68). In S. McLeod & B.A. Goldstein (Eds.), Multilingual Aspects of Speech Sound Disorders in Children. Bristol, UK: Multilingual Matters.

Book description: Multilingual Aspects of Speech Sound Disorders in Children explores both multilingual and multicultural aspects of children with speech sound disorders. The 30 chapters have been written by 44 authors from 16 different countries about 112 languages and dialects. The book is designed to translate research into clinical practice. It is divided into three sections: (1) Foundations, (2) Multilingual speech acquisition, (3) Speech-language pathology practice. An introductory chapter discusses cross-linguistic and multilingual aspects of speech sound disorders in children. Subsequent chapters address speech sound acquisition, how the disorder manifests in different languages, cultural contexts, and speakers, and address diagnosis, assessment and intervention. The research chapters synthesize available research across a wide range of languages. A unique feature of this book is the set of chapters that translate research into clinical practice, providing real-life vignettes for specific geographical or linguistic contexts.

Book information:
ISBN: 9781847695123

Link to book

-- (Brosseau-Lapré, F., Rvachew, S., Clayards, M. & Dickson, D.) (2011). Stimulus variability and perceptual learning of non-native vowel categories. Applied Psycholinguistics, doi:10.1017/S0142716411000750

Abstract: English-speakers' learning of a French vowel contrast (/?/–/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far (from the category boundary), and variable close (to the boundary) conditions, each in a single talker or a multiple talker version. The control condition involved identification of gender appropriate grammatical elements. Pre- and posttraining measures of vowel perception and production were obtained from each participant. When assessing pre- to posttraining changes in the slope of the identification functions, statistically significant training effects were observed in the multiple voice far and multiple voice close conditions.

Link to article

-- (Rvachew, S. & Brosseau-Lapré, F.) (2011). Preschoolers with phonological disorders learn language and literacy skills in 12 weeks. Communiqué, 25(3), 18-19.
Dr. Karsten Steinhauer
STEINHAUER, K. (Hwang, H. & Steinhauer, K.) (2011). Phrase length matters: The interplay between implicit prosody and syntax in Korean ‘garden path’ sentences. Journal of Cognitive Neuroscience, 23 (11), 3555-3575. (doi:10.1162/jocn_a_00001)

Abstract: In spoken language comprehension, syntactic parsing decisions interact with prosodic phrasing, which is directly affected by phrase length. Here we used ERPs to examine whether a similar effect holds for the on-line processing of written sentences during silent reading, as suggested by theories of "implicit prosody." Ambiguous Korean sentence beginnings with two distinct interpretations were manipulated by increasing the length of sentence-initial subject noun phrases (NPs). As expected, only long NPs triggered an additional prosodic boundary reflected by a closure positive shift (CPS) in ERPs. When sentence materials further downstream disambiguated the initially dispreferred interpretation, the resulting P600 component reflecting processing difficulties ("garden path" effects) was smaller in amplitude for sentences with long NPs. Interestingly, additional prosodic revisions required only for the short-subject disambiguated condition (the delayed insertion of an implicit prosodic boundary after the subject NP) were reflected by a frontal P600-like positivity, which may be interpreted in terms of a delayed CPS brain response. These data suggest that the subvocally generated prosodic boundary after the long subject NP facilitated the recovery from a garden path, thus primarily supporting one of two competing theoretical frameworks on implicit prosody. Our results underline the prosodic nature of the cognitive processes underlying phrase length effects and contribute cross-linguistic evidence regarding the on-line use of implicit prosody for parsing decisions in silent reading.

Link to article

-- (Pauker, E., Itzhak, I., Baum, S.R., & Steinhauer, K.) (2011). Effects of cooperating and conflicting prosody in spoken English garden path sentences: ERP evidence for the boundary deletion hypothesis. Journal of Cognitive Neuroscience, 23 (10), 2731-2751. (doi: 10.1162/jocn.2011.21610)

Abstract: In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

Link to article

-- (Steinhauer, K.) (2011). Combining Behavioral Measures and Brain Potentials to Study Categorical Prosodic Boundary Perception and Relative Boundary Strength. Proceedings of the 17th International Congress of Phonetic Sciences (ICPhS XVII), Hong Kong, China, pp. 1898-1901.

Abstract: Two controversial issues in speech prosody research concern (i) the traditional notion of categorical boundary perception (i.e., intermediate phrase [ip] boundaries versus intonation phrase [IPh] boundaries), and (ii) the suggestion that the relative strength of competing boundaries (rather than the mere presence of boundaries) may account for prosody effects on sentence interpretation. An alternative to qualitatively distinct boundary categories is the idea of a “gradient quantitative boundary size” (e.g., Wagner & Crivellaro [14]), which may also imply a graded spectrum of relative strength effects. Based on promising behavioral data supporting this view, we propose to study these predictions in more detail using event-related potentials (ERPs). In phonetics and phonology, these electrophysiological measures have been shown to provide an excellent tool to investigate online processes across the entire time course of a spoken utterance, with a temporal resolution in the range of milliseconds. Thus, ERPs are expected to reflect both the real-time processing and integration at the boundary positions as well as their subsequent effects on sentence interpretation.

Link to article

-- (Prévost, A.E., Goad, H., & Steinhauer, K.) (2011). Prosodic transfer: An event-related potentials approach. Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznan, Poland, 1-3 May 2010. (eds. Katarzyna Dziubalska-Kolaczyk, Magdalena Wrembel, Malgorzata Kul), 361-366. (ISBN: 978-83-928167-9-9)

Abstract: This study investigates the possible electrophysiological evidence of the influence of L1 prosodic structure on a speaker's second language, specifically in the context of the Prosodic Transfer Hypothesis of Goad & White (2004, 2009), with Turkish as the L1 and English as the L2. Turkish prosodic structure differs from English in its treatment of articles in ways that suggest that Turkish articles are affixal clitics whereas English articles are free clitics. Crucially, it follows that a correct English article-adjective-noun sequence violates Turkish prosody, since adjectives cannot intervene between articles and noun heads in Turkish, and therefore that Turkish speakers will be unable to correctly prosodify the sequence. Behavioural production evidence in which Turkish speakers delete, substitute, or stress the English article in asymmetrical ways predictable by prosodic structure robustly supports this claim. The current experiment uses ERP recording to elucidate the online processing of Turkish speakers hearing English sentences that either do or do not violate Turkish prosodic structure, with the aim of demonstrating real-time neural responses to L1-L2 prosodic mismatch.

Link to article

-- (Steinhauer, K. & Connolly, J.F.) (2011). Event-related potentials in the study of language. In: Whitaker, H. (ed.), Concise Encyclopedia of Brain and Language (pp. 191-203). Oxford: Elsevier.

Book description: This volume describes, in up-to-date terminology and authoritative interpretation, the field of neurolinguistics, the science concerned with the neural mechanisms underlying the comprehension, production and abstract knowledge of spoken, signed or written language. An edited anthology of 165 articles from the award-winning Encyclopedia of Language and Linguistics 2nd edition, Encyclopedia of Neuroscience 4th Edition and Encyclopedia of the Neurological Sciences and Neurological Disorders, it provides the most comprehensive one-volume reference solution for scientists working with language and the brain ever published.

Book information:
ISBN-10: 0080964982
ISBN-13: 9780080964980

Link to book

-- (Prévost, A.E., Goad, H., & Steinhauer, K.) (2011). Prosodic transfer: An event-related potentials approach. Achievements and Perspectives in SLA of speech II: New Sounds 2010 (eds. Magdalena Wrembel, Malgorzata Kul, Katarzyna Dziubalska-Kolaczyk), 217-226. Peter Lang (ISBN 978-3-631-60723-7 hb).

Book description: This publication constitutes a selection of papers presented at the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, held in Poznan, Poland. It consists of two volumes, presenting state-of-the-art achievements and perspectives for future research related to the acquisition of second language phonetics and phonology. The key issues include the development of explanatory frameworks of phonological SLA, the expanded scope of domains under investigation, modern methods applied in phonological research, and a new take on the causal variables related to ultimate proficiency in L2 speech. This second volume contains a selection of 26 articles that cover a wide variety of themes including L2 speech perception and production, segmental and prosodic features, as well as factors related to individual variability and foreign accent.

Book information:
ISBN: 978-3-631-60723-7 hb

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2011). The relationship between bilingual exposure and vocabulary development. International Journal of Bilingualism, 14 (5), 426-445. DOI: 10.1177/1367006911403202

Abstract: The relationship between amount of bilingual exposure and performance in receptive and expressive vocabulary in French and English was examined in 5-year-old Montreal children acquiring French and English simultaneously as well as in monolingual children. The children were equated on age, socio-economic status, nonverbal cognition, and on minority/majority language status (both languages have equal status), but differed in the amount of exposure they had received to each language spanning the continuum of bilingual exposure levels. A strong relationship was found between amount of exposure to a language and performance in that language. This relationship was different for receptive and expressive vocabulary. Children having been exposed to both languages equally scored comparably to monolingual children in receptive vocabulary, but greater exposure was required to match monolingual standards in expressive vocabulary. Contrary to many previous studies, the bilingual children were not found to exhibit a significant gap relative to monolingual children in receptive vocabulary. This was attributed to the favorable language-learning environment for French and English in Montreal and might also be related to the fact that the two languages are fairly closely related. Children with early and late onset (before 6 months and after 20 months) of bilingual exposure who were equated on overall amount of exposure to each language did not differ significantly on any vocabulary measure.

Link to article

-- (Thordardottir, E., Kehayia, E., Mazer, B., Lessard, N., Majnemer, A., Sutton, A., Trudeau, N., & Chilingarian, G.) (2011). Sensitivity and specificity of French language measures for the identification of Primary Language Impairment at age 5. Journal of Speech, Language and Hearing Research, 54, 580-597.

Abstract:
PURPOSE: Research on the diagnostic accuracy of different language measures has focused primarily on English. This study examined the sensitivity and specificity of a range of measures of language knowledge and language processing for the identification of primary language impairment (PLI) in French-speaking children. Because of the lack of well-documented language measures in French, it is difficult to accurately identify affected children, and thus research in this area is impeded.

METHOD: The performance of 14 monolingual French-speaking children with confirmed, clinically identified PLI (M = 61.4 months of age, SD = 7.2 months) on a range of language and language processing measures was compared with the performance of 78 children with confirmed typical language development (M age = 58.9 months, SD = 5.7). These included evaluations of receptive vocabulary, receptive grammar, spontaneous language, narrative production, nonword repetition, sentence imitation, following directions, rapid automatized naming, and digit span. Sensitivity, specificity, and likelihood ratios were determined at 3 cutoff points: (a) -1 SD, (b) -1.28 SD, and (c) -2 SD below mean values. Receiver operating characteristic curves were used to identify the most accurate cutoff for each measure.

RESULTS: Significant differences between the PLI and typical language development groups were found for the majority of the language measures, with moderate to large effect sizes. The measures differed in their sensitivity and specificity, as well as in which cutoff point provided the most accurate decision. Ideal cutoff points were in most cases between the mean and -1 SD. Sentence imitation and following directions appeared to be the most accurate measures.

CONCLUSIONS: This study provides evidence that standardized measures of language and language processing provide accurate identification of PLI in French. The results are strikingly similar to previous results for English, suggesting that in spite of structural differences between the languages, PLI in both languages involves a generalized language delay across linguistic domains, which can be identified in a similar way using existing standardized measures.

Link to article

2010

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Dwivedi, V., Drury, J., Molnar, M., Phillips, N., Baum, S., & Steinhauer, K.) (2010). ERPs reveal sensitivity to hypothetical contexts in spoken discourse. Neuroreport, 21, 791-795.

Abstract: We used event-related potentials to examine the interaction between two dimensions of discourse comprehension: (i) referential dependencies across sentences (e.g. between the pronoun 'it' and its antecedent 'a novel' in: 'John is reading a novel. It ends quite abruptly'), and (ii) the distinction between reference to events/situations and entities/individuals in the real/actual world versus in hypothetical possible worlds. Cross-sentential referential dependencies are disrupted when the antecedent for a pronoun is embedded in a sentence introducing hypothetical entities (e.g. 'John is considering writing a novel. It ends quite abruptly'). An earlier event-related potential reading study showed such disruptions yielded a P600-like frontal positivity. Here we replicate this effect using auditorily presented sentences and discuss the implications for our understanding of discourse-level language processing.

Link to article

-- (Dwivedi, V., Phillips, N., Einagel, S., & Baum, S.) (2010). The neural underpinnings of linguistic ambiguity, Brain Research, 1311, 93-109.

Abstract: We used event-related brain potentials (ERPs) in order to investigate how definite NP anaphors are integrated into semantically ambiguous contexts. Although sentences such as Every kid climbed a tree lack any syntactic or lexical ambiguity, these structures exhibit two possible meanings, where either many trees or only one tree was climbed. This semantic ambiguity is the result of quantifier scope ambiguity. Previous behavioural studies have shown that a plural definite NP continuation is preferred (as reflected in a continuation sentence, e.g., The trees were in the park) over singular NPs (e.g., The tree was in the park). This study aimed to identify the neurophysiological pattern associated with the integration of the continuation sentences, as well as the time course of this process. We examined ERPs elicited by the noun and verb in continuation sentences following ambiguous and unambiguous context sentences. A sustained negative shift was most evident at the Verb position in sentences exhibiting scope ambiguity. Furthermore, this waveform did not differentiate itself until 900 ms after the presentation of the Noun, suggesting that the parser waits to assign meaning in contexts exhibiting quantifier scope ambiguity, leaving such contexts as underspecified representations.

Link to article

-- (Itzhak, I., Pauker, E., Drury, J., Baum, S., & Steinhauer, K.) (2010). Interactions of prosody and transitivity bias in the processing of closure ambiguities in spoken sentences: ERP evidence. Neuroreport, 21, 8-13.
 
-- (Steinhauer, K., Pauker, E., Itzhak, I., Abada, S., & Baum, S.) (2010). Prosody-syntax interactions in aging: Event-related potentials reveal dissociations between on-line and off-line measures. Neuroscience Letters, 472, 133-138.

Abstract: This study used ERPs to determine whether older adults use prosody in resolving early and late closure ambiguities comparably to young adults. Participants made off-line acceptability judgments on well-formed sentences or those containing prosody-syntax mismatches. Behaviorally, both groups identified mismatches, but older subjects accepted mismatches significantly more often than younger participants. ERP results demonstrate CPS components and garden-path effects (P600s) in both groups; however, older adults displayed no N400 and more anterior P600 components. The data provide the first electrophysiological evidence suggesting that older adults process and integrate prosodic information in real-time, despite off-line behavioral differences. Age-related differences in neurocognitive processing mechanisms likely contribute to this dissociation.

Link to article

Dr. Meghan Clayards
CLAYARDS, M. (2010). Using probability distributions to account for recognition of canonical and reduced word forms. Proceedings of the Annual Meeting of the Linguistic Society of America, Baltimore, MD.

Abstract: The frequency of a word form influences how efficiently it is processed, but canonical forms often show an advantage over reduced forms even when the reduced form is more frequent. This paper addresses this paradox by considering a model in which representations of lexical items consist of a distribution over forms. Optimal inference given these distributions accounts for item-based differences in recognition of phonological variants and the canonical-form advantage.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Shiller, D., Gracco, V.L., & Rvachew, S.) (2010). Auditory-motor learning during speech production in 9-11 year-old children. PLoS ONE, 5(9), e12975.

Abstract:
BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

Link to article

-- (Beal D., Cheyne D., Gracco V.L., & DeNil, L.) (2010). Auditory evoked responses to vocalization during passive listening and active generation in adults who stutter. NeuroImage, 52, 1645-1653.

Abstract: We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering.

Link to article

-- (Tiede, M., Boyce, S., Espy-Wilson, C., & Gracco, V.L.) (2010). Variability of North American English /r/ production in response to palatal perturbation. In B. Maassen & P. H.H.M. van Lieshout (Eds.), Speech Motor Control: New Developments in Basic and Applied Research (pp. 53-67). Oxford University Press.

Book description:
Speaking is not only the basic mode of communication, but also the most complex motor skill humans can perform. Disorders of speech and language are the most common sequelae of brain disease or injury, a condition faced by millions of people each year. Health care practitioners need to interact with basic scientists in order to develop and evaluate new methods of clinical diagnosis and therapy to help their patients overcome or compensate for their communication difficulties. In recent years, collaboration among those in the disciplines of neurophysiology, cognitive psychology, mathematical modelling, neuroscience, and speech science has helped accelerate progress in the field.

This book presents the latest and most important theoretical developments in the area of speech motor control, offering new insights by leaders in their field into speech disorders. The scope of this book is broad, presenting state-of-the-art research in the areas of modelling, genetics, brain imaging, and behavioral experimentation, in addition to clinical applications.

The book will be valuable for researchers and clinicians in speech-language pathology, cognitive neuroscience, clinical psychology, and neurology.

Book information:
ISBN13: 978-0-19-923579-7
ISBN10: 0-19-923579-1

Link to book

-- (Tremblay, P., & Gracco, V.L.) (2010). On the selection of words and oral motor responses: evidence of a response-independent fronto-parietal network. Cortex, 46(1), 15-28.

Abstract: Several brain areas including the medial and lateral premotor areas, and the prefrontal cortex, are thought to be involved in response selection. It is unclear, however, what the specific contribution of each of these areas is. It is also unclear whether the response selection process operates independently of response modality or whether a number of specialized processes are recruited depending on the behaviour of interest. In the present study, the neural substrates for different response selection modes (volitional and stimulus-driven) were compared, using sparse-sampling functional magnetic resonance imaging, for two different response modalities: words and comparable oral motor gestures. Results demonstrate that response selection relies on a network of prefrontal, premotor and parietal areas, with the pre-supplementary motor area (pre-SMA) at the core of the process. Overall, this network is sensitive to the manner in which responses are selected, despite the absence of a medio-lateral axis, as was suggested by Goldberg (1985). In contrast, this network shows little sensitivity to the modality of the response, suggesting a domain-general selection process. Theoretical implications of these results are discussed.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Lee, I., Bosshart, K. & Ozonoff, S.) (2010). How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism. Neuropsychologia, 48(9), 2730-2739.

Abstract: Conversation is a primary area of difficulty for individuals with high-functioning autism (HFA) although they have unimpaired formal language abilities. This likely stems from the unstructured nature of face-to-face conversation as well as the need to coordinate other modes of communication (e.g. eye gaze) with speech. We conducted a quantitative analysis of both verbal exchange and gaze data obtained from conversations between children with HFA and an adult, compared with those of typically developing children matched on language level. We examined a new question: how does speaking about a topic of interest affect reciprocity of verbal exchange and eye gaze? Conversations on generic topics were compared with those on individuals' circumscribed interests, particularly intense interests characteristic of HFA. Two opposing hypotheses were evaluated. Speaking about a topic of interest may improve reciprocity in conversation by increasing participants' motivation and engagement. Alternatively, it could engender more one-sided interaction, given the engrossing nature of circumscribed interests. In their verbal exchanges HFA participants demonstrated decreased reciprocity during the interest topic, evidenced by fewer contingent utterances and more monologue-style speech. Moreover, a measure of stereotyped behaviour and restricted interest symptoms was inversely related to reciprocal verbal exchange. However, both the HFA and comparison groups looked significantly more to their partner's face during the interest than generic topic. Our interpretation of results across modalities is that circumscribed interests led HFA participants to be less adaptive to their partner verbally, but speaking about a highly practiced topic allowed for increased gaze to the partner. The function of this increased gaze to partner may differ for the HFA and comparison groups.

Link to article

Dr. Marc Pell
Pell, M.D. (Paulmann, S. & Pell, M.D.) (2010). Dynamic emotion processing in Parkinson’s disease as a function of channel availability. Journal of Clinical and Experimental Neuropsychology, 32(8), 822-835.

Abstract: Parkinson's disease (PD) is linked to impairments for recognizing emotional expressions, although the extent and nature of these communication deficits are uncertain. Here, we compared how adults with and without PD recognize dynamic expressions of emotion in three channels, involving lexical-semantic, prosody, and/or facial cues (each channel was investigated individually and in combination). Results indicated that while emotion recognition increased with channel availability in the PD group, patients performed significantly worse than healthy participants in all conditions. Difficulties processing dynamic emotional stimuli in PD could be linked to striatal dysfunction, which reduces efficient binding of sequential information in the disease.

Link to article

-- (Paulmann, S. & Pell, M.D.) (2010). Contextual influences of emotional speech prosody on face processing: how much is enough? Cognitive, Affective and Behavioral Neuroscience, 10, 230-242.

Abstract: The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always lead to a processing advantage when prosodic information is very short in duration.

Link to article

-- (Dimoska, A., McDonald, S., Pell, M.D., Tate, R., & James, C.) (2010). Recognizing vocal expressions of emotion in patients with social skills deficits following traumatic brain injury. Journal of the International Neuropsychological Society, 16, 369-382.

Abstract: Perception of emotion in voice is impaired following traumatic brain injury (TBI). This study examined whether an inability to concurrently process semantic information (the "what") and emotional prosody (the "how") of spoken speech contributes to impaired recognition of emotional prosody and whether impairment is ameliorated when little or no semantic information is provided. Eighteen individuals with moderate-to-severe TBI showing social skills deficits during inpatient rehabilitation were compared with 18 demographically matched controls. Participants completed two discrimination tasks using spoken sentences that varied in the amount of semantic information: that is, (1) well-formed English, (2) a nonsense language, and (3) low-pass filtered speech producing "muffled" voices. Reducing semantic processing demands did not improve perception of emotional prosody. The TBI group were significantly less accurate than controls. Impairment was greater within the TBI group when accessing semantic memory to label the emotion of sentences, compared with simply making "same/different" judgments. Findings suggest an impairment of processing emotional prosody itself rather than semantic processing demands which leads to an over-reliance on the "what" rather than the "how" in conversational remarks. Emotional recognition accuracy was significantly related to the ability to inhibit prepotent responses, consistent with neuroanatomical research suggesting similar ventrofrontal systems subserve both functions.

Link to article

-- (Jaywant, A. & Pell, M.D.) (2010). Listener impressions of speakers with Parkinson’s disease. Journal of the International Neuropsychological Society, 16, 49-57.

Abstract: Parkinson’s disease (PD) has several negative effects on speech production and communication. However, few studies have looked at how speech patterns in PD contribute to linguistic and social impressions formed about PD patients from the perspective of listeners. In this study, discourse recordings elicited from nondemented PD speakers (n = 18) and healthy controls (n = 17) were presented to 30 listeners unaware of the speakers’ disease status. In separate conditions, listeners rated the discourse samples based on their impressions of the speaker or of the linguistic content. Acoustic measures of the speech samples were analyzed for comparison with listeners’ perceptual ratings. Results showed that although listeners rated the content of Parkinsonian discourse as linguistically appropriate (e.g., coherent, well-organized, easy to follow), the PD speakers were perceived as significantly less interested, less involved, less happy, and less friendly than healthy speakers. Negative social impressions demonstrated a relationship to changes in vocal intensity (loudness) and temporal characteristics (dysfluencies) of Parkinsonian speech. Our findings emphasize important psychosocial ramifications of PD that are likely to limit opportunities for communication and social interaction for those affected, because of the negative impressions drawn by listeners based on their speaking voice. (JINS, 2010, 16, 49–57.)

Link to article

-- (Dara, C. & Pell, M.D.) (2010). Hemispheric contributions for processing pitch and speech rate cues to emotion: fMRI data. Speech Prosody 5th International Conference Proceedings, Chicago, USA.

Abstract: To determine the neural mechanisms involved in vocal emotion processing, the current study employed functional magnetic resonance imaging (fMRI) to investigate the neural structures engaged in processing acoustic cues to infer emotional meaning. Two critical acoustic cues – pitch and speech rate – were systematically manipulated and presented in a discrimination task. Results confirmed that a bilateral network constituting frontal and temporal regions is engaged when discriminating vocal emotion expressions; however, we observed greater sensitivity to pitch cues in the right mid superior temporal gyrus/sulcus (STG/STS), whereas activation in both left and right mid STG/STS was observed for speech rate processing.

Link to article

-- (Pell, M.D., Jaywant, A., Monetta, L., & Kotz, S.A.) (2010). The contributions of prosody and semantic context in emotional speech processing. Speech Prosody 5th International Conference Proceedings, Chicago, USA.

Abstract: The present study examined the relative contributions of prosody and semantic context in the implicit processing of emotions from spoken language. In three separate tasks, we compared the degree to which happy and sad emotional prosody alone, emotional semantic context alone, and combined emotional prosody and semantic information would prime subsequent decisions about an emotionally congruent or incongruent facial expression. In all three tasks, we observed a congruency effect, whereby prosodic or semantic features of the prime facilitated decisions about emotionally-congruent faces. However, the extent of this priming was similar in the three tasks. Our results imply that prosody and semantic cues hold similar potential to activate emotion-related knowledge in memory when they are implicitly processed in speech, due to underlying connections in associative memory shared by prosody, semantics, and facial displays of emotion.

Link to article

Dr. Linda Polka
POLKA, L. (Mattock, K., Polka, L., & Rvachew, S.) (2010) The first steps in word learning are easier when the shoes fit: Comparing monolingual and bilingual infants. Developmental Science 13(1), 229-243.

Abstract: English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a / b / vs. / g / contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled ‘bowce’ or ‘gowce’ and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability was the key factor to success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Shiller, D., & Gracco, V.) (2010). Auditory-motor learning during speech production in 9-11 year-old children. PLoS-One, 5(9), e12975.

Abstract:
BACKGROUND: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.

METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.

CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.

Link to article

-- (Shiller, D. M., Rvachew, S., & Brosseau-Lapré, F.) (2010). Importance of the auditory perceptual target to the achievement of speech production accuracy. Canadian Journal of Speech-Language Pathology and Audiology, 34, 181-192.

Abstract: The purpose of this paper is to discuss the clinical implications of a model of the segmental component of speech motor control called the DIVA model (Directions into Velocities of Articulators). The DIVA model is implemented on the assumption that the infant has perceptual knowledge of the auditory targets in place before learning accurate production of speech sounds and suggests that difficulties with speech perception would lead to imprecise speech and inaccurate articulation. We demonstrate through a literature review that children with speech delay, on average, have significant difficulty with perceptual knowledge of speech sounds that they misarticulate. We hypothesize, on the basis of the DIVA model, that a child with speech delay who has good perceptual knowledge of a phonological target will learn to make the appropriate articulatory adjustments to achieve phonological goals. We support the hypothesis with two case studies. The first case study involved short-term learning in a laboratory task by a child with speech delay. Although the child misarticulated sibilants, he had good perceptual and articulatory knowledge of vowels. He demonstrated that he was fully capable of spontaneously adapting his articulatory patterns to compensate for altered feedback of his own speech output. The second case study involved longer-term learning during speech therapy. This francophone child received 6 weeks of intervention that was largely directed at improving her perceptual knowledge of /?/, leading to significant improvements in her ability to produce this phoneme correctly, both during minimal pair activities in therapy and during post-treatment testing.

Link to article

-- (Mortimer, J., & Rvachew, S.) (2010). A longitudinal investigation of morpho-syntax in children with Speech Sound Disorders. Journal of Communication Disorders, 43, 61-76.

Abstract:
PURPOSE: The intent of this study was to examine the longitudinal morpho-syntactic progression of children with Speech Sound Disorders (SSD) grouped according to Mean Length of Utterance (MLU) scores.

METHODS: Thirty-seven children separated into four clusters were assessed in their pre-kindergarten and Grade 1 years. Cluster 1 were children with typical development; the other clusters were children with SSD. Cluster 2 had good pre-kindergarten MLU; Clusters 3 and 4 had low MLU scores in pre-kindergarten, and (respectively) good and poor MLU outcomes.

RESULTS: Children with SSD in pre-kindergarten had lower Developmental Sentence Scores (DSS) and made fewer attempts at finite embedded clauses than children with typical development. All children with SSD, especially Cluster 4, had difficulty with finite verb morphology.

CONCLUSIONS: Children with SSD and typical MLU may be weak in some areas of syntax. Children with SSD who have low MLU scores and poor finite verb morphology skills in pre-kindergarten may be at risk for poor expressive language outcomes. However, these results need to be replicated with larger groups.

LEARNING OUTCOMES: The reader should (1) have a general understanding of findings from studies on morpho-syntax and SSD conducted over the last half century (2) be aware of some potential areas of morpho-syntactic weakness in young children with SSD who nonetheless have typical MLU, and (3) be aware of some potential longitudinal predictors of continued language difficulty in young children with SSD and poor MLU.

Link to article

-- (Rvachew, S. & Bernhardt, M.) (2010). Clinical implications of the dynamic systems approach to phonological development. American Journal of Speech-Language Pathology, 19, 34-50.

Abstract:
Purpose: To examine treatment outcomes in relation to the complexity of treatment goals for children with speech sound disorders.

Method: The clinical implications of dynamic systems theory in contrast with learnability theory are discussed, especially in the context of target selection decisions for children with speech sound disorders. Detailed phonological analyses of pre- and posttreatment speech samples are provided for 6 children who received treatment in a previously published randomized controlled trial of contrasting approaches to target selection (Rvachew & Nowak, 2001). Three children received treatment for simple target phonemes that did not introduce any new feature contrasts into the children's phonological systems. Three children received treatment for complex targets that represented feature contrasts that were absent from the children's phonological systems.

Results: Children who received treatment for simple targets made more progress toward the acquisition of the target sounds and demonstrated emergence of complex untreated segments and feature contrasts. Children who received treatment for complex targets made little measurable gain in phonological development.

Conclusions: Treatment outcomes will be enhanced if the clinician selects treatment targets at the segmental and prosodic levels of the phonological system in such a way as to stabilize the child's knowledge of subcomponents that form the foundation for the emergence of more complex phoneme contrasts.

Link to article

-- (Mattock, K., Polka, L., & Rvachew, S.) (2010). The first steps in word learning are easier when the shoes fit: Comparing monolingual and bilingual infants. Developmental Science, 13, 229-243.

Abstract: English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a / b / vs. / g / contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled ‘bowce’ or ‘gowce’ and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability was the key factor to success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

Link to article

-- (Rvachew, S. & Brosseau-Lapre, F.) (2010). Speech perception intervention. In L.Williams, S. McLeod, & R. McCauley (Eds.), Treatment of Speech Sound Disorders in Children (pp. 295-314). Baltimore, Maryland: Paul Brookes Publishing Co.
 
Dr. Karsten Steinhauer
STEINHAUER, K. (Dwivedi, V., Drury, J., Molnar, M., Phillips, N., Baum, S., & Steinhauer, K.) (2010). ERPs reveal sensitivity to hypothetical contexts in spoken discourse. Neuroreport, 21, 791-795.

Abstract: We used event-related potentials to examine the interaction between two dimensions of discourse comprehension: (i) referential dependencies across sentences (e.g. between the pronoun 'it' and its antecedent 'a novel' in: 'John is reading a novel. It ends quite abruptly'), and (ii) the distinction between reference to events/situations and entities/individuals in the real/actual world versus in hypothetical possible worlds. Cross-sentential referential dependencies are disrupted when the antecedent for a pronoun is embedded in a sentence introducing hypothetical entities (e.g. 'John is considering writing a novel. It ends quite abruptly'). An earlier event-related potential reading study showed such disruptions yielded a P600-like frontal positivity. Here we replicate this effect using auditorily presented sentences and discuss the implications for our understanding of discourse-level language processing.

Link to article

-- (Steinhauer, K., Drury, J.E., Portner, P., Walenski, M., & Ullman, M.T.) (2010). Syntax, concepts, and logic in the temporal dynamics of language comprehension: Evidence from event-related potentials. Neuropsychologia, 48(6), 1525-1542.

Abstract: Logic has been intertwined with the study of language and meaning since antiquity, and such connections persist in present day research in linguistic theory (formal semantics) and cognitive psychology (e.g., studies of human reasoning). However, few studies in cognitive neuroscience have addressed logical dimensions of sentence-level language processing, and none have directly compared these aspects of processing with syntax and lexical/conceptual-semantics. We used ERPs to examine a violation paradigm involving "Negative Polarity Items" or NPIs (e.g., ever/any), which are sensitive to logical/truth-conditional properties of the environments in which they occur (e.g., presence/absence of negation in: John hasn't ever been to Paris, versus: John has *ever been to Paris). Previous studies examining similar types of contrasts found a mix of effects on familiar ERP components (e.g., LAN, N400, P600). We argue that their experimental designs and/or analyses were incapable of separating which effects are connected to NPI-licensing violations proper. Our design enabled statistical analyses teasing apart genuine violation effects from independent effects tied solely to lexical/contextual factors. Here unlicensed NPIs elicited a late P600 followed in onset by a late left anterior negativity (or "L-LAN"), an ERP profile which has also appeared elsewhere in studies targeting logical semantics. Crucially, qualitatively distinct ERP-profiles emerged for syntactic and conceptual semantic violations which we also tested here. We discuss how these findings may be linked to previous findings in the ERP literature. Apart from methodological recommendations, we suggest that the study of logical semantics may aid advancing our understanding of the underlying neurocognitive etiology of ERP components.

Link to article

-- (Steinhauer, K., Pauker, E., Itzhak, I., Abada, S., & Baum, S.) (2010). Prosody-syntax interactions in aging: Event-related potentials reveal dissociations between on-line and off-line measures. Neuroscience Letters, 472, 133-138.

Abstract: This study used ERPs to determine whether older adults use prosody in resolving early and late closure ambiguities comparably to young adults. Participants made off-line acceptability judgments on well-formed sentences or those containing prosody-syntax mismatches. Behaviorally, both groups identified mismatches, but older subjects accepted mismatches significantly more often than younger participants. ERP results demonstrate CPS components and garden-path effects (P600s) in both groups; however, older adults displayed no N400 and more anterior P600 components. The data provide the first electrophysiological evidence suggesting that older adults process and integrate prosodic information in real-time, despite off-line behavioral differences. Age-related differences in neurocognitive processing mechanisms likely contribute to this dissociation.

Link to article

-- (Morgan-Short, K., Sanz, C., Steinhauer, K., & Ullman, M.T.) (2010). Second language acquisition of gender agreement in explicit and implicit training conditions: An event-related potential study. Language Learning, 60, 154-193.

Abstract: This study employed an artificial language learning paradigm together with a combined behavioral/event-related potential (ERP) approach to examine the neurocognition of the processing of gender agreement, an aspect of inflectional morphology that is problematic in adult second language (L2) learning. Subjects learned to speak and comprehend an artificial language under either explicit (classroomlike) or implicit (immersionlike) training conditions. In each group, both noun-article and noun-adjective gender agreement processing were examined behaviorally and with ERPs at both low and higher levels of proficiency. Results showed that the two groups learned the language to similar levels of proficiency but showed somewhat different ERP patterns. At low proficiency, both types of agreement violations (adjective, article) yielded N400s, but only for the group with implicit training. Additionally, noun-adjective agreement elicited a late N400 in the explicit group at low proficiency. At higher levels of proficiency, noun-adjective agreement violations elicited N400s for both the explicit and implicit groups, whereas noun-article agreement violations elicited P600s for both groups. The results suggest that interactions among linguistic structure, proficiency level, and type of training need to be considered when examining the development of aspects of inflectional morphology in L2 acquisition.

Link to article

-- (Itzhak, I., Pauker, E., Drury, J.E., Baum, S.R., & Steinhauer, K.) (2010). Event-related potentials show online influence of lexical biases on prosodic processing. NeuroReport, 21, 8-13.

Abstract: This event-related potential study examined how the human brain integrates (i) structural preferences, (ii) lexical biases, and (iii) prosodic information when listeners encounter ambiguous 'garden path' sentences. Data showed that in the absence of overt prosodic boundaries, verb-intrinsic transitivity biases influence parsing preferences (late closure) online, resulting in a larger P600 garden path effect for transitive than intransitive verbs. Surprisingly, this lexical effect was mediated by prosodic processing: a closure positive shift brain response was elicited in the total absence of acoustic boundary markers for transitively biased sentences only. Our results suggest early interactive integration of hierarchically organized processes rather than purely independent effects of lexical and prosodic information. As a primacy of prosody would predict, overt speech boundaries overrode both structural preferences and transitivity biases.

Link to article

-- (Royle, P., Drury, J.E., Bourguignon, N. & Steinhauer, K.) (2010). Morphology and word recognition: An ERP approach. In H. Melinda (Ed.), Proceedings of the 2010 annual conference of the Canadian Linguistic Association, 1-13.

-- (Abada, S., Steinhauer, K., Drury, J.E., & Baum, S.R.) (2010). Age differences in electrophysiological correlates of cross-modal interpretation. Speech Prosody 2010 Proceedings, 100346, pp. 1-4.

Abstract: Research shows that older adults may be more sensitive than young adults to prosody, although performance varies depending on task requirements. Here we used electroencephalography to examine responses to simple phrases produced with an Early or Late boundary, presented with matching or mismatching visual displays. While some older adults successfully detected prosodic mismatches, many failed to do so. Nonetheless, mismatches elicited a P600-like positivity in all participants. Those individuals who accurately judged prosody also displayed a second negative-going prosodic mismatch response. Findings show that older adults vary in their reliance on prosody, as reflected both in behavioral and ERP responses.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2010). Towards evidence based practice in language intervention for bilingual children. Journal of Communication Disorders, 43, 523-537.

Abstract: Evidence-based practice requires that clinical decisions be based on evidence from rigorously controlled research studies. At this time, very few studies have directly examined the efficacy of clinical intervention methods for bilingual children. Clinical decisions for this population cannot, therefore, be based on the strongest forms of research evidence, but must be inferred from other sources. This article reviews the available intervention research on bilingual children, the current clinical recommendations for this population, and the strength of the empirical and theoretical support on which these recommendations are based. Finally, future directions are suggested for documenting current methods of intervention and developing optimal methods for different groups of bilingual children. Although the current research base is limited, the few studies available to date uniformly suggest that interventions that include a focus on both languages are superior to those that focus on only one language. The available research offers little guidance, however, as to the particular treatment methods that may be most appropriate. Further research is required to examine efficacy with larger numbers of children and children of various bilingual backgrounds. It is suggested that efforts to develop and test intervention methods for bilingual children must carefully consider the linguistic heterogeneity of bilingual children and the cultural variation in communication styles, child rearing practices, and child rearing beliefs. This will lead to the development of treatment methods that are more suitable for other languages and cultures. LEARNING OUTCOMES: Readers will become familiar with current recommendations for the treatment of bilingual children with language impairment, including which language or languages to use, the requirement for cultural sensitivity, and specific procedures that may be beneficial for bilingual populations. The heterogeneity of the bilingual population of children is highlighted. Readers will gain an understanding of the strength of research evidence backing up recommended practices, as well as of gaps in our current knowledge base and directions for further development and research.

Link to article

-- (MacLeod, A., Sutton, A., Trudeau, N., & Thordardottir, E.) (2010). Phonological development in Québécois French: A cross-sectional study of preschool-age children. International Journal of Speech-Language Pathology, Early Online, 1-17.

Abstract:

This study provides a systematic description of French consonant acquisition in a large cohort of pre-school aged children: 156 children aged 20–53 months participated in a picture-naming task. Five analyses were conducted to study consonant acquisition: (1) consonant inventory, (2) consonant accuracy, (3) consonant acquisition, (4) a comparison of consonant inventory to consonant acquisition, and (5) a comparison to English cross-sectional data. Results revealed that more consonants emerge at an earlier age in word initial position, followed by medial position, and then word final position. Consonant accuracy underwent the greatest changes before the age of 36 months, and achieved a relative plateau towards 42 months. The acquisition of consonants revealed that four early consonants were acquired before the age of 36 months (i.e., /t, m, n, z/); 12 intermediate consonants were acquired between 36 and 53 months (i.e., /p, b, d, k, , ?, f, v, , l, w, ?/); and four consonants were acquired after 53 months (/s, ?, ?, j/). In comparison to English data, language specific patterns emerged that influence the order and pace of phonological acquisition. These findings highlight the important role of language specific developmental data in understanding the course of consonant acquisition.

Link to article

-- (Namazi, M. & Thordardottir, E.) (2010). A working memory, not a bilingual advantage in controlled attention. International Journal of Bilingual Education and Bilingualism, 13, 597-616.

Abstract: We explored the relationship between working memory (WM) and visually controlled attention (CA) in young bilingual and monolingual children. Previous research has shown that balanced bilingual children outperform monolinguals in CA. However, it is unclear whether this advantage is truly associated with bilingualism or whether potential WM and/or language differences led to the observed effects. Therefore, we examined whether bilingual and monolingual children differ on a visual measure of CA after potential differences in verbal and visual WM had been accounted for. We also looked at the relationship between visual CA and visual WM. Fifteen French monolingual children, 15 English monolingual children, and 15 early simultaneous bilingual children completed verbal short-term memory, verbal WM, visual WM, and visual CA tasks. Detailed information regarding language exposure was collected and abilities in each language were evaluated. A bilingual advantage was not found; that is, monolingual and bilingual children were equally successful in ignoring the irrelevant perceptual distraction on the Simon Task. However, children with better visual WM scores were also faster and more accurate on the Simon Task. Furthermore, visual WM correlated significantly with the visual CA task.

Link to article

-- (Thordardottir, E., Kehayia, E., Lessard, N., Sutton, A. & Trudeau, N.) (2010). Typical performance on tests of language knowledge and language processing of French-speaking 5-year-olds. Canadian Journal of Speech Language Pathology and Audiology, 34, 5-16.

Abstract: The evaluation of the language skills of francophone children for clinical and research purposes is complicated by a lack of appropriate norm-referenced assessment tools. The purpose of this study was the collection of normative data for measures assessing major areas of language for 5-year-old monolingual speakers of Quebec French. Children in three age-groups (4;6, 5;0 and 5;6 years, n=78) were administered tests of language knowledge and linguistic processing, addressing vocabulary, morphosyntax, syntax, narrative structure, nonword repetition, sentence imitation, rapid automatized naming, following directions, and short term memory. The assessment measures were drawn from existing tools and from tools developed for this study, and included formal tests as well as spontaneous language measures. Normative data are presented for the three age groups. Results showed a systematic increase with age for most of the measures. Correlational analysis revealed relationships of varying strength between the measures, indicating some overlap between the measures, but also suggesting that the measures differ in the linguistic skills they tap into. The normative data presented will facilitate the language assessment of French-speaking 5-year-olds, permitting their performance to be compared to the normal range of typically developing monolingual French-speaking children and allowing the documentation of children’s profiles of relative strengths and weaknesses within language.

Link to article

2009

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Associate Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Bélanger, N., Baum, S. & Titone, D.) (2009). Use of prosodic cues in the production of idiomatic and literal sentences by individuals with right- and left-hemisphere damage. Brain & Language, 110, 38-42.

Abstract: The neural bases of prosody during the production of literal and idiomatic interpretations of literally plausible idioms were investigated. Left- and right-hemisphere-damaged participants and normal controls produced literal and idiomatic versions of idioms (e.g., 'He hit the books'). All groups modulated duration to distinguish the interpretations. LHD patients, however, showed typical speech timing difficulties. RHD patients did not differ from the normal controls. The results partially support a differential lateralization of prosodic cues in the two cerebral hemispheres [Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963-970]. Furthermore, extended final word lengthening appears to mark idiomaticity.

Link to article

-- (Ménard, L., Dupont, S., Baum, S., Aubin, J., & Schwartz, J-L.) (2009). Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 126, 1406-1414.

Abstract: The goal of this study is to investigate the production and perception of French vowels by blind and sighted speakers. Twelve blind adults and twelve sighted adults served as subjects. The auditory-perceptual abilities of each subject were evaluated by discrimination tests (AXB). At the production level, ten repetitions of the ten French oral vowels were recorded. Formant values and fundamental frequency values were extracted from the acoustic signal. Measures of contrasts between vowel categories were computed and compared for each feature (height, place of articulation, roundedness) and group (blind, sighted). The results reveal a significant effect of group (blind vs. sighted) on production, with sighted speakers producing vowels that are spaced further apart in the vowel space than those of blind speakers. A group effect emerged for a subset of the perceptual contrasts examined, with blind speakers having higher peak discrimination scores than sighted speakers. Results suggest an important role of visual input in determining speech goals.

Link to article

-- (Shiller, D., Sato, M., Gracco, V., & Baum, S.) (2009). Perceptual recalibration of speech sounds following speech motor learning. Journal of the Acoustical Society of America, 125, 1103-1113.

Abstract: The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/-stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L. (Almor, A., Aronoff, J.M., MacDonald, M.C., Gonnerman, L.M., Kempler, D., Hintiryan, H., Hayes, U.L., Arunachalam, S., & Andersen, E.S.) (2009). A common mechanism in verb and noun naming deficits in Alzheimer's patients. Brain and Language, 111, 8-19.

Abstract: We tested the ability of Alzheimer's patients and elderly controls to name living and non-living nouns, and manner and instrument verbs. Patients' error patterns and relative performance with different categories showed evidence of graceful degradation for both nouns and verbs, with particular domain-specific impairments for living nouns and instrument verbs. Our results support feature-based, semantic representations for nouns and verbs and support the role of inter-correlated features in noun impairment, and the role of noun knowledge in instrument verb impairment.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Tremblay, P., & Gracco, V.L.) (2009). The essential role of the pre-SMA in the production of words and non-speech oral motor gestures, as revealed by repetitive transcranial magnetic stimulation (rTMS). Brain Research, 1268, 112-124.

Abstract: An emerging theoretical perspective, largely based on neuroimaging studies, suggests that the pre-SMA is involved in planning cognitive aspects of motor behavior and language, such as linguistic and non-linguistic response selection. Neuroimaging studies, however, cannot indicate whether a brain region is equally important to all tasks in which it is activated. In the present study, we tested the hypothesis that the pre-SMA is an important component of response selection, using an interference technique. High frequency repetitive TMS (10 Hz) was used to interfere with the functioning of the pre-SMA during tasks requiring selection of words and oral gestures under different selection modes (forced, volitional) and attention levels (high attention, low attention). Results show that TMS applied to the pre-SMA interferes selectively with the volitional selection condition, resulting in longer RTs. The low- and high-attention forced selection conditions were unaffected by TMS, demonstrating that the pre-SMA is sensitive to selection mode but not attentional demands. TMS similarly affected the volitional selection of words and oral gestures, reflecting the response-independent nature of the pre-SMA contribution to response selection. The implications of these results are discussed.

Link to article

-- (Shiller, D., Sato, M., Gracco, V., & Baum, S.) (2009). Perceptual recalibration of speech sounds following speech motor learning. Journal of the Acoustical Society of America, 125, 1103-1113.

Abstract: The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/-stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.

Link to article

-- (Sato, M., Tremblay, P., & Gracco, V.L.) (2009). A mediating role of the premotor cortex in phoneme segmentation. Brain and Language, 111, 1-7.

Abstract: Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech perception under normal listening conditions remains debated. To directly test this hypothesis, we applied rTMS to the left ventral premotor cortex and participants performed auditory speech tasks involving the same set of syllables but differing in the use of phonemic segmentation processes. Compared to sham stimulation, rTMS applied over the ventral premotor cortex resulted in slower phoneme discrimination requiring phonemic segmentation. No effect was observed in phoneme identification and syllable discrimination tasks that could be performed without need for phonemic segmentation. The findings demonstrate a mediating role of the ventral premotor cortex in speech segmentation under normal listening conditions and are interpreted in relation to theories assuming a link between perception and action in the human speech processing system.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Vivanti, G. & Ozonoff, S.) (2009). Adaptation of object descriptions to a partner under increasing communicative demands: A comparison of children with and without autism. Autism Research, 2, 1-14.

Abstract: This study compared the object descriptions of school-age children with high-functioning autism (HFA) with those of a matched group of typically developing children. Descriptions were elicited in a referential communication task where shared information was manipulated, and in a guessing game where clues had to be provided about the identity of an object that was hidden from the addressee. Across these tasks, increasingly complex levels of audience design were assessed: (1) the ability to give adequate descriptions from one's own perspective, (2) the ability to adjust descriptions to an addressee's perspective when this differs from one's own, and (3) the ability to provide indirect yet identifying descriptions in a situation where explicit labeling is inappropriate. Results showed that there were group differences in all three cases, with the HFA group giving less efficient descriptions with respect to the relevant context than the comparison group. More revealing was the identification of distinct adaptation profiles among the HFA participants: those who had difficulty with all three levels, those who displayed Level 1 audience design but poor Level 2 and Level 3 design, and those who demonstrated all three levels of audience design, like the majority of the comparison group. Higher structural language ability, rather than symptom severity or social skills, differentiated those HFA participants with typical adaptation profiles from those who displayed deficient audience design, consistent with previous reports of language use in autism.

Link to article

Dr. Marc Pell
PELL, M. (Paulmann, S. & Pell, M.D.) (2009). Facial expression decoding as a function of emotional meaning status: ERP evidence. NeuroReport, 20, 1603-1608.

Abstract: To further specify the time course of (emotional) face processing, this study compared event-related potentials elicited by faces conveying prototypical basic emotions, nonprototypical affective expressions (grimaces), and neutral faces. Results showed that prototypical and nonprototypical facial expressions could each be differentiated from neutral expressions in three different event-related potential component amplitudes (P200, early negativity, and N400), which are believed to index distinct processing stages in facial expression decoding. On the basis of the distribution of effects, our results suggest that early processing is mediated by shared neural generators for prototypical and nonprototypical facial expressions; however, later processing stages seem to engage distinct subsystems for the three facial expression types investigated according to their emotionality and meaning status.

Link to article

-- (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2009). Comparative processing of emotional prosody and semantics following basal ganglia infarcts: ERP evidence of selective impairments for disgust and fear. Brain Research, 1295, 159-169.

Abstract: There is evidence from neuroimaging and clinical studies that functionally link the basal ganglia to emotional speech processes. However, in most previous studies, explicit tasks were administered. Thus, the underlying mechanisms substantiating emotional speech are not separated from possibly process-related task effects. Therefore, the current study tested emotional speech processing in an event-related potential (ERP) experiment using an implicit emotional processing task (probe verification). The interactive time course of emotional prosody in the context of emotional semantics was investigated using a cross-splicing method. As previously demonstrated, combined prosodic and semantic expectancy violations elicit N400-like negativities irrespective of emotional categories in healthy listeners. In contrast, basal ganglia patients show this negativity only for the emotions of happiness and anger, but not for fear or disgust. The current data serve as first evidence that lesions within the left basal ganglia affect the comparative online processing of fear and disgust prosody and semantics. Furthermore, the data imply that previously reported emotional speech recognition deficits in basal ganglia patients may be due to misaligned processing of emotional prosody and semantics.

Link to article

-- (Pell, M.D., Paulmann, S., Dara, C., Alasseri, A., & Kotz, S.A.) (2009). Factors in the recognition of vocally expressed emotions: a comparison of four languages. Journal of Phonetics, 37, 417-435.

Abstract: To understand how language influences the vocal communication of emotion, we investigated how discrete emotions are recognized and acoustically differentiated in four language contexts—English, German, Hindi, and Arabic. Vocal expressions of six emotions (anger, disgust, fear, sadness, happiness, pleasant surprise) and neutral expressions were elicited from four native speakers of each language. Each speaker produced pseudo-utterances (“nonsense speech”) which resembled their native language to express each emotion type, and the recordings were judged for their perceived emotional meaning by a group of native listeners in each language condition. Emotion recognition and acoustic patterns were analyzed within and across languages. Although overall recognition rates varied by language, all emotions could be recognized strictly from vocal cues in each language at levels exceeding chance. Anger, sadness, and fear tended to be recognized most accurately irrespective of language. Acoustic and discriminant function analyses highlighted the importance of speaker fundamental frequency (i.e., relative pitch level and variability) for signalling vocal emotions in all languages. Our data emphasize that while emotional communication is governed by display rules and other social variables, vocal expressions of ‘basic’ emotion in speech exhibit modal tendencies in their acoustic and perceptual attributes which are largely unaffected by language or linguistic similarity.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2009). Acoustic markers of sarcasm in Cantonese and English. Journal of the Acoustical Society of America, 126, 1394-1405.

Abstract: The goal of this study was to identify acoustic parameters associated with the expression of sarcasm by Cantonese speakers, and to compare the observed features to similar data on English [Cheang, H. S. and Pell, M. D. (2008). Speech Commun. 50, 366-381]. Six native Cantonese speakers produced utterances to express sarcasm, humorous irony, sincerity, and neutrality. Each utterance was analyzed to determine the mean fundamental frequency (F0), F0-range, mean amplitude, amplitude-range, speech rate, and harmonics-to-noise ratio (HNR) (to probe voice quality changes). Results showed that sarcastic utterances in Cantonese were produced with an elevated mean F0, and reductions in amplitude- and F0-range, which differentiated them most from sincere utterances. Sarcasm was also spoken with a slower speech rate and a higher HNR (i.e., less vocal noise) than the other attitudes in certain linguistic contexts. Direct Cantonese-English comparisons revealed one major distinction in the acoustic pattern for communicating sarcasm across the two languages: Cantonese speakers raised mean F0 to mark sarcasm, whereas English speakers lowered mean F0 in this context. These findings emphasize that prosody is instrumental for marking non-literal intentions in speech such as sarcasm in Cantonese as well as in other languages. However, the specific acoustic conventions for communicating sarcasm seem to vary among languages.

Link to article

-- (Monetta, L., Grindrod, C. & Pell, M.D.) (2009). Irony comprehension and theory of mind deficits in patients with Parkinson’s disease. Cortex, 45(8), 972-981. (Special Issue on “Parkinson’s disease, Language, and Cognition”)

Abstract: Many individuals with Parkinson's disease (PD) are known to have difficulties in understanding pragmatic aspects of language. In the present study, a group of eleven non-demented PD patients and eleven healthy control (HC) participants were tested on their ability to interpret communicative intentions underlying verbal irony and lies, as well as on their ability to infer first- and second-order mental states (i.e., theory of mind). Following Winner et al. (1998), participants answered different types of questions about the events which unfolded in stories which ended in either an ironic statement or a lie. Results showed that PD patients were significantly less accurate than HC participants in assigning second-order beliefs during the story comprehension task, suggesting that the ability to make a second-order mental state attribution declines in PD. The PD patients were also less able to distinguish whether the final statement of a story should be interpreted as a joke or a lie, suggesting a failure in pragmatic interpretation abilities. The implications of frontal lobe dysfunction in PD as a source of difficulties with working memory, mental state attributions, and pragmatic language deficits are discussed in the context of these findings.

Link to article

-- (Pell, M.D., Monetta, L., Paulmann, S., & Kotz, S.A.) (2009). Recognizing emotions in a foreign language. Journal of Nonverbal Behavior, 33(2), 107-120.

Abstract: Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face and it is assumed that these emotions can be recognized from a speaker’s voice, regardless of an individual’s culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances (“nonsense speech”) produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (“in-group advantage”). Our findings argue that the ability to understand vocally-expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

Link to article

Dr. Linda Polka
POLKA, L. (Shahnaz, N., Bork, L., Polka, L., Longridge, N., Westerberg, B., & Bell, D.) (2009). Energy reflectance (ER) and tympanometry in normal and otosclerotic ears. Ear and Hearing, 30, 219-233.

Abstract: Objective: The major goal of this study was to examine differences in the middle ear mechano-acoustical properties of normal ears and ears with surgically confirmed otosclerosis using conventional and multifrequency tympanometry (MFT) as well as energy reflectance (ER). Second, we sought to compare ER, standard tympanometry, and MFT in their ability to distinguish healthy and otosclerotic ears examining both overall test performance (sensitivity and specificity) and receiver-operating characteristic analyses.

Design: Sixty-two normal-hearing adults and 28 patients diagnosed with otosclerosis served as subjects. Tympanometric data were gathered on a clinical immittance machine, the Virtual 310 equipped with a high-frequency option. Two of the parameters, static admittance and tympanometric width, were measured automatically at a standard 226 Hz frequency. The remaining two parameters, resonant frequency and the frequency corresponding to an admittance phase angle of 45 degrees (F45°), were derived from MFT, multicomponent tympanometry, using a mathematical approach similar to the method used in GSI Tympstar Version 2. ER data were gathered using Mimosa Acoustics (RMS-system v4.0.4.4) equipment.

Results: Analyses of receiver-operating characteristic plots confirmed the advantage of MFT measures of resonant frequency and F45° over the standard low-frequency measures of static admittance and tympanometric width with respect to distinguishing otosclerotic ears from normal ears. The F45° measure was also found to be the best single index for making this distinction among tympanometric parameters. ER below 1 kHz was significantly higher in otosclerotic ears than normal ears. This indicates that most of the incident energy below 1 kHz is reflected back into the ear canal in otosclerotic ears. ER patterns exceeding the 90th percentile of the normal ears across all frequencies correctly identify 82% of the otosclerotic ears while maintaining a low false alarm rate (17.2%); thus, this measure outperforms the other individual tympanometric parameters. The combination of ER and F45° was able to distinguish all otosclerotic ears. Correlations and the individual patterns of test performance revealed that information provided by ER is supplemental to the information provided by conventional and MFT with respect to distinguishing otosclerotic ears from normal ears.

Conclusion: The present findings show that the overall changes of ER across frequencies can distinguish otosclerotic ears from normal ears and from other sources of conductive hearing loss. Incorporating ER in general practice will improve the identification of otosclerotic ears when conventional tympanometry and MFT may fail to do so. To further improve the false alarm rate, ER should be interpreted in conjunction with other audiologic test batteries because it is unlikely that signs of a conductive component, including abnormal middle ear muscle reflex and ER responses, would be observed in an ear with normal middle ear function.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (2009). Perceptually based interventions. In C. Bowen, Children's speech sound disorders (pp. 152-155). Oxford: Wiley-Blackwell.

Book description: Caroline Bowen’s Children’s Speech Sound Disorders will be welcomed by experienced and novice clinicians, clinical educators, and students in the field of speech-language pathology/speech and language therapy for its practical, clinical focus. Drawing on the evidence base where possible, and making important theory-to-practice links overt, Bowen enhances her comprehensive account of assessment and clinical management of children with protracted or problematic speech development, with the addition of forty-nine expert essays. These unique contributions are authored by fifty-one internationally respected academicians, clinicians, researchers and thinkers representing a range of work settings, expertise, paradigms and theoretical orientations. In response to frequently asked questions about their work they address key theoretical, assessment, intervention, and service delivery issues.

Book information:
Publication Date: June 18, 2009
ISBN-10: 0470723645
ISBN-13: 978-0470723647
Edition: 1st

Dr. Karsten Steinhauer
STEINHAUER, K. (Palmer, C., Jewett, L. & Steinhauer, K.) (2009). Contextual effects on electrophysiological response to musical accents. The Neurosciences and Music III: Disorders and Plasticity, Annals of the New York Academy of Sciences, 1169, 470-480.

Abstract: Listeners' aesthetic and emotional responses to music typically occur in the context of long musical passages that contain structures defined in terms of the events that precede them. We describe an electrophysiological study of listeners' brain responses to musical accents that coincided in longer musical sequences. Musically trained listeners performed a timbre-change detection task in which a single-tone timbre change was positioned within 4-bar melodies composed of 350-ms tones to coincide or not with melodic contour accents and temporal accents (induced with temporal gaps). Event-related potential responses to (task-relevant) attended timbre changes elicited an early negativity (MMN/N2b) around 200 ms and a late positive component around 350 ms (P300), reflecting updating of the timbre change in working memory. The amplitudes of both components changed systematically across the sequence, consistent with expectancy-based context effects. Furthermore, melodic contour changes modulated the MMN/N2b response (but not the P300) to timbre changes in later sequence positions. In contrast, task-irrelevant temporal gaps elicited an MMN that was not modulated by position within the context; absence of a P300 indicated that temporal-gap accents were not updated in working memory. Listeners' neural responses to musical structure changed systematically as sequential predictability and listeners' expectations changed across the melodic context.

Link to article

-- (Steinhauer, K., White, E. & Drury, J.E.) (2009). Temporal dynamics of late second language acquisition: Evidence from event-related brain potentials. Second Language Research, 25(1), 13-41.

Abstract: The ways in which age of acquisition (AoA) may affect (morpho)syntax in second language acquisition (SLA) are discussed. We suggest that event-related brain potentials (ERPs) provide an appropriate online measure to test some such effects. ERP findings of the past decade are reviewed with a focus on recent and ongoing research. It is concluded that, in contrast to previous suggestions, there is little evidence for a strict critical period in the domain of late acquired second language (L2) morphosyntax. As illustrated by data from our lab and others, proficiency rather than AoA seems to predict brain activity patterns in L2 processing, including native-like activity at very high levels of proficiency. Further, a strict distinction between linguistic structures that late L2 learners can vs. cannot learn to process in a native-like manner (Clahsen and Felser, 2006a; 2006b) may not be warranted. Instead, morphosyntactic real-time processing in general seems to undergo dramatic, but systematic, changes with increasing proficiency levels. We describe the general dynamics of these changes (and the corresponding ERP components) and discuss how ERP research can advance our current understanding of SLA in general.

Link to article

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2009). Fallorðaspilið [The case-marking game] (S.Guðmundsdóttir, Ed.). Kópavogur, Iceland: Námsgagnastofnun [The National Centre for Educational Materials]. Educational game.

 

2008

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Abada, S., Baum, S., & Titone, D.) (2008). The effects of central and peripheral feature semantic biasing contexts on phonetic identification in younger and older listeners. Experimental Aging Research, 34, 232-250.

Abstract: It has often been reported that older listeners have difficulty discriminating between phonetically similar items, but may rely on contextual cues as a compensatory mechanism. The present study examined the effects of different degrees of semantic bias on speech perception in groups of younger and older listeners. Stimuli from two /g/-/k/ voice onset time (VOT) continua were presented at the end of biasing and neutral sentences. Results indicated that context strongly influenced phonetic identification in older listeners; this was true for younger listeners only in the case of less-than-ideal stimuli. Findings are discussed in relation to theories concerning age-related changes in speech processing.

Link to article

-- (Gracco, V., Klepousniotou, E., Itzhak, I., & Baum, S.) (2008). Sensorimotor and motorsensory interactions in speech. In Sock, Fuchs, & Laprie (Eds.), Proceedings of the 8th International Seminar on Speech Production. Strasbourg, France: INRIA.

Abstract: A long-standing issue in psycholinguistics is whether language production and language comprehension share a common neural substrate. Recent neuroimaging studies of speech appear to support overlap of brain regions for both production and perception. However, what is not known is how to interpret the perceptual activation of motor regions. In the following, the brain regions associated with producing heard speech are described to identify the sensorimotor components of the speech motor network. The brain regions associated with speech production are then examined for their activation during passive perception of lexical items presented as heard words, pictures and printed text. A number of overlapping cortical and subcortical areas were activated during both perception and production. Interestingly, all brain areas associated with passive perception increased their activation for speech production. The increased activation in the classical sensory/perceptual areas for production suggests an interactive process in which motor areas project back to sensory/perceptual areas reflecting a binding of perception (sensory) and production (motor) regions within the network.

Link to article

-- (Taler, V., Baum, S., Saumier, D., & Chertkow, H.) (2008). Comprehension of grammatical and emotional prosody is impaired in Alzheimer’s disease. Neuropsychology, 22, 188-195.

Abstract: Previous research has demonstrated impairment in comprehension of emotional prosody in individuals diagnosed with Alzheimer's disease (AD). The present pilot study further explored the prosodic processing impairment in AD, aiming to extend our knowledge to encompass both grammatical and emotional prosody processing. As expected, impairments were seen in emotional prosody. AD individuals were also found to be impaired in detecting sentence modality, suggesting that impairments in affective prosody processing in AD may be ascribed to a more general prosodic processing impairment, specifically in comprehending prosodic information signaled across the sentence level. AD participants were at a very mild stage of the disease, suggesting that prosody impairments occur early in the disease course.

Link to article

Dr. Vincent Gracco
GRACCO, V. (De Nil, L.F., Beal, D.S., Lafaille, S.J., Kroll, R.M., Crawley, A.P., & Gracco, V.L.) (2008). The effects of simulated stuttering and prolonged speech on neural activation patterns of stuttering and nonstuttering speakers. Brain and Language, 107(2), 114-123.

Abstract: Functional magnetic resonance imaging was used to investigate the neural correlates of passive listening, habitual speech and two modified speech patterns (simulated stuttering and prolonged speech) in stuttering and nonstuttering adults. Within-group comparisons revealed increased right hemisphere biased activation of speech-related regions during the simulated stuttered and prolonged speech tasks, relative to the habitual speech task, in the stuttering group. No significant activation differences were observed within the nonstuttering participants during these speech conditions. Between-group comparisons revealed less left superior temporal gyrus activation in stutterers during habitual speech and increased right inferior frontal gyrus activation during simulated stuttering relative to nonstutterers. Stutterers were also found to have increased activation in the left middle and superior temporal gyri and right insula, primary motor cortex and supplementary motor cortex during the passive listening condition relative to nonstutterers. The results provide further evidence for the presence of functional deficiencies underlying auditory processing, motor planning and execution in people who stutter, with these differences being affected by speech manner.

Link to article

-- (Tremblay, P., Shiller, D., & Gracco, V.L.) (2008). On the time-course and frequency selectivity of the EEG for different modes of response selection: evidence from speech production and keyboard pressing. Clinical Neurophysiology, 119, 88-99.

Abstract:
OBJECTIVE: To compare brain activity in the alpha and beta bands in relation to different modes of response selection, and to assess the domain generality of the response selection mechanism using verbal and non-verbal tasks.

METHODS: We examined alpha and beta event-related desynchronization (ERD) to analyze brain reactivity during the selection of verbal (word production) and non-verbal motor actions (keyboard pressing) under two different response modes: externally selected and self-selected.

RESULTS: An alpha and beta ERD was observed for both the verbal and non-verbal tasks in both the externally and the self-selected modes. For both tasks, the beta ERD started earlier and was longer in the self-selected mode than in the externally selected mode. The overall pattern of results between the verbal and non-verbal motor behaviors was similar.

CONCLUSIONS: The pattern of alpha and beta ERD is affected by the mode of response selection suggesting that the activity in both frequency bands contributes to the process of selecting actions. We suggest that activity in the alpha band may reflect attentional processes while activity in the beta band may be more closely related to the execution and selection process.

SIGNIFICANCE: These results suggest that a domain-general process contributes to the planning of speech and other motor actions. This finding has potential clinical implications for the use of diverse motor tasks to treat disorders of motor planning.

Link to article

-- (Gracco, V., Klepousniotou, E., Itzhak, I., & Baum, S.) (2008). Sensorimotor and motorsensory interactions in speech. In Sock, Fuchs, & Laprie (Eds), Proceedings of the 8th International Seminar on Speech Production. Strasbourg, France: INRIA.

Abstract: A long-standing issue in psycholinguistics is whether language production and language comprehension share a common neural substrate. Recent neuroimaging studies of speech appear to support overlap of brain regions for both production and perception. However, what is not known is how to interpret the perceptual activation of motor regions. In the following, the brain regions associated with producing heard speech are described to identify the sensorimotor components of the speech motor network. The brain regions associated with speech production are then examined for their activation during passive perception of lexical items presented as heard words, pictures and printed text. A number of overlapping cortical and subcortical areas were activated during both perception and production. Interestingly, all brain areas associated with passive perception increased their activation for speech production. The increased activation in the classical sensory/perceptual areas for production suggests an interactive process in which motor areas project back to sensory/perceptual areas reflecting a binding of perception (sensory) and production (motor) regions within the network.

Link to article

-- (Sato, M., Troille, E., Ménard, L., Cathiard, M.A., & Gracco, V.L.) (2008). Listening while speaking: new behavioral evidence for articulatory-to-auditory feedback projections. Proceedings of the International Conference on Auditory-Visual Speech Processing. Tangalooma, Australia.

Abstract: The existence of feedback control mechanisms from motor to sensory systems is a central idea in speech production research. Consistent with the view that articulation modulates the activity of the auditory cortex, it has been shown that silent articulation improved identification of concordant speech sounds [1]. In the present study, we replicated and extended this finding by demonstrating that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable may speed the perceptual processing of auditory and auditory-visual speech stimuli. These results provide new behavioral evidence for the existence of motor-to-sensory discharge in speech production and suggest a functional connection between action and perception systems.

Link to article

Dr. Aparna Nadig
NADIG, A. (Vivanti, G., Nadig, A., Ozonoff, S., & Rogers, S.J.) (2008). What do children with autism attend to during imitation tasks? Journal of Experimental Child Psychology, Special issue on Imitation in Autism, 101, 186-205.

Abstract: Individuals with autism show a complex profile of differences in imitative ability, including a general deficit in precision of imitating another's actions and special difficulty in imitating nonmeaningful gestures relative to meaningful actions on objects. Given that they also show atypical patterns of visual attention when observing social stimuli, we investigated whether possible differences in visual attention when observing an action to be imitated may contribute to imitative difficulties in autism in both nonmeaningful gestures and meaningful actions on objects. Results indicated that (a) a group of 18 high-functioning 8- to 15-year-olds with autistic disorder, in comparison with a matched group of 13 typically developing children, showed similar patterns of visual attention to the demonstrator's action but decreased attention to his face when observing a model to be imitated; (b) nonmeaningful gestures and meaningful actions on objects triggered distinct visual attention patterns that did not differ between groups; (c) the autism group demonstrated reduced imitative precision for both types of imitation; and (d) duration of visual attention to the demonstrator's action was related to imitation precision for nonmeaningful gestures in the autism group.

Link to article

Dr. Marc Pell
PELL, M. (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2008). Functional contributions of the basal ganglia to emotional prosody: evidence from ERPs. Brain Research, 1217, 171-178.

Abstract: The basal ganglia (BG) have been functionally linked to emotional processing [Pell, M.D., Leonard, C.L., 2003. Processing emotional tone from speech in Parkinson's Disease: a role for the basal ganglia. Cogn. Affect. Behav. Neurosci. 3, 275-288; Pell, M.D., 2006. Cerebral mechanisms for understanding emotional prosody in speech. Brain Lang. 97 (2), 221-234]. However, few studies have tried to specify the precise role of the BG during emotional prosodic processing. Therefore, the current study examined deviance detection in healthy listeners and patients with left focal BG lesions during implicit emotional prosodic processing in an event-related brain potential (ERP)-experiment. In order to compare these ERP responses with explicit judgments of emotional prosody, the same participants were tested in a follow-up recognition task. As previously reported [Kotz, S.A., Paulmann, S., 2007. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Res. 1151, 107-118; Paulmann, S. & Kotz, S.A., 2008. An ERP investigation on the temporal dynamics of emotional prosody and emotional semantics in pseudo- and lexical sentence context. Brain Lang. 105, 59-69], deviance of prosodic expectancy elicits a right lateralized positive ERP component in healthy listeners. Here we report a similar positive ERP correlate in BG-patients and healthy controls. In contrast, BG-patients are significantly impaired in explicit recognition of emotional prosody when compared to healthy controls. The current data serve as first evidence that focal lesions in left BG do not necessarily affect implicit emotional prosodic processing but evaluative emotional prosodic processes as demonstrated in the recognition task. The results suggest that the BG may not play a mandatory role in implicit emotional prosodic processing. Rather, executive processes underlying the recognition task may be dysfunctional during emotional prosodic processing.

Link to article

-- (Monetta, L., Grindrod, C.M., & Pell, M.D.) (2008). Effects of working memory capacity on inference generation during story comprehension in adults with Parkinson’s disease. Journal of Neurolinguistics, 21, 400-417.

Abstract: A group of non-demented adults with Parkinson's disease (PD) were studied to investigate how PD affects pragmatic-language processing, and, specifically, to test the hypothesis that the ability to draw inferences from discourse in PD is critically tied to the underlying working memory (WM) capacity of individual patients [Monetta, L., & Pell, M. D. (2007). Effects of verbal working memory deficits on metaphor comprehension in patients with Parkinson's disease. Brain and Language, 101, 80–89]. Thirteen PD patients and a matched group of 16 healthy control (HC) participants performed the Discourse Comprehension Test [Brookshire, R. H., & Nicholas, L. E. (1993). Discourse comprehension test. Tucson, AZ: Communication Skill Builders], a standardized test which evaluates the ability to generate inferences based on explicit or implied information relating to main ideas or details presented in short stories. Initial analyses revealed that the PD group as a whole was significantly less accurate than the HC group when comprehension questions pertained to implied as opposed to explicit information in the stories, consistent with previous findings [Murray, L. L., & Stout, J. C. (1999). Discourse comprehension in Huntington's and Parkinson's diseases. American Journal of Speech–Language Pathology, 8, 137–148]. However, subsequent analyses showed that only a subgroup of PD patients with WM deficits, and not PD patients with WM capacity within the control group range, were significantly impaired for drawing inferences (especially predictive inferences about implied details in the stories) when compared to the control group. These results build on a growing body of literature, which demonstrates that compromise of frontal–striatal systems and subsequent reductions in processing/WM capacity in PD are a major source of pragmatic-language deficits in many PD patients.

Link to article

-- (Pell, M.D. & Monetta, L.) (2008). How Parkinson’s disease affects nonverbal communication and language processing. Language and Linguistics Compass, 2(5), 739-759.

Abstract: In addition to difficulties that affect movement, many adults with Parkinson's disease (PD) experience changes that negatively impact on receptive aspects of their communication. For example, some PD patients have difficulties processing non-verbal expressions (facial expressions, voice tone) and many are less sensitive to ‘non-literal’ or pragmatic meanings of language, at least under certain conditions. This chapter outlines how PD can affect the comprehension of language and non-verbal expressions and considers how these changes are related to concurrent alterations in cognition (e.g., executive functions, working memory) and motor signs associated with the disease. Our summary underscores that the progressive course of PD can interrupt a number of functional systems that support cognition and receptive language, and in different ways, leading to both primary and secondary impairments of the systems that support linguistic and non-verbal communication.

Link to article

-- (Monetta, L., Cheang, H.S., & Pell, M.D.) (2008). Understanding speaker attitudes from prosody by adults with Parkinson’s disease. Journal of Neuropsychology, 2(2), 415-430.

Abstract: The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than HC participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

Link to article

-- (Pell, M.D. & Skorup, V.) (2008). Implicit processing of emotional prosody in a foreign versus native language. Speech Communication, 50(6), 519-530.

Abstract: To test ideas about the universality and time course of vocal emotion processing, 50 English listeners performed an emotional priming task to determine whether they implicitly recognize emotional meanings of prosody when exposed to a foreign language. Arabic pseudo-utterances produced in a happy, sad, or neutral prosody acted as primes for a happy, sad, or ‘false’ (i.e., non-emotional) face target and participants judged whether the facial expression represents an emotion. The prosody-face relationship (congruent, incongruent) and the prosody duration (600 or 1000 ms) were independently manipulated in the same experiment. Results indicated that English listeners automatically detect the emotional significance of prosody when expressed in a foreign language, although activation of emotional meanings in a foreign language may require greater exposure to prosodic information than when listening to the native language.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2008). The sound of sarcasm. Speech Communication, 50 (5), 366-381.

Abstract: The present study was conducted to identify possible acoustic cues of sarcasm. Native English speakers produced a variety of simple utterances to convey four different attitudes: sarcasm, humour, sincerity, and neutrality. Following validation by a separate naïve group of native English speakers, the recorded speech was subjected to acoustic analyses for the following features: mean fundamental frequency (F0), F0 standard deviation, F0 range, mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio (HNR, to probe for voice quality changes), and one-third octave spectral values (to probe resonance changes). The results of analyses indicated that sarcasm was reliably characterized by a number of prosodic cues, although one acoustic feature appeared particularly robust in sarcastic utterances: overall reductions in mean F0 relative to all other target attitudes. Sarcasm was also reliably distinguished from sincerity by overall reductions in HNR and in F0 standard deviation. In certain linguistic contexts, sarcasm could be differentiated from sincerity and humour through changes in resonance and reductions in both speech rate and F0 range. Results also suggested a role of language used by speakers in conveying sarcasm and sincerity. It was concluded that sarcasm in speech can be characterized by a specific pattern of prosodic cues in addition to textual cues, and that these acoustic characteristics can be influenced by language used by the speaker.

Link to article

-- (Paulmann, S., Pell, M.D., & Kotz, S.A.) (2008). How aging affects the recognition of emotional speech. Brain and Language, 104, 262-269.

Abstract: To successfully infer a speaker's emotional state, diverse sources of emotional information need to be decoded. The present study explored to what extent emotional speech recognition of 'basic' emotions (anger, disgust, fear, happiness, pleasant surprise, sadness) differs between different sex (male/female) and age (young/middle-aged) groups in a behavioural experiment. Participants were asked to identify the emotional prosody of a sentence as accurately as possible. As a secondary goal, the perceptual findings were examined in relation to acoustic properties of the sentences presented. Findings indicate that emotion recognition rates differ between the different categories tested and that these patterns varied significantly as a function of age, but not of sex.

Link to article

-- (Dara, C., Monetta, L., & Pell, M.D.) (2008). Vocal emotion processing in Parkinson’s disease: reduced sensitivity to negative emotions. Brain Research, 1188, 100-111.

Abstract: To document the impact of Parkinson's disease (PD) on communication and to further clarify the role of the basal ganglia in the processing of emotional speech prosody, this investigation compared how PD patients identify basic emotions from prosody and judge specific affective properties of the same vocal stimuli, such as valence or intensity. Sixteen non-demented adults with PD and 17 healthy control (HC) participants listened to semantically-anomalous pseudo-utterances spoken in seven emotional intonations (anger, disgust, fear, sadness, happiness, pleasant surprise, neutral) and two distinct levels of perceived emotional intensity (high, low). On three separate occasions, participants classified the emotional meaning of the prosody for each utterance (identification task), rated how positive or negative the stimulus sounded (valence rating task), or rated how intense the emotion was expressed by the speaker (intensity rating task). Results indicated that the PD group was significantly impaired relative to the HC group for categorizing emotional prosody and showed a reduced sensitivity to valence, but not intensity, attributes of emotional expressions conveying anger, disgust, and fear. The findings are discussed in light of the possible role of the basal ganglia in the processing of discrete emotions, particularly those associated with negative vigilance, and of how PD may impact on the sequential processing of prosodic expressions.

Link to article

-- (Paulmann, S., Schmidt, P., Pell, M.D., & Kotz, S.A.) (2008). Rapid processing of emotional and voice information as evidenced by ERPs. Speech Prosody 4th International Conference Proceedings, (pp. 205-209). Campinas, Brazil

Abstract: Next to linguistic content, the human voice carries speaker identity information (e.g. female/male, young/old) and can also carry emotional information. Although various studies have started to specify the brain regions that underlie the different functions of human voice processing, few studies have aimed to specify the time course underlying these processes. By means of event-related potentials (ERPs) we aimed to determine the time-course of neural responses to emotional speech, speaker identification, and their interplay. While engaged in an implicit voice processing task (probe verification) participants listened to emotional sentences spoken by two female and two male speakers of two different ages (young and middle-aged). For all four speakers rapid emotional decoding was observed as emotional sentences could be differentiated from neutral sentences already within 200 ms after sentence onset (P200). However, results also imply that individual capacity to encode emotional expressions may have an influence on this early emotion detection as the P200 differentiation pattern (neutral vs. emotion) differed for each individual speaker.

Link to article

Dr. Linda Polka
POLKA, L. (Shahnaz, N., Miranda, T., & Polka, L.) (2008). Multi-frequency tympanometry in neonatal intensive care unit & well babies. Journal of the American Academy of Audiology, 19(5), 392-418.

Abstract:
Conventional low probe tone frequency tympanometry has not been successful in identifying middle ear effusion in newborn infants due to differences in the physiological properties of the middle ear in newborn infants and adults. With a rapid increase in newborn hearing screening programs, there is a need for a reliable test of middle ear function for the infant population. In recent years, new evidence has shown that tympanometry performed at higher probe tone frequencies may be more sensitive to middle ear disease than conventional low probe tone frequency in newborn infants.

PURPOSE: The main goal of this study was to explore the characteristics of the normal middle ear in the NICU (neonatal intensive care unit) and well babies using conventional and multifrequency tympanometry (MFT). It was also within the scope of this study to compare conventional and MFT patterns in NICU and well babies to already established patterns in adults to identify ways to improve hearing assessment in newborns and young infants.

METHODS: Three experiments were conducted using standard and MFT involving healthy babies and NICU babies. NICU babies (n = 33), healthy three-week-old babies (n=16), and neonates on high-priority hearing registry (HPHR) (n=42) were tested. Thirty-two ears of 16 healthy Caucasian adults (compared to well-babies) and 47 ears of 26 healthy Caucasian adults (compared to NICU babies) were also included in this study.

RESULTS: The distribution of the Vanhuyse patterns as well as variation of admittance phase and peak compensated susceptance and conductance at different probe tone frequencies was also explored. In general, in both well babies and NICU babies, 226 Hz tympanograms are typically multipeaked in ears that passed or referred on transient otoacoustic emission (TEOAE), limiting the specificity and sensitivity of this measure for differentiating normal and abnormal middle ear conditions. Tympanograms obtained at 1 kHz are potentially more sensitive and specific to presumably abnormal and normal middle ear conditions. Tympanometry at 1 kHz is also a good predictor of presence or absence of TEOAE.

Link to article

-- (Rvachew, S., Alhaidary, A., Mattock, K., & Polka, L.) (2008). Emergence of corner vowels in the babble produced by infants exposed to Canadian English or Canadian French. Journal of Phonetics, 36, 564-577.

Abstract: This paper examined the emergence of corner vowels ([i], [u], [æ] and [a]) in the infant vowel spaces and the influence of the ambient language on babbling, in particular, on the frequency of occurrence of the corner vowels. Speech samples were recorded from 51 Canadian infants from 8 to 18 months of age: English-learning infants (n=24) and French-learning infants (n=27). The acoustic parameters (F1 and F2) of each codable infant vowel were analyzed and then used to plot all the vowels along the diffuse–compact (F2-F1) and grave–acute dimensions ([F1+F2]/2). Listener judgments of vowel category were obtained for the most extreme vowels in each infant's vowel space, i.e., the 10% of vowels with minimum or maximum diffuse–compact and grave–acute values. The judgments of adult listeners, both anglophone (n=5) and francophone (n=5), confirmed the peripheral expansion of infant vowel space toward the diffuse and grave corners with age. Furthermore, English-learning infants were judged by both English- and French-speaking listeners to produce a greater frequency of [u] in the grave corner, in comparison with French-learning infants. The higher proportion of [u] in the English sample was observed throughout the age range, suggesting the influence of ambient language at a young age.

Link to article

-- (Polka, L., Rvachew, S. & Molnar, M.) (2008). Speech perception by 6- to 8-month-olds in the presence of distracting sound. Infancy, 13(5), 421-439.

Abstract: The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a non-speech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no overlapping frequencies with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

Link to article

-- (Sundara, M., Polka, L., & Molnar, M.) (2008). Development of coronal stop perception: Bilingual infants keep pace with their monolingual peers. Cognition, 108, 232-242.

Abstract: Previous studies indicate that the discrimination of native phonetic contrasts in infants exposed to two languages from birth follows a different developmental time course from that observed in monolingual infants. We compared infant discrimination of dental (French) and alveolar (English) place variants of /d/ in three groups differing in language experience. At 6–8 months, infants in all three language groups succeeded; at 10–12 months, monolingual English and bilingual but not monolingual French infants distinguished this contrast. Thus, for highly frequent, similar phones, despite overlap in cross-linguistic distributions, bilingual infants performed on par with their English monolingual peers and better than their French monolingual peers.

Link to article

-- (Sundara, M. & Polka, L.) (2008). Discrimination of coronal stops by bilingual adults: The timing and nature of language interaction. Cognition, 106, 234-258.

Abstract: The current study was designed to investigate the timing and nature of interaction between the two languages of bilinguals. For this purpose, we compared discrimination of Canadian French and Canadian English coronal stops by simultaneous bilingual, monolingual and advanced early L2 learners of French and English. French /d/ is phonetically described as dental whereas English /d/ is described as alveolar. Using a categorial AXB task, the performance of all four groups was compared to chance and to the performance of native Hindi listeners. Hindi listeners performed well above chance in discriminating French and English /d/-initial syllables. The discrimination performance of advanced early L2 learners, but not simultaneous bilinguals, was consistent with one merged category for coronal stops in the two languages. The data provide evidence for interaction in L2 learners as well as simultaneous bilinguals; however, the nature of the interaction is different in the two groups.

Link to article

-- (Mattock, K. , Molnar, M., Polka, L. & Burnham, D.) (2008). The developmental time course of lexical tone perception in the first year of life. Cognition, 106, 1367-1381.

Abstract: Perceptual reorganisation of infants’ speech perception has been found from 6 months for consonants and earlier for vowels. Recently, similar reorganisation has been found for lexical tone between 6 and 9 months of age. Given that there is a close relationship between vowels and tones, this study investigates whether the perceptual reorganisation for tone begins earlier than 6 months. Non-tone language English and French infants were tested with the Thai low vs. rising lexical tone contrast, using the stimulus alternating preference procedure. Four- and 6-month-old infants discriminated the lexical tones, and there was no decline in discrimination performance across these ages. However, 9-month-olds failed to discriminate the lexical tones. This particular pattern of decline in nonnative tone discrimination over age indicates that perceptual reorganisation for tone does not parallel the developmentally prior decline observed in vowel perception. The findings converge with previous developmental cross-language findings on tone perception in English-language infants [Mattock, K., & Burnham, D. (2006). Chinese and English infants’ tone perception: Evidence for perceptual reorganization. Infancy, 10(3)], and extend them by showing similar perceptual reorganisation for non-tone language infants learning rhythmically different non-tone languages (English and French).

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Mortimer, J., & Rvachew, S.) (2008). Morphosyntax and phonological awareness in children with speech sound disorders. Annals of the New York Academy of Sciences, 1145, 275-282.

Abstract: The goals of the current study were to examine concurrent and longitudinal relationships of expressive morphosyntax and phonological awareness in a group of children with speech sound disorders. Tests of phonological awareness were administered to 38 children at the end of their prekindergarten and kindergarten years. Speech samples were elicited and analyzed to obtain a set of expressive morphosyntax variables. Finite verb morphology and inflectional suffix use by prekindergarten children were found to predict significant unique variance in change in phonological awareness a year later. These results are consistent with previous research showing finite verb morphology to be a sensitive indicator of language impairment in English.

Link to article

-- (Rvachew, S., Alhaidary, A., Mattock, K., & Polka, L.) (2008). Emergence of corner vowels in the babble produced by infants exposed to Canadian English or Canadian French. Journal of Phonetics, 36, 564-577.

Abstract: This paper examined the emergence of corner vowels ([i], [u], [æ] and [a]) in the infant vowel spaces and the influence of the ambient language on babbling, in particular, on the frequency of occurrence of the corner vowels. Speech samples were recorded from 51 Canadian infants from 8 to 18 months of age: English-learning infants (n=24) and French-learning infants (n=27). The acoustic parameters (F1 and F2) of each codable infant vowel were analyzed and then used to plot all the vowels along the diffuse–compact (F2-F1) and grave–acute dimensions ([F1+F2]/2). Listener judgments of vowel category were obtained for the most extreme vowels in each infant's vowel space, i.e., the 10% of vowels with minimum or maximum diffuse–compact and grave–acute values. The judgments of adult listeners, both anglophone (n=5) and francophone (n=5), confirmed the peripheral expansion of infant vowel space toward the diffuse and grave corners with age. Furthermore, English-learning infants were judged by both English- and French-speaking listeners to produce a greater frequency of [u] in the grave corner, in comparison with French-learning infants. The higher proportion of [u] in the English sample was observed throughout the age range, suggesting the influence of ambient language at a young age.

Link to article

-- (Rvachew, S., & Grawburg, M.) (2008). Reflections on phonological working memory, letter knowledge and phonological awareness: A reply to Hartmann (2008). Journal of Speech, Language, and Hearing Research, 51, 1219-1226.

Abstract:
Purpose: S. Rvachew and M. Grawburg (2006) found that speech perception and vocabulary skills jointly predicted the phonological awareness skills of children with a speech sound disorder. E. Hartmann (2008) suggested that the Rvachew and Grawburg model would be improved by the addition of phonological working memory. Hartmann further suggested that the link between phoneme awareness and letter knowledge should be modeled as a reciprocal relationship. In this letter, Rvachew and Grawburg respond to Hartmann's suggestions for modification of the model.

Method: The literature on the role of phonological working memory in the development of vocabulary knowledge and phonological awareness was reviewed. Data presented previously by Rvachew and Grawburg (2006) and Rvachew (2006) were reanalyzed.

Results: The reanalysis of previously reported longitudinal data revealed that the relationship between letter knowledge and specific aspects of phonological awareness was not reciprocal for kindergarten-age children with a speech sound disorder.

Conclusions: Phonological working memory, if measured so that relative performance levels do not reflect differences in articulatory accuracy, may not alter the model because of its close correspondence with speech perception skills. However, further study of the hypothesized causal relationships modeled by Rvachew and Grawburg (2006) would be valuable, especially if experimental research designs were used.

Link to article

-- (Polka, L., Rvachew, S. & Molnar, M.) (2008). Speech perception by 6- to 8-month-olds in the presence of distracting sound. Infancy, 13(5), 421-439.

Abstract: The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a non-speech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no overlapping frequencies with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

Link to article

-- (MacLeod, A., Brosseau-Lapré, F., & Rvachew, S.) (2008). Explorer la relation entre la production et la perception de la parole. Spectrum, 1, 10-18.

Abstract: The purpose of this critical review is to explore the relationship between speech production and speech perception in typically developing children and in children with phonological disorders. First, we describe the three main theories of speech production and perception: motor theories, gestural theories, and integrative theories. Second, we describe the findings of current research examining the links between speech production and perception. Third, we evaluate the hypotheses proposed by the three main theories against these research findings in order to suggest future theoretical and clinical directions. Current research findings support the hypothesis of an ongoing link between speech production and perception, as proposed by integrative theories, which posit a continuing role for speech perception in speech planning and production.

Link to article

Dr. Karsten Steinhauer
STEINHAUER, K. (Steinhauer, K. & Connolly, J.F.) (2008). Event-related potentials in the study of language. In B. Stemmer & H. Whitaker (Eds.), Handbook of the Neuroscience of Language (pp. 91-104). New York: Elsevier.

Book description:
In the last ten years the neuroscience of language has matured as a field. Ten years ago, neuroimaging was just being explored for neurolinguistic questions, whereas today it constitutes a routine component. At the same time there have been significant developments in linguistic and psychological theory that speak to the neuroscience of language. This book consolidates those advances into a single reference.

The Handbook of the Neuroscience of Language provides a comprehensive overview of this field. Divided into five sections, section one discusses methods and techniques including clinical assessment approaches, methods of mapping the human brain, and a theoretical framework for interpreting the multiple levels of neural organization that contribute to language comprehension. Section two discusses the impact that imaging techniques (PET, fMRI, ERPs, electrical stimulation of language cortex, TMS) have had on language research. Section three discusses experimental approaches to the field, including disorders at different language levels in reading as well as writing and number processing. Additionally, chapters here present computational models, discuss the role of mirror systems for language, and cover brain lateralization with respect to language. Section four focuses on language in special populations, in various disease processes, and in developmental disorders. The book ends with a listing of resources in the neuroscience of language and a glossary of items and concepts to help the novice become acquainted with the field.

Book information:
ISBN: 9780080453521

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (2008). L’évaluation du langage des enfants bilingues (Assessing the language of bilingual children). Fréquences : revue de l’ordre des orthophonistes et audiologistes du Québec.
 
-- (Webster, R., Erdos, C., Evans, K., Majnemer, A., Saigal, G., Kehayia, E., Thordardottir, E., & Shevell, M.) (2008). Neurological and magnetic resonance imaging findings in children with developmental language impairment. Journal of Child Neurology, 23 (8), 870-877.

Abstract: Neurologic and radiologic findings in children with well-defined developmental language impairment have rarely been systematically assessed. Children aged 7 to 13 years with developmental language impairment or normal language (controls) underwent language, nonverbal cognitive, motor and neurological assessments, standardized assessment for subtle neurological signs, and magnetic resonance imaging. Nine children with developmental language impairment and 12 controls participated. No focal abnormalities were identified on standard neurological examination. Age and developmental language impairment were independent predictors of neurological subtle signs scores (r(2) = 0.52). Imaging abnormalities were identified in two boys with developmental language impairment and no controls (P = .17). Lesions identified were predicted neither by history nor by neurological examination. Previously unsuspected lesions were identified in almost 25% of children with developmental language impairment. Constraints regarding cooperation and sedation requirements may limit the clinical application of imaging modalities in this population.

Link to article

-- (Thordardottir, E.) (2008). Language specific effects of task demands on the manifestation of specific language impairment: A comparison of English and Icelandic. Journal of Speech, Language and Hearing Research, 51, 922-937.

Abstract:
Purpose: Previous research has indicated that the manifestation of specific language impairment (SLI) varies according to factors such as language, age, and task. This study examined the effect of task demands on language production in children with SLI cross-linguistically.

Method: Icelandic- and English-speaking school-age children with SLI and normal language (NL) peers (n = 42) were administered measures of verbal working memory. Spontaneous language samples were collected in contexts that vary in task demands: conversation, narration, and expository discourse. The effect of the context-related task demands on the accuracy of grammatical inflections was examined.

Results: Children with SLI in both language groups scored significantly lower than their NL peers in verbal working memory. Nonword repetition scores correlated with morphological accuracy. In both languages, mean length of utterance (MLU) varied systematically across sampling contexts. Context exerted a significant effect on the accuracy of grammatical inflection in English only. Error rates were higher overall in English than in Icelandic, but whether the difference was significant depended on the sampling context. Errors in Icelandic involved verb and noun phrase inflection to a similar extent.

Conclusions: The production of grammatical morphology appears to be more taxing for children with SLI who speak English than for those who speak Icelandic. Thus, whereas children with SLI in both language groups evidence deficits in language processing, cross-linguistic differences are seen in which linguistic structures are vulnerable when processing load is increased. Future research should carefully consider the effect of context on children's language performance.

Link to article

-- (Royle, P. & Thordardottir, E.) (2008). Elicitation of the passé composé in French preschoolers with and without SLI. Applied Psycholinguistics, 29, 341-365.

Abstract: This study examines inflectional abilities in French-speaking children with specific language impairment (SLI) using a verb elicitation task. Eleven children with SLI and age-matched controls (37–52 months) participated in the experiment. We elicited the passé composé using eight regular and eight irregular high frequency verbs matched for age of acquisition. Children with SLI showed dissimilar productive verb inflection abilities to control children (even when comparing participants with similar verb vocabularies and mean length of utterance in words). Control children showed evidence of overregularization and sensitivity to morphological structure, whereas no such effects were observed in the SLI group. Error patterns observed in the SLI group demonstrate that, at this age, they cannot produce passé composé forms in elicitation tasks, even though some participants used them spontaneously. Either context by itself might therefore be insufficient to fully evaluate productive linguistic abilities in children with SLI.

Link to article

2007

Shari Baum, Ph.D., Professor
Laura Gonnerman, Ph.D., Assistant Professor
Vincent Gracco, Ph.D., Associate Professor
Aparna Nadig, Ph.D., Assistant Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Klepousniotou, E., & Baum, S.) (2007). Disambiguating the ambiguity advantage effect in word recognition: An advantage for polysemous but not homonymous words, Journal of Neurolinguistics, 20, 1-24.

Abstract: Previous lexical decision studies reported a processing advantage for words with multiple meanings (i.e., the “ambiguity advantage” effect). The present study further specifies the source of this advantage by showing that it is based on the extent of meaning relatedness of ambiguous words. Four types of ambiguous words, balanced homonymous (e.g., “panel”), unbalanced homonymous (e.g., “port”), metaphorically polysemous (e.g., “lip”), and metonymically polysemous (e.g., “rabbit”), were used in auditory and visual simple lexical decision experiments. It was found that ambiguous words with multiple related senses (i.e., polysemous words) are processed faster than frequency-matched unambiguous control words, whereas ambiguous words with multiple unrelated meanings (i.e., homonymous words) do not show such an advantage. In addition, a distinction within polysemy (into metaphor and metonymy) is demonstrated experimentally. These results call for a re-evaluation of models of word recognition, so that the advantage found for polysemous, but not homonymous, words can be accommodated.

Link to article

Dr. Laura Gonnerman
GONNERMAN, L.M. (Gonnerman, L.M., Seidenberg, M.S., & Andersen, E.S.) (2007). Graded semantic and phonological similarity effects in priming: Evidence for a distributed connectionist approach to morphology. Journal of Experimental Psychology: General, 136, 323-345.

Abstract: A considerable body of empirical and theoretical research suggests that morphological structure governs the representation of words in memory and that many words are decomposed into morphological components in processing. The authors investigated an alternative approach in which morphology arises from the interaction of semantic and phonological codes. A series of cross-modal lexical decision experiments shows that the magnitude of priming reflects the degree of semantic and phonological overlap between words. Crucially, moderately similar items produce intermediate facilitation (e.g., lately-late). This pattern is observed for word pairs exhibiting different types of morphological relationships, including suffixed-stem (e.g., teacher-teach), suffixed-suffixed (e.g., saintly-sainthood), and prefixed-stem pairs (preheat-heat). The results can be understood in terms of connectionist models that use distributed representations rather than discrete morphemes.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Beal, D.S., Gracco, V. L., Lafaille, S. J., & DeNil, L. F.) (2007). Voxel-based morphometry of auditory and speech-related cortex in stutterers. NeuroReport, 18 (12), 1102-1110.

Abstract: Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.

Link to article

Dr. Aparna Nadig
NADIG, A. (Nadig, A., Ozonoff, S., Young, G., Rozga, A., Sigman, M., & Rogers, S. J.) (2007). A prospective study of response-to-name in infants at risk for autism. Archives of Pediatrics and Adolescent Medicine, Theme issue on Autism, 161(4), 378-383.

Abstract:
OBJECTIVE: To assess the sensitivity and specificity of decreased response to name at age 12 months as a screen for autism spectrum disorders (ASD) and other developmental delays.

DESIGN: Prospective, longitudinal design studying infants at risk for ASD.

SETTING: Research laboratory at university medical center.

PARTICIPANTS: Infants at risk for autism (55 six-month-olds, 101 twelve-month-olds) and a control group at no known risk (43 six-month-olds, 46 twelve-month-olds). To date, 46 at-risk infants and 25 control infants have been followed up to 24 months.

INTERVENTION: Experimental task eliciting response-to-name behavior.

MAIN OUTCOME MEASURES: Autism Diagnostic Observation Schedule, Mullen Scales of Early Learning.

RESULTS: At age 6 months, there was a nonsignificant trend for control infants to require a fewer number of calls to respond to name than infants at risk for autism. At age 12 months, 100% of infants in the control group "passed," responding on the first or second name call, while 86% in the at-risk group did. Three fourths of children who failed the task were identified with developmental problems at age 24 months. Specificity of failing to respond to name was 0.89 for ASD and 0.94 for any developmental delay. Sensitivity was 0.50 for ASD and 0.39 for any developmental delay.

CONCLUSIONS: Failure to respond to name by age 12 months is highly suggestive of developmental abnormality but does not identify all children at risk for developmental problems. Lack of responding to name is not universal among infants later diagnosed with ASD and/or other developmental delays. Poor response to name may be a trait of the broader autism phenotype in infancy.

Link to article

-- (Nadig, A., Ozonoff, S., Singh, L., Young, G., & Rogers, S. J.) (2007). Do 6-month-old infants at risk for autism display an infant-directed speech preference? Proceedings of the 31st annual Boston University Conference on Language Development. Somerville: Cascadilla Press.
 
Dr. Marc Pell
PELL, M. (Berney, A., Panisset, M., Sadikot, A.F., Ptito, A., Dagher, A., Fraraccio, M., Savard, G., Pell, M.D. & Benkelfat, C.) (2007). Mood stability during acute stimulator challenge in Parkinson’s disease patients under long-term treatment with subthalamic deep brain stimulation. Movement Disorders, 22 (8), 1093-1096.

Abstract: Acute and chronic behavioral effects of subthalamic stimulation (STN-DBS) for Parkinson's disease (PD) are reported in the literature. As the technique is relatively new, few systematic studies on the behavioral effects in long-term treated patients are available. To further study the putative effects of STN-DBS on mood and emotional processing, 15 consecutive PD patients under STN-DBS for at least 1 year, were tested ON and OFF stimulation while on or off medication, with instruments sensitive to short-term changes in mood and in emotional discrimination. After acute changes in experimental conditions, mood core dimensions (depression, elation, anxiety) and emotion discrimination processing remained remarkably stable, in the face of significant motor changes. Acute stimulator challenge in long-term STN-DBS-treated PD patients does not appear to provoke clinically relevant mood effects.

Link to article

-- (Pell, M.D.) (2007). Reduced sensitivity to prosodic attitudes in adults with focal right hemisphere brain damage. Brain and Language, 101, 64-79.

Abstract: Although there is a strong link between the right hemisphere and understanding emotional prosody in speech, there are few data on how the right hemisphere is implicated for understanding the emotive "attitudes" of a speaker from prosody. This report describes two experiments which compared how listeners with and without focal right hemisphere damage (RHD) rate speaker attitudes of "confidence" and "politeness" which are signalled in large part by prosodic features of an utterance. The RHD listeners displayed abnormal sensitivity to both the expressed confidence and politeness of speakers, underscoring a major role for the right hemisphere in the processing of emotions and speaker attitudes from prosody, although the source of these deficits may sometimes vary.

Link to article

-- (Cheang, H.S. & Pell, M.D.) (2007). An acoustic investigation of Parkinsonian speech in linguistic and emotional contexts. Journal of Neurolinguistics, 20, 221-241.

Abstract: The speech prosody of a group of patients in the early stages of Parkinson's disease (PD) was compared to that of a group of healthy age- and education-matched controls to quantify possible acoustic changes in speech production secondary to PD. Both groups produced standardized speech samples across a number of prosody conditions: phonemic stress, contrastive stress, and emotional prosody. The amplitude, fundamental frequency, and duration of all tokens were measured. PD speakers produced speech that was of lower amplitude than the tokens of healthy speakers in many conditions across all production tasks. Fundamental frequency distinguished the two speaker groups for contrastive stress and emotional prosody production, and duration differentiated the groups for phonemic stress production. It was concluded that motor impairments in PD lead to adverse and varied acoustic changes which affect a number of prosodic contrasts in speech and that these alterations appear to occur in earlier stages of disease progression than is often presumed by many investigators.

Link to article

-- (Monetta, L. & Pell, M.D.) (2007). Effects of verbal working memory deficits on metaphor comprehension in patients with Parkinson's disease. Brain and Language, 101, 80-89.

Abstract: This research studied one aspect of pragmatic language processing, the ability to understand metaphorical language, to determine whether patients with Parkinson disease (PD) are impaired for these abilities, and whether cognitive resource limitations/fronto-striatal dysfunction contributes to these deficits. Seventeen PD participants and healthy controls (HC) completed a series of neuropsychological tests and performed a metaphor comprehension task following the methods of Gernsbacher and colleagues [Gernsbacher, M. A., Keysar, B., Robertson, R. R. W., & Werner, N. K. (2001). The role of suppression and enhancement in understanding metaphors. Journal of Memory and Language, 45, 433-450.] When participants in the PD group were identified as "impaired" or "unimpaired" relative to the control group on a measure of verbal working memory span, we found that only PD participants with impaired working memory were simultaneously impaired in the processing of metaphorical language. Based on our findings we argue that certain "complex" forms of language processing such as metaphor interpretation are highly dependent on intact fronto-striatal systems for working memory which are frequently, although not always, compromised during the early course of PD.

Link to article

-- (Dara, C. & Pell, M.D.) (2007, Spring). Intonation in tone languages. ASHA Kiran: Newsletter of the Asian Indian Caucus, 8.
 
Dr. Linda Polka
POLKA, L. (Polka, L. Rvachew, S. & Mattock, K.) (2007). Experiential influences on speech perception and production during infancy. In E. Hoff & M. Shatz (Eds), Handbook of Child Language. Oxford: Blackwell.

Abstract: Mature language users are highly specialized, expert, and efficient perceivers and producers of their native language. This expertise begins to develop in infancy, a time when the infant acquires language-specific perception of native language phonetic categories and learns to produce speech-like syllables in the form of canonical babble. The emergence of these skills is well described by past research but the precise mechanisms by which these foundational abilities develop have not been identified. This chapter provides an overview of what is currently known about the impact of language experience on the development of speech perception and production during infancy. Throughout we affirm that experiential influences on phonetic development cannot be understood without considering the interaction between the constraints that the child brings to the task and the nature of the environmental input. In the perception and production domains our current understanding of this interaction is incomplete and tends to focus on the child as a passive receiver of input. In our review, we signal a recent shift in research attention to the infant’s role in actively selecting and learning from the input. We begin this chapter by describing what is currently known about the determinants of speech perception and speech production development during infancy while highlighting important gaps to be filled within each domain. We close by emphasizing the need to integrate research across the perception and production domains.

Link to book

Dr. Susan Rvachew
RVACHEW, S. (Chiang, P. & Rvachew, S.) (2007). English-French bilingual children’s phonological awareness and vocabulary skills. Canadian Journal of Applied Linguistics, 10, 293-308.

Abstract: This study examined the relationship between English-speaking children’s vocabulary skills in English and in French and their phonological awareness skills in both languages. Forty-four kindergarten-aged children attending French immersion programs were administered a receptive vocabulary test, an expressive vocabulary test and a phonological awareness test in English and French. Results showed that French phonological awareness was largely explained by English phonological awareness, consistent with previous findings that phonological awareness skills transfer across languages. However, there was a small unique contribution from French expressive vocabulary size to French phonological awareness. The importance of vocabulary skills to the development of phonological awareness is discussed.

Link to article

-- (Rvachew, S.) (2007). Phonological processing and reading in children with speech sound disorders. American Journal of Speech-Language Pathology, 16, 260-270.

Abstract:
Purpose: To examine the relationship between phonological processing skills prior to kindergarten entry and reading skills at the end of 1st grade, in children with speech sound disorders (SSD).

Method: The participants were 17 children with SSD and poor phonological processing skills (SSD-low PP), 16 children with SSD and good phonological processing skills (SSD-high PP), and 35 children with typical speech who were first assessed during their prekindergarten year using measures of phonological processing (i.e., speech perception, rime awareness, and onset awareness tests), speech production, receptive and expressive language, and phonological awareness skills. This assessment was repeated when the children were completing 1st grade. The Test of Word Reading Efficiency was also conducted at that time. First-grade sight word and nonword reading performance was compared across these groups.

Results: At the end of 1st grade, the SSD-low PP group achieved significantly lower nonword decoding scores than the SSD-high PP and typical speech groups. The 2 SSD groups demonstrated similarly good receptive language skills and similarly poor articulation skills at that time, however. No between-group differences in sight word reading were observed. All but 1 child (in the SSD-low PP group) obtained reading scores that were within normal limits.

Conclusion: Weaknesses in phonological processing were stable for the SSD-low PP subgroup over a 2-year period.

Link to article

-- (Grawburg, M. & Rvachew, S.) (2007). Phonological awareness intervention for children with speech sound disorders. Journal of Speech-Language Pathology and Audiology, 31, 19-26.

Abstract: Phonological awareness (PA) development is related to the development of decoding and reading skills. PA can be measured in young children before the commencement of school and formal reading instruction. Compared to normally developing children, children with speech sound disorders (SSD) are at increased risk for delayed PA. Children with poor PA, who are at risk for developing poor decoding skills, can be identified and treated before poor PA negatively impacts their future literacy development. This intervention program was developed as a form of early intervention for preschool-aged children with delayed PA. Ten 4-year-old children with poor PA and SSD participated in the study. The program consisted of eight sessions, which included both a PA and a speech perception component. The PA portion focused on matching words that shared either the same onset or rime. The speech perception portion focused on the identification of correctly articulated or misarticulated words containing the target onset. Participants made significant improvements in their PA, raising their post-treatment test scores to the level of normally developing children. The unique and important role of speech-language pathologists in the stimulation of PA in children prior to the commencement of formal schooling is highlighted.

Link to article

-- (Rvachew, S., Chiang, P., & Evans, N.) (2007). Characteristics of speech errors produced by children with and without delayed phonological awareness skills. Language, Speech, and Hearing Services in Schools, 38, 1-12.

Abstract:
PURPOSE: The purpose of this study was to examine the relationship between the types of speech errors that are produced by children with speech-sound disorders and the children's phonological awareness skills during their prekindergarten and kindergarten years.

METHOD: Fifty-eight children with speech-sound disorders were assessed during the spring of their prekindergarten year and then again at the end of their kindergarten year. The children's responses on the Goldman–Fristoe Test of Articulation (R. Goldman & M. Fristoe, 2000) were described in terms of match ratios for the features of each target sound and the type of error produced. Match ratios and error type frequencies were then examined as a function of the child's performance on a test of phonological awareness.

RESULTS: Lower match ratios for +distributed and higher frequencies of typical syllable structure errors and atypical segment errors were associated with poorer phonological awareness test performance. However, no aspect of the children's error patterns proved to be a reliable indicator of which individual child would pass or fail the test. The best predictor of test performance at the end of the kindergarten year was test performance 1 year earlier. Children who achieved age-appropriate articulation skills by the end of kindergarten also achieved age-appropriate phonological awareness skills.

CONCLUSION: Children who enter kindergarten with delayed articulation skills should be monitored to ensure age-appropriate acquisition of phonological awareness and literacy skills.

Link to article

-- (Rvachew, S.) (2007). Perceptual foundations of speech acquisition. In S. McLeod (Ed.), International Guide to Speech Acquisition (pp. 26 – 30). Clifton Park, NY: Thomson Delmar Learning.

Book description: The International Guide to Speech Acquisition is a comprehensive guide that is ideal for speech-language pathologists working with children from a wide variety of language backgrounds. Offering coverage on 12 English-speaking dialects and 24 languages other than English, you will find the information you need to identify children who are having speech difficulties and provide age-appropriate prevention and intervention targets.

Book information:
ISBN 13: 9781418053604
ISBN 10: 1418053600

Link to book

-- (Polka, L., Rvachew, S., & Mattock, K.) (2007). Experiential influences on speech perception and production in infancy. In E. Hoff & M. Shatz (Eds.), Blackwell Handbook of Language Development (pp. 153-172). Malden, MA: Blackwell Publishing.

Book description: The Blackwell Handbook of Language Development provides a comprehensive treatment of the major topics and current concerns in the field; exploring the progress of 21st century research, its precursors, and promising research topics for the future.

    • Provides comprehensive treatments of the major topics and current concerns in the field of language development
    • Explores foundational and theoretical approaches
    • Focuses on the 21st century's research into the areas of brain development, computational skills, bilingualism, education, and cross-cultural comparison
    • Looks at language development in infancy through early childhood, as well as atypical development
    • Considers the past work, present research, and promising topics for the future.
    • Broad coverage makes this an excellent resource for graduate students in a variety of disciplines


Book information:
ISBN 13: 978-1405132534
ISBN 10: 1405132531

Link to book

Dr. Elin Thordardottir
THORDARDOTTIR, E. (Thordardottir, E.) (2007). Móðurmál og tvítyngi (Mother tongue and bilingualism). In H. Ragnarsdóttir, E. Sigríður Jónsdóttir & M. Þorkell Bernharðsson (Eds.), Fjölmenning á Íslandi (Multiculturalism in Iceland) (pp. 101-128). Reykjavik, Iceland: Rannsóknastofa í fjölmenningarfræðum KHÍ & Háskólaútgáfan (College of Education Research Center on Multiculturalism, and University of Iceland Press).
-- (Thordardottir, E. & Namazi, M.) (2007). Specific language impairment in French-speaking children: Beyond grammatical morphology. Journal of Speech, Language, and Hearing Research, 50, 698-715.

Abstract:
Purpose: Studies on specific language impairment (SLI) in French have identified specific aspects of morphosyntax as particularly vulnerable. However, a cohesive picture of relative strengths and weaknesses characterizing SLI in French has not been established. In light of normative data showing low morphological error rates in the spontaneous language of French-speaking preschoolers, the relative prominence of such errors in SLI in young children was questioned.

Method: Spontaneous language samples were collected from 12 French-speaking preschool-age children with SLI, as well as 12 children with normal language development matched on age and 12 children with normal language development matched on mean length of utterance. Language samples were analyzed for length of utterance; lexical diversity and composition; diversity of grammatical morphology and morphological errors, including verb finiteness; subject omission; and object clitics.

Results: Children with SLI scored lower than age-matched children on all of these measures but similarly to the mean length of utterance–matched controls. Errors in grammatical morphology were very infrequent in all groups, with no significant group differences.

Conclusion: The results indicate that the spontaneous language of French-speaking children with SLI in the preschool age range is characterized primarily by a generalized language impairment and that morphological deficits do not stand out as an area of particular vulnerability, in contrast with the pattern found in English for this age group.

Link to article

-- (Thordardottir, E.) (2007). Effective intervention for specific language impairment. In E. Thordardottir (Ed.), Encyclopedia of Language and Literacy Development (pp. 1-8). London, ON: Canadian Language and Literacy Research Network. http://www.literacyencyclopedia.ca

2006

Shari Baum, Ph.D., Professor
Vincent Gracco, Ph.D., Associate Professor
Marc Pell, Ph.D., Associate Professor
Linda Polka, Ph.D., Associate Professor
Susan Rvachew, Ph.D., Associate Professor
Karsten Steinhauer, Ph.D., Assistant Professor
Elin Thordardottir, Ph.D., Associate Professor

Dr. Shari Baum
BAUM, S. (Aasland, W., Baum, S., & McFarland, D.) (2006). Electropalatographic, acoustic, and perceptual data on adaptation to a palatal perturbation. Journal of the Acoustical Society of America, 119, 2372-2381.

Abstract: Exploring the compensatory responses of the speech production system to perturbation has provided valuable insights into speech motor control. The present experiment was conducted to examine compensation for one such perturbation: a palatal perturbation in the production of the fricative /s/. Subjects wore a specially designed electropalatographic (EPG) appliance with a buildup of acrylic over the alveolar ridge as well as a normal EPG palate. In this way, compensatory tongue positioning could be assessed during a period of target-specific and intense practice and compared to nonperturbed conditions. Electropalatographic, acoustic, and perceptual analyses of productions of /asa/ elicited from nine speakers over the course of a one-hour practice period were conducted. Acoustic and perceptual results confirmed earlier findings, which showed improvement in production with a thick artificial palate in place over the practice period; the EPG data showed overall increased maximum contact as well as increased medial and posterior contact for speakers with the thick palate in place, but little change over time. Negative aftereffects were observed in the productions with the thin palate, indicating recalibration of sensorimotor processes in the face of the oral-articulatory perturbation. Findings are discussed with regard to the nature of adaptive articulatory skills.

Link to article

-- (Dwivedi, V., Philips, N., Lague-Beauvais, M., & Baum, S.) (2006). An electrophysiological study of mood, modal context, and anaphora. Brain Research, 1117, 135-153.

Abstract: We investigated whether modal information elicited empirical effects with regard to discourse processing. That is, like tense information, one of the linguistic factors shown to be relevant in organizing a discourse representation is modality, where the mood of an utterance indicates whether or not it is asserted. Event-related potentials (ERPs) were used in order to address the question of the qualitative nature of discourse processing, as well as the time course of this process. This experiment investigated pronoun resolution in two-sentence discourses, where context sentences either contained a hypothetical or actual Noun Phrase antecedent. The other factor in this 2 × 2 experiment was type of continuation sentence, which included or excluded a modal auxiliary (e.g., must, should) and contained a pronoun. Intuitions suggest that hypothetical antecedents followed by pronouns asserted to exist present ungrammaticality, unlike actual antecedents followed by such pronouns. Results confirmed the grammatical intuition that the former discourse displays anomaly, unlike the latter (control) discourse. That is, at the Verb position in continuation sentences, we found frontal positivity, consistent with the family of P600 components, and not an N400 effect, which suggests that the anomalous target sentences caused a revision in discourse structure. Furthermore, sentences exhibiting modal information resulted in negative-going waveforms at other points in the continuation sentence, indicating that modality affects the overall structural complexity of discourse representation.

Link to article

-- (Shah, A. & Baum, S.) (2006). Perception of lexical stress by brain-damaged individuals: Effects on lexical-semantic activation. Applied Psycholinguistics, 27, 143-156.

Abstract: A semantic priming, lexical-decision study was conducted to examine the ability of left- and right-brain damaged individuals to perceive lexical-stress cues and map them onto lexical–semantic representations. Correctly and incorrectly stressed primes were paired with related and unrelated target words to tap implicit processing of lexical prosody. Results conformed with previous studies involving implicit perception of lexical stress, in that the left-hemisphere damaged individuals showed preserved sensitivity to lexical stress patterns as indicated by priming patterns mirroring those of the normal controls. An increased sensitivity to the varying stress patterns of the primes was demonstrated by the right-hemisphere damaged patient group, however. Results are discussed in relation to current theories of prosodic lateralization, with a particular focus on the nature of task demands in lexical stress perception studies.

Link to article

-- (Shah, A., Baum, S., & Dwivedi, V.) (2006). Neural substrates of linguistic prosody: Evidence from syntactic disambiguation in the productions of brain-damaged patients. Brain & Language, 96, 78-89.

Abstract: The present investigation focussed on the neural substrates underlying linguistic distinctions that are signalled by prosodic cues. A production experiment was conducted to examine the ability of left- (LHD) and right- (RHD) hemisphere-damaged patients and normal controls to use temporal and fundamental frequency cues to disambiguate sentences which include one or more Intonational Phrase level prosodic boundaries. Acoustic analyses of subjects' productions of three sentence types (parentheticals, appositives, and tags) showed that LHD speakers, compared to RHD and normal controls, exhibited impairments in the control of temporal parameters signalling phrase boundaries, including inconsistent patterns of pre-boundary lengthening and longer-than-normal pause durations in non-boundary positions. Somewhat surprisingly, a perception test presented to a group of normal native listeners showed listeners experienced greatest difficulty in identifying the presence or absence of boundaries in the productions of the RHD speakers. The findings support a cue lateralization hypothesis in which prosodic domain plays an important role.

Link to article

-- (Sundara, M., Polka, L., & Baum, S.) (2006). Production of coronal stops by simultaneous bilingual adults. Bilingualism: Language & Cognition, 9, 97-114.

Abstract: This study investigated acoustic-phonetics of coronal stop production by adult simultaneous bilingual and monolingual speakers of Canadian English (CE) and Canadian French (CF). Differences in the phonetics of CF and CE include voicing and place of articulation distinctions. CE has a two-way voicing distinction (in syllable initial position) contrasting short- and long-lag VOT; coronal stops in CE are described as alveolar. CF also has a two-way voicing distinction, but contrasting lead and short-lag VOT; coronal stops in CF are described as dental. Acoustic analyses of stop consonants for both VOT and dental/alveolar place of articulation are reported. Results indicate that simultaneous bilingual as well as monolingual adults produce language-specific differences, albeit not in the same way, across CF and CE for voicing and place. Similarities and differences between simultaneous bilingual and monolingual adults are discussed to address phonological organization in simultaneous bilingual adults.

Link to article

Dr. Vincent Gracco
GRACCO, V. (Tremblay, P. & Gracco, V. L.) (2006). Contribution of the frontal lobe to externally and internally specified verbal responses: fMRI evidence. NeuroImage, 33, 947-957.

Abstract: It has been suggested that within the frontal cortex there is a lateral to medial shift in the control of action, with the lateral premotor area (PMA) involved in externally specified actions and the medial supplementary motor areas (SMA) involved in internally specified actions. Recent brain imaging studies demonstrate, however, that the control of externally and internally specified actions may involve more complex and overlapping networks involving not only the PMA and the SMA, but also the pre-SMA and the lateral prefrontal cortex (PFC). The aim of the present study was to determine whether these frontal regions are differentially involved in the production of verbal responses, when they are externally specified and when they are internally specified. Participants engaged in three overt speaking tasks in which the degree of response specification differed. The tasks involved reading aloud words (externally specified), or generating words aloud from narrow or broad semantic categories (internally specified). Using fMRI, the location and magnitude of the BOLD activity for these tasks was measured in a group of ten participants. Compared with rest, all tasks activated the primary motor area and the SMA-proper, reflecting their common role in speech production. The magnitude of the activity in the PFC (Brodmann area 45), the left PMAv and the pre-SMA increased for word generation, suggesting that each of these three regions plays a role in internally specified action selection. This confirms previous reports concerning the participation of the pre-SMA in verbal response selection. The pattern of activity in PMAv suggests participation in both externally and internally specified verbal actions.

Link to article

Dr. Marc Pell
PELL, M. (Cheang, H.S. & Pell, M.D.) (2006). A study of humour and communicative intention following right hemisphere stroke. Clinical Linguistics & Phonetics, 20 (6), 447-462.

Abstract: This research provides further data regarding non-literal language comprehension following right hemisphere damage (RHD). To assess the impact of RHD on the processing of non-literal language, ten participants presenting with RHD and ten matched healthy control participants were administered tasks tapping humour appreciation and pragmatic interpretation of non-literal language. Although the RHD participants exhibited a relatively intact ability to interpret humour from jokes, their use of pragmatic knowledge about interpersonal relationships in discourse was significantly reduced, leading to abnormalities in their understanding of communicative intentions (CI). Results imply that explicitly detailing CI in discourse facilitates RHD participants' comprehension of non-literal language.

Link to article

-- (Pell, M.D.) (2006). Judging emotion and attitudes from prosody following brain damage. Progress in Brain Research, 156, 307-321.

Abstract: Research has long indicated a role for the right hemisphere in the decoding of basic emotions from speech prosody, although there are few data on how the right hemisphere is implicated in processes for understanding the emotive "attitudes" of a speaker from prosody. We describe recent clinical studies that compared how well listeners with and without focal right hemisphere damage (RHD) understand speaker attitudes such as "confidence" or "politeness," which are signaled in large part by prosodic features of an utterance. We found that RHD listeners as a group were abnormally sensitive to both the expressed confidence and expressed politeness of speakers, and that these difficulties often correlated with impairments for understanding basic emotions from prosody in many RHD individuals. Our data emphasize a central role for the right hemisphere in the ability to appreciate emotions and speaker attitudes from prosody, although the precise source of these social-pragmatic deficits may arise in different ways in the context of right hemisphere compromise.

Link to article

-- (Pell, M.D., Cheang, H.S., & Leonard, C.L.) (2006). The impact of Parkinson’s disease on vocal prosodic communication from the perspective of listeners. Brain and Language, 97 (2), 123-134.

Abstract: An expressive disturbance of speech prosody has long been associated with idiopathic Parkinson's disease (PD), but little is known about the impact of dysprosody on vocal-prosodic communication from the perspective of listeners. Recordings of healthy adults (n=12) and adults with mild to moderate PD (n=21) were elicited in four speech contexts in which prosody serves a primary function in linguistic or emotive communication (phonemic stress, contrastive stress, sentence mode, and emotional prosody). Twenty independent listeners naive to the disease status of individual speakers then judged the intended meanings conveyed by prosody for tokens recorded in each condition. Findings indicated that PD speakers were less successful at communicating stress distinctions, especially words produced with contrastive stress, which were identifiable to listeners. Listeners were also significantly less able to detect intended emotional qualities of Parkinsonian speech, especially for anger and disgust. Emotional expressions that were correctly recognized by listeners were consistently rated as less intense for the PD group. Utterances produced by PD speakers were frequently characterized as sounding sad or devoid of emotion entirely (neutral). Results argue that motor limitations on the vocal apparatus in PD produce serious and early negative repercussions on communication through prosody, which diminish the social-linguistic competence of Parkinsonian adults as judged by listeners.

Link to article

-- (Pell, M.D.) (2006). Implicit recognition of vocal emotions in native and non-native speech. In R. Hoffman and H. Mixdorff (Eds.), Speech Prosody 3rd International Conference Proceedings (pp. 62-64).

Abstract: There is evidence for both cultural-specificity and 'universality' in how listeners recognize vocal expressions of emotion from speech. This paper summarizes some of the early findings using the Facial Affect Decision Task which speak to the implicit processing of vocal emotions as inferred from "emotion priming" effects on a conjoined facial expression. We provide evidence that English listeners register the emotional meanings of prosody when processing sentences spoken by native (English) as well as non-native (Arabic) speakers who encoded vocal emotions in a culturally appropriate manner. As well, we discuss the time course for activating emotion-related knowledge in a native and non-native language which may differ due to cultural influences on vocal emotion expression.

Link to article

-- (Pell, M.D.) (2006). Cerebral mechanisms for understanding emotional prosody in speech. Brain and Language, 96 (2), 221-234.

Abstract: Hemispheric contributions to the processing of emotional speech prosody were investigated by comparing adults with a focal lesion involving the right (n = 9) or left (n = 11) hemisphere and adults without brain damage (n = 12). Participants listened to semantically anomalous utterances in three conditions (discrimination, identification, and rating) which assessed their recognition of five prosodic emotions under the influence of different task- and response-selection demands. Findings revealed that right- and left-hemispheric lesions were associated with impaired comprehension of prosody, although possibly for distinct reasons: right-hemisphere compromise produced a more pervasive insensitivity to emotive features of prosodic stimuli, whereas left-hemisphere damage yielded greater difficulties interpreting prosodic representations as a code embedded with language content.

Link to article

-- (Monetta, L. & Pell, M.D.) (2006). La maladie de Parkinson et les déficits pragmatiques et prosodiques du langage (Parkinson's disease and pragmatic and prosodic deficits of language). Fréquences: revue de l'ordre des orthophonistes et audiologistes du Québec, 18, 27-29.
 
Dr. Linda Polka
POLKA, L. (Rvachew, S., Mattock, K., Polka, L. & Menard, L.) (2006). Developmental and cross-linguistic variation in the infant vowel space: The case of Canadian English and Canadian French. Journal of the Acoustical Society of America, 120, 2250-2259.

Abstract: This article describes the results of two experiments. Experiment 1 was a cross-sectional study designed to explore developmental and cross-linguistic variation in the vowel space of 10- to 18-month-old infants, exposed to either Canadian English or Canadian French. Acoustic parameters of the infant vowel space were described (specifically the mean and standard deviation of the first and second formant frequencies) and then used to derive the grave, acute, compact, and diffuse features of the vowel space across age. A decline in mean F1 with age for French-learning infants and a decline in mean F2 with age for English-learning infants was observed. A developmental expansion of the vowel space into the high-front and high-back regions was also evident. In experiment 2, the Variable Linear Articulatory Model was used to model the infant vowel space taking into consideration vocal tract size and morphology. Two simulations were performed, one with full range of movement for all articulatory parameters, and the other for movement of jaw and lip parameters only. These simulated vowel spaces were used to aid in the interpretation of the developmental changes and cross-linguistic influences on vowel production in experiment 1.

Link to article

-- (Sundara, M., Polka, L., & Baum, S.) (2006). Production of coronal stops by simultaneous bilingual adults. Bilingualism: Language & Cognition, 9, 97-114.

Abstract: This study investigated acoustic-phonetics of coronal stop production by adult simultaneous bilingual and monolingual speakers of Canadian English (CE) and Canadian French (CF). Differences in the phonetics of CF and CE include voicing and place of articulation distinctions. CE has a two-way voicing distinction (in syllable initial position) contrasting short- and long-lag VOT; coronal stops in CE are described as alveolar. CF also has a two-way voicing distinction, but contrasting lead and short-lag VOT; coronal stops in CF are described as dental. Acoustic analyses of stop consonants for both VOT and dental/alveolar place of articulation are reported. Results indicate that simultaneous bilingual as well as monolingual adults produce language-specific differences, albeit not in the same way, across CF and CE for voicing and place. Similarities and differences between simultaneous bilingual and monolingual adults are discussed to address phonological organization in simultaneous bilingual adults.

Link to article

-- (Ilari, B., & Polka, L.) (2006). Music cognition in early infancy: Infants’ preferences and long-term memory for Ravel. International Journal of Music Education, 24, 7-20.

Abstract: Listening preferences for two pieces, Prelude and Forlane from Le tombeau de Couperin by Maurice Ravel (1875-1937), were assessed in two experiments conducted with 8-month-old infants, using the Headturn Preference Procedure (HPP). Experiment 1 showed that infants, who have never heard the pieces, could clearly make a distinction between the Prelude and Forlane when the latter are played in multiple (i.e. orchestral) but not single (i.e. piano) timbres. In Experiment 2 infants were exposed repeatedly to one of the two piano pieces over a 10-day period. Concurrent with previous studies, results suggested that babies can recognize a familiar piece after a 2-week delay. Implications for early childhood music education are outlined at the end of the article.

Link to article

-- (Sundara, M., Polka, L., & Genesee, F.) (2006). Language experience facilitates discrimination of /d – ð/ in monolingual and bilingual acquisition of English. Cognition, 100, 369-388.

Abstract: To trace how age and language experience shape the discrimination of native and non-native phonetic contrasts, we compared 4-year-olds learning either English or French or both and simultaneous bilingual adults on their ability to discriminate the English /d–ð/ contrast. Findings show that the ability to discriminate the native English contrast improved with age. However, in the absence of experience with this contrast, discrimination of French children and adults remained unchanged during development. Furthermore, although simultaneous bilingual and monolingual English adults were comparable, children exposed to both English and French were poorer at discriminating this contrast when compared to monolingual English-learning 4-year-olds. Thus, language experience facilitates perception of the English /d–ð/ contrast and this facilitation occurs later in development when English and French are acquired simultaneously. The difference between bilingual and monolingual acquisition has implications for language organization in children with simultaneous exposure.

Link to article

Dr. Susan Rvachew
RVACHEW, S. (Rvachew, S., Mattock, K., Polka, L., & Menard, L.) (2006). Developmental and cross-linguistic variation in the infant vowel space: The case of Canadian English and Canadian French. Journal of the Acoustical Society of America, 120 (4), 2250-2259.

Abstract: This article describes the results of two experiments. Experiment 1 was a cross-sectional study designed to explore developmental and cross-linguistic variation in the vowel space of 10- to 18-month-old infants, exposed to either Canadian English or Canadian French. Acous