
Colloquium Series

Upcoming Colloquia

Fall 2020

Laura Dilley (Michigan State University)

"Language and social brains: Toward understanding mechanisms and typologies of prosody and tone"

Friday, September 18 at 3:30 pm

Abstract: The past ~70 years of linguistic research have seen dramatic changes in the way researchers frame and conceptualize language as a human capacity and activity. In this talk I will present a synthesis of key insights from these past decades which leads to a view that language structure and meaning are grounded in social dynamics of perception, action, and cognition within ecological niches. Language perception does not entail, as some have argued, mere recovery of abstract linguistic units; rather, what those units are understood to be itself depends on social and ecological contexts. Framed in this way, innate brain mechanisms tuned to extraction of information over language-relevant timescales, together with the history of short- and long-term experiences over a lifetime, give rise to emergent understandings of meaning, as well as the apprehension of linguistic form and content. I will present the case of prosody, long held to be a mere overlay on the implicitly more foundational segmental underpinning, and challenge some long-held assumptions about the structure of prosody and how it contributes to meaning. With the benefit of insights of original thinkers who have come before, as well as the principle of Ockham’s Razor, I will argue that viewing human linguistic capacities as grounded in inherent temporal dynamics of social brains and bodies fosters novel connections among linguistic sub-disciplines and brings new questions into focus. Viewed through this lens, I assert that it is possible to make headway toward understanding some of the most challenging domains of linguistic inquiry, namely the typology, meaning, and structure of tone and prosody.


Peter Jenks (UC Berkeley)

"Are indices syntactically represented?"

Friday, October 16 at 3:30 pm

Abstract: The status of indices in syntactic representations is unclear. While indices are frequently used for expository purposes, they have no syntactic status in the copy theory of movement (Corver & Nunes 2007) or in Agree-based analyses of binding phenomena (Reuland 2011, Vanden Wyngaerd 2011). In this talk I argue that the presence versus absence of indices explains language-internal splits in definiteness and pronouns in different languages, while the ability of names to violate Condition C in Thai receives a natural explanation if we treat names in Thai, but not English, as contextually restricted indices. The resulting view is one where indices are a component of linguistic representations, but not all referential expressions contain them. This view is consistent with Tanya Reinhart’s approach to Conditions B and C (Grodzinsky & Reinhart 1993), and entails that indices should play a more important role in syntactic theory than they currently do.


Emily Elfner (Dept. of Languages, Literatures and Linguistics, York University)

"Evaluating evidence for recursive prosodic structure"

Friday, November 20 at 3:30 pm

Abstract: In much recent work on the syntax-prosody interface, the question of whether recursion is present in prosodic structure has played a key role (for example, Wagner 2005, 2010; Selkirk 2009, 2011, among others). In particular, in theories of the syntax-prosody interface such as Match Theory (Selkirk 2009, 2011), which derive prosodic constituents directly from syntactic structure, prosodic structure is predicted to show by default a degree of recursion that arguably is comparable with the depth of the nested hierarchical structure found in syntax.

One major question which has surfaced is the extent to which the level of recursive prosodic structure predicted by syntactic structure is universal. For example, some languages have been argued to show overt phonological and phonetic reflexes of recursion, thus providing apparent empirical support for the recursive structures predicted by syntactic structure in a number of languages, such as Irish (Elfner 2012, 2015), Basque (Elordieta 2015), and Swedish (Myrberg 2013). However, other languages may not show such overt evidence, as it has long been assumed that the ways that languages mark prosodic phrase edges and heads are language-specific; for example, some of the predicted prosodic phrases may be marked overtly only on one edge (left or right), or not at all. Conversely, we cannot always assume that overt evidence of a prosodic boundary indicates the presence of a syntactic boundary.

Therefore, the question remains: if there is no overt evidence of the edges of certain prosodic constituents in a particular language, to what extent can we posit their existence based on theoretical predictions relating to hierarchical structure and syntax-prosody mapping alone? In this talk, I will explore this question in relation to a case study on the prosodic structure of Irish, which presents an apparent conflict between prosodic cues which provide evidence for hierarchical syntactic structure and domain juncture (Elfner 2012, 2016).


Winter 2021

Viola Schmitt (Institute for German Studies, University of Vienna) - February 26, 2021

Yael Sharvit (Dept. of Linguistics, University of California, Los Angeles) - March 26, 2021

Lisa Matthewson (Dept. of Linguistics, University of British Columbia) - April 16, 2021

Duane Watson (Dept. of Psychology, Vanderbilt University) - April 23, 2021


Previous Colloquia

Winter 2020

Andrés Salanova: February 28, 2020, 3:30 to 5:00 pm

Location: Wilson Hall - Wendy Patrick Room (118)

Title: A semantics for frustratives

Laura Dilley: March 13, 2020
Yael Sharvit: April 3, 2020

Fall 2019

John Alderete: November 15, 2019, 3:30 to 5:00 pm

Location: BIRKS room 111

Title: Speech errors and phonological patterns: Integrating insights from psycholinguistic and linguistic theory

Abstract:   In large collections of speech errors, phonological patterns emerge. Speech errors are shaped by phonotactic constraints, cross-linguistic markedness, frequency, and phonological representations of prosodic and segmental structure. While insights from both linguistic theory and psycholinguistic models have been brought to bear on these patterns, research on phonological patterns in speech errors rarely attempts to compare and contrast analyses from these different perspectives, much less integrate them as a coherent whole. This talk investigates the phonological patterns in the SFU Speech Error Database (SFUSED) with the goal of combining both processing and linguistic assumptions in an integrated model of speech production. In particular, it examines the impact of language particular phonotactics on speech errors, competing explanations from markedness and frequency, and the role of linguistic representations for syllables and tone. The empirical findings support a model that includes both production processing impacted by frequency and explicit representations of tone and syllables from phonological theory.


Jason Brenier: December 6, 2019, 3:30 to 5:00 pm

Location: ARTS Bldg. W-20

Abstract:   From knowledge representation to speech processing and human-computer interaction, linguistic research has been critical to the development of information technology and its widespread adoption in the business world. Using examples from technology startups, enterprise businesses and the venture capital industry, this talk will review the many contributions that linguists have made to the rising AI economy and will explore their increasingly important role in the future.

Winter 2019

Speaker: Susi Wurmbrand (Universität Wien)
Date & Time: March 22nd at 3:30 pm
Place: Education Bldg. rm. 434
Title: Proper and Improper A-Dependencies

Abstract: This talk provides an overview of case and agreement dependencies that are established across clause boundaries, such as raising to subject or object and cross-clausal agreement. We will see that cross-clausal A-dependencies (CCADs) in several languages can apply not only across non-finite but also across finite clause boundaries. Furthermore, it will be shown that the DP entering a CCAD is situated in the specifier of the embedded CP. This poses a challenge for the traditional 'truncation' approach to CCADs, according to which CCADs are restricted to reduced (CP-less) complements. It also poses a challenge for the view that A-dependencies cannot follow A'-dependencies involving the same element. Lastly, we can observe that a clause across which a CCAD applies functions as a true, non-deficient A'-CP for other purposes. The direction proposed to bring the observed properties together is to maintain a universal improper A-after-A′ constraint, but allow certain positions in certain CPs to qualify as A-positions from which further A-dependencies can be established.


Speaker: Scott Anderbois (Brown University)
Date & Time: April 12th at 3:30 pm
Place: Education Bldg. rm. 434
Title: At-issueness in direct quotation: the case of Mayan quotatives

Abstract: In addition to verba dicendi, languages have a range of other grammatical devices for encoding reported speech. While not common in Indo-European languages, two of the most common such elements cross-linguistically are reportative evidentials and quotatives. Quotatives have been much less discussed than either verba dicendi or reportatives, both in the descriptive/typological literature and especially in formal semantic work. While quotatives haven't been formally analyzed in detail previously to my knowledge, several recent works on reported speech constructions in general have suggested in passing that they pattern either with verba dicendi or with reportatives. Drawing on data from Yucatec Maya, I argue that they differ from both since they present direct quotation (like verba dicendi) but make a conventional at-issueness distinction (like reportatives). To account for these facts, I develop an account of quotatives by combining an extended Farkas & Bruce 2010-style discourse scoreboard with bicontextualism (building on Eckardt's 2014 work on Free Indirect Discourse).

Fall 2018

Speaker: Jane Stuart-Smith (University of Glasgow)
Date & Time: October 12th at 3:30pm
Place: Education Bldg. rm. 211
Title: Sound perspectives? Speech and speaker dynamics over a century of Scottish English

Abstract: As in many disciplines, in linguistics too, perspective matters. Structured variability in language occurs at all linguistic levels and is governed by a large range of diverse factors. Viewed through a synchronic lens, such variation informs our understanding of linguistic and social-cognitive constraints on language at particular points in time; a diachronic lens expands the focus across time. And, as Weinreich et al. (1968) pointed out, structured variability is integral to linguistic description and explanation as a whole, being at once the stuff of the present, the reflexes of the past, and the potential for changes in the future. There is a further dimension which is often not explicit: the role of analytical perspective on linguistic phenomena.

This paper considers a particular kind of structured variability, phonetic and phonological variation, within the sociolinguistic context of the recorded history of Glaswegian vernacular across the 20th century. Two aspects of perspective frame my key research questions:

1. What are the ‘things’ which we observe? How do different analytical perspectives on phonetic variation affect how we interpret that variation? Specifically, how do different kinds of observation — within segment/across a phonological contrast/even beyond segments — auditory/acoustic/articulatory phonetic — shape our interpretations?

2. How are these ‘things’ embedded in time and social space? Specifically, how is this variation linked to contextual perspective, shifts in social events and spaces over the history of the city of Glasgow? How do we know whether, or when, these ‘things’ might be sound changes (following Milroy 2003)?

I consider these questions by reviewing a series of studies (including some ongoing and still unpublished) on two segments in Glaswegian English, the first thought to be stable and not undergoing sound change (/s/), the second thought to be changing (postvocalic /r/).


Speaker: Nico Baier (McGill University)
Date & Time: November 2nd at 3:30 pm
Place: Education Bldg. rm. 211
Title: Unifying Anti-Agreement and wh-Agreement

Abstract: In this talk, I investigate the sensitivity of φ-agreement to features typically associated with Ā-extraction, including those related to wh-questioning, relativization, focus and topicalization. This phenomenon has been referred to as anti-agreement (Ouhalla 1993) or wh-agreement (Chung and Georgopoulos 1988; Georgopoulos 1991; Chung 1994) in the literature. While anti-agreement is commonly held to result from constraints on the Ā-movement of agreeing DPs, I argue that it reduces to an instance of wh-agreement, or the appearance of particular morphological forms in the presence of Ā-features. I develop a unified account of these Ā-sensitive φ-agreement effects in which they arise from the ability of φ-probes to copy both φ-features and Ā-features in the syntax. In the morphological component, partial or total impoverishment may apply to feature bundles containing both φ- and Ā-features, deleting some or all of the φ-features. Impoverishment blocks insertion of an otherwise appropriate, more highly specified agreement exponent. I present case studies of the effect of Ā-features on φ-agreement in three languages: the West Caucasian language Abaza (O’Herin 2002); the Berber language Tarifit (Ouhalla 1993; El Hankari 2010); and the Northern Italian dialect Fiorentino (Brandi and Cordin 1989; Suñer 1992). I show that in all three languages, the agreement exponents that appear in the context of Ā-features are systematically underspecified.

Winter 2018

Speaker: Sharon Goldwater (University of Edinburgh)
Date & Time: January 12th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Bootstrapping language acquisition

Abstract: The semantic bootstrapping hypothesis proposes that children break into the syntactic system of their native language by inferring from the situational context a structured semantic representation for (some) words or utterances. Assuming a correspondence between semantic structure and syntactic structure allows the child to begin to acquire native language syntax. In this talk I will describe a Bayesian probabilistic model of semantically bootstrapped child language acquisition. The model learns from pairs of sentences and their (noisy) meaning representations, extracted from a real child-directed corpus. It *jointly* models both (a) word learning: the mapping between components of the given sentential meaning and wordforms, and (b) syntax learning: word order and the mapping between wordforms and their syntactic categories. I will show how this joint model accounts for several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena.


Speaker: Karen Jesney (University of Southern California)
Date & Time: April 27th at 3:30 pm
Place: Leacock 210
Title: Constraint Scaling Factors and Patterns of Variation in Phonology

Abstract: Language systems characterized by high levels of variability offer unique possibilities for probing the structure of the phonological grammar. This talk examines data from developing L1 phonologies and loanword adaptation patterns, and argues that scaling of constraint values within a system of weighted constraints offers the most direct means of encoding the attested effects. Two case studies are presented. The first case study looks at words that contain multiple sources of syllable-structure markedness, focusing on data from the twelve Dutch-acquiring children in the CLPF corpus (Fikkert 1994, Levelt 1994). The overall finding is that accurate realization of marked coda structures increases the probability that marked onset structures will be accurately realized by the child. These effects cannot be reduced to either age or the frequency with which the marked structures are attempted. The second case study examines the realization of marginal segments in a corpus of Québec French borrowings from English (Roy 1992), and finds evidence for similar interactions at the level of segmental realization. Given that one marked structure is realized accurately, the probability increases that other marked structures will also be realized accurately. Other loanword data show related implicational patterns. I argue these interactions are best modeled through scaling of constraint values within a probabilistic weighted constraint grammar – either Noisy Harmonic Grammar (Boersma & Pater 2008) or Maximum Entropy OT (Goldwater & Johnson 2003). Constraint scaling factors co-exist with basic constraint weights, and can be keyed both to grammatical factors like prosodic position, and to non-phonological factors like word frequency and attention. The result is a model that captures the attested interactions between marked structures within words while avoiding the pitfalls of previous accounts that are too restrictive to accurately model the full range of variation.


Speaker: Susana Béjar (University of Toronto)
Date & Time: February 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Person, Agree, and Derived Predicates

Abstract: Person features have played a prominent role in models of argument licensing, case and agreement over the past two decades. Within the theory of Agree, person features have been manipulated to account for a range of intricate patterns including non-canonical locality effects (e.g. hierarchy effects), ineffabilities (e.g. PCC effects) and differential argument marking (e.g. DOM). Overwhelmingly, work in this area has been based on structures with verbal predicates. In this talk I put the spotlight on verbless structures — specifically, copular clauses with nominal complements — and challenges that they present to person-driven approaches, in particular unexpected locality patterns and ineffabilities. I argue that both challenges benefit from viewing nominal complements of copular clauses as derived predicates in a sense similar to Landau (2011), that is to say I take them to involve (reduced) clausal complements. The distribution of φ-features in clausal complements, and the operations these are subject to, can explain the unusual locality patterns and ineffabilities alluded to above.


Speaker: Elizabeth Coppock (Boston University)
Date & Time: March 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Speedbumps on the compositional route to proportional MOST

Abstract: Recent work has suggested that the proportional interpretation of English "most" is not lexically arbitrary but rather compositionally derived as the superlative of "many". Based on broad cross-linguistic evidence, we caution that the compositional route there is fraught. Investigation of a geographically, genetically, and typologically diverse set of languages shows that proportional readings of quantity superlatives are highly typologically marked, and relative readings are universal. We argue that proportional interpretations are marked because they depend on violations of certain default semantic principles: (i) quantity words denote gradable predicates of degrees, rather than individuals, and (ii) comparison among any set of entities involves comparison among a set of individuals. We also find that proportional readings arise with a quite limited range of morphosyntactic strategies for forming superlatives, suggesting that analogical pressure from other quantifiers in the lexicon may help in overcoming these hindrances.


Speaker: Daniel Pape (McMaster University)
Date & Time: April 13th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Linking speech production to speech perception: A cross-linguistic comparison of the phonological voicing contrast and its phonetic realization

Abstract: How do we form phonemic categories? How is speech perception linked to the articulatory and acoustic production of speech? These are classic phonetic questions but are still controversially debated today. In my talk I will present a number of phonetic experiments to approach these questions. I will discuss (1) how the cognitive system is intricately linked to the speech production system for the phonological voicing contrast; (2) how cross-linguistic differences in Romance languages surface in perception compared to production; and (3) how several acoustic cues of the speech signal are used with varying weights to form a robust phoneme identification. I conclude my talk with an excursion into audio-visual speech perception by presenting a phonetic experiment examining the effect of facial hair on speech intelligibility.

Fall 2017

Speaker: Jie Li (Shantou University)
Date & Time: September 15th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Grammatical Metaphor Theory in Pursuit of Metaphorical Competence

Abstract: Grammatical Metaphor is one of the important concepts in Systemic-Functional Grammar. Halliday (1994) took grammatical metaphor as a linguistic strategy for “variation in the expression of a meaning”. The language system provides language users with a system of meaning potential, from which they make a series of choices to realize a certain semantic function. The relation between the chosen linguistic structure and the meaning expressed can be either congruent or incongruent/metaphorical. Children gradually learn to speak metaphorically, and the emergence of more metaphorical expressions is an important feature of adult language. Danesi (1993) claimed that speaking metaphorically is a basic characteristic of a native speaker’s linguistic competence. In other words, the ability to understand and use metaphors can be taken as an important marker of good mastery of a language. Therefore, it is both necessary and important to value metaphorical competence in language education. Guided by grammatical metaphor theory, this talk analyzes the nature, complexity, and functions of metaphorical forms so as to help language learners with their knowledge and mastery of metaphorical phenomena in their target language, and ultimately to improve their linguistic competence by enhancing their ability to understand and use metaphors.


Speaker: Aron Hirsch (McGill University)
Date & Time: October 6th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Only and pseudo-clefts

Abstract: This talk motivates a revision to the semantics of only. Only composes with a proposition (its prejacent, p) and a set of alternative propositions. Only presupposes that p is true, and asserts that alternatives are false. In deciding which alternatives to negate, only is selective: to avoid creating a logical contradiction, only negates an alternative q only if ¬q is logically consistent with p. I will argue that only is even more selective than previously thought: in addition to avoiding contradictions, only avoids creating certain meanings which are intuitively paradoxical, though logically contingent. The argument comes from novel data studying the interaction of only with epistemic modals and conditionals. In the second part of the talk, I show how the new, more selective only can shed light on a wider range of data: in particular, I argue that the source of exhaustivity in pseudo-clefts is a covert only, crucially with the revised semantics.


Speaker: Christian DiCanio (University at Buffalo)
Date & Time: November 10th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Phonetic variation and the construction of a Mixtec spoken language corpus

Abstract: The documentation of endangered languages frequently involves the collection and analysis of a corpus of speech data. To ensure continued access to the corpus, researchers must construct additional layers of annotation. This process is often constrained by patterns of phonetic variation, but such patterns also open up new areas of research in both speech production and phonology. In this talk, I discuss the interplay between the construction of a spoken language corpus of Yoloxóchitl Mixtec (Otomanguean: Mexico) from a language documentation project and the patterns of phonetic variation which have been investigated along the way. I address three main issues of relevance to linguistic theory and phonetics: (1) How does speech style influence speech production and how might this affect the creation of a spoken language corpus? (2) How do variable morphophonological rules impact corpus segmentation? and (3) What principles account for surface phonetic variation? Can such variation be predicted and automatically annotated? Together, these topics address issues of increasing importance in the fields of corpus phonetics, speech processing, and language documentation.


Speaker: Lucie Ménard (Université du Québec à Montréal)
Date & Time: December 1st at 3:30 pm
Place: Arts Bldg. W-20
Title: Reaching goals with limited means: Production-perception relationships in blind children and adults

Abstract: In face-to-face conversation, speech is produced and perceived through various modalities. Movements of the lips, jaw, and tongue, for instance, modulate air pressure to produce a complex waveform perceived by the listener’s ears. Visually salient articulatory movements (of the lips and jaw) also contribute to speech perception. Although many studies have been conducted on the role of visual components in speech perception, much less is known about their role in speech production. In this presentation, we discuss the emergence and refinement of production-perception relationships through a series of studies conducted with typically developing and blind individuals (children and adults). Acoustic, kinematic, and perceptual data collected in contexts representing various degrees of saliency requirements will be presented. We will show how sensory templates built from impoverished input influence production strategies.

Winter 2017

Speaker: Dan Lassiter (Stanford University)
Date & Time: January 27th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Epistemic language in indicative and counterfactual conditionals

Abstract: In this talk I'll report on a series of experiments which examine judgments about epistemic modals, both in unembedded contexts and in indicative and counterfactual conditionals. Building on these results and recent probabilistic theories of epistemic language, I propose a probabilistic version of Kratzer's restrictor theory of conditionals that identifies the indicative/counterfactual distinction with Pearl's distinction between conditioning and intervening in probabilistic graphical models. Combining this theory with recent accounts of must, we can also derive a theory of bare conditionals; I describe the predictions and consider their plausibility in light of the experimental data.


Speaker: Jeremy Hartman (UMass Amherst)
Date & Time: February 3rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Negation and factivity in acquisition and beyond

Abstract: In this talk, I present joint work with Magda Oiry on the interaction between negation and two types of factive predicates in acquisition. Following work by Léger (2008), we examine children's understanding of sentences with the factive predicates know and be happy, in combination with negation--in the matrix clause, as well as in the embedded clause. In addition to an asymmetry in the understanding of know vs. be happy, we find a new and revealing pattern of errors across different sentence-types with know. We also show that a similar error pattern is found even with adult subjects. I discuss how these findings relate to recent work on the processing of negation.


Speaker: Boris Harizanov (Stanford University)
Date & Time: February 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: On the nature of syntactic head movement

Abstract: In Harizanov and Gribanova 2017, we argue that head movement phenomena having to do with word formation (affixation, compounding, etc.) must be empirically distinguished from head movement phenomena having to do purely with the displacement of heads or fully formed words (verb initiality, verb-second, etc.). We suggest that the former, word-formation type should be implemented as post-syntactic amalgamation, while the latter, displacement-type should be implemented as regular syntactic movement.

In this talk, I take this result as a starting point for an investigation of the latter, syntactic type of head movement. I show in some detail that such movement has the properties of (Internal) Merge and that it always targets the root. In addition, I suggest that, once a head is merged with the root, there are two available options (traditionally assumed to be incompatible with one another or with other grammatical principles): either (i) the target of movement projects or (ii) the moved head projects. The former scenario yields head movement to a specifier position, while the latter yields head reprojection. I offer participle fronting in Bulgarian as a case study of head movement to a specifier position and show how this analysis explains the apparently dual X- and XP-movement properties of participle fronting in Bulgarian, without stipulating a structure-preservation constraint on movement. As a case study of head reprojection, I discuss free relativization in Bulgarian. A treatment of this phenomenon in terms of reprojection allows for an understanding of why an element that has the distribution of a relative complementizer C in Bulgarian free relatives looks like a determiner D morphologically.

This work brings together and reconciles two strands of research, usually viewed, at least to some degree, as incompatible: head movement to specifier position and head movement as reprojection. Such synthesis is afforded, in large part, by the exclusion of the word-formation type of head movement phenomena from the purview of syntactic head movement, as in Harizanov and Gribanova 2017.


Speaker: Stephanie Shih (University of California Merced)
Date & Time: March 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: A multilevel approach to lexically-conditioned phonology

Abstract: Lexical classes often exhibit different phonological behaviours, in alternations or phonotactics. This talk takes up two interrelated issues for lexically-conditioned phonological patterns: (1) how the grammar captures the range of phonological variation that stems from lexical conditioning, and (2) whether the relevant lexical classes needed by the grammar can be learned from surface patterns. Previous approaches to lexically-sensitive phonology have focused largely on constraining it; however, only a limited understanding currently exists of the quantitative space of variation possible (i.e., entropy) within a coherent grammar.

In this talk, I present an approach that models lexically-conditioned phonological patterns as a multilevel grammar: each lexical class is a cophonology subgrammar of indexed constraint weight adjustments (i.e., varying slopes) in a multilevel Maximum Entropy Harmonic Grammar. This approach leverages the structure of multilevel statistical models to quantify the space of lexically-conditioned variation in natural language data. Moreover, the approach allows for the deployment of information-theoretic model comparison to assess competing hypotheses of what the phonologically-relevant lexical classes are. I’ll show that under this approach, the relevant lexical classes need not be a priori assumed but can instead be induced from noisy surface input via feature discovery.

Two case studies are examined: part of speech-conditioned tone patterns in Mende and content versus function word prosodification in English. Both case studies bring to bear new quantitative evidence on classic category-sensitive phenomena. The results illustrate how the multilevel approach proposed here can capture the probabilistic heterogeneity and learnability of lexical conditioning in a phonological system, with potential ramifications for understanding the structure of the developing lexicon in grammar acquisition.
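The "varying slopes" idea can be illustrated with a minimal sketch, assuming toy constraints, weights, and lexical classes invented purely for illustration (this is not Shih's actual grammar or data): each lexical class contributes an adjustment to shared base constraint weights before standard MaxEnt probabilities are computed.

```python
import math

def maxent_probs(candidates, violations, weights):
    """Standard MaxEnt HG: P(cand) is proportional to exp(-harmony),
    where harmony = sum of weight * violation-count."""
    harmony = {c: sum(w * v for w, v in zip(weights, violations[c]))
               for c in candidates}
    z = sum(math.exp(-h) for h in harmony.values())
    return {c: math.exp(-harmony[c]) / z for c in candidates}

# Population-level weights for two toy constraints: [*COMPLEX, MAX]
BASE = [2.0, 1.0]

# Per-class weight adjustments ("varying slopes"): each lexical class
# shifts the shared weights rather than owning an unrelated grammar.
ADJUST = {"noun": [0.0, 0.0], "verb": [-1.5, 1.5]}

def class_probs(lex_class, candidates, violations):
    """Cophonology subgrammar for one class: base weights + adjustments."""
    weights = [b + a for b, a in zip(BASE, ADJUST[lex_class])]
    return maxent_probs(candidates, violations, weights)

cands = ["faithful", "simplified"]
viols = {"faithful": [1, 0],    # keeps the cluster: violates *COMPLEX
         "simplified": [0, 1]}  # deletes a segment: violates MAX
```

With these invented numbers the noun subgrammar favours simplification while the verb subgrammar favours faithfulness, even though both classes share the same base weights — the class difference lives entirely in the adjustment vector.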

Fall 2016

Speaker: Michael McAuliffe (McGill University)
Date & Time: September 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Dual nature of perceptual learning: Robustness and specificity

Abstract: In perceiving speech and language, listeners need to both perceive specific, highly variable utterances, and generalize to larger linguistic categories. One large source of the variability is in how individual speakers produce sounds, but another source of variation is the way in which speech and language are used in a particular task to accomplish a goal. Perceptual learning is a phenomenon in which listeners update their perceptual sound categories when exposed to a novel speaker. Perceptual learning is robust in the sense that most listeners show perceptual learning effects, most sound categories can be easily updated, and most tasks involving speech facilitate perceptual learning. In this talk, I focus more on the ways that perceptual learning can be task-specific. I present a series of perceptual learning experiments exposing listeners to a novel talker through single words or longer sentences, varying the task and the linguistic context. The instructions and goals of the task exert a sizeable influence over the amount of perceptual learning that listeners exhibit. In general, listeners adapt less in the course of an experiment if they do not have to rely on the acoustic signal as much. For instance, if listeners are presented the orthography of the word along with the audio, they will not learn as much as if they had heard the audio alone. In sentence tasks, listeners matching pictures to a word at the end of a predictable sentence (e.g., A deep moat protected the old castle) will not learn as much from the final word as from an unpredictable sentence (e.g., He dreaded the long walk to the castle). However, the inverse is true for sentence transcription tasks, with larger perceptual learning effects from predictable sentences than from unpredictable ones. Perceptual learning effects can generally be seen for all listeners and all tasks, but the size of the effects depends on the exposure task and how the linguistic system is engaged.


Speaker: Yvan Rose (Memorial University)
Date & Time: October 28th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Perceptual-Articulatory Relationships in Phonological Development: Implications for Feature Theory

Abstract: In this presentation, I discuss a series of asymmetries in phonological development, the nature of which is difficult to address from a strictly phonological perspective. In particular, I focus on transitional periods between developmental stages. I show that these transitions are best interpreted in terms of phonological categories at both prosodic and segmental levels of representation, including segmental features. Using computer-assisted methods of data classification, I describe the detail of these transitions, highlighting both perceptual and articulatory pressures on the child's developing system of phonological representation. I discuss implications of these findings for Phonological Theory, in particular for traditional models of segmental representation relying on phonological features. While the data support the need for sub-segmental units of phonological representation, these units do not appear to match fully the set of features typically used in the analysis of adult phonological systems.


Speaker: Judith Degen (Stanford University)
Date & Time: November 4th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Beyond "overinformativeness": rationally redundant referring expressions

Abstract: What guides the choice of a referring expression like "the box", "the big box", or "the big red box"? Speakers have a well-documented tendency to add redundant modifiers in referring expressions (e.g., "the big red box" when "the big box" would suffice for uniquely picking out the intended object). This "overinformativeness" poses a challenge for theories of language production, especially those positing rational language use (e.g., in the Gricean tradition). We present a novel production model of referring expressions in the Rational Speech Act framework. Speakers are modeled as rationally trading off the cost of additional modifiers with the amount of information added about the intended referent. The innovation is assuming that truth functions are probabilistic rather than deterministic.

This model captures a number of production phenomena in the realm of overinformativeness, including the color-size asymmetry in probability of overmodification (speakers overmodify more with color than size adjectives); visual scene variation effects on probability of overmodification (increased visual scene variation increases the probability of overmodifying with color); and color typicality effects on probability of overmodification (speakers overmodify less with more typical colors). In addition to demonstrating how the model accounts for these qualitative effects, we present fine-grained quantitative predictions that are beautifully borne out in data from interactive free production reference game experiments.

We conclude that the systematicity with which speakers redundantly use modifiers implicates a system geared towards communicative efficiency rather than towards wasteful overinformativeness.
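The core mechanism can be sketched in a few lines of Rational Speech Act code. The objects, utterances, noise values, and parameters below are invented for illustration and are not the authors' actual model; the one idea carried over is that word meanings are probabilistic truth values, with size adjectives assumed noisier than colour adjectives.

```python
import math

objects = ["big_red", "small_red", "small_blue"]
utterances = ["big", "red", "big red"]

# Probabilistic truth values: how reliably each word applies to each
# object. "big" is noisy (0.8) while "red" is near-deterministic (0.99).
sem = {
    ("big", "big_red"): 0.8,  ("big", "small_red"): 0.01,  ("big", "small_blue"): 0.01,
    ("red", "big_red"): 0.99, ("red", "small_red"): 0.99,  ("red", "small_blue"): 0.01,
}
sem.update({("big red", o): sem[("big", o)] * sem[("red", o)] for o in objects})

cost = {"big": 1, "red": 1, "big red": 2}  # one cost unit per modifier

def literal_listener(u):
    """L0: interpret u against a uniform prior over objects."""
    scores = {o: sem[(u, o)] for o in objects}
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

def speaker(o, alpha=5.0, c=0.1):
    """S1: softmax trade-off of informativeness against utterance cost."""
    util = {u: alpha * (math.log(literal_listener(u)[o]) - c * cost[u])
            for u in utterances}
    z = sum(math.exp(v) for v in util.values())
    return {u: math.exp(util[u]) / z for u in util}
```

Referring to the big red box, "big" alone would suffice under deterministic semantics (only one object is big), yet because "big" is noisy, the redundant "big red" receives substantial probability — redundancy emerges from rational trade-offs rather than wastefulness.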


Speaker: Jackie Cheung (McGill University)
Date & Time: December 2nd at 3:30 pm
Place: Education Bldg. rm. 624
Title: Generalized Natural Language Generation

Abstract: In popular language generation tasks such as machine translation, automatic systems are typically given pairs of expected input and output (e.g., a sentence in some source language and its translation in the target language). A single task-specific model is then learned from these samples using statistical techniques. However, such training data exists in sufficient quantity and quality for only a small number of high-profile, standardized generation tasks. In this talk, I argue for the need for generic tools in natural language generation, and discuss my lab's work on developing generic generation tasks and methods to solve them. First, I discuss progress on defining a task in sentence aggregation, which involves predicting whether units of semantic content can be meaningfully expressed in the same sentence. Then, I present a system for predicting noun phrase definiteness, and show that an artificial neural network model achieves state-of-the-art performance on this task, learning relevant syntactic and semantic constraints. 

Winter 2016

Speaker: Lisa Pearl (UC Irvine)
Date & Time: Friday, March 18th at 3:30 pm
Place: ARTS Bldg. room 260
Title: How to know what’s necessary: Using computational modeling to specify Universal Grammar

Abstract: One explicit motivation for Universal Grammar (UG) is that it’s what allows children to acquire language as effectively and as rapidly as they do. Proposals for the contents of UG typically come from characterizing a learning problem precisely and identifying a potential solution to that problem. One benefit of computational modeling is to see if that solution works when it’s embedded in a learning strategy used during the acquisition process. This includes specifying (i) what the child knows already, (ii) what data the child is learning from, (iii) how long the child has to learn, and (iv) what the child needs to learn along the way.

When we identify successful learning strategies this way, we can then examine their components to see if any are necessarily both innate and domain-specific (and so part of UG). I have previously used this approach to propose new UG components (and remove the necessity of others) for learning both syntactic islands and English anaphoric one. In this talk, I investigate what’s been called the Linking Problem, which concerns where event participants appear syntactically. I’ll discuss some initial findings about when prior (and likely UG) knowledge, such as the Uniformity of Theta Assignment Hypothesis (UTAH), is helpful for learning useful information about the Linking Problem.


Speaker: Pat Keating (UCLA)
Date & Time: Friday, April 8th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Linguistic Voice Quality

Abstract: In this talk I will present several results concerning the production and perception of voice quality (phonation type), from a larger interdisciplinary project at UCLA. First, I compare the acoustic properties of phonation type distinctions in several languages, deriving a simple (low-dimensional) phonetic space for voice quality in which phonation types cluster across languages. Second, I discuss the relation between phonation and lexical tone. In some languages, phonation type is phonemic, and independent of tone, either because the languages are non-tonal (e.g. Gujarati), or because tones and phonation cross-classify (e.g. Mazatec, Yi languages). In other languages, phonation is non-phonemic, instead conditioned by voice pitch and segmental/prosodic contexts (e.g. English). In some such languages (e.g. Mandarin), this relation between voice pitch and voice quality gives voice quality a secondary role in tonal contrasts, increasing the effective size of the tone space. Still other tone languages have both independent phonation and pitch-related phonation (e.g. Hmongic languages); we show that in one such language, White Hmong, the perceptual role of phonation is different for different tones. These cases will be illustrated with acoustic and physiological measures of voice production, obtained with our freely-available tools for voice analysis.

Fall 2015

Speaker: Kie Zuraw (UCLA)
Date & Time: Friday, September 11th at 3:30 pm
Place: Education Building, room 338
Title: Polarized variation

Abstract: The normal distribution--the bell curve--is common in all kinds of data, and is often expected when the quantity being measured results from multiple independent factors. The distribution of phonologically varying words, however, is sharply non-normal in the cases examined in this talk (from English, French, Hungarian, Tagalog, and Samoan). Instead of most words' showing some medial rate of variation (say, 50% of a word's tokens are regular and 50% irregular), with smaller numbers of words having extreme behavior, words cluster at the extremes of behavior. That is, a histogram of variant rates is shaped like a U (or sometimes J) rather than a bell. The U shape cannot be accounted for by positing a binary distinction with some amount of noise over tokens, because some items (though the minority) clearly are variable, even speaker-internally. In some cases (e.g., French "aspirated" words) there is a diachronic explanation: sound change caused some words to become exceptional, so that the starting point for today's situation was already U-shaped. But in other cases, such an explanation is not available, and items seem to be attracted towards extreme behavior.

Two mechanisms for deriving U-shaped distributions will be discussed, with speculation as to why some distributions of variation are U-shaped and others bell-shaped.
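The U-versus-bell contrast is easy to visualize by sampling per-word variant rates from a Beta distribution. This is a generic statistical illustration, not Zuraw's proposed mechanism: shape parameters below 1 pile probability mass at the extremes, parameters above 1 concentrate it in the middle.

```python
import random

random.seed(0)  # reproducible sampling

def rate_histogram(alpha, beta, n_words=5000, bins=10):
    """Sample each word's rate of the regular variant from
    Beta(alpha, beta) and count words per 10%-wide bin."""
    counts = [0] * bins
    for _ in range(n_words):
        r = random.betavariate(alpha, beta)
        counts[min(int(r * bins), bins - 1)] += 1
    return counts

u_shaped = rate_histogram(0.3, 0.3)  # most words near 0% or 100%
bell     = rate_histogram(5.0, 5.0)  # most words near 50%
```

The first histogram has its mass in the outermost bins (the U/J shape seen in the lexical data), while the second peaks in the central bins (the bell shape expected from many independent factors).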


Speaker: Matt Goldrick (Northwestern)
Date & Time: Friday, October 2nd at 1:30 pm
Place: Goodman Cancer Auditorium
Title: Phonetic echoes of cognitive processing

Abstract: For many years, theories of language production assumed a strict functional separation between peripheral phonetic encoding processes and more central cognitive processes. The output of lexical access—the processes mapping intended messages to utterance plans—was assumed to yield a plan that was simply executed by more peripheral processes. Recent work has challenged such proposals, showing that on-line disruptions to lexical access can affect gradient phonetic properties (e.g., phonological speech errors influence the phonetic properties of speech sounds; Goldrick & Blumstein, 2006). I'll discuss two sets of projects from my lab that extend this work. Large data sets, enabled by machine-learning based techniques for automated phonetic analysis, provide new insights into the consequences of cognitive disruptions for monolingual speech. I'll then discuss how cognitive disruptions modulate cross-language interactions in multilingual speakers.


Speaker: Danny Fox (MIT)
Date & Time: Friday, October 23rd at 3:30 pm
Place: ARTS Bldg. room 260
Title: Quantifier Raising as Restrictor Sharing – Evidence from Hydra and Extraposition with Split Antecedents

Abstract: To provide an account of Hydra (Every boy and (every) girl who like each other should have a play date) and Extraposition with Split Antecedents (ESA, A boy came in and a girl left who like each other), along the lines of Zhang 2007.

To explain how the account argues for the following conclusions (Johnson 2011):

a. Quantifier Raising involves movement not of a QP but of the quantifier's restrictor. More specifically:

1. Quantifier words are covert and “late merged” in the QP's scope position.
2. Quantifier words are morphologically realized on lower heads in the QP.

b. This should be embedded in a theory in which a moved constituent has more than one mother (multi-dominance).

To provide a semantics for the lower hosting head (inspired by Champollion 2015)


Speaker: Mark Baker (Rutgers)
Date & Time: Friday, November 6th at 3:30 pm
Place: ARTS Bldg. room 260
Title: TBA

Abstract: TBA


Speaker: Meaghan Fowlie (McGill)
Date & Time: Friday, November 20th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Modelling and Learning Adjuncts

Abstract: Adjuncts have among their properties optionality and iterability, which are usually accounted for with a grammar in which the presence or absence of an adjunct does not affect the state of the derivation. For example, in a phrase structure grammar with rules like NP -> AP NP, we have an NP whether or not we have an adjective. However, certain adjuncts like adverbs and adjectives are often quite strictly ordered, which cannot be accounted for with a model that treats a phrase the same regardless of the presence of another adjunct: whether or not a particular adjunct has adjoined affects whether or not another adjunct may adjoin. I present a minimalist model that can handle all of these properties.

In terms of learning, I cover three topics: language learning algorithms and how they handle optionality and repetition; an artificial language learning experiment about repetition; and, just for fun, the use of machine learning to analyse the song of the California Thrasher, showing that their unbounded repetition lends itself much better to a human-language-like grammar than simple transitional probabilities.


Speaker: Elizabeth Smith (UQAM)
Date & Time: Friday, December 4th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Just say 'no': Cross-linguistic differences in the felicity of disagreements over issues of taste and possibility

Abstract: Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called "predicates of personal taste"), as in (1), and their theoretical implications, especially the mechanism behind the difference between (1) and (2).

(1) A: This soup is tasty. B: No it isn't.
(2) A: This soup is tasty, in my opinion. B: # No it isn't

In this talk, I will present experimental data (in the form of offline felicity judgments) collected from English, Catalan, French, and Spanish two-turn oral dialogues showing that there are differences with respect to (1) v. (2) and other similar judgments cross-linguistically that create a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007, von Fintel & Gillies 2007, Bouchard 2012, Umbach 2012 and others. I will further discuss the interplay of various factors in these data, including comparison with another dialect of Spanish with known differences in cultural norms as compared to Iberian Spanish. Finally, I will propose an analysis in which different types of content affect the number and type of propositions attributed to a speaker's discourse commitment set v. those being proposed for admission to the conversational common ground.

Winter 2015

Speaker: James Kirby (University of Edinburgh)
Date & Time: Monday, January 19 at 3:30 pm
Place: Education Building, room 627
Title: Dialect variation and phonetic change: Incipient tonogenesis in Khmer

Abstract: Unlike many languages of Southeast Asia, Khmer (Cambodian) is not a tone language, but an incipient tone contrast has been noted in several Khmer dialects for at least 50 years. While the process of tonogenesis is reasonably well-understood, the manner by which it seems to be taking place in Khmer - conditioned by loss of onset /r/ - has not been reported for any other language. In this talk, I will compare new acoustic and perceptual data on the emergence of tone in two varieties of Khmer: the colloquial speech of the capital Phnom Penh, and the dialect spoken in Kiên Giang province, Vietnam. I will show how this sound change may have been set in motion by devoicing of /r/, and sketch a statistical learning account of how differences in the perception of devoicing might help explain the observed differences between dialects. Finally, I will briefly discuss the implications of these findings for our understanding of tonogenesis and phonetic change more generally.


Speaker: Chris Carignan (North Carolina State University)
Date & Time: Friday, January 23 at 3:30 pm
Place: Education Building, room 433
Title: An oral articulatory approach to vowel nasalization: Searching for the "oral" in "nasal"

Abstract: Vowel nasalization, by definition, is characterized by some degree of coupling of the nasal cavity to the oral cavity via an opening of the velo-pharyngeal (VP) port, otherwise referred to as VP coupling, a lowering of the velum or, more generally, “nasalization”. In acoustic studies of vowel nasalization, it is sometimes assumed that the primary articulatory difference between an oral vowel and a nasal(ized) vowel is VP coupling and, thus, observed acoustic changes are customarily attributed to the effect of nasalization itself on the acoustic signal. The work presented in this talk starts from the assumption that the production of vowel nasalization may also involve changes to the shape of the oral tract. Inferring these oral articulatory changes from the acoustic signal may be an intractable problem due to the conflation of the respective acoustic transfer functions associated with the nasal and oral tracts. Because of this issue, I explore the oral articulation of vowel nasalization by studying the shape of the oral tract itself. The findings from four such studies are presented in this talk---two studies on phonemic vowel nasalization (European French) and two studies on phonetic vowel nasalization (American English). The results suggest that---without being deterministic---the effect of nasalization on a vowel's acoustic output creates a condition where misapprehension of the articulatory source is possible and, as a result, modification of the oral tract is likely. In this framework, explanations for diachronic patterns of nasal vowel systems can be reasoned, understanding of synchronic effects of nasalization on vowel production and perception can be enlightened, and plausible predictions for nasal vowel systems can be made.


Speaker: Holger Mitterer (University of Malta)
Date & Time: Monday, February 2 at 3:30 pm
Place: Education Building, room 627
Title: When is a phone a phoneme?

Abstract: The glottal stop is viewed as a phoneme in some languages (e.g., Maltese) but as an optional prosodic boundary marker in others (e.g., Dutch). German is an intermediate case, in which the glottal stop is assumed to form the onset of “vowel-initial” words canonically (in contrast to in Dutch). Nevertheless, most phonological analyses agree that the phonotactic restrictions for the glottal stop—mostly restricted to morpheme-initial position—make it unnecessary to view it as a phoneme in German (in contrast to Maltese). Such assumptions are critical for our understanding of what is “lexical”, “phonological”, and “phonetic”. In this talk, I will present several production and perception studies in Maltese, German, and Dutch investigating this issue. The production experiments showed that glottalization of vowel-initial words functions similarly in German and Dutch, contrasting with the view that glottal stops are canonical in German and optional in Dutch. The perception experiments then tested the consequences of deleting an initial glottal stop or an initial /h/. The comparison with /h/ is motivated by the fact that /h/ is considered a phoneme in German, despite phonotactic restrictions similar to those on the glottal stop. The results showed that deleting the Dutch glottal stop, the German glottal stop, the Maltese glottal stop, and German /h/ has very similar consequences in perception. These results thus favour the assumption that the glottal stop is part of the lexical representation of words in these three languages, rather than being lexically represented in Maltese but post-lexically inserted in the Germanic languages.


Speaker: Jessamyn Schertz (University of Toronto)
Date & Time: Friday, February 6 at 3:30 pm
Place: Education Building, room 433
Title: Learning different things from the same input: How initial category structure shapes phonetic adaptation

Abstract: Listeners are confronted with a large amount of redundancy in the language input. On the level of phonetic categories, sound contrasts often covary systematically on multiple dimensions, providing listeners with options of what to pay attention to (and what to ignore), in principle allowing for different individual “grammars.” In this talk, I present a series of experiments demonstrating the different choices made by native Korean listeners when categorizing the (L2) English stop voicing contrast. Korean speakers used both pitch and VOT to distinguish the contrast, showing relatively homogeneous use of the two cues in production. However, perceptual patterns varied widely, with some listeners using pitch as a primary cue, some using VOT, and some using a combination of the two. These different choices were stable across sessions and determined how listeners modified their phonetic categories when confronted with a novel accent. The fact that individual differences in phonetic structure predict categorically different adaptation patterns highlights the importance of integrating initial listener biases into models of distributional learning and phonetic adaptation.


Speaker: Francisco Torreira (Max Planck Institute for Psycholinguistics)
Date & Time: Monday, February 9 at 3:30 pm
Place: Education Building, room 627
Title: Unraveling the time course of language production in conversational interaction

Abstract: In conversation, turn transitions between speakers often occur smoothly, most typically within a time window of 100 to 300 milliseconds. Since speech planning usually takes over half a second (ca. 600 ms for picture naming, Indefrey & Levelt, 2004; ca. 1500 ms for simple sentences, Griffin & Bock, 2000), it appears that participants in conversation often plan their utterances in overlap with their interlocutor’s turns. It is not clear, however, how they manage to launch their own turns in a timely manner (i.e., without excessive overlaps or long silent gaps). On the basis of psycholinguistic experiments (e.g., De Ruiter, Mitterer & Enfield, 2006), and against a long tradition of observational studies, it has been argued that participants in conversation rely mainly on anticipating morphosyntactic structure when timing and producing their turns, and that they do not need to make use of prosodic information in order to achieve smooth floor transitions. In this talk, I will present a series of new psycholinguistic, phonetic, and corpus studies challenging this view, and sketch an efficient turn-taking mechanism of language production involving two separate processes: a) early planning of content, based among other things on morphosyntactic prediction, and often carried out in overlap with the incoming turn, and b) late launching of articulation, mainly based on the identification of turn-final prosodic cues (e.g., phrase-final melodic patterns, final lengthening, sharp intensity drops).


Speaker: Florian Jaeger (University of Rochester)
Date & Time: Friday, February 20 at 3:30 pm
Place: Education Building, room 433
Title: The doubly-hierarchical structure of linguistic knowledge

Abstract: It is now broadly recognized that language understanding and production are probabilistic. For example, multiple instances of the same sound produced in the same context by the same speaker form a distribution over acoustic dimensions, rather than a single point. I discuss data from speech perception and language processing that suggest that the ideas of gradience and inference over noisy input, while an important step forward, do not go far enough in characterizing the cognitive architecture underlying language.

Much of the noise and variability in linguistic behavior is structured: part of the differences in speakers’ gradient preferences are systematically conditioned on social indexical variables (e.g., gender, age, dialects and accents). This structured variability contributes to the infamous ‘lack of invariance’ problem in speech perception.

Listeners overcome the lack of invariance by learning to represent environment-specific linguistic statistics (e.g., talker-specific pronunciation, lexical, and syntactic preferences). Specifically, I propose that comprehenders recognize previously encountered language environments (such as a familiar speaker) and adapt to the statistics of novel environments while generalizing based on similar previous experiences. In this view, grammatical knowledge is conditioned on hierarchically organized indexical structure that captures speaker-specificity as well as generalizations across groups of speakers (sociolects, dialects, etc.). These representations can be thought of as allowing the efficient parameterization (in the stochastic sense) of grammars for different language environments.

For this talk I will first briefly summarize evidence from speech perception (Kleinschmidt and Jaeger, in press). Then I will focus on sentence processing to demonstrate rapid expectation adaptation during language understanding (Fine et al., 2010, 2013; Farmer et al., 2014). Finally, I’ll present evidence from implicit motor learning that we can indeed learn the indexical structure underlying varying statistics in our environment (Qian et al, submitted).


Speaker: Lisa Matthewson (University of British Columbia)
Date & Time: Friday, April 10 at 3:30 pm
Place: Education Building, room 433
Title: TBA

Abstract: TBA

Fall 2014

Speaker: Anne-Michelle Tessier (University of Alberta)
Date & Time: Friday, September 12 at 3:30 pm
Place: Education Building, room 433
Title: Lexical Avoidance and Sources of Complexity in Phonological Acquisition

Abstract: This talk is about the phenomenon of lexical avoidance in children’s early linguistic development, whereby a child avoids producing words which contain some complex (or marked?) phonological structure (as discussed in Ferguson and Farwell 1975; Menn 1976, 1983; Schwartz and Leonard 1982; Schwartz et al. 1987; Storkel 2004, 2006; Adam and Bat-El 2009; inter alia). This research’s basic question is to what extent a child’s developing grammar is responsible for lexical avoidance, and more specifically what kinds of linguistic complexity can drive this avoidance. The increase in complexity I will focus on is the transition from one word to two word utterances – which might be either driven or delayed by a child’s phonology – and I will assess the nature of lexical avoidance related to this transition in two case studies: one taken from Donahue (1986), and another in a novel corpus analysis. The central claim will be that phonological grammar is indeed crucial to explaining the kinds of lexical avoidance which are attested and unattested, illustrated using OT constraint interaction to yield typologically-reasonable patterns, and I will discuss some of the predictions, implications and open questions that emerge from this approach.


Speaker: Kristine Onishi (McGill)
Date & Time: Friday, September 26 at 3:30 pm
Place: Education Building, room 433
Title: Infants' understanding of communicative intention

Abstract: Language is a tool that allows us to convey information quickly and efficiently. For example, to let you know where I left your keys, saying "your keys are on the table" is often more efficient than grunting and waving my arms. Even when we do not understand a language, as adults we infer that speakers of that unknown language can use it to convey information. When observing interactions between two people, what types of behavior do infants think can be used to convey information and what types of information do they think can be conveyed? I will describe some recent experiments demonstrating that infants, even before speaking much, understand that speech can be used to convey information, suggesting that they realize that speech can be a tool for gathering knowledge.


Speaker: Benjamin Bruening (University of Delaware)
Date & Time: Friday, October 3 at 3:30 pm
Place: Education Building, room 433
Title: Subject-Verb Inversion as Generalized Alignment

Abstract: I suggest that the driving force behind subject-verb inversion, which takes place in questions in many languages, is the need for phonological alignment, as in the theory of Generalized Alignment in phonology and morphology. Specifically, I propose that many languages have a version of the following constraint:

Align V-C: Align(C(x), L/R, V(tense), L/R)

This constraint says that the left/right edge of some projection of C must be aligned with the left/right edge of the tensed verb. In the relevant context, say questions, this constraint holds. If the subject is in between the relevant projection of C and the tensed verb, they have to invert or the constraint is violated. The specifics of the inversion will vary from language to language and even from context to context within a language. For instance, in English the inversion is sometimes head movement, sometimes phrasal movement. In the Romance languages it is generally phrasal movement. I show that variation in how the constraint is stated in each language and how the language responds to meet it can account for an array of facts both within a single language and across languages. Languages vary in exactly the way this theory predicts they should, and a variety of seemingly obscure adjacency constraints simply falls out.
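A gradient version of such a constraint can be sketched as a simple violation counter. The toy clause representation and edge index below are invented for illustration (this is not Bruening's formalism): each word standing between the relevant edge of the C projection and the tensed verb counts as one violation, so inversion is the repair.

```python
def align_violations(clause, tensed_verb, c_edge):
    """Gradient Align-L(C, V[tense]): one violation per word standing
    between the left edge of the C projection and the tensed verb."""
    return clause.index(tensed_verb) - c_edge

# Toy English question: the wh-phrase sits in Spec,CP, so the relevant
# left edge of the C projection falls right after it (index 1).
no_inversion = ["what", "she", "can", "do"]  # *What she can do?
inversion    = ["what", "can", "she", "do"]  # What can she do?

print(align_violations(no_inversion, "can", 1))  # 1: subject intervenes
print(align_violations(inversion, "can", 1))     # 0: edges aligned
```

The uninverted order incurs one violation (the subject intervenes), and subject-aux inversion reduces the count to zero, mirroring the paper's claim that inversion is driven by alignment pressure.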


Speaker: Hadas Kotek (McGill)
Date & Time: Friday, October 24 at 3:30 pm
Place: Education Building, room 433
Title: TBA

Abstract: TBA


Speaker: Yoonjung Kang (University of Toronto)
Date & Time: Friday, November 14 at 3:30 pm
Place: Education Building, room 433
Title: Laryngeal classification of Korean fricatives: evidence from sound change and dialect variation

Abstract: Korean has a three-way contrast of voiceless stops among aspirated, lenis, and fortis stops. Recent studies converge to show that Seoul Korean is undergoing a tonogenetic sound change whereby the VOT distinction between lenis and aspirated stops is neutralized and the tone on the following vowel becomes the primary phonetic distinction. Korean fricatives, on the other hand, show a two-way contrast between a fortis and a “non-fortis” fricative. The laryngeal classification of the non-fortis fricative has been a topic of much debate, as its phonetic patterning is ambiguous between aspirated and lenis categories. In this talk, I will bring additional evidence to the debate by examining the patterning of the fricatives in the on-going sound change in Seoul. I will also compare the Seoul data with the data collected from two major North Korean dialects as spoken by ethnic Koreans in China, where the stop contrast retains the “older” VOT pattern.

Winter 2014

Speaker: Julie Legate (University of Pennsylvania)
Date & Time: Friday, January 10, 3:30 pm
Place: Education Building Rm. 433
Title: Acehnese causatives and the structure of the verb phrase

Abstract: In this talk, I provide evidence from Acehnese (Malayo-Chamic: Aceh Province, Indonesia) for a distinction between VoiceP, which introduces the external argument and assigns accusative case, and causative vP, which introduces causative semantics (Alexiadou, Anagnostopoulou, & Shafer 2006; Pylkkanen 2008; inter alia). In Acehnese, VoiceP and causative vP are morphologically overt and occur both independently and simultaneously. Focussing on causativization of roots that are normally used as unergative or transitive verbs, I argue that the causative head does not embed an active, passive, or object voice VoiceP, but instead embeds an applicative VoiceP. Thus, the causee is introduced as an applicative object, not as an agent. Implications for the general theory of causatives and the structure of the verb phrase are considered.


Speaker: Jakob Leimgruber (McGill)
Date & Time: Friday, February 7, 3:30 pm
Place: Education Building Rm. 433
Title: Language policy in multilingual cities: effects on the linguistic landscape of Singapore and Montreal


Speaker: Marc Brunelle (University of Ottawa)
Date & Time: Friday, February 21, 3:30 pm
Place: Education Building Rm. 433
Title: An incipient tone sandhi in Northern Vietnamese?

Abstract: Synchronic tone sandhis are well attested and described, but their development is largely a matter of speculation. In this study, we look at an instance of apparent tone sandhi in progress and examine the interplay between coarticulation, reduction and perception in its formation.

In Northern Vietnamese (NVN), the low rising tone (sắc) often loses its rise in non-final position, making it perceptually very similar to the low falling tone (huyền). This gradient change does not normally result in contrast neutralization, as the rise is recoverable from a strong progressive coarticulation on the following tone. However, over the past decade, the authors have noticed that many speakers neutralize the rising tone and the low falling tone before the high level tone (ngang), an observation confirmed by native-speaker linguists. This is characteristic of young female Hanoians, but seems increasingly common among other gender and age groups, as well as outside Hanoi.

We conducted an acoustic investigation of this incipient sandhi in six young female NVN speakers. They were recorded while completing a map task designed to obtain target words controlled for tone and microprosody in semi-spontaneous speech. Our results show that although none of our speakers exhibits full neutralization, they all show some degree of tone change. Based on these results and those of previous studies, we infer phonetic scenarios that could account for the initial development of the tone change. We then highlight similarities between this incipient sandhi and more established cases in Chinese and Hmong.


Speaker: Norvin Richards (MIT)
Date & Time: Friday, February 28, 3:30 pm
Place: Education Building Rm. 433
Title: Pied-piping and Selectional Contiguity

Abstract: Cable (2007, 2010) argues, on the basis of data from Tlingit, that wh-questions involve three participants: an interrogative C, a wh-word, and a head Q, which is visible in Tlingit but invisible in English. In Cable's account, QP standardly dominates the wh-word, and wh-movement is always of QP. The question of how much material pied-pipes under wh-movement, on Cable's account, is essentially a question about the distribution of QP. Cable offers several conditions and parameters governing the distribution of QP.

I will try to derive Cable's conditions on the distribution of QP from Contiguity Theory, a series of proposals about the interaction of syntax with phonology that I have been developing in recent work.


Speaker: Thomas Ede Zimmerman (University of Frankfurt)
Date & Time: Friday, March 14, 3:30 pm
Place: Education Building Rm. 433
Title: On the ontological status of semantic values

Abstract: The following three theses will be defended, and connections between them will be established:

1. Model-theoretic natural language semantics is not a theory of meaning.
2. Extensions ("generalized" quantifiers, truth values, …) must be distinguished from referents.
3. Intension must be distinguished from content.


Speaker: Amy Rose Deal (UC Santa Cruz)
Date & Time: Friday, March 28, 3:30 pm
Place: Education Building Rm. 433
Title: Cyclicity and connectivity in Nez Perce relative clauses

Abstract: This talk centers on two aspects of movement in relative clauses, focusing on evidence from Nez Perce.

First, I argue that relativization involves _cyclic_ A’ movement, even in monoclausal relatives. Rather than moving directly to Spec,CP, the relative element moves there via an intermediate position in an A’ outer specifier of the TP immediately subjacent to relative C. Cyclicity of this type suggests that the TP sister of relative C constitutes a phase – a result whose implications extend to an ill-understood corner of the English that-trace effect.

Second, I argue that Nez Perce relativization provides new evidence for an ambiguity thesis for relative clauses, according to which some but not all relatives are derived by a head-raising analysis. The argument comes from connectivity and anticonnectivity in morphological case. These new data complement the range of standard arguments for head-raising, which draw primarily on connectivity effects at the syntax-semantics interface.

Fall 2013

Speaker: Richard Compton (McGill)
Date & Time: Friday, September 20, 3:30 pm
Place: Education Building Rm. 433
Title: Evidence for phrasal words in Inuit

Abstract: In this talk I argue that data from noun incorporation, conjunction, ellipsis, and a VP pro-form in Inuit provide evidence for word-internal XPs inside polysynthetic words. Such data provide a potential counter-example to Piggott & Travis’s (2012) proposal (following Baker 1996) that phonological words cross-linguistically correspond to syntactic heads—simplex or complex—with morphologically complex words being derived via head movement, head-adjunction, or PF movement.


Speaker: Emily Elfner (McGill)
Date & Time: Friday, October 4, 3:30 pm
Place: Education Building Rm. 433
Title: Recursion in prosodic phrasing: Evidence from Connemara Irish

Abstract: One function of prosodic phrasing is its role in aiding the recoverability of syntactic structure. In recent years, a growing body of work suggests it is possible to find concrete phonetic and phonological evidence that recursion in syntactic structure is preserved in the prosodic organization of utterances (Ladd 1986, 1988; Kubozono 1989, 1992; Féry & Truckenbrodt 2005; Wagner 2005, 2010). In this talk, I argue that the distribution of phrase-level tonal accents in Connemara Irish provides a new type of evidence in favour of this hypothesis: that, under ideal conditions, syntactic constituents are mapped onto prosodic constituents in a one-to-one fashion, such that information about the nested relationships between syntactic constituents is preserved through the recursion of prosodic domains. Through an empirical investigation of both clausal and nominal constructions, I argue that the distribution of phrase accents in Connemara Irish can be used to identify recursive bracketing in prosodic structure.


Speaker: Alan Yu (University of Chicago)
Date & Time: Friday, November 15, 3:30 pm
Place: Education Building Rm. 433
Title: Individual differences in speech perception and production

Abstract: Linguists often discuss language in terms of groups of speakers, even though it is also acknowledged that no two individuals speak alike. The focus on language as a group-level phenomenon can obscure important insights that are only apparent when systematic individual variation is taken into account. In this talk, I offer cross-linguistic experimental evidence showing that speakers vary significantly and systematically along certain individual-difference dimensions, including autistic-like traits, in their responses to the effects of the lexicon and coarticulation in speech perception and production. I will argue that understanding the nature of such individual linguistic differences is crucial for understanding the inception (and possibly the propagation) of sound change, the primary source of sound patterns in language.


Speaker: Laurent Dekydtspotter (Indiana University)
Date & Time: Friday, November 22, 3:30 pm
Place: Education Building Rm. 216
Title: Parsing second languages: Anaphora in real time cycles of computations

Abstract: A body of research proposes that second language (L2) sentence processing is strongly semantically guided as a result of shallow structures lacking syntactic details in real time (Clahsen & Felser, 2006a, b; Felser & Roberts, 2007; Felser, Cunnings, Batterham, & Clahsen, 2012; Felser, Roberts, Gross, & Marinis, 2003; Felser, Sato, & Bertenshaw, 2009; Marinis, Roberts, Felser, & Clahsen, 2005; Papadopoulou & Clahsen, 2003). A second body of research argues for a strong structural reflex (Dekydtspotter & Miller, 2012; Juffs, 2005; Juffs & Harrington, 1995; Hopp, 2006; Williams, Möbius & Kim, 2001; Williams, 2006; inter alia). In this case, working memory capacity, proficiency, lexical access, etc. qualify the manner in which such information is acted upon in the conceptual-intentional and sensory-motor systems in an L2 (Dekydtspotter & Miller, 2012; Dekydtspotter & Renaud, 2009; Dekydtspotter, Schwartz, & Sprouse, 2006; Hopp, 2012; Miller, 2011; Williams, 2006).

The talk addresses the etiology of L2 sentence processing in a modular system consisting of autonomous components in view of new experimental evidence. The empirical focus is on anaphora under reconstruction as in (1) for instance.

(1) Which story about him(self) did Ben say that Anna told?

New evidence from reading experiments strongly suggests that L2 sentence processing includes an incremental syntactic analysis according to cycles of computations. Specifically, I argue that such L2 parsing follows default structural computations that select specified information and guide aspects of the deployment of semantic processes in real time. Hence, to the extent that minimality, locality and chains supporting binding constitute good-design signatures of language architecture given limited processing resources (Chomsky, 2005; Reuland, 2001, 2011; Rizzi 2013), these design features seem available in L2 sentence processing. A path of research in view of these findings will be charted.

Winter 2013

Speaker: Kai von Fintel (MIT)
Date & Time: Friday, January 25 at 3:30 pm
Place: Education Building, room 433
Title: Hedging your ifs and vice versa (Kai von Fintel & Anthony S. Gillies)

Abstract: How does the word “if” help things we say mean what they mean? It can work together with other words like “maybe” and “probably” to make things we say less strong. But how does it do that? Many people have tried to find out how this works, but we will show that they face a big problem when one looks at people talking to each other and pointing to things the other said. Can we do better?


Speaker: Jennifer Cole (Univ. of Illinois, Urbana-Champaign)
Date & Time: Friday, February 22 at 3:30 pm
Place: Education Building, room 433
Title: Memory for prosody


Speaker: Kevin Russell (Manitoba/McGill)
Date & Time: Friday, March 1 at 3:30 pm
Place: Education Building, room 433
Title: When phonology goes bad

Abstract: The consensus on dyslexia, to the extent there is one, is that the core deficit lies in the reader having poor phonological representations or poor ability to use their phonological representations. Yet most dyslexic readers show no obvious problems in using phonology during everyday speaking and listening. This talk addresses the question of what it could possibly mean for a phonological representation to be poor. It synthesizes current findings in spoken word recognition and the development of phonological categories in infants and young children, to determine what the phonological representations of beginning readers are probably like and how, in some, they can be adequate for spoken communication but still be a poor match for the assumptions of an alphabetic orthography.


Speaker: Gillian Gallagher (NYU)
Date & Time: Friday, March 22 at 3:30 pm
Place: Education Building, room 433
Title: Identity bias and phonetic grounding in Quechua phonotactics

Abstract: Many languages distinguish between identical and non-identical segments with respect to some phonotactic restriction. For example, in several unrelated languages, roots with pairs of non-identical ejectives are unattested while pairs of identical ejectives are common (e.g., Bolivian Aymara t'ant'a 'bread', *t'ank'a). In other languages, like Cochabamba Quechua, pairs of non-identical and identical ejectives are both unattested. This talk explores the basis for an identity exemption to phonotactics by testing Quechua speakers' production and perception of non-identical and identical ejective pairs. If identical pairs of ejectives (or segments in general) benefit from some bias, then this bias should be latent in speakers of languages that don't grammatically distinguish identical from non-identical ejectives. It is found that Quechua speakers are more accurate at repeating nonce words with pairs of identical ejectives (e.g., p'ap'u) than pairs of non-identical ejectives (e.g., k'ap'u), though no distinction is found in a perception task. These results suggest that identical ejectives have an articulatory advantage over non-identical ejectives. Further evidence that articulation is central to the cooccurrence restriction comes from a production task with real phrases of Quechua. Ejectives can cooccur across word boundaries in Quechua (e.g., misk'i t'anta 'good bread'), though speakers de-ejectivize one of the two ejectives in phrases of this type at a small but significant rate. Implications of these results for the analysis of cooccurrence restrictions and the role of phonetic effects in the grammar are discussed.


Speaker: Colin Phillips (Univ. of Maryland); CRBLM/Linguistics Distinguished Lecturer
Date & Time: Friday, April 12 at 3:30 pm
Place: Education Building, room 433
Title: Generating Expectations and Meanings in Language Comprehension and Production

Abstract: We often have expectations about utterances before they are uttered. How we do this, in language production and comprehension alike, has implications for practical concerns and for theoretical questions about language architecture. The ability to generate reliable expectations may be a key enabler of robust language understanding in noisy environments. Understanding the (non-)parallels between the generative mechanisms engaged in comprehension and production is essential for any attempt to close the gap between grammatical 'knowledge' and language use systems. In this talk I explore how we generate expectations about word-level and sentence-level meanings. One set of studies uses behavioral interference paradigms to examine the time-course of verb generation when Japanese speakers plan their utterances. Two other series of studies focus on electrophysiological evidence for the generation of verb expectations in Chinese, Spanish, and English. Evidence for advance generation of verb meanings is found in comprehension and production alike. But we find that different types of linguistic information drive expectations on different time scales. In verb-final clauses, verb expectations are initially driven only by lexical associations, and effects of compositional interpretations are observed only after a delay. Similar mechanisms operate in production and comprehension, but they yield different outputs, depending on the information available to the language user in a specific task.

Fall 2012

Speaker: Robert Henderson (UCSC/McGill)
Date & Time: Friday, September 7 at 3:30 pm
Place: Education Building, room 211
Title: The morphosemantics of Mayan positional derivation


Speaker: Bryan Gick (UBC/CRBLM/McGill)
Date & Time: Friday, September 14 at 3:30 pm
Place: Education Building, room 211
Title: How humans don't have lips

Abstract: Researchers concerned with speech and related functions of the vocal tract have long relied on lay conceptions of terms like "lips" and "tongue" to describe ostensible parts of the anatomy. Close examination of these and other vocal tract structures strongly suggests that they are anatomically ill-defined, culture-specific concepts (which partly explains why researchers have never agreed on how to describe them). Nevertheless, they remain fundamental building blocks in our otherwise highly formalized theories of phonology, phonetics, sound change, language acquisition, and so on. Biomechanical modeling and production experiments will be used to show that, in addition to being anatomically indistinct, these structures are not straightforwardly definable in terms of their mechanical or articulatory function. So, how DO humans have lips? It will be argued that cultural concepts like "lips" (and concomitant phonological categories like [labial]) are indeed useful and relevant, but only in a robust, multidimensional, real-world setting - the setting where language happens. Implications for sound change, language acquisition, and the emergence of phonological categories will be discussed.


Speaker: Alex Drummond (McGill)
Date & Time: Friday, October 5 at 3:30 pm
Place: Education Building, room 211
Title: Parallelism and Dahl's paradigm

Abstract: I will attempt to defend the following two hypotheses: (i) that the binding constraints are stated in terms of a general notion of covaluation which subsumes binding and coreference; and (ii) that VP ellipsis is constrained by a strict parallelism requirement. My starting point is a 2007 paper by Irene Heim, which sketches a formulation of the binding theory consistent with hypothesis (i). The primary empirical problem for Heim’s theory is Dahl’s paradigm, which appears to necessitate the rejection of hypothesis (ii). I will argue that certain proposals in Tanya Reinhart’s 2006 monograph can be adapted to overcome this problem.


Speaker: Martina Wiltschko (UBC)
Date & Time: Friday, November 30 at 3:30 pm
Place: Education Building, room 434
Title: The structure of universal categories: Towards a formal typology

Abstract: When it comes to the nature of categories within syntactic theory, we can identify two opposing positions: i) the universalist position: categories are universal; ii) the variance position: languages differ in the morpho-syntactic categories they make use of. My goal for this talk is to develop a model of grammar which allows us to reconcile these seemingly contradictory positions. I first show that we want to maintain both positions. On the one hand, I review some properties of functional categories that suggest that there is a universal set of hierarchically organized categories. On the other hand, I review properties of categories across different languages that suggest that they are indeed language-specific. In fact, I shall argue that categories defined based on word class (i.e., determiner), morphological type (i.e., inflection), or substantive content (i.e., tense) cannot be universal on principled grounds. Instead I propose a model according to which universal categories are defined based on their core function: classification, anchoring, and discourse linking. I refer to this as the Universal-Spine-Hypothesis. Variance in the inventory of categories across languages arises via different strategies to map form and meaning onto the syntactic spine. This will allow us to formulate a formal typology for functional categories.

Back to top