McIntyre Medical Building 3655 promenade Sir William Osler, Montreal, QC, H3G 1Y6, CA
Recognizing words in fluent speech is a fundamental skill in language processing, allowing listeners to access the semantic and grammatical information encoded in the utterances they hear. It is also one of the first skills infants must master in learning language. Although word recognition is subjectively instantaneous and effortless (at least when listening to familiar languages), the computations it requires are exceedingly complex. Two problems in particular must be surmounted. First, whereas we perceive words as discrete units with distinct endpoints, acoustically words flow into one another, typically with no manifest boundary between them. Infants must therefore learn how to segment words from continuous speech. Second, although we achieve a kind of perceptual constancy for words, different instances (or tokens) of a word often vary widely along any number of acoustic dimensions. Infants must learn which aspects of this variation are functionally relevant (i.e., actually signal differences between words) and which are not; in other words, infants must learn the phonological system of their language. In this talk, I will present data illuminating how infants go about solving these two problems, and I will sketch a hybrid Bayesian/attractor model suggesting some of the processes that may underlie infants' mastery of spoken word recognition.