Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way humans do. But researchers from McGill University, MIT, and Cornell University have taken a step in this direction. They have developed an artificial intelligence (AI) system that can learn the rules and patterns of human languages on its own.
Scientists have long known that while listening to a sequence of sounds, people often perceive a rhythm, even when the sounds are identical and equally spaced. One regularity that was discovered over 100 years ago is the Iambic-Trochaic Law: when every other sound is loud, we tend to hear groups of two sounds with an initial beat. When every other sound is long, we hear groups of two sounds with a final beat. But why does our rhythm perception work this way?
Research also demonstrates the brain's plasticity and its ability to adapt to new language environments.
It's a scene that plays out every day in Montreal. On the bus, in schools, in the office and at home, conversations weave seamlessly back and forth between French and English, or one of the many other languages represented on this multicultural island. It's increasingly common to hear not two, but three different languages spoken in one short conversation.