
QLS Seminar - SueYeon Chung

Tuesday, April 19, 2022, 12:00 to 13:00

Structure, Function, and Learning in Distributed Neuronal Networks

SueYeon Chung - NYU / Flatiron Institute
Tuesday, April 19, 12-1 pm
Zoom Link: https://mcgill.zoom.us/j/85428056343

Abstract: A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neural networks. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules.

First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population's efficiency in implementing a task. In particular, the theory expresses the shattering capacity for linearly classifying object categories in terms of the structural properties of the underlying neural manifolds.
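As a point of reference (not part of the talk), the classical point-neuron case of this shattering capacity is Cover's function-counting result: for P points in general position in N dimensions, the fraction of the 2^P possible label assignments that a hyperplane through the origin can realize drops sharply near a load of P/N = 2. The minimal sketch below computes that fraction; the manifold theory discussed in the talk generalizes this from points to extended neural manifolds, and everything in the sketch is illustrative rather than drawn from the speaker's work.

```python
# Classical point-case of shattering capacity (Cover, 1965), for reference only.
# Counts the dichotomies of P points in general position in R^N that a linear
# classifier through the origin can realize, as a fraction of all 2^P dichotomies.
from math import comb

def separable_fraction(P: int, N: int) -> float:
    """Fraction of the 2^P label assignments of P points in general position
    in R^N that are separable by a hyperplane through the origin."""
    count = 2 * sum(comb(P - 1, k) for k in range(N))  # Cover's counting formula
    return count / 2**P

N = 50
for alpha in (1.0, 1.5, 2.0, 2.5, 3.0):   # load alpha = P / N
    P = int(alpha * N)
    print(f"P/N = {alpha:.1f}: separable fraction = {separable_fraction(P, N):.3f}")
```

At P/N = 2 the fraction is exactly 1/2, and it falls toward zero for larger loads, which is the sense in which a "capacity" characterizes how many categories a linear readout can shatter.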

Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neural networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition.

Finally, we will discuss our recent efforts to develop biologically plausible learning rules for deep neural networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
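The abstract does not specify which learning rule; as one well-known example of a biologically motivated alternative to backpropagation, the sketch below implements feedback alignment (Lillicrap et al., 2016), in which the error reaches the hidden layer through a fixed random matrix rather than the transpose of the forward weights, avoiding the "weight transport" problem. This is a generic illustration, not the speaker's method.

```python
# Illustrative sketch of feedback alignment on a toy regression task.
# The fixed random matrix B replaces W2.T in the hidden-layer update.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 20, 64, 5, 1e-2

# Toy task: learn a fixed random linear map from inputs to targets.
T = rng.normal(size=(n_in, n_out))
X = rng.normal(size=(512, n_in))
Y = X @ T

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden-to-output weights
B = rng.normal(scale=0.1, size=(n_out, n_hid))   # fixed random feedback pathway

for epoch in range(200):
    H = np.tanh(X @ W1)          # hidden activity
    Yhat = H @ W2                # linear readout
    E = Yhat - Y                 # output error
    dH = (E @ B) * (1 - H**2)    # hidden "error" via fixed feedback, not W2.T
    W2 -= lr * H.T @ E / len(X)
    W1 -= lr * X.T @ dH / len(X)

print("final MSE:", float(np.mean(E**2)))
```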
