Killam Seminar Series: Unsupervised Pretraining of Neural Representations for Task Learning
Supported by the generosity of the Killam Trusts, the MNI's Killam Seminar Series invites outstanding guest speakers whose research is of interest to the scientific community at the MNI and McGill University.
In-person talk only. No virtual option.
Carsen Stringer, PhD
Group leader, HHMI Janelia Research Campus, Virginia, USA
Host: Stuart Trenholm
Abstract: Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the presence or absence of reward feedback. Both types of learning are highly effective in artificial neural networks. In biological systems, task learning has been shown to change sensory neural representations, but it is not known whether these changes are due to supervised or unsupervised learning. Here we recorded populations of ~70,000 neurons simultaneously from primary visual cortex (V1) and higher visual areas (HVAs) while mice learned multiple tasks, as well as during unrewarded exposure to the same stimuli. We found that neural changes due to task learning were concentrated in medial and anterior HVAs. The changes in medial HVAs were also present in mice that did not learn a task, while the changes in anterior HVAs were not. Anterior HVAs represented a ramping reward-anticipation signal that was abolished by reward delivery, consistent with the involvement of this area in supervised learning. Across different tasks, neural changes in all areas, including V1, were consistent with generalization to new stimuli according to the rules of the respective task, even when the task was not explicitly instructed. Thus, most changes in neural representations in visual areas are due to unsupervised learning, and these changes may support behavioral generalization in ecological scenarios where rewards are rare.