Challenges of building clinical biomarkers from M/EEG: multimodal modeling with missing data and robust regression on power spectra

Speaker: Denis A. Engemann

Bio: I am an experimental psychologist by training and conduct interdisciplinary research at the intersection of computer science, neuroscience, and medicine. My goal is to improve diagnosis and treatment in intensive care medicine through methods development grounded in basic research in neurophysiology, psychology, and high-dimensional statistics. Over the last seven years, I have specialized in large-scale data analysis and predictive modeling with electrophysiology, i.e., electroencephalography (EEG) and magnetoencephalography (MEG). This has led to three major research topics: a) development of electrophysiology-based biomarkers, b) statistical methods and algorithms for effective learning from large-scale brain data, and c) development of open-source tools for analyzing neural time series in the form of software libraries in Python and R.

Talk Abstract:

In clinical neuroscience, success often depends on reading out multiple modalities, i.e., brain images and physiological signals. However, clinical reality often limits data availability. Is combining multiple modalities for predictive modeling worth the extra effort when data are routinely incomplete? In [1], we proposed a multimodal machine-learning model with explicit support for handling missing modalities. Combining MRI, fMRI, and magnetoencephalography (MEG) on the Cam-CAN database not only significantly enhanced age prediction but also facilitated detection of age-related cognitive decline captured by the estimated brain-age delta. In particular, combining MEG with MRI yielded enhanced detection of changes in fluid intelligence, sleep quality, and memory function, highlighting the complementarity of these distinct biomedical signals. Strikingly, the added value of MEG was best explained by relatively simple features, i.e., the spatial distribution of fast brain rhythms in the beta/alpha range. These results potentially open the door to clinical translation via EEG technology, which is widely available in hospital settings.

Unfortunately, MRI scans are not always available, closing the door to source modeling with individual anatomy. What then? Can linear models come to the rescue? While very effective for regressing biomedical outcomes on M/EEG signals, they fail systematically if the cortical generator of an observed behavior is oscillatory. In that case, volume conduction distorts the extracranial signals, limiting the applicability of linear models. However, accurately modeling volume conduction depends on the availability of individual MRIs in the first place. In [2,3], we demonstrate through mathematical analysis, simulations, and prediction of age from MEG (Cam-CAN) and EEG (Temple University Hospital) how to nevertheless construct predictive linear models under different data-generating scenarios. We conclude that Riemannian geometry offers a practical alternative to source localization when predicting from power spectra, potentially enabling end-to-end learning without preprocessing.
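The Riemannian idea can be sketched in a few lines. This is a simplified illustration, not the method from [2,3] as published: the reference point is the arithmetic mean covariance rather than the geometric mean, the data are simulated, and all variable names are assumptions. Sensor covariances are whitened by a reference covariance, mapped through the matrix logarithm into the tangent space, and vectorized, after which an ordinary linear model applies.

```python
# Sketch of tangent-space regression on sensor covariances (simulated data).
import numpy as np
from scipy.linalg import logm, fractional_matrix_power
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(42)
n_subjects, n_channels, n_times = 100, 8, 500

# Simulate sensor-space signals whose overall power scales with "age".
age = rng.uniform(20, 80, n_subjects)
covs = np.empty((n_subjects, n_channels, n_channels))
for i in range(n_subjects):
    gain = 1.0 + 0.01 * age[i]
    x = gain * rng.normal(size=(n_channels, n_times))
    covs[i] = x @ x.T / n_times  # empirical sensor covariance

# Tangent-space projection: whiten by a reference covariance, take the
# matrix logarithm, and vectorize. This linearizes the manifold of
# positive-definite matrices, so a plain ridge regression applies.
c_ref = covs.mean(axis=0)
whitener = fractional_matrix_power(c_ref, -0.5)
features = np.array([
    logm(whitener @ c @ whitener).ravel() for c in covs
]).real

model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(features, age)
print(f"in-sample R^2: {model.score(features, age):.2f}")
```

The log-map is what makes this robust to the multiplicative distortions that volume conduction imposes on sensor-space power: a linear mixing of sources becomes an affine shift in the tangent space, which a linear model can absorb without any head model.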


The Neuro is a McGill research and teaching institute delivering high-quality patient care as part of the Neuroscience Mission of the McGill University Health Centre. We are proud to be a Killam Institution, supported by the Killam Trusts.
