Publications

Peer-reviewed work, preprints, and academic contributions.

2025

Intrinsic Neural Oscillations Predict Verbal Learning Performance and Encoding Strategy Use

MEG Neural Oscillations Verbal Learning Cognitive Strategy Resting-State
Victor Oswald, Mathieu Landry, Hamza Abdelhedi, Sarah Lippé, Philippe Robaey, Karim Jerbi

Abstract: Individuals adopt different encoding strategies to facilitate learning. However, few studies have investigated the neurophysiological bases that support these different encoding strategies across individuals. The present work addresses this gap by extending our previous findings on the direct relationship between cortical spectral power, measured via resting-state magnetoencephalography, and performance on standard cognitive tests. Our results highlight the complex interactions between endogenous brain oscillations, learning, and the verbal encoding strategies assessed by the California Verbal Learning Test (CVLT-2). First, we found that resting-state theta oscillations were significantly associated with verbal learning and subjective clustering strategies. Second, we observed that semantic clustering is facilitated by oscillatory patterns in left sensory-motor brain regions. Finally, our analyses revealed that serial and semantic clustering strategies are related to opposite regression patterns, indicating a competitive interaction. Together, these findings provide novel insights into the neural oscillatory dynamics that support diverse encoding strategies in verbal learning.

Exploring aperiodic, complexity and entropic brain changes during non-ordinary states of consciousness

Research Methods Open-source
Victor Oswald, Karim Jerbi, Corine Sombrun, Hamza Abdelhedi, Jitka Annen, Charlotte Martial, Audrey Vanhaudenhuyse, Olivia Gosseries

Abstract: Non-ordinary states of consciousness (NOC) provide an opportunity to experience highly intense, unique, and perceptually rich subjective states. The neural mechanisms supporting these experiences remain poorly understood. This study examined brain activity associated with a self-induced, substance-free NOC known as Auto-Induced Cognitive Trance (AICT). Twenty-seven trained participants underwent high-density electroencephalography (EEG) recordings during rest and AICT. We analyzed the aperiodic component of the power spectrum (1/f), Lempel-Ziv complexity, and sample entropy from five-minute signal segments. A machine learning approach was used to classify rest and AICT, identify discriminative features, and localize their sources. We also compared EEG metrics across conditions and assessed whether baseline activity predicted the magnitude of change during AICT. Classification analyses revealed condition-specific differences in spectral exponents, complexity, and entropy. The aperiodic component showed the strongest discriminative power, followed by entropy and complexity. Source localization highlighted frontal regions, the posterior cingulate cortex, and the left parietal cortex as key contributors to the AICT state. Baseline neural activity in frontal and parietal regions predicted individual variability in the transition from rest to AICT. These findings indicate that AICT engages brain regions implicated in rich subjective experiences and provide mechanistic insights into how self-induced trance states influence neural functioning.
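One of the signal metrics named in this abstract, Lempel-Ziv complexity, can be illustrated with a minimal sketch: binarize a signal at its median, then count LZ76 phrases. This is a generic illustration of the metric, not the authors' analysis code.

```python
# Minimal illustration of Lempel-Ziv (LZ76) complexity on a median-binarized
# signal -- a sketch of the metric, not the study's actual pipeline.
from statistics import median


def lz76_complexity(s: str) -> int:
    """Count LZ76 phrases: each phrase is the shortest extension of a
    substring that already occurred earlier in the sequence."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it can be copied from earlier material
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases


def binarize(signal) -> str:
    """Threshold a signal at its median, a common step before LZ analysis."""
    m = median(signal)
    return "".join("1" if x > m else "0" for x in signal)


# A periodic signal parses into very few phrases; higher counts indicate
# more irregular (less compressible) activity.
print(lz76_complexity(binarize([0, 1] * 50)))  # → 3
```

Normalizing the phrase count (e.g. by n / log2(n)) makes values comparable across segments of different lengths.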

The 2025 PNPL competition: Speech detection and phoneme classification in the LibriBrain dataset

Research Methods Open-source Community
Gilad Landau, Miran Özdogan, Gereon Elvers, Francesco Mantegna, Pratik Somaiya, Dulhan Jayalath, Luisa Kurth, Teyun Kwon, Brendan Shillingford, Greg Farquhar, Minqi Jiang, Karim Jerbi, Hamza Abdelhedi, Yorguin Mantilla Ramos, Caglar Gulcehre, Mark Woolrich, Natalie Voets, Oiwi Parker Jones

Abstract: The advance of speech decoding from non-invasive brain data holds the potential for profound societal impact. Among its most promising applications is the restoration of communication to paralysed individuals affected by speech deficits such as dysarthria, without the need for high-risk surgical interventions. The ultimate aim of the 2025 PNPL competition is to produce the conditions for an 'ImageNet moment', or breakthrough, in non-invasive neural decoding, by harnessing the collective power of the machine learning community. To facilitate this vision, we present the largest within-subject MEG dataset recorded to date (LibriBrain) together with a user-friendly Python library (pnpl) for easy data access and integration with deep learning frameworks. For the competition we define two foundational tasks (i.e. Speech Detection and Phoneme Classification from brain data), complete with standardised data splits and evaluation metrics, illustrative benchmark models, online tutorial code, a community discussion board, and a public leaderboard for submissions. To promote accessibility and participation, the competition features a Standard track that emphasises algorithmic innovation, as well as an Extended track that is expected to reward larger-scale computing, accelerating progress toward a non-invasive brain-computer interface for speech.

La reconnaissance faciale par l’IA et par les humains: une étude comparative combinant réseaux de neurones artificiels et l'imagerie cérébrale (Face recognition by AI and by humans: a comparative study combining artificial neural networks and brain imaging)

Face Recognition Artificial Neural Networks CNN MEG Neuroscience
Hamza Abdelhedi

Abstract: In the past decade, there has been a surge of research at the intersection of neuroscience and artificial intelligence (AI) aimed at advancing our understanding of both artificial and natural cognition. Growing evidence suggests that biological and artificial neural networks trained on similar tasks can exhibit striking functional parallels. Driven by the imperative to model the brain in order to decipher its underlying mechanisms, artificial neural networks (ANNs)—originally inspired by its architecture and functions—have been proposed as effective models of various brain systems. Convolutional Neural Networks (CNNs) trained on object recognition have demonstrated their ability to approximate the human visual system’s processing hierarchy and internal representations. In the context of face perception, neuroscience findings highlight a specialized neural system; yet whether familiar and unfamiliar faces are processed by the same mechanisms or via distinct pathways remains debated. Although numerous studies have compared AI-based face models to human behavior or fMRI data, questions persist about how closely these models capture the temporal dynamics of human face processing. This thesis first reviews current knowledge of the human visual system, focusing on the dedicated face recognition circuitry, and then introduces foundational concepts in AI, including the modeling of face perception with CNNs. The core work compares seven CNN architectures against source-localized magnetoencephalography (MEG) data to probe the neural signatures of face recognition and familiarity over time. These networks were optimized for different tasks—face recognition, object recognition, or both—allowing us to assess how task-specific representations capture the brain’s face processing in distinct ways. 
Our findings show that FaceNet aligns particularly well with occipital and fusiform regions implicated in face perception, while certain other deep architectures (e.g., ResNet) also achieve comparable levels of neural alignment. In the occipital region, the M170 component associated with familiarity occurs earlier for familiar faces (around 160 ms) and later for unfamiliar ones (approximately 180 ms), suggesting that novel identities demand more prolonged processing. We additionally observe strong CNN–MEG similarities in theta and gamma frequency bands, with earlier peaks (M170–M200) for familiar stimuli and a shift toward M400 for unfamiliar faces. Comparing multiple training objectives confirms that the training task can affect the temporal alignment with brain data. Finally, the discussion addresses potential limitations of CNNs as models of the brain, while highlighting their promise in shedding light on the neural mechanisms underlying face recognition. The insights gained from this work may guide the development of more robust models of face perception for both AI and computational neuroscience.
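Comparisons between CNN activations and MEG responses of the kind described here are commonly quantified with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each system over the same stimuli, then correlate the RDMs. The sketch below is a generic, hypothetical illustration of that technique, not the thesis code; all names and data are invented.

```python
# Illustrative RSA sketch: compare two systems' stimulus representations by
# correlating their representational dissimilarity matrices (RDMs).
# Hypothetical example data, not the thesis' actual pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(activations: np.ndarray) -> np.ndarray:
    """Condensed RDM: 1 - Pearson r between every pair of stimulus patterns.
    `activations` has shape (n_stimuli, n_features)."""
    return pdist(activations, metric="correlation")


def rsa_score(system_a: np.ndarray, system_b: np.ndarray) -> float:
    """Spearman correlation between the two RDMs; higher values mean more
    similar representational geometry."""
    return spearmanr(rdm(system_a), rdm(system_b)).correlation


rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 64))   # e.g. 20 face stimuli x 64 MEG features
model = brain + rng.normal(scale=0.1, size=brain.shape)  # a well-aligned "model"
print(round(rsa_score(brain, model), 2))
```

Computing this score within a sliding time window over the MEG data yields a time course of model-brain alignment, which is how component-level timing claims (e.g. around the M170) are typically probed.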

Predicting cognitive decline in prodromal synucleinopathies using clinical markers and machine learning

Machine Learning Parkinson’s Disease Dementia Cognitive Decline Clinical Markers
Loubna Mekki Berrada, Arthur Dehgan, Ronald Postuma, Yann Harel, Hamza Abdelhedi, Jacques Montplaisir, Karim Jerbi, Jean-François Gagnon

Abstract: Neuroprotective interventions for dementia with Lewy bodies (DLB) and Parkinson’s disease (PD) are still in their early days. Clinical trials are expected to target idiopathic rapid eye movement sleep behavior disorder (iRBD), their strongest predictor. However, the presentation and progression of symptoms within this population show significant heterogeneity. We used machine learning (ML) to identify the clinical markers that are best at distinguishing iRBD patients (n=156) who developed DLB (n=26) from those who developed PD (n=34) at a mean follow-up of 4.37 years. Our model classified subsequent conversion to DLB versus PD with 0.80 accuracy, with mild cognitive impairment as the best predictive feature. Cognitive tests of executive functions and verbal learning also played a major role in classifying other related pathological trajectories. These findings support the use of ML with clinical markers in iRBD, paving the way for a more targeted selection of participants in future neuroprotective trials of synucleinopathies.
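The general workflow described in this abstract, classifying two clinical outcomes from tabular markers and reporting cross-validated accuracy, can be sketched as follows. This is a minimal, hypothetical illustration with synthetic data; the classifier choice, features, and class sizes are stand-ins, not the authors' model.

```python
# Hypothetical sketch: cross-validated classification of two outcomes from
# tabular clinical markers. Synthetic data; not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_dlb, n_pd, n_markers = 26, 34, 10

# Synthetic stand-in for baseline clinical scores (e.g. cognitive tests);
# a mean shift between groups makes the classes partly separable.
X = np.vstack([
    rng.normal(0.0, 1.0, (n_dlb, n_markers)),
    rng.normal(0.8, 1.0, (n_pd, n_markers)),
])
y = np.array([0] * n_dlb + [1] * n_pd)  # 0 = converted to DLB, 1 = to PD

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With small cohorts like these, stratified cross-validation and feature-importance inspection (e.g. which marker drives the split) are what make such accuracy figures interpretable.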

2023