Understanding how the human brain works is one of the key challenges facing science and society. The Algonauts challenge proposes an open, quantitative test of how well today's computational models explain brain activity, allowing us to precisely assess progress in explaining the human brain.
At every instant, we are flooded with a massive amount of visual and auditory (sound and language) information, and yet we perceive the world as ordered and meaningful: actions, events, and language. The primary target of the 2025 Challenge is predicting human brain responses to complex naturalistic multimodal movies, using the largest brain dataset available for this purpose.
Watching multimodal movies activates large swathes of the human cortex. We pose the question: How well does your computational model account for these activations?
Watch the first video above for an introduction to the Algonauts 2025 challenge, and the second video for a detailed walkthrough of the development kit. When you are ready to participate, the third video will guide you through the Codabench competition submission process.
The goal of the 2025 challenge is to provide a platform for biological and artificial intelligence scientists to cooperate and compete in developing cutting-edge brain encoding models. Specifically, these models should predict whole-brain responses to multimodal naturalistic stimulation and generalize outside their training distribution.
The Challenge data is based on the CNeuroMod dataset, currently the most intensive sampling of single-subject neural responses to a variety of naturalistic tasks, including movie watching. The CNeuroMod dataset's unprecedented size, combined with the multimodal nature and diversity of its stimuli and tasks, makes it an ideal training and testing ground for building robust encoding models of fMRI responses to multimodal stimuli that generalize outside their training distribution. Learn more about the stimuli and fMRI dataset used in the 2025 Challenge.
We provide movie stimuli and the corresponding fMRI responses for model training. With these data, challenge participants are expected to build computational models that predict brain responses to in-distribution (ID) and out-of-distribution (OOD) movies for which the brain data are withheld.
Challenge participants submit predicted responses in the format described in the development kit. We score each submission by measuring the predictivity for each brain parcel of each subject, and the leaderboard displays the overall mean predictivity over all parcels and subjects.
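For illustration, predictivity of this kind is typically measured as the Pearson correlation between predicted and recorded time courses, computed independently for each parcel, and then averaged. The sketch below follows that convention; the exact metric and function names are assumptions on our part, and the official scoring code runs on Codabench.

```python
import numpy as np

def parcel_predictivity(pred, true):
    """Pearson correlation between predicted and recorded responses,
    computed independently for each brain parcel.

    pred, true : arrays of shape (n_timepoints, n_parcels)
    returns    : array of shape (n_parcels,)
    """
    pred = pred - pred.mean(axis=0)
    true = true - true.mean(axis=0)
    num = (pred * true).sum(axis=0)
    denom = np.sqrt((pred ** 2).sum(axis=0) * (true ** 2).sum(axis=0))
    return num / denom

def overall_score(predictions, responses):
    """Leaderboard-style summary: mean predictivity over all parcels,
    then over all subjects. Both arguments are hypothetical dicts
    keyed by subject ID."""
    per_subject = [parcel_predictivity(predictions[s], responses[s]).mean()
                   for s in predictions]
    return float(np.mean(per_subject))
```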
The challenge is hosted on Codabench and consists of two main consecutive phases:
To enforce strict tests of OOD generalization during the model selection phase, the two phases are based on data from different distributions, and have separate leaderboards. The challenge will be followed by an indefinite post-challenge phase, which will serve as a public benchmark for anyone wishing to test brain encoding models on multimodal movie data (Figure 1).
Figure 1 | Challenge phases. During the model building phase, models are trained using stimuli and corresponding fMRI responses for seasons 1 to 6 of the sitcom Friends and Movie10 (a set of four movies), and tested in-distribution (ID) on Friends season 7 (for which the fMRI responses are withheld) with unlimited submissions. During the model selection phase, the winning models are selected based on the accuracy of their predicted fMRI responses for out-of-distribution (OOD) movie stimuli (for which the fMRI responses are withheld) with up to ten submissions. The challenge will be followed by an indefinite post-challenge phase with unlimited submissions, which will serve as a public benchmark for both ID and OOD model validation.
During this first phase, challenge participants will train and test encoding models using movie stimuli and fMRI responses from the same distribution.
During this second phase, the winning models will be selected based on the accuracy of their predicted fMRI responses for withheld OOD movie stimuli.
Once the challenge is over, we will open an indefinite post-challenge phase which will serve as a public benchmark. This benchmark will consist of two separate leaderboards that rank encoding models based on their fMRI predictions for ID (Friends season 7) and OOD (withheld OOD movies) multimodal movie stimuli, respectively.
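To keep the data partitioning clear, the sketch below restates the split from Figure 1 as a Python mapping. The dictionary layout is purely illustrative; the development kit defines the actual data organization.

```python
# Illustrative summary of the challenge data split (see Figure 1).
CHALLENGE_SPLIT = {
    "train": {                        # stimuli + fMRI responses provided
        "friends": [f"s{i}" for i in range(1, 7)],  # seasons 1-6
        "movie10": "a set of four movies",
    },
    "id_test": {                      # fMRI responses withheld
        "friends": ["s7"],            # unlimited submissions (model building)
    },
    "ood_test": {                     # fMRI responses withheld
        "ood_movies": "undisclosed",  # up to 10 submissions (model selection)
    },
}
```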
To facilitate participation, we provide a development kit in Python that accompanies users through the four steps of the challenge process, from loading the data and extracting stimulus features to training encoding models and preparing predicted responses for submission.
There are many ways to predict brain data using computational models, and we place almost no restrictions on how you do so (see Challenge Rules). However, a commonly used approach is linearizing encoding models, and the development kit implements such a model.
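A linearizing encoding model maps the stimulus into a feature space (for example, activations of a pretrained network) and then learns a linear mapping from features to fMRI responses. Here is a minimal sketch using ridge regression, assuming the stimulus features have already been extracted and temporally aligned to the fMRI samples; the array shapes, random data, and the choice of scikit-learn's RidgeCV are illustrative, not the development kit's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical inputs: stimulus features and fMRI responses aligned in time.
# X_train : (n_train_timepoints, n_features), e.g., video/audio/language
#           embeddings from a pretrained model, resampled to the fMRI rate.
# Y_train : (n_train_timepoints, n_parcels) fMRI responses (training movies).
# X_test  : (n_test_timepoints, n_features) features for the withheld movies.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1000, 256))
Y_train = rng.standard_normal((1000, 100))
X_test = rng.standard_normal((200, 256))

# Cross-validated ridge regression: a linear mapping fit jointly across
# parcels, with the regularization strength selected automatically.
model = RidgeCV(alphas=np.logspace(-2, 5, 8))
model.fit(X_train, Y_train)

# Predicted responses for the withheld movies: (n_test_timepoints, n_parcels).
Y_pred = model.predict(X_test)
```

Ridge regularization is the usual choice here because the feature dimensionality is typically large relative to the number of fMRI samples, and an unregularized linear fit would overfit badly.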
1. Challenge participants can use any encoding model derived from any source and trained on any type of data. However, using recorded brain responses for Friends season 7 or the OOD movie stimuli is prohibited.
2. The winning models will be determined based on their performance in predicting fMRI responses for the OOD movie stimuli during the model selection phase.
3. Challenge participants can make an unlimited number of submissions during the model building phase, and a maximum of ten submissions during the model selection phase (the leaderboard of each phase is automatically updated after every submission). Each challenge participant can only compete using one account. Creating multiple accounts to increase the number of possible submissions will result in disqualification from the challenge.
4. To promote open science, challenge participants who wish to be considered for the selection of winners must submit a short report (~4-8 pages) describing their encoding algorithm to a preprint server (e.g. arXiv, bioRxiv), and send the PDF or preprint link to the Organizers by filling out this form. You must submit the challenge report by the challenge report submission deadline to be considered in the evaluation of the challenge outcome. Furthermore, while all reports are encouraged to link to their code (e.g. GitHub), the top-3 performing teams are required to make their code openly available. Participants who do not make their approach open and transparent cannot be considered. Along with monetary prizes, the top-3 performing teams will be invited to present their encoding models in a talk at the Cognitive Computational Neuroscience (CCN) conference held in Amsterdam (Netherlands) in August 2025.
Challenge model building phase: January 6th, 2025 to July 6th, 2025, at 00:00 (UTC-0)
Challenge model selection phase: July 6th, 2025 to July 13th, 2025, at 00:00 (UTC-0)
Challenge report/code submission deadline: July 25th, 2025
Challenge results released: August 5th, 2025
Sessions at CCN 2025: August 12th–13th, 2025
If you participate in the challenge, use this form to submit the challenge report and code.
The Algonauts Project 2025 challenge is supported by German Research Foundation (DFG) grants (CI 241/1-3, CI 241/1-7, INST 272/297-1), the European Research Council (ERC) starting grant (ERC-StG-2018-803370), the DFG Research Unit FOR 5368 ARENA, Unifying Neuroscience and Artificial Intelligence - Québec (UNIQUE), and The Hessian Center for Artificial Intelligence. The Courtois project on neural modelling was made possible by a generous donation from the Courtois foundation, administered by the Fondation Institut Gériatrie Montréal at CIUSSS du Centre-Sud-de-l'île-de-Montréal and University of Montreal. The CNeuroMod data used in the Algonauts 2025 Challenge has been openly shared under a Creative Commons CC0 license by a subset of CNeuroMod participants through the Canadian Open Neuroscience Platform (CONP), funded by Brain Canada and based at McGill University, Canada.