The 2019 Challenge: Explaining the Human Visual Brain

Overview

Understanding how the human brain works is one of the greatest challenges that science faces today. The Algonauts challenge proposes an open and quantified test of computational models on human brain data, including both spatial (i.e. fMRI) and temporal (i.e. MEG/EEG) measurements. This will allow us to assess the real progress of the field of cognitive computational neuroscience (see also Brain-Score and the Neural Prediction Challenge).

The primary target of the 2019 challenge is the visual brain: the part of the brain that is responsible for seeing. Currently, deep neural networks trained on object recognition (Yamins et al. 2014; Khaligh-Razavi & Kriegeskorte 2014; Cichy et al. 2016) do best at explaining brain activity. Can your model do better?

Competition Tracks

The main goal of the 2019 Algonauts challenge is to predict brain activity from two sources—fMRI data from two brain regions (Track 1), or MEG data from two temporal windows (Track 2)—using computational models. The brain activity is in response to viewing sets of images; for each image set, fMRI and MEG data are collected from the same 15 human subjects. Participants can choose to play in Track 1 (fMRI), Track 2 (MEG), or both.

Underlying object recognition is a hierarchical processing cascade (called the “ventral visual stream”) in which neural activity unfolds in space and time. At the beginning of the hierarchy, there is a region called the early visual cortex (EVC), shown in red in the figure. Neurons in this area respond to lines or edges with specific orientations. As EVC is early in the processing cascade, it responds early in time.

Later in this hierarchy there is a region called the inferior temporal cortex (IT), shown in yellow. Neurons in this region have been found to respond to images of objects.

In the 2019 edition of the Algonauts challenge, the target is to explain brain activity at two stages of the visual processing cascade: the brain regions EVC and IT for Track 1, and two time intervals for Track 2 (an early interval around the peak response in EVC and a later interval around the peak response in IT, relative to when an image was shown to human subjects).

Given a set of images of everyday objects and the corresponding brain activity recorded while human subjects viewed those images, participants devise computational models that predict brain activity; these models are then used to predict brain activity for a brand-new set of images.

TRACK 1 (fMRI)

The goal of Track 1 (fMRI) is to construct models that best predict activity in two regions of the human visual hierarchy: EVC (early) and IT (late). Participants submit their model's responses to a test image set, which we compare against held-out fMRI data. Submissions are scored with representational similarity analysis (a technique that maps models and fMRI data into a common similarity space to enable comparison).

We provide the following data (available for download here):

  • Training Set A: 92 images + fMRI human brain data RDMs from 2 regions (EVC and IT) in response to viewing images from this set.
  • Training Set B: 118 images + fMRI human brain data RDMs from 2 regions (EVC and IT) in response to viewing images from this set.
  • Development Kit: Sample code for generating activations from AlexNet/VGG/ResNet, computing RDMs from these networks, and evaluating model RDMs.
  • Test Image Set: 78 images.

Track 1 Challenge Test Set Leaderboard (see detailed table)

| Rank | Team Name | EVC R² (%) | IT R² (%) | Score (%) |
|------|-----------|------------|-----------|-----------|
|      | Noise Ceiling | 100 | 100 | 100 |
| 1 | agustin | 32.8833 | 20.9925 | 26.9056 |
| 2 | Aakash | 30.5612 | 19.2804 | 24.8901 |
| 3 | rmldj | 28.3965 | 20.7690 | 24.5620 |
| 4 | HY | 29.5443 | 16.3535 | 22.9130 |
| 5 | navneedhm | 21.4018 | 20.4753 | 20.9360 |
| 6 | Mukesh_Makwana | 21.4343 | 20.4307 | 20.9297 |
| 7 | ggaziv | 24.9314 | 15.5453 | 20.2128 |
| 8 | drshti | 19.0810 | 17.3859 | 18.2288 |
| 9 | astha736 | 15.9807 | 19.6626 | 17.8317 |
| 10 | mfonseca | 17.4111 | 15.9311 | 16.6671 |
| 11 | Team nntw | 20.1595 | 12.1610 | 16.1385 |
| 12 | Team UvABrain | 13.6722 | 17.3659 | 15.5291 |
| 13 | kpmtm | 20.9645 | 9.2171 | 15.0589 |
| 14 | am88 | 11.2759 | 16.1158 | 13.7090 |
| 15 | Biggzlar | 8.0078 | 14.2666 | 11.1542 |
| 16 | Wenxin_SU | 11.3084 | 11.0000 | 11.1534 |
| 17 | Team GJ | 9.3443 | 10.1268 | 9.7377 |
| 18 | kashyaph | 6.9403 | 12.3594 | 9.6646 |
| 19 | RobTLange | 6.6997 | 11.4852 | 9.1055 |
| 20 | abhinav | 9.8916 | 6.6810 | 8.2776 |
| 21 | elna | 9.3052 | 6.2630 | 7.7758 |
| 22 | Team Brathering | 6.6783 | 8.6025 | 7.6456 |
| 23 | jomaka | 8.0742 | 7.0917 | 7.5803 |
| 24 | AlexNet-OrganizerBaseline | 6.5794 | 8.2250 | 7.4066 |
| 25 | astrid_zeman | 5.8994 | 6.9351 | 6.4201 |
| 26 | adrianso | 6.5053 | 5.8606 | 6.1812 |
| 27 | ajay14 | 6.5794 | 3.2785 | 4.9200 |

All values are noise-normalized R²: the squared Spearman correlation between the submitted model RDM and the corresponding brain RDM (EVC or IT, the two brain regions of interest), expressed as a percentage of the noise ceiling (see here for more details). Score is the average of the EVC and IT values, and entries are ranked by Score. AlexNet is the organizer baseline.
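As a simplified sketch of this scoring (our own illustration with made-up numbers, not the official evaluation code; the exact noise-ceiling estimation follows the linked documentation, and the function name is ours):

```python
import numpy as np

def noise_normalized_r2(rho, noise_ceiling_r2):
    """Square the Spearman correlation of a model RDM with a brain RDM,
    then express it as a percentage of the noise ceiling (the R^2 an
    ideal model could reach given inter-subject variability)."""
    return 100.0 * (rho ** 2) / noise_ceiling_r2

# Hypothetical numbers: a model whose RDM correlates rho = 0.25 with
# the EVC RDM and rho = 0.20 with the IT RDM, against noise ceilings
# of 0.5 and 0.4 respectively.
evc = noise_normalized_r2(0.25, 0.5)  # -> 12.5
it = noise_normalized_r2(0.20, 0.4)
score = (evc + it) / 2  # the leaderboard Score column
print(evc, it, score)
```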

Track 1 Hidden Test Set Leaderboard (see detailed table)

| Rank | Team Name | EVC R² (%) | IT R² (%) | Score (%) |
|------|-----------|------------|-----------|-----------|
|      | Noise Ceiling | 100 | 100 | 100 |
| 1 | rmldj | 12.9969 | 17.3412 | 15.2926 |
| 2 | Aakash | 14.1444 | 9.6379 | 11.7630 |
| 3 | ggaziv | 15.2585 | 8.4870 | 11.6802 |
| 4 | Mukesh_Makwana | 9.8786 | 13.2405 | 11.6551 |
| 5 | agustin | 12.6287 | 10.5547 | 11.5327 |
| 6 | navneedhm | 12.5500 | 7.8677 | 10.0757 |
| 7 | amir32002 | 9.2338 | 8.9633 | 9.0909 |
| 8 | ajay14 | 11.7817 | 4.8445 | 8.1158 |
| 9 | adrianso | 9.6850 | 4.0513 | 6.7079 |
| 10 | mfonseca | 5.5581 | 4.5694 | 5.0356 |
| 11 | AlexNet-OrganizerBaseline | 5.0564 | 4.7650 | 4.9024 |

All values are noise-normalized R²: the squared Spearman correlation between the submitted model RDM and the corresponding brain RDM (EVC or IT, the two brain regions of interest), expressed as a percentage of the noise ceiling (see here for more details). Score is the average of the EVC and IT values, and entries are ranked by Score. AlexNet is the organizer baseline.

TRACK 2 (MEG)

The goal of Track 2 (MEG) is to construct models that best predict brain data from two time intervals, covering early and late stages of visual processing relative to image onset. Participants submit their model's responses to a test image set, which we compare against held-out MEG data. Submissions are scored with representational similarity analysis (a technique that maps models and MEG data into a common similarity space to enable comparison). The 20 ms time intervals are defined by the peak latency of the MEG/fMRI fusion time series (0–200 ms) in EVC and IT, and are listed with the corresponding image sets below (see Cichy et al. 2014, 2016; Mohsenzadeh et al. 2019 for the MEG/fMRI fusion method).

We provide the following data (available for download here):

  • Training Set A: 92 images + MEG human brain data RDMs from 2 time intervals, i.e. early (70–90ms) and late (140–160ms) in visual processing, in response to viewing images from this set. Peak latencies = 82ms (early) and 150ms (late).
  • Training Set B: 118 images + MEG human brain data RDMs from 2 time intervals, i.e. early (100–120ms) and late (165–185ms) in visual processing, in response to viewing images from this set. Peak latencies = 108ms (early) and 176ms (late).
  • Development Kit: Sample code for generating activations from AlexNet/VGG/ResNet, computing RDMs from these networks, and evaluating model RDMs.
  • Test Set: 78 images.

Learn more and participate in Track 2 here

Track 2 Challenge Test Set Leaderboard (see detailed table)

| Rank | Team Name | Early R² (%) | Late R² (%) | Score (%) |
|------|-----------|--------------|-------------|-----------|
|      | Noise Ceiling | 100 | 100 | 100 |
| 1 | Aakash | 58.9498 | 67.2455 | 63.5583 |
| 2 | rmldj | 46.9061 | 57.3847 | 52.7272 |
| 3 | agustin | 50.9508 | 53.5932 | 52.4187 |
| 4 | ggaziv | 51.2056 | 35.0976 | 42.2571 |
| 5 | Team GJ | 36.3899 | 45.3534 | 41.3694 |
| 6 | drshti | 21.8638 | 39.2335 | 31.5132 |
| 7 | mfonseca | 27.3910 | 27.8964 | 27.6718 |
| 8 | Mukesh_Makwana | 20.7331 | 26.6669 | 24.0295 |
| 9 | Team UvABrain | 1.1998 | 28.5125 | 16.3728 |
| 10 | AlexNet-OrganizerBaseline | 5.8198 | 22.9284 | 15.3241 |
| 11 | kpmtm | 25.9474 | 4.4091 | 13.9822 |
| 12 | adrianso | 15.6797 | 8.4270 | 11.6506 |
| 13 | am88 | 6.1968 | 14.4103 | 10.7597 |
| 14 | Wenxin_SU | 9.8936 | 7.1345 | 8.3608 |
| 15 | jomaka | 0.3015 | 8.5400 | 4.8782 |
| 16 | HY | 2.5384 | 6.4005 | 4.6839 |

All values are noise-normalized R²: the squared Spearman correlation between the submitted model RDM and the corresponding brain RDM (Early or Late Interval, the two time intervals of interest), expressed as a percentage of the noise ceiling (see here for more details). Score is the average of the Early and Late values, and entries are ranked by Score. AlexNet is the organizer baseline.

Track 2 Hidden Test Set Leaderboard (see detailed table)

| Rank | Team Name | Early R² (%) | Late R² (%) | Score (%) |
|------|-----------|--------------|-------------|-----------|
|      | Noise Ceiling | 100 | 100 | 100 |
| 1 | Aakash | 49.7933 | 69.4401 | 60.2274 |
| 2 | rmldj | 53.5460 | 55.0110 | 54.3240 |
| 3 | agustin | 40.5026 | 56.6497 | 49.0780 |
| 4 | ggaziv | 49.8848 | 28.7378 | 38.6541 |
| 5 | georginjacob | 18.0934 | 27.2611 | 22.9621 |
| 6 | mfonseca | 11.6091 | 27.4629 | 20.0287 |
| 7 | AlexNet-OrganizerBaseline | 8.6216 | 23.8186 | 16.6924 |

All values are noise-normalized R²: the squared Spearman correlation between the submitted model RDM and the corresponding brain RDM (Early or Late Interval, the two time intervals of interest), expressed as a percentage of the noise ceiling (see here for more details). Score is the average of the Early and Late values, and entries are ranked by Score. AlexNet is the organizer baseline.

Brain Activity and Analysis

Brain Activity Measurement

Current non-invasive techniques to measure brain activity resolve the brain either well in space or in time, but not both. We provide measurements of brain activity while humans viewed a set of images from two techniques: (1) functional magnetic resonance imaging (fMRI) for millimeter spatial resolution and (2) magnetoencephalography (MEG) for millisecond temporal resolution. There is a challenge track for explaining brain data in space (fMRI) and in time (MEG) respectively.
Click here to learn more about fMRI and MEG

Comparison Metric

To compare computational models to human brains we use representational similarity analysis (RSA). The idea is that if models and brains are similar, then they treat the same images as similar or dissimilar. Using RSA allows us to compare human brains and models at the relevant level of representations in spite of the numerous differences between them (e.g. in-silico vs. biological).
Click here to learn about RSA and the comparison metric
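In code, the core of RSA might look like the following numpy sketch. This is our own simplified illustration with random data, not the challenge's pipeline (which ships in the development kit); it builds an RDM from correlation distances and compares two RDMs with a Spearman correlation.

```python
import numpy as np

def compute_rdm(activations):
    """Representational dissimilarity matrix (RDM): 1 - Pearson
    correlation between the response patterns of every image pair.
    activations: (n_images, n_features) array."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (an RDM is symmetric with a zero diagonal, so only the upper
    triangle carries information). Spearman = Pearson on ranks; no
    tie handling, which is fine for continuous dissimilarities."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1]

# Toy example: 10 images, 50 model features, 20 brain "voxels"
rng = np.random.default_rng(0)
model_rdm = compute_rdm(rng.standard_normal((10, 50)))
brain_rdm = compute_rdm(rng.standard_normal((10, 20)))
print(rsa_score(model_rdm, brain_rdm))
```

Because both models and brains are reduced to the same image-by-image dissimilarity structure, the comparison never requires matching model units to voxels or sensors.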

Challenge Data Release

You can submit your model out of the box without any further training. However, we provide training brain data that can help optimize your model for predicting brain data.

Download the training data, test images, and development kit for the Algonauts Project 2019 here

Training Data

The training data consist of brain activity from both fMRI and MEG in response to viewing images from two sets (the 92- and 118-image sets). For each image set, the fMRI and MEG data are collected from the same 15 human subjects. For fMRI, the provided data are based on correlation distances for two brain regions, EVC and IT; for MEG, they are based on correlation distances in two time intervals (one corresponding to early brain responses associated with EVC, the other to later brain responses associated with IT).

Test Data

For the challenge, we provide the 78 test images only. For all test data released post-challenge, please visit the Download page.

Development Kit

The development kit contains sample Python and MATLAB code for generating activations from AlexNet/VGG/ResNet, computing RDMs from these networks, and evaluating model RDMs against human brain data RDMs.
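A typical use of the kit's training RDMs is to decide which layer of a network to submit: compute an RDM per candidate layer, score each against the training brain RDM, and keep the best. A hypothetical numpy sketch (the layer names and sizes echo AlexNet, but the brain RDM and activations here are random placeholders; in practice they would come from the challenge download and a forward pass of a real network):

```python
import numpy as np

def rdm(acts):
    # Dissimilarity = 1 - Pearson correlation between image patterns
    return 1.0 - np.corrcoef(acts)

def spearman_upper(a, b):
    # Spearman correlation of the two RDMs' upper triangles
    iu = np.triu_indices_from(a, k=1)
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(a[iu]), rank(b[iu]))[0, 1]

rng = np.random.default_rng(1)
n_images = 92  # size of Training Set A

# Stand-ins for real data
brain_rdm = rdm(rng.standard_normal((n_images, 30)))
layers = {name: rng.standard_normal((n_images, dim))
          for name, dim in [("conv3", 384), ("conv5", 256), ("fc7", 4096)]}

# Score every candidate layer and pick the best on the training data
scores = {name: spearman_upper(rdm(acts), brain_rdm)
          for name, acts in layers.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

The same loop extends naturally to comparing whole networks or fitted combinations of layers rather than single layers.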

Challenge Rules and Best Practice

1. Participants may use any external data for model building. Participants who use the test image set for training (including brain data generated using the test image set) will be disqualified.

2. Each participant (single researchers or team) can make 10 submissions per day per track for a maximum of 250 submissions.

3. Participants should be ready to upload a short report (up to 4 pages) describing the model-building process for their best model to a preprint server (e.g. bioRxiv or arXiv) or send a PDF by email to algonauts.mit@gmail.com. If you participated in the challenge, use this form to submit the challenge report. Only teams that submit a report via this form will be considered challenge participants. The submission deadline is July 1, 2019 at 11:59pm (UTC-4).

4. PRIZES FOR WINNERS: the top two entries in each track will receive travel reimbursement for one attendee to present their method at the Algonauts Workshop at MIT on July 19-20, 2019. The top entry in each track will also receive a gift.

Citation

Please refer to this page for guidance on citation if you have used any data associated with the Algonauts Project 2019.

We will publish a more extensive paper including the challenge results at a later date. When published, news will be announced on our homepage.

Important Dates:

  • April 1, 2019: Training data (images, fMRI and MEG data) and Development Kit with evaluation scripts released. Test data (images only) released. [Download]
  • July 1, 2019 at 11:59pm (UTC-4): Submission deadline.
  • July 5, 2019: Challenge results released.
  • July 19-20, 2019: Winner(s) are invited to present at the Workshop.

Note: Teams with the best submission results will receive an invitation to the Algonauts Workshop, held at MIT on July 19-20, to give a talk about their method.


Challenge and Workshop Team

  • Radoslaw Cichy (Team Leader): Research Group Leader, Freie Universität Berlin
  • Aude Oliva (Team Leader): Principal Research Scientist, MIT
  • Gemma Roig (Team Leader): Assistant Professor, SUTD
  • Alex Andonian: Research Assistant, MIT
  • Kshitij Dwivedi: PhD Student, SUTD
  • Benjamin Lahner: Research Assistant, MIT
  • Alex Lascelles: Research Assistant, MIT
  • Yalda Mohsenzadeh: Postdoctoral Researcher, MIT
  • Kandan Ramakrishnan: Postdoctoral Researcher, MIT

Event Planners

  • Fern Keniston: Program Coordinator and Assistant to the Directors, MIT
  • Kim Martineau: Communications Officer, MIT
  • Samantha Smiley: Administrative Assistant, MIT
