Reliability and Generalizability of Similarity-Based Fusion of MEG and fMRI Data in Human Ventral and Dorsal Visual Streams

Yalda Mohsenzadeh* (Western University), Caitlin Mullin* (York University), Benjamin Lahner (MIT), Radoslaw Martin Cichy (Freie Universität Berlin), Aude Oliva (MIT)


[paper] [journal link]

Abstract

To build a representation of what we see, the human brain recruits regions throughout the visual cortex in cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, here we present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and different visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results attest to the reproducibility of the fMRI-MEG fusion approach and allow these spatiotemporal dynamics to be interpreted in a broader context.

*Links to download the fMRI data, MEG data, and stimuli set used in this paper, as well as the Algonauts Project 2019, are below*
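For readers who want the mechanics: the fusion correlates the MEG representational dissimilarity matrix (RDM) at each time point with the RDM of an fMRI region of interest or searchlight. Below is a minimal Python sketch with random placeholder data; the array shapes and variable names are illustrative, not the format of the released files.

```python
# Minimal sketch of similarity-based MEG-fMRI fusion (RSA).
# All data below are random placeholders with illustrative shapes.
import numpy as np
from scipy.stats import spearmanr

n_images, n_timepoints = 156, 120
rng = np.random.default_rng(0)

# Hypothetical inputs: one MEG RDM per time point, one fMRI RDM per ROI.
meg_rdms = rng.random((n_timepoints, n_images, n_images))
fmri_rdm = rng.random((n_images, n_images))

# RDMs are symmetric with a zero diagonal, so compare lower triangles only.
tri = np.tril_indices(n_images, k=-1)
fmri_vec = fmri_rdm[tri]

# Fusion time course: Spearman correlation of the fMRI RDM with the
# MEG RDM at every time point.
fusion = np.array([spearmanr(meg_rdms[t][tri], fmri_vec)[0]
                   for t in range(n_timepoints)])
print(fusion.shape)  # (120,): one fusion value per MEG time point
```

Repeating this for every searchlight location yields a movie of where and when the fMRI and MEG representations agree, which is what the fusion videos below visualize.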

MEG-fMRI Fusion Shows the Progression of Visual Perception in the Dorsal and Ventral Streams

We recorded MEG and fMRI data while 15 participants viewed a set of 156 natural images (see all 156 stimuli at the bottom of the page). These images can be subdivided into five categories - faces, bodies, animals, objects, and scenes - or into two twinsets of 78 images each. Twinsets 1 and 2 consist of distinct images that depict the same exemplars (e.g., each twinset contains its own unique image of a giraffe).

Stimuli divided into categories

Stimuli divided into twinsets

MEG-fMRI fusion on all 156 images showing the cascading processing of visual recognition along the ventral and dorsal streams.


3D Fusion Made by Benjamin Lahner
Download the 3D fusion video.

The same 156 image fusion overlaid on axial slices of a T1-weighted brain.

Download the 156 image fusion video.

MEG-fMRI Fusion Results Are Replicated When the Same Subjects View Two Sets of Semantically Similar Images, Demonstrating Reliability

All stimuli in the two twinsets are distinct images, but they come in semantically similar pairs. For example, both twinsets contain different images of a sandwich.

Stimuli divided into twinsets

We applied the fusion technique on twinset 1 and twinset 2 images to demonstrate the reliability of the fusion method in capturing the spatio-temporal dynamics of information flow in the ventral and dorsal streams.

Twinset 1 and Twinset 2 axial slice fusion movie


Download the Twinset 1 and 2 Fusion Video.
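Conceptually, this reliability check restricts the fusion to each twinset's 78 x 78 sub-RDM and compares the two resulting time courses. A hedged sketch with random placeholder RDMs (the real twinset assignment ships with the stimuli set):

```python
# Sketch of the twinset reliability check with placeholder data.
import numpy as np
from scipy.stats import spearmanr, pearsonr

n_images, n_timepoints = 156, 120
rng = np.random.default_rng(0)
meg_rdms = rng.random((n_timepoints, n_images, n_images))
fmri_rdm = rng.random((n_images, n_images))

# Hypothetical twinset assignment; the real one comes with the stimuli.
twinset1, twinset2 = np.arange(0, 156, 2), np.arange(1, 156, 2)

def fusion_timecourse(meg, fmri, idx):
    """Run the fusion on the sub-RDMs of the given stimulus indices."""
    sub_meg = meg[:, idx][:, :, idx]     # (time, 78, 78)
    sub_fmri = fmri[idx][:, idx]         # (78, 78)
    tri = np.tril_indices(len(idx), k=-1)
    return np.array([spearmanr(m[tri], sub_fmri[tri])[0] for m in sub_meg])

tc1 = fusion_timecourse(meg_rdms, fmri_rdm, twinset1)
tc2 = fusion_timecourse(meg_rdms, fmri_rdm, twinset2)
r, p = pearsonr(tc1, tc2)  # agreement between the two independent fusions
print(f"twinset agreement: r = {r:.2f}")
```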

MEG-fMRI Fusion Results Are Highly Similar Across Subject Groups Viewing Different Images, Demonstrating Generalizability

The above 156 image fusion results were compared with fusion results from an independent study by Cichy et al. (2016). In Cichy et al.'s experiment, 15 different subjects viewed a total of 118 natural images of objects. Visit Cichy et al.'s Project Page to download the 118 image stimuli set, MEG data, and fMRI data, or to learn more about the experiment.

156 Image (current study) Stimuli Set

156 image stimuli set

118 Image (Cichy et al.) Stimuli Set

118 image stimuli set

Analyses of multiple regions of interest in the ventral and dorsal streams revealed no significant differences between these two fusion movies. This shows that MEG-fMRI fusion can generalize across subjects and stimuli sets.
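One simple way to quantify such cross-study agreement is to interpolate each study's ROI fusion time course onto a shared time axis and correlate them. The sketch below uses synthetic placeholder curves and hypothetical epoch windows, not the published statistics:

```python
# Hedged sketch: compare ROI fusion time courses from two studies that
# used different stimuli, by aligning them to a common time axis.
# Curves below are synthetic placeholders, not the published results.
import numpy as np

t_156 = np.linspace(-200, 1000, 601)   # ms, hypothetical epoch
t_118 = np.linspace(-100, 900, 501)
tc_156 = np.exp(-((t_156 - 120) / 80) ** 2)   # placeholder EVC-like curve
tc_118 = np.exp(-((t_118 - 130) / 85) ** 2)

# Interpolate both onto the overlapping time window.
t_common = np.linspace(-100, 900, 401)
a = np.interp(t_common, t_156, tc_156)
b = np.interp(t_common, t_118, tc_118)
print("cross-study correlation:", round(np.corrcoef(a, b)[0, 1], 3))
```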

156 Image (current study) Fusion Movie

118 Image (Cichy et al.) Fusion Movie

Download the 156 image fusion video.

Download the 118 image fusion video.

Visit the fusion project page.

Download Links: fMRI Data, MEG Data, Visual Stimuli, and Algonauts 2019
(contact MIT Oliva Lab for access)

fMRI Data
Beta Maps Per Subject

MEG Data
MEG Epoched Data Per Subject
MEG RDMs Per Subject

Stimuli Set
156 Image Stimuli Set

Algonauts 2019
Algonauts 2019 Website
Challenge RDMs
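As orientation for working with the downloads: an RDM can be computed from per-subject beta maps as one minus the Pearson correlation between condition patterns. A generic sketch with random placeholder betas (it does not assume the format of the released files):

```python
# Generic sketch: condition-by-condition RDM from fMRI beta patterns,
# using correlation distance. Betas below are random placeholders.
import numpy as np

n_images, n_voxels = 156, 5000
rng = np.random.default_rng(1)
betas = rng.standard_normal((n_images, n_voxels))  # one pattern per image

rdm = 1.0 - np.corrcoef(betas)      # (156, 156), ~0 on the diagonal
assert np.allclose(np.diag(rdm), 0.0)
print(rdm.shape)
```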

Reference

Reliability and Generalizability of Similarity-Based Fusion of MEG and fMRI Data in Human Ventral and Dorsal Visual Streams
Yalda Mohsenzadeh*, Caitlin Mullin*, Benjamin Lahner, Radoslaw Martin Cichy, and Aude Oliva
Vision 2019, 3, 8; doi:10.3390/vision3010008
[paper] [journal link]

Similarity-Based Fusion of MEG and fMRI Reveals Spatio-Temporal Dynamics in Human Cortex During Visual Object Recognition
Radoslaw M. Cichy, Dimitrios Pantazis, and Aude Oliva
Cerebral Cortex, 2016; doi: 10.1093/cercor/bhw135
[paper] [journal link] [supplementary materials]

All 156 Stimuli Divided into the Two Twinsets

All stimuli and twinsets

All 156 stimuli and their two twinsets. The images in each pair are different yet depict semantically similar high-level concepts. The two twinsets show no significant differences in low-level image features.
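A check of this kind can be as simple as comparing basic image statistics between the twinsets, e.g., mean luminance and RMS contrast with an independent-samples t-test. A hedged sketch with random placeholder images (the feature set actually used in the paper may differ):

```python
# Sketch of a low-level feature comparison between the two twinsets.
# Images are random placeholders; real stimuli would be loaded from disk.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
twinset1 = rng.random((78, 64, 64))   # 78 grayscale placeholder images
twinset2 = rng.random((78, 64, 64))

lum1, lum2 = twinset1.mean(axis=(1, 2)), twinset2.mean(axis=(1, 2))
con1, con2 = twinset1.std(axis=(1, 2)), twinset2.std(axis=(1, 2))

print("mean luminance:", ttest_ind(lum1, lum2))
print("RMS contrast:  ", ttest_ind(con1, con2))
```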

Acknowledgements

This research was funded by NSF grant number 1532591 in Neural and Cognitive Systems, by the Vannevar Bush Faculty Fellowship program through ONR grant number N00014-16-1-3116 (to A.O.), and by DFG Emmy Noether grant CI 241/1-1 (to R.C.). The experiments were conducted at the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research, Massachusetts Institute of Technology. The authors would like to thank Dimitrios Pantazis for helpful discussions.