To the extent possible under law, we waive all copyright and related or neighboring rights to the materials on this page. (Of course, we appreciate citations to the relevant papers.)
Penn McGurk Battery
To make it easier to study the McGurk effect, the Beauchamp Lab has released the Penn McGurk Battery. Click here for more information.
- Speech stimuli used in a number of Beauchamp Lab publications can be downloaded from Beauchamp:Stimuli
- Stimuli for localizing color-selective brain regions can be downloaded from Beauchamp:100Hue
R Analysis and Visualization of ECOG Data (RAVE)
RAVE is a powerful software tool for the analysis of electrocorticography (ECOG) data. Click here to use a beta-version of RAVE on a public server with a sample dataset. More information is available at Beauchamp:RAVE.
Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and speech-in-noise: A causal inference explanation
Data from Magnotti JF, Dzeda KB, Wegner-Clemens K, Rennig J, & Beauchamp MS. Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and speech-in-noise: A causal inference explanation
Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception
The data for Baum, SH, and Beauchamp, MS. "Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception" (PLOS ONE, in press) can be found at https://dl.dropboxusercontent.com/u/70081110/BOLDVariabilityData.zip or at http://figshare.com/articles/Greater_BOLD_Variability_in_Older_Compared_with_Younger_Adults_during_Audiovisual_Speech_Perception/1181943
The dataset is organized according to the OpenfMRI organization scheme. Each subject's folder contains the anonymized high-resolution anatomical images (in the anatomy folder) and the raw .nii files for the localizer scan (BOLD/task001_run001) and the task scans (BOLD/task002_run001 and BOLD/task002_run002). Regressor files in the three-column FSL format are included for each participant (see the paper for details on small differences between scan series in some participants). Participant demographic information is also included. If you have any questions, please contact Sarah Baum (firstname.lastname@example.org).
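As a rough illustration of the folder layout described above, the sketch below builds the expected file paths for one subject. The exact file names (e.g. highres001.nii.gz, bold.nii.gz) are assumptions based on OpenfMRI conventions, not guaranteed to match this dataset; check the downloaded archive itself.

```python
from pathlib import Path

def expected_files(root, subject):
    """Sketch the per-subject layout described above.
    File names (highres001.nii.gz, bold.nii.gz) are illustrative
    assumptions following OpenfMRI conventions."""
    sub = Path(root) / subject
    # anonymized high-resolution anatomical image
    files = [sub / "anatomy" / "highres001.nii.gz"]
    # localizer scan (task001, one run) and task scans (task002, two runs)
    for task, n_runs in [("task001", 1), ("task002", 2)]:
        for run in range(1, n_runs + 1):
            files.append(sub / "BOLD" / f"{task}_run{run:03d}" / "bold.nii.gz")
    return files

for f in expected_files("/data", "sub001"):
    print(f.as_posix())
```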
Causal inference of asynchronous audiovisual speech
Data from Magnotti, Ma, & Beauchamp (2013) may be downloaded here
The data are stored as a vector of counts per subject: each row is one subject, and the trial types are described in the first row. There are 15 levels of asynchrony, 2 levels of visual reliability, and 2 levels of visual intelligibility (60 columns in total). Experiment 1 had 12 trials per trial type (3 blocks of 4 trials each); Experiment 2 had 4 trials per trial type (1 block).
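A minimal sketch of unpacking one subject's 60-column row into the 15 × 2 × 2 design. The column ordering below (asynchrony varying fastest, then reliability, then intelligibility) is an assumption for illustration; the actual ordering is given in the file's first row.

```python
import numpy as np

# Placeholder counts for one subject; real data would be one row of the file.
row = np.arange(60)

# Assumed ordering: asynchrony varies fastest within reliability
# within intelligibility. Verify against the header row before use.
counts = row.reshape(2, 2, 15)  # (intelligibility, reliability, asynchrony)

# With 12 trials per trial type (Experiment 1), counts can be
# converted to response proportions:
proportions = counts / 12.0
print(counts.shape)
```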
For our initial model fitting of these data, please see the CIMS Model Page
The stimuli we used may be downloaded from Dropbox (240 total). The original videos come from David Pisoni's Hoosier multitalker dataset; we added visual blur to them.
Modeling McGurk perception across multiple McGurk stimuli
Please see the full model building page at NED Model Page
Causal Inference of the McGurk Effect
Please see the full data sharing and model page at Causal Inference of McGurk Page
- Data and code for analyzing behavioral data from Karas et al: Zipped Archive File