Beauchamp:DataSharing

From OpenWetWare
|-valign="top"
|width=400px style="padding: 5px; background-color: #ffffff; border: 2px solid #CC6600;" |
=== Stimuli ===
#Speech stimuli used in a number of Beauchamp Lab publications can be downloaded from [[Beauchamp:McGurkStimuli]]
#Color stimuli can be downloaded from [[Beauchamp:100Hue]]
=== Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception ===




The data for Baum, SH, and Beauchamp, MS. "Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception" (PLOS ONE, in press) can be found here:
https://dl.dropboxusercontent.com/u/70081110/BOLDVariabilityData.zip
or here:
http://figshare.com/articles/Greater_BOLD_Variability_in_Older_Compared_with_Younger_Adults_during_Audiovisual_Speech_Perception/1181943


Data were collected on a 3T Philips scanner. The dataset is organized according to the [https://openfmri.org/content/data-organization OpenfMRI organization scheme]. Each subject's folder contains the anonymized high-resolution anatomical images (in the anatomy folder) and the raw .nii files for the localizer scan (BOLD/task001_run001) and task scans (BOLD/task002_run001 and run002). Regressor files in the 3-column FSL format are included for each participant (see the paper for details on small differences between scan series in some participants). Participant demographics are also included. If you have any questions, please contact Sarah Baum (sarah.h.baum@vanderbilt.edu).
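The 3-column FSL regressor format is a plain-text file with one event per row: onset (seconds), duration (seconds), and amplitude. As a minimal sketch (the event timings below are made-up example values, not taken from the actual regressor files in the download), such a file can be parsed with NumPy:

```python
import io

import numpy as np

# Sketch of parsing a 3-column FSL regressor file.
# FSL's format: one event per row -- onset (s), duration (s),
# amplitude. These example values are hypothetical, not from
# the BOLDVariabilityData archive.
example_file = io.StringIO(
    "0.0\t2.0\t1.0\n"
    "10.0\t2.0\t1.0\n"
    "20.0\t2.0\t1.0\n"
)

events = np.loadtxt(example_file)        # shape: (n_events, 3)
onsets, durations, amplitudes = events.T
print(events.shape)                      # (3, 3)
```

To read the files shipped with the dataset, pass the file path to `np.loadtxt` instead of the in-memory example.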


=== Causal inference of asynchronous audiovisual speech ===
Data from [http://journal.frontiersin.org/Journal/10.3389/fpsyg.2013.00798/abstract Magnotti, Ma, & Beauchamp (2013)] may be downloaded here:
# [[ Media:Exp1_count_matrix.csv | Experiment 1 ]]
# [[ Media:Exp2_count_matrix.csv | Experiment 2 ]]
 
The data are stored as a vector of counts for each subject. Each row is one subject, and the trial types are described in the first (header) row. There are 15 levels of asynchrony, 2 levels of visual reliability, and 2 levels of visual intelligibility (60 columns in total). Experiment 1 had 12 trials per trial type (3 blocks of 4 trials each) and Experiment 2 had 4 trials per trial type (1 block).
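Each subject's 60-column row can be unpacked back into the 15 × 2 × 2 factorial design. The sketch below uses synthetic counts, and the column ordering assumed by the reshape is a guess that should be checked against the header row of the actual CSV files:

```python
import numpy as np

# Sketch of unpacking a subjects-by-60 count matrix into the
# factorial design: 15 asynchrony x 2 reliability x 2
# intelligibility levels. Counts are synthetic, and the column
# ordering is an assumption -- verify against the CSV header.
n_subjects = 4
rng = np.random.default_rng(0)
counts = rng.integers(0, 13, size=(n_subjects, 60))  # Exp 1: at most 12 per cell

cube = counts.reshape(n_subjects, 15, 2, 2)

# With 12 trials per trial type in Experiment 1, response
# proportions are counts divided by 12.
proportions = cube / 12.0
print(cube.shape)  # (4, 15, 2, 2)
```

For the real data, replace the synthetic `counts` with the numeric rows of Exp1_count_matrix.csv or Exp2_count_matrix.csv (dividing by 4 rather than 12 for Experiment 2).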
 
For model fitting of these data, please see the [[Beauchamp:CIMS | CIMS Model Page]].
 
 
=== Modeling McGurk perception across multiple McGurk stimuli ===
Please see the full model building page at the [[Beauchamp:NED | NED Model Page]].

Revision as of 07:37, 27 October 2015
