Beauchamp:DataSharing
{{Beauchamp Navigation Bar}}


[[Image:88x31.png|50 px|link=https://creativecommons.org/publicdomain/zero/1.0/]]
[https://creativecommons.org/publicdomain/zero/1.0/ To the extent possible under law, we waive all copyright and related or neighboring rights to the materials on this page.] (Of course, we appreciate citations to the relevant papers.)


===Penn McGurk Battery===
To make it easier to study the McGurk effect, the Beauchamp Lab has released the [[Beauchamp:PennMcGurkBattery|Penn McGurk Battery]]. See that page for more information.


=== Stimuli ===
#Speech stimuli used in a number of Beauchamp Lab publications can be downloaded from [[Beauchamp:Stimuli]]
#Stimuli for localizing color-selective brain regions can be downloaded from [[Beauchamp:100Hue]]
 
===R Analysis and Visualization of ECOG Data (RAVE)===
RAVE is a powerful software tool for the analysis of electrocorticography (ECOG) data.
A beta version of RAVE, loaded with a sample dataset, is available on a public server. More information is available at [[Beauchamp:RAVE]].
 
===Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and speech-in-noise: A causal inference explanation===
Data from Magnotti JF, Dzeda KB, Wegner-Clemens K, Rennig J, & Beauchamp MS, "Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and speech-in-noise: A causal inference explanation":
# [[ Media:mcgurk20A_data.zip | Single archive containing all data ]]
===Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception===
The data for Baum, SH, and Beauchamp, MS, "Greater BOLD Variability in Older Compared with Younger Adults during Audiovisual Speech Perception" (PLOS ONE, in press) can be found here:
https://dl.dropboxusercontent.com/u/70081110/BOLDVariabilityData.zip
or here:
http://figshare.com/articles/Greater_BOLD_Variability_in_Older_Compared_with_Younger_Adults_during_Audiovisual_Speech_Perception/1181943
 
The dataset is organized according to the [https://openfmri.org/content/data-organization OpenfMRI organization scheme]. Each subject's folder contains the anonymized high-resolution anatomical images (under the anatomy folder) and the raw .nii files for the localizer scan (BOLD/task001_run001) and task scans (BOLD/task002_run001 and run002). Regressor files in the three-column FSL format are included for each participant (see the paper for details on small differences between scan series in some participants). Information on the demographics of the participants is also included. If you have any questions, please contact Sarah Baum (sarah.h.baum@vanderbilt.edu).
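As a quick orientation to this layout, here is a minimal Python sketch (using nibabel and numpy) of loading one subject's files. The specific file names below (sub001, highres001.nii.gz, bold.nii.gz, cond001.txt) are assumptions based on the standard OpenfMRI convention, not confirmed contents of this archive; check the unpacked folders for the actual names.

<syntaxhighlight lang="python">
# Minimal sketch of loading one subject from the archive. File names follow
# the usual OpenfMRI layout and are assumptions -- verify against the archive.
import numpy as np
import nibabel as nib  # pip install nibabel

sub = "sub001"

# Anonymized high-resolution anatomical image (anatomy folder).
anat = nib.load(f"{sub}/anatomy/highres001.nii.gz")

# Raw 4-D functional data: task001_run001 is the localizer,
# task002_run001 and task002_run002 are the task scans.
bold = nib.load(f"{sub}/BOLD/task002_run001/bold.nii.gz")
print(anat.shape, bold.shape)  # (x, y, z) and (x, y, z, time)

# FSL three-column regressor: one event per row, whitespace-delimited as
# onset (s), duration (s), amplitude. np.loadtxt reads it directly.
events = np.loadtxt(f"{sub}/model/model001/onsets/task002_run001/cond001.txt",
                    ndmin=2)
onsets, durations, amplitudes = events.T
</syntaxhighlight>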
 
 
 
=== Causal inference of asynchronous audiovisual speech ===
Data from [http://journal.frontiersin.org/Journal/10.3389/fpsyg.2013.00798/abstract Magnotti, Ma, & Beauchamp (2013)] may be downloaded here:
# [[ Media:Exp1_count_matrix.csv | Experiment 1 ]]
# [[ Media:Exp2_count_matrix.csv | Experiment 2 ]]
 
The data are stored as a vector of counts for each subject; each row is one subject, and the trial types are described in the header row. There are 15 levels of asynchrony, 2 levels of visual reliability, and 2 levels of visual intelligibility (60 columns in total). Experiment 1 had 12 trials per trial type (3 blocks of 4 trials each) and Experiment 2 had 4 trials per trial type (1 block).
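For readers loading these files, here is a minimal Python sketch of reading a count matrix and reshaping it into the 15 × 2 × 2 condition grid. The factor ordering assumed in the reshape is a guess, not something the page specifies; verify it against the header row, which names each trial type.

<syntaxhighlight lang="python">
# Minimal sketch of reading a count matrix. The reshape order (asynchrony
# slowest, intelligibility fastest) is an assumption -- check the CSV header.
import numpy as np
import pandas as pd

counts = pd.read_csv("Exp1_count_matrix.csv")  # one row per subject, 60 columns
print(counts.shape)

# 60 columns = 15 asynchrony levels x 2 visual reliability x 2 intelligibility.
arr = counts.to_numpy().reshape(-1, 15, 2, 2)

# Convert counts to response proportions: Experiment 1 had 12 trials per
# trial type, Experiment 2 had 4.
prop = arr / 12.0
print(prop[0])  # first subject's 15 x 2 x 2 proportion grid
</syntaxhighlight>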
 
For our initial model fitting of these data, please see the [[Beauchamp:CIMS | CIMS Model Page]].
 
[https://www.dropbox.com/s/d6k9ux9d23fb6ck/15async_2blur.zip?dl=1 The stimuli we used] may be downloaded from Dropbox (240 stimuli in total). The original videos came from David Pisoni (Hoosier multitalker dataset); we added visual blur to them.
 
=== Modeling McGurk perception across multiple McGurk stimuli ===
Please see the full model building page at [[Beauchamp:NED | NED Model Page]].
 
=== Causal Inference of the McGurk Effect ===
Please see the full data sharing and model page at [[Beauchamp:CIMS_McGurk | Causal Inference of McGurk Page]].
 
=== Cross-modal Suppression ===
# Data and code for analyzing behavioral data from Karas et al.: [[Media:Speech_in_noise_karas_et_al.zip|Zipped Archive File]]
 
=== Materials for IMRF 2017 workshop ===
# Material for [[Media:part2_mturk.zip|Part 2 MTurk]]
# Material for [[Media:part3_modeling.zip|Part 3 Modeling]]
