Beauchamp:Stimuli

Beauchamp Lab

To the extent possible under law, we waive all copyright and related or neighboring rights to the materials on this page. (Of course, we appreciate citations to the relevant papers.)


Software

To view the videos, we recommend the free VLC media player:

 https://www.videolan.org/vlc/

Stimuli from Beauchamp et al. (2003)

Point-light movies from:

  1. Beauchamp, M.S., Lee, K.E., Haxby, J.V., and Martin, A.: FMRI responses to video and point-light displays of moving humans and manipulable objects. J Cogn Neurosci 15: 991-1001, 2003. Click here to download the PDF

Click here to download an archive of the point light displays of tools.

Click here to download an archive of the point light displays of actions.

Stimuli from Magnotti et al. (2024)

Click here to download an archive of the stimuli


Sample Word Stimuli

Click here to download an archive of the stimuli

Stimuli from Zhang et al. (2023)

Word speech stimuli (-8 dB) from:

Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage. 2023 Sep;278:120271. Click here for the PDF. Click here for the journal full text.


Click here to download NoisyWords_Part1
Click here to download NoisyWords_Part2
Click here to download NoisyWords_Part3
Click here to download NoisyWords_Part4
Click here to download NoisyWords_Part5
Click here to download NoisyWords_Part6
Click here to download NoisyWords_Part7
Click here to download NoisyWords_Part8
Click here to download NoisyWords_Part9
Click here to download NoisyWords_Part10
Click here to download NoisyWords_Part11
Click here to download NoisyWords_Part12
Click here to download NoisyWords_Part13
Click here to download NoisyWords_Part14
Click here to download NoisyWords_Part15
Click here to download NoisyWords_Part16
Click here to download NoisyWords_Part17
Click here to download NoisyWords_Part18
Click here to download NoisyWords_Part19
Click here to download NoisyWords_Part20
Click here to download NoisyWords_Part21
Click here to download NoisyWords_Part22

Stimuli from Magnotti et al. (2020)

Speech stimuli from:

Magnotti, JF, Dzeda, KB, Wegner-Clemens, K, Rennig, J, Beauchamp MS. Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: a causal inference explanation. Cortex 133 (December 2020) 371-383. Click on the DOI for the journal full text: https://doi.org/10.1016/j.cortex.2020.10.002 Click here for the PDF. Click here for the preprint.

Click here to download an archive of the stimuli

Stimuli from Karas et al. (2019)

Speech stimuli from:

Karas, PJ, Magnotti, JF, Metzger, BA, Zhu, LL, Smith, KB, Yoshor D, Beauchamp MS. The visual speech head start improves perception and reduces superior temporal cortex responses to auditory speech. eLife 2019;8:e48116 DOI: 10.7554/eLife.48116 Click here to download the PDF. Click here for the journal full text Click here for the BioRxiv preprint. Click here for an independent replication.

Click here to download an archive of the stimuli

Stimuli from Ozker et al.

Speech stimuli from:


Ozker M, Yoshor D, Beauchamp MS. Converging Evidence from Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing. Frontiers in Human Neuroscience, 24 April 2018 doi: 10.3389/fnhum.2018.00141 Click here to download the PDF. Click here to see the preprint on BioRxiv.


Ozker M, Yoshor D, Beauchamp MS. Frontal Cortex Selects Representations of the Talker's Mouth to Aid in Speech Perception. eLife 2018;7:e30387 DOI: 10.7554/eLife.30387 Click here to download the PDF. Click here for the journal full text


Ozker M, Schepers IM, Magnotti JF, Yoshor D, Beauchamp MS. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography. Journal of Cognitive Neuroscience. June 2017, 29:6, pp. 1044–1060. doi:10.1162/jocn_a_01110. Click here to download the PDF.

The stimuli are adapted from the Hoosier Audiovisual Multi-Talker Database (Sheffert, Lachs & Hernandez, 1996).

Click here to download an archive of the stimuli

Stimuli from Rennig & Beauchamp (2018)

In the first fMRI experiment, each 2-s movie consisted of a talker saying a single syllable. Each participant viewed 240 movies, equally distributed across six different types: three syllables (AbaVba, AgaVga, AbaVga) × two talkers (one male and one female).

In the second fMRI experiment, participants viewed long blocks (20 s) of auditory, visual, or audiovisual speech, with a single female talker reading Aesop's fables. Two runs were collected; each run contained 9 blocks (three each of auditory, visual, and audiovisual stimulation), with each block consisting of 20 s of stimulation followed by 10 s of fixation baseline.
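
The design described above can be laid out in a few lines of code. Below is an illustrative Python sketch (not the authors' stimulus-presentation code) that enumerates the six movie types of the first experiment, the per-type repetition count, and the timing of a run in the second experiment.

 # Illustrative only: enumerate the design of Experiment 1 and the run timing of Experiment 2.
 from itertools import product
 
 syllables = ["AbaVba", "AgaVga", "AbaVga"]      # two congruent movies and one McGurk movie
 talkers = ["male", "female"]
 
 conditions = list(product(syllables, talkers))  # 3 syllables x 2 talkers = 6 movie types
 repetitions = 240 // len(conditions)            # 240 movies per participant -> 40 of each type
 
 run_duration_s = 9 * (20 + 10)                  # Experiment 2: 9 blocks of 20 s stimulation + 10 s fixation
 
 print(conditions)
 print(repetitions, "repetitions per movie type;", run_duration_s, "s per Experiment 2 run")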


Click here to download Experiment1 stimuli
Click here to download Experiment2 stimuli_part1
Click here to download Experiment2 stimuli_part2
Click here to download Experiment2 stimuli_part3

From Rennig J and Beauchamp MS. Free viewing of talking faces reveals mouth and eye preferring regions of the human superior temporal sulcus. Neuroimage. 2018;183:25-36. doi: 10.1016/j.neuroimage.2018.08.008. Click here to download the PDF.

McGurk and Control Audiovisual Speech Syllables

The McGurk-MacDonald effect is an audiovisual illusion. McGurk stimuli consist of an auditory syllable paired with an incongruent visual syllable that together produce the percept of a third syllable (e.g. auditory "ba" + visual "ga" perceived as "da"). Other incongruent pairings (e.g. auditory "ga" + visual "ba") do not produce the effect. The archive contains movies that demonstrate the effect. If you use the stimuli in a scientific publication, please cite the appropriate reference in Beauchamp:Publications. Note that some of the videos in the archive require a particular video codec that is included in the free VLC player, available from http://www.videolan.org/vlc/


Click here to download archive of different McGurk and control syllable stimuli from Nath, AR and Beauchamp, MS. A Neural Basis for Interindividual Differences in the McGurk Effect, a Multisensory Speech Illusion. Neuroimage. 2012 59: 781-787. Click here to download the PDF.

To see the stimuli without downloading the archive, you may view them on YouTube

  1. Auditory "ba" + Visual "ga" --> AV "da" http://www.youtube.com/watch?v=WK3T7LWIkP8
  2. Auditory "pa" + Visual "ka" --> AV "ta" http://www.youtube.com/watch?v=An5vvn-gcwA


Click here to download an archive of syllable stimuli and stimulus orderings from Nath, AR, Fava EE and Beauchamp, MS. Neural Correlates of Interindividual Differences in Children's Audiovisual Speech Perception. Journal of Neuroscience. 2011 Sept 28;31(39):13963-13971. Click here to download the PDF.

Stimuli and notes from Basu Mallick et al

Here are the McGurk and control stimuli from Experiment 2 in Basu Mallick D, Magnotti JF, Beauchamp MS. Variability and stability in the McGurk effect: contributions of participants, stimuli, time, and response type. Psychonomic Bulletin and Review. (2015) 22:1299–1307. DOI 10.3758/s13423-015-0817-4 Click here to download the PDF.

GitHub repository with scripts for the tasks

Written participant instructions for the task

 You will see videos and hear audio clips of a person saying syllables. Please watch the screen at all times. After each video, press a button to indicate what the person said. If you are not sure, take your best guess. The buttons are inactive while the video is playing. Play the demo video to ensure that you can hear the actor and see the actor's entire face clearly. You can play the demo video multiple times. Press the "Begin Task" button to start...

Localizers

AV recordings of Aesop's Fables used as localizer stimuli from Nath et al., 2012.

  1. https://www.dropbox.com/s/olhfw8vfcueyp8j/AesopsFables.zip?dl=0

Please contact MSB if the link is broken.

Single-syllable audiovisual words

Please cite

  1. Nath, AR and Beauchamp, MS. A Neural Basis for Interindividual Differences in the McGurk Effect, a Multisensory Speech Illusion. Neuroimage. 2012 59:781-787.
  2. Nath, AR, Fava EE and Beauchamp, MS. Neural Correlates of Interindividual Differences in Children's Audiovisual Speech Perception. Journal of Neuroscience. 2011 Sept 28;31(39):13963-13971.

The following is from the methods section of Nath and Beauchamp (2012):

Word stimuli for the localizer were selected from two hundred single-syllable words from the MRC Psycholinguistic Database with Brown verbal frequency of 20 to 200, imageability rating greater than 100, age of acquisition less than 7 years and Kucera-Francis written frequency greater than 80 (Wilson, M., 1988. The MRC Psycholinguistic Database: Machine Readable Dictionary, Version 2. Behav. Res. Methods Instrum. Comput. 20, 6–11.). The duration of the words ranged from 0.5 to 0.7 seconds. The total length of each video clip ranged from 1.1 to 1.8 seconds in order to start and end each video with the speaker in a neutral, mouth-closed position and to include all mouth movements from mouth opening to closing.
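
For readers who want to reproduce this selection, here is a hypothetical Python sketch of the criteria quoted above. It assumes a CSV export of the MRC Psycholinguistic Database with the column names used here (word, brown_freq, imageability, aoa_years, kf_freq, n_syllables) and a file named mrc_export.csv; these names are illustrative assumptions, not the database's actual field codes.

 # Hypothetical sketch of the word-selection criteria; column names are assumptions.
 import csv
 
 def meets_criteria(row):
     return (20 <= float(row["brown_freq"]) <= 200
             and float(row["imageability"]) > 100
             and float(row["aoa_years"]) < 7
             and float(row["kf_freq"]) > 80
             and int(row["n_syllables"]) == 1)
 
 with open("mrc_export.csv", newline="") as f:
     candidates = [row["word"] for row in csv.DictReader(f) if meets_criteria(row)]
 
 print(len(candidates), "candidate single-syllable words")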

Here is a link to a ZIP file containing all of the words. Please contact MSB if the link is broken. https://www.dropbox.com/s/p45pcizxumr1n8v/MRC_Words_1_syl_words.zip?dl=0

Unisensory Visual Eye and Mouth Stimuli from Zhu et al. (2017)

Here are the eye and mouth movement stimuli from Experiment 1 in Zhu LL, Beauchamp MS. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus. Journal of Neuroscience 8 March 2017, 37 (10) 2697-2708; DOI: https://doi.org/10.1523/JNEUROSCI.2914-16.2017. Click here to download the PDF. The stimuli were created using the H264 codec.

Unisensory Stimuli

Auditory stimuli

The stimuli were presented with a grey background on the monitor. If you would like to use any of the following stimuli, please cite: Basu Mallick D, Magnotti JF, Beauchamp MS. Variability and stability in the McGurk effect: contributions of participants, stimuli, time, and response type. Psychonomic Bulletin and Review. (2015) 22:1299–1307.

Male Speaker

Female Speaker

McGurk Stimuli Available on YouTube

McGurk stimulus: Auditory "ba" + Visual "ga" --> AV "da" http://www.youtube.com/watch?v=WK3T7LWIkP8

McGurk stimulus: Auditory "pa" + Visual "ka" --> AV "ta" http://www.youtube.com/watch?v=An5vvn-gcwA

Bearded guy: http://www.youtube.com/watch?v=aFPtc8BVdJk

Brain Rules Book: http://www.youtube.com/watch?v=I1XWDOwH47Y&NR=1

German guy: http://www.youtube.com/watch?v=rIWrnJH2jAY

Sentence level McGurk guy: http://www.youtube.com/watch?v=DsdyE491KcM

BBC McGurk guy: http://www.youtube.com/watch?v=ypd5txtGdGw&feature=more_related

Curly-haired girl: http://www.youtube.com/watch?v=9aAmTNdkEPw&feature=related

Shelves guy: http://www.youtube.com/watch?v=lo-iBiEiLxs&feature=related

Brian Sawyer: http://www.youtube.com/watch?v=DPlxRtdzLIA&feature=more_related

Blue and Yellow Stripe Guy: http://www.youtube.com/watch?v=0bgY6AxeBeU&NR=1

Dark-haired bearded guy: http://www.youtube.com/watch?v=5Lq26mgFpOc&feature=more_related

Guy with Glasses - words: http://www.youtube.com/watch?v=BgDhafI9n1I&feature=related

Visual Tactile Stimuli

These stimuli were taken from the directory

 /Users/beauchamplab/Dropbox/2015/UT_iMac/Work/HandAnimationStimuli

Notes on Stimulus Creation

When shown on a video projector (such as when giving a talk), some of Audrey's stimuli are very dark, so that the mouth is invisible. To fix this, use iMovie to brighten the clips. The following parameters were used:

 exposure: 156%   brightness: 53%   contrast: 57%   saturation: 200%

Alternatively, use the Auto button in iMovie, which sets the following levels:

 Levels (left slider) = 0   Levels (right slider) = 70%
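
If you prefer a scriptable alternative to iMovie, the sketch below calls ffmpeg's eq filter from Python. This is not part of the lab's workflow described above, and the filter values are illustrative starting points rather than a conversion of the iMovie percentages; adjust them by eye for your projector.

 # Not the lab's method: brighten a dark clip with ffmpeg's eq filter (values are illustrative).
 import subprocess
 
 def brighten(src, dst, brightness=0.15, contrast=1.1, saturation=1.5):
     """Re-encode a clip with boosted brightness, contrast, and saturation."""
     vf = f"eq=brightness={brightness}:contrast={contrast}:saturation={saturation}"
     subprocess.run(
         ["ffmpeg", "-y", "-i", src, "-vf", vf, "-c:a", "copy", dst],
         check=True,
     )
 
 brighten("dark_stimulus.mov", "bright_stimulus.mov")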


The new stimuli we recorded were edited in Final Cut Pro and cut to 2 seconds in length, with the onset of the sound at the midpoint. They were exported as .mov files.
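
The same trimming rule can also be expressed outside Final Cut Pro. The sketch below is an illustrative Python/ffmpeg version (not the original workflow): it extracts a 2-second clip with the sound onset at the 1-second midpoint, given an onset time that you supply by hand; the file names are placeholders.

 # Illustrative only: extract a 2 s clip centered on a manually supplied audio onset.
 import subprocess
 
 def trim_around_onset(src, dst, onset_s, clip_len_s=2.0):
     """Extract clip_len_s seconds of video centered on the audio onset."""
     start = max(onset_s - clip_len_s / 2, 0.0)
     subprocess.run(
         ["ffmpeg", "-y", "-ss", f"{start:.3f}", "-i", src,
          "-t", f"{clip_len_s:.3f}", "-c:v", "libx264", "-c:a", "aac", dst],
         check=True,
     )
 
 trim_around_onset("raw_recording.mov", "stimulus.mov", onset_s=3.4)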