
Beauchamp Lab



Home

The Beauchamp Lab studies the neural mechanisms of multisensory integration and visual perception in human subjects. Anatomically, the primary focus of the lab is the superior temporal sulcus, a brain area critical both for the integration of auditory, visual, and somatosensory information and for the perception of complex visual motion, such as mouth movements.

Many everyday tasks require us to integrate information from multiple modalities, such as during conversation, when we make use of both the auditory information we hear in spoken speech and the visual information from the facial movements of the talker. Multisensory integration is especially important when one modality is degraded, such as in a noisy room. Even among healthy young adults, there is considerable variability in the ability to integrate auditory and visual speech, and this difference is even more pronounced in other populations. Very young children rely exclusively on auditory information to understand language, but across normal lifespan development visual speech plays an increasing role, sometimes becoming dominant as hearing declines with age. Other populations also show interesting differences: deaf children commonly use a cochlear implant to allow them to hear, but the early lack of auditory input sometimes prevents them from ever properly integrating auditory and visual speech.

To understand the neural mechanisms of multisensory integration and visual perception, our primary method is blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). fMRI experiments are conducted using the research-dedicated 3 tesla scanner in the UT MRI Center adjacent to the lab. Because of the limitations of fMRI, we often combine it with other methods, including transcranial magnetic stimulation (TMS) and electrical stimulation and recording. Through these studies, we hope to unlock one of nature's great mysteries: how the brain performs amazing computational feats, such as understanding speech, and (more generally) how it makes sense of the auditory and visual world around us. Every advance in our knowledge of these processes is not only exciting for its own sake but will also help those with language difficulties.


You can reach us at: Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, 6431 Fannin Street, Suite G.550G, Houston, Texas 77030. Telephone (713) 500-5978.