Beauchamp:Electrophysiology
Beauchamp Lab Notebook






Electrophysiology Protocols

Presurgical Scanning

After analyzing fMRI data, upload the entire contents of the AFNI and SUMA directories to Xfiles. This can be simplified by pressing Command-K (Connect to Server) in the Finder and choosing Xfiles:

 xfiles.hsc.uth.tmc.edu (129.106.148.217)

Once mounted, the folders can be dragged onto the Xfiles volume in the Finder, or copied on the command line, without using the Web-based GUI interface.
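A minimal command-line sketch of the copy step (the share name, mount point, and subject directory below are assumptions and should be replaced with the actual paths):

 # mount the Xfiles SMB share (share name and mount point are assumptions)
 mkdir -p ~/xfiles_mount
 mount_smbfs //xfiles.hsc.uth.tmc.edu/xfiles ~/xfiles_mount
 # copy the AFNI and SUMA directories for this subject (subject directory is an assumption)
 rsync -av AFNI SUMA ~/xfiles_mount/SUBJECT_ID/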


In the EMU

Setup Apparatus

Receptive Field Mapping

Electrical Stimulation

Selectivity

Perceptual Biasing

It is also good to collect 10 minutes of resting data (no stimulation) from as many visual electrodes as possible for later analyses.

January 2008 Subjects

Proposed experiments for January 2008 subjects.

TODO LIST

Decide on screening stimuli; get rid of bad-looking stimuli

Focus on ventral temporal and lateral occipital-temporal electrodes with visual responses in fMRI, not on electrodes over early visual cortex

stimulation at 2 mA (up to 8 mA; no psychometrics) to see which, if any, late sites evoke percepts
GOAL: additional data for Dona's current paper; pilot data for grant to show that stimulation in higher areas does NOT produce a percept.
ANTICIPATED RESULT: few, if any, sites will produce percepts

object selectivity to determine preferred and nonpreferred stimuli with well-defined categories, including faces, bodies, houses, scenes, etc.
GOAL: pilot data on category selectivity, determine preferred objects

RF mapping with preferred stimulus
GOAL: Determine RFs in higher areas (identified with fMRI)

repeated presentation of preferred stimulus; repeated presentation of nonpreferred stimulus (context: letter detection foveally)
GOAL: Pilot data for adaptation

If there is ample time:
psychometrics of stimulation at sites stimulated above
GOAL: additional data for Dona's current paper


stimulation of higher electrodes while the subject makes an object-vs-noise discrimination
i.e. perceptual biasing with preferred and nonpreferred stimuli embedded in noise
GOAL: Pilot data for grant

study motion and orientation selectivity using Ping's new screening program

object selectivity with preferred stimulus in big screen of same category stimuli
object selectivity with preferred stimulus in big screen of nonpreferred category stimuli
object selectivity with nonpreferred stimulus in big screen of same category stimuli
object selectivity with nonpreferred stimulus in big screen of preferred category stimuli

Processing Subject Data

After obtaining the CD containing the patient CT data from St. Luke's, use OsiriX to export all images (using the export-to-DICOM option with the hierarchical and uncompressed settings).
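Before the steps below, the exported DICOM series must be read into an AFNI dataset. A minimal sketch using to3d (the prefix matches the dataset name used below; the directory and file pattern are assumptions):

 # build an AFNI dataset from the exported CT DICOM files (directory and pattern are assumptions)
 cd exported_CT_series
 to3d -prefix DE_CTSDE *.dcm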

CT scans have a voxel size of 0.488 x 0.488 x 1 mm; the 1 mm slice thickness may need to be set manually with

 3drefit -zdel 1.000 DE_CTSDE+orig

(If the CTs look distorted in AFNI, then the voxel size must be adjusted.) Next, the CTs must be registered with the hi-res presurgical MRI anatomy. This may fail because the CT coordinate system can have a very different origin from the MRI's, and registration routines will not work if the input datasets are not in rough alignment. To check this, type

 3dinfo DE_CTSDE+orig

which returns

 R-to-L extent:  -124.756 [R] -to-   124.756 [L] -step-     0.488 mm [512 voxels]
 A-to-P extent:  -124.756 [A] -to-   124.756 [P] -step-     0.488 mm [512 voxels]
 I-to-S extent:  -258.000 [I] -to-   -86.000 [I] -step-     1.000 mm [173 voxels]

We want the center of the dataset to be roughly at (0,0,0). For this example, this is true for (x,y) but not for z. First, create a copy of the dataset

 3dcopy DE_CTSDE+orig DE_CTSDEshift

Then recenter the z-axis. The slab spans 172 mm along I-S, so its inferior edge should sit near z = -86 for the center to land at zero; the round value of 80 used here is close enough:

 3drefit -zorigin 80 DE_CTSDEshift+orig

3dinfo returns

 R-to-L extent:  -124.756 [R] -to-   124.756 [L] -step-     0.488 mm [512 voxels]
 A-to-P extent:  -124.756 [A] -to-   124.756 [P] -step-     0.488 mm [512 voxels]
 I-to-S extent:   -80.000 [I] -to-    92.000 [S] -step-     1.000 mm [173 voxels]

The z-axis is now roughly centered around 0. In AFNI, examine the MR and the shifted CT to make sure they are in rough alignment. Next, use 3dAllineate to align the two datasets.

 3dAllineate -base {$ec}anatavg+orig -source DE_CTSDEshift+orig -prefix {$ec}CTSDE_REGtoanatV4 -verb -warp shift_rotate -cost mutualinfo -1Dfile {$ec}CTSDE_REGtoanatXformV4

Check in AFNI to make sure that the alignment is correct. NB: It is also possible to crop the MRI before running 3dAllineate, since the MR coverage is typically greater than the CT coverage. In a test case, this did not have a big effect.
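One possible way to do the cropping, sketched with 3dZeropad (the number of slices to remove is an assumption and should be chosen to roughly match the CT's inferior-superior coverage):

 # trim 50 slices from the inferior edge of the anatomy (slice count is an assumption)
 3dZeropad -I -50 -prefix {$ec}anatavg_crop {$ec}anatavg+orig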

Things to do

HumanImageDetection

Can stimuli be vector-based rather than pixel-based, so as not to lose resolution with scaling? POSSIBLE if the original file is vector-based
Enable online scrambling LOOKING INTO IT
Enable online color-to-black-and-white conversion LOOKING INTO IT

HumanLetterDetection

Analyze data from LR to see where the RFs are