As you've probably noticed if you've examined your EPI (that is, echo planar imaging) data closely, the shape of the brain looks a little funny. This is because magnetic susceptibility changes abruptly at certain tissue boundaries (like the air-filled sinuses), which causes inhomogeneities in the magnetic field near them. This has two main consequences: first, spins near these areas dephase faster, resulting in signal dropout. Second, the variations in magnetic field strength experienced by the spins displace them on the image. There isn't really a way to fix the first problem* (getting signal back), but there are a few methods for minimizing the effect of the second problem (geometric distortion) on your data. This can be really important if you're interested in the precise anatomical locations of brain bits near the distortions. It can also be really important if you're doing some sort of precise masking, such as sampling from a volume to a surface. The strategies for doing this depend a bit on what data you have.
* I mean, not after you've acquired the data. You can do things at acquisition time to prevent it, mostly making your TE shorter.
Nothing but an EPI and an anatomical
If these are the data you have, your best bet is to do a nonlinear alignment to the anatomical. Essentially, you're mapping the spatial contrast of the EPI to the spatial contrast of the anatomical. So, if this is the route you're taking, you'll want to use the images with the best spatial contrast you've got. If you collect without dummy volumes (that is, if you keep the data collected in the first few TRs of your functional run, before the magnetization has stabilized), those first few volumes will have higher signal, and better spatial contrast. You can use these to compute the nonlinear alignment. Otherwise, you could take the average of all the (motion-corrected) volumes, and use that as a single volume to align. Just make sure that you motion correct to *this* volume later.
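As a toy illustration of the averaging step, here's a plain-NumPy sketch on a simulated 4D array. In practice you'd load your motion-corrected run from disk (e.g. with nibabel) and feed the mean volume to your alignment tool as the base; the array shapes and values here are made up.

```python
import numpy as np

# Simulated motion-corrected EPI time series: (x, y, z, t).
# Real data would come from a NIfTI file; these dimensions are arbitrary.
rng = np.random.default_rng(0)
epi = rng.normal(loc=100.0, scale=5.0, size=(4, 4, 3, 50))

# Average across time to get a single volume with higher SNR (and thus
# better spatial contrast) than any individual volume.
mean_vol = epi.mean(axis=-1)

print(mean_vol.shape)  # (4, 4, 3)
```

You'd then use `mean_vol` as the target both for the nonlinear alignment and for the later motion correction, so the two steps agree on a common reference.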
This procedure would look something like the following (unabashedly plagiarized from the AFNI help page for 3dQwarp):
For aligning EPI to T1, the '-lpc' option can be used; my advice would be to do something like the following:

```shell
3dSkullStrip -input SUBJ_anat+orig -prefix SUBJ_anatSS

3dbucket -prefix SUBJ_epiz SUBJ_epi+orig'[0]'

align_epi_anat.py -anat SUBJ_anat+orig                           \
                  -epi SUBJ_epiz+orig -epi_base 0 -partial_axial \
                  -epi2anat -master_epi SUBJ_anat+orig           \
                  -big_move

3dQwarp -source SUBJ_anatSS+orig.HEAD \
        -base   SUBJ_epiz_al+orig     \
        -prefix SUBJ_anatSSQ          \
        -lpc -verb -iwarp -blur 0 3

3dNwarpApply -nwarp  SUBJ_anatSSQ_WARPINV+orig \
             -source SUBJ_epiz_al+orig         \
             -prefix SUBJ_epiz_alQ
```

* Zeroth, the T1 is prepared by skull stripping, and the EPI is prepared by extracting just the 0th sub-brick for registration purposes.
* First, the EPI is aligned to the T1 using the affine 3dAllineate, and at the same time resampled to the T1 grid (via align_epi_anat.py).
* Second, it is nonlinearly aligned ONLY using the global warping -- it is futile to try to align such dissimilar image types precisely.
* The EPI is used as the base in 3dQwarp so that it provides the weighting, and so partial brain coverage (as long as it covers MOST of the brain) should not cause a problem (we hope).
* Third, 3dNwarpApply is used to take the inverse warp from 3dQwarp to transform the EPI to the T1 space, since 3dQwarp transformed the T1 to EPI space. This inverse warp was output by 3dQwarp using '-iwarp'.
* Someday, this procedure may be incorporated into align_epi_anat.py :-)

**It is vitally important to visually look at the results of this process!**
An EPI and a fieldmap
Sometimes, people measure the fieldmap directly. If that's the case, you can compute how much distortion (and signal loss) should have arisen at each voxel, and compensate accordingly before your alignment to the anatomical. FSL has a tool called FUGUE that handles the unwarping.
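To get some intuition for the size of the effect: the displacement along the phase-encode axis is roughly the off-resonance frequency (in Hz, from the fieldmap) times the total phase-encode readout time. Here's a back-of-envelope sketch; the function name and all the numbers are hypothetical, not from any particular scanner.

```python
def pe_shift_voxels(delta_f_hz, echo_spacing_s, n_pe):
    """Approximate shift (in voxels) along the phase-encode axis:
    off-resonance (Hz) x total PE readout time (s)."""
    total_readout_s = echo_spacing_s * n_pe  # effective echo spacing x PE lines
    return delta_f_hz * total_readout_s

# e.g. a 50 Hz offset near the sinuses, 0.5 ms effective echo spacing,
# and 64 phase-encode lines:
shift = pe_shift_voxels(50.0, 0.0005, 64)
print(shift)  # 1.6 voxels
```

This is why the distortion is worst near the sinuses and ear canals (large off-resonance) and why faster readouts (shorter effective echo spacing, in-plane acceleration) reduce it.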
Two EPIs with different phase encode directions
The susceptibility-induced distortions depend on the phase encode direction of your scans. An approach that's becoming more common is to collect two scans with phase encoding in opposite directions (such as R>>L and L>>R, or A>>P and P>>A). This is also called "blip up/blip down" or the like. This will give you two distorted images (which sounds worse, actually). But since they're distorted by equal amounts in opposite directions, the half-way point between them is undistorted. It's also worth noting that this is a much faster acquisition scheme than collecting a full fieldmap. If you're collecting multiple task runs, just collect every other one with a phase encode flip, and you've added no time to your acquisition. Even if you only have one run, you only need a volume (or maybe two or three, for averaging) in the opposite direction to do the calculation. That means it only takes a few TRs of extra acquisition (if any) to do this correction.
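The "half-way point" logic can be sketched in one dimension. The positions and displacements below are toy numbers, not real data; real tools estimate a whole displacement field, but the cancellation works the same way.

```python
import numpy as np

# True positions of some features along the phase-encode axis (voxels),
# and a hypothetical susceptibility-induced displacement at each one.
true_pos = np.array([10.0, 20.0, 30.0])
displacement = np.array([1.5, -0.8, 2.2])  # varies with the local field

pos_blip_up = true_pos + displacement    # e.g. the A>>P acquisition
pos_blip_down = true_pos - displacement  # e.g. the P>>A acquisition

# Equal and opposite distortions, so the half-way point is undistorted:
recovered = (pos_blip_up + pos_blip_down) / 2
print(recovered)  # [10. 20. 30.]
```

In practice the correction tools estimate the displacement field from the two images themselves (by finding the warp that brings them into agreement), then apply half of it to each.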
You can do this calculation with AFNI's unWarpEPI.py.
Keep in mind that as your participant moves in the scanner, the tissues that are causing the susceptibility changes move as well. If you have a particularly wiggly participant, you might have to make some judgement calls regarding which blip up/down pairs to use. For example, if you have 4 runs, and Run01 and Run03 are A>>P while Run02 and Run04 are P>>A, you could...
- Examine each run for motion, and make judgement calls (for Run02 and Run03) about whether you want to use the last few volumes from the dataset before or the first few volumes from the dataset after to calculate the distortion correction
- Motion correct all of Run01 and Run03 and average them (aligning, I suppose, to the beginning of Run03). Do the same for Run02 and Run04 (aligning to the end of Run02). Then use each average as an input to unWarpEPI.py. Note that you'd run it twice: once with the Run01/Run03 average as "forward," and once with it as "backward."
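One crude way to operationalize the first judgement call is to summarize the motion in each run and pick the calmest pair of opposite-direction runs. This is only a sketch with made-up motion numbers; a real pipeline would work from the actual motion parameter time series (and might compare motion at run boundaries rather than whole runs).

```python
# Hypothetical per-run motion summaries (say, mean framewise displacement
# in mm) for four runs alternating phase-encode direction.
runs = {
    "Run01": ("A>>P", 0.15),
    "Run02": ("P>>A", 0.60),  # the wiggly run
    "Run03": ("A>>P", 0.20),
    "Run04": ("P>>A", 0.12),
}

# Pick the calmest run in each direction to feed to unWarpEPI.py.
forward = min((motion, name) for name, (pe, motion) in runs.items() if pe == "A>>P")
reverse = min((motion, name) for name, (pe, motion) in runs.items() if pe == "P>>A")
pair = (forward[1], reverse[1])
print(pair)  # ('Run01', 'Run04')
```

The caveat from above still applies: if the two chosen runs are far apart in time, the participant's head (and therefore the field) may have changed between them, which is exactly the trade-off the judgement call is about.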
This goes for all forms of geometric correction: it's important to visually inspect your data. If you "undistort" badly, you could just be making things worse.