From OpenWetWare
Revision as of 07:36, 24 August 2012 by Michael S Beauchamp (talk | contribs) (Getting Data From the UT Philips Scanner if you are at UT)

About the UT Philips Scanner

Our scanner is a 3 T whole-body Philips scanner with 16 parallel RF channels. It is a hybrid of the Intera (older) and Achieva (newer) platforms. Slice acquisition is set to the default order, with the time maximized between adjacent slices. Slice acquisition order appears to be from bottom to top, based on the raw DICOM images (where the first image is the most inferior) and the real-time view seen during scanning (where the most inferior images are displayed first; sometimes the most superior images are not displayed at all before the next TR starts).

Getting Data From the UT Philips Scanner if you are at UT

The MRI data can be transferred from the scanner in a number of ways; the only fail-safe method is to take a removable USB drive to the scanner room and copy the data directly to it. Alternately, the scanner tech can place the data somewhere "in the cloud", and it can then be moved to the correct place on the server. One option is the UT NAS. Connect to the NAS by hitting Apple-K and then


Log in with your UT ID and password. Navigate to the correct directory and copy the data to the experiment directory. Another place the data could be placed is a folder on our server. You must find it and move it to the correct location (the experiment directory, "raw" subdirectory). The folder will be named by Vips (ask him for the name if you cannot find it) and is located on the server's internal hard disk, as opposed to the RAID connected to the server.

GUI version: make sure that both the "data" drive (RAID) and "beauchamplab" (server internal hard disk) are mounted on your Mac; use Apple-K and select both if they are not. Open a Finder window, navigate to ms-nbafmri/beauchamplab, and find the folder with your data in it. Move or copy it to the correct experiment directory.

Command line version:

 ssh beauchamplab@

Navigate to the correct directory. Then, copy to the correct location on the server as follows. Replace the sample directory name with the desired one; this is the name given by Vips. Be sure you have created the experiment directory, including the raw subdirectory, already.

cp -r ZAP_081612/* /Volumes/Pegasus/data/UT/ZAP/raw/

More Notes

Depending on how the data is exported from the scanner, it will be in one of three formats:

NiFTI Format


This is the preferred format because it can be read by AFNI directly.

PAR/REC Format


This is the native Philips format. The PAR file contains the data PARameters. The REC file contains the raw REConstructed data for the entire run.
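Since PAR files are plain ASCII, the acquisition parameters can be skimmed from the command line before any conversion. A minimal sketch (the filename is just the example used further down this page; substitute your own):

```shell
# PAR files are plain text; skim the header for acquisition parameters.
# "tms_10_10_1.PAR" is a hypothetical example name.
head -n 30 tms_10_10_1.PAR
# pull out lines mentioning slices (counts, ordering)
grep -i slice tms_10_10_1.PAR
```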

DICOM format


Each file contains one image, so that one run has thousands of images in it.
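Because each DICOM file holds a single image, a quick file count tells you how many images a run produced. A sketch, assuming Philips-style IM_* filenames as in the examples below:

```shell
# Count the DICOM slice files for the run in the current directory
ls IM_* | wc -l
```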

Method 1: using a package called ExpanDrive (Vips doesn't like this because it is too slow).

Vips can connect directly to a Mac server with scp (ssh must be enabled on the server). He will copy the data to the "BeauchampServer" drive mounted on the Philips console. This is a link to the ms-nbafmri server located at (for Tim's data, it is copied to data on ike). The next step is to copy it from the ms-nbafmri server to the correct experiment directory. Here are the steps to do that:

For ms-nbafmri, it is copied to the admin home directory (on the server hard disk) and needs to be copied to the correct location (on the Promise RAID).

 ssh admin@

Use the Amazon/Gmail lab password. Find where the data is located.

 cd DATA

Then, copy to the correct location on the server as follows. (Replace the sample directory name with the desired one; this is the name given by Vips).

cp -r ZAP_081612 /Volumes/Pegasus/data/UT/

Method 2: Faster for Vips, but involves an extra step for us. Data can be copied to the NAS and then transferred from there to the server. On Windows, mount the NAS with


On Macs, Apple-K and then


For either case, log in with your UT user ID and password.

OLDER: MR data can be sent directly from the Philips scanner to an external hard drive attached to bidwell. (For some reason, the scanner PC cannot connect to Ike or the RAID). The data will end up on bidwell on ( in


(which is a link to the "My Passport" external hard disk). Vips will usually put it in a subdirectory on bidwell e.g.


The data should be copied to /data1 for analysis using this command:

 scp -r mri@ /Volumes/data1/UT/ZAJ/

If you are not sure what directory to use, you can open a window to bidwell with the following commands

 ssh mri@
 cd ~mri/data/DR_BEAUCHAMP_2010

Getting Data From the UT Philips Scanner if you are at Rice or somewhere else outside UT

Vips will copy the data to the Beauchamp Lab server bidwell on ( in


(which is a link to the "My Passport" external hard disk). Vips will usually put it in a subdirectory on bidwell e.g.


To log in to this machine from Rice, you must VPN onto the UT network; follow the instructions on

Next, ssh to bidwell

 ssh mri@

And navigate to the directory where the data is found.

Finally, use the scp command to copy the desired data to the machine outside UT. This will only work if the target machine has a publicly routable IP address. This can be tested by pinging the target machine.

 scp -r ZAE_042710 fmri@

Don't forget the trailing colon; it is critical. How long does this take for a typical experiment? For this one (7 fMRI runs and 2 T1s, about 1 GB), the copy started at

 Fri Jun  3 11:38:03 CDT 2011

and finished at

 Fri Jun  3 11:44:47 CDT 2011

About 6 minutes total for 1 GB, not too bad.
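For reference, the timestamps above work out to roughly 2.5 MB/s, which can be checked with a one-liner (assuming 1 GB is about 1024 MB):

```shell
# 11:38:03 to 11:44:47 is 6 min 44 s = 404 s for ~1 GB
awk 'BEGIN { printf "%.1f MB/s\n", 1024 / (6*60 + 44) }'
```

This prints 2.5 MB/s.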

About the BCM Siemens Scanners

Baylor has 5 Siemens 3 T scanners. Three are Trio (whole-body) and two are Allegra (head-only). After you scan, the technologist will send you an e-mail with the link to the data. If necessary, make a new directory

 mkdir /Volumes/data1/UT/JL
 cd /Volumes/data1/UT/JL
 mkdir raw
 cd raw

And then click on the link and download the data to this directory (called "raw" for ease of reference). The data is stored as an .iso file. Double click on the .iso file to open a window that lists the component data files (with .ima suffix). Click and drag these into the raw directory.
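If you prefer the command line to double-clicking, the .iso can also be mounted with hdiutil on a Mac. A sketch (the .iso filename is hypothetical, and the commands are guarded so they do nothing on non-Mac machines):

```shell
# Mount the downloaded .iso and copy the .ima slices into raw/ (macOS only)
if command -v hdiutil >/dev/null 2>&1; then
  mountpoint=$(hdiutil attach scan_data.iso | awk '/\/Volumes/ {print $NF}')
  cp "$mountpoint"/*.ima raw/
  hdiutil detach "$mountpoint"
fi
```

Note that the mount-point parsing assumes no spaces in the volume name.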

To access pulse sequences, press Ctrl-Esc and select the "Advanced User" program. Type in the password (meduser1). Then go back and open a Windows Explorer window. The pulse sequences are located under


The diffusion vectors are stored in the file


If the real-time is not working, make sure you are using the correct pulse sequence (VB17rt EPI). Also check the tabs computer (MacPro):

  1. Change user to admin_local (password mayor55quimby):
 $ su admin_local
  2. Kill afni:
 $ kill -9 `cat /Users/tabs/tabs/var/`
  3. If that does not help, reboot tabs.
  4. You can check the log on the scanner for hints: type logviewer from the command prompt.

Image information in the Trios: distance from screen to eyeball is 76.5 cm. Height of screen is 13.7 in = 34.8 cm; width of screen is 34.8 * (1024/768) = 46.4 cm. Half-width visual angle is atan(23.2/76.5) = 0.29 rad = 16.6 degrees, i.e. ~30 degrees full width.
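The screen arithmetic can be double-checked with awk (numbers taken from the text; the 16.6 degrees quoted there comes from rounding to 0.29 rad before converting to degrees):

```shell
# Recompute screen height, width, and half-width visual angle
awk 'BEGIN {
  h = 13.7 * 2.54                  # screen height: 13.7 in -> cm
  w = h * 1024/768                 # screen width from the aspect ratio
  half = atan2(w/2, 76.5)          # half-width angle at 76.5 cm viewing distance
  printf "%.1f cm  %.1f cm  %.2f rad  %.1f deg\n", h, w, half, half*180/3.14159265
}'
```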

How to save data: in the Patient Browser, select the exam, choose Transfer, then Export to Off-Line, select the flash drive, and export.

Getting Data from other Scanners

Data from other scanners will often be stored in DICOM or other formats. DICOM is the standard file format for storing medical imaging data. Typically each scan is saved in a unique folder and each file represents a single slice. OsiriX is a free DICOM viewer for Macs and can be downloaded from the OsiriX website.

OsiriX can often extract images from DICOM or other formats and write them out to a directory in standard DICOM format. Then, to3d is used to convert the extracted DICOM files to AFNI BRIK/HEAD files.

Using to3d from the command line

If scanner data is obtained in NiFTI format from the UT Philips scanner, then no conversion is necessary (see above). For all other types of scanner data, the program to3d is used to convert the raw data to AFNI BRIK/HEAD files. The preferred way to run to3d is from the command line, so that all parameters are recorded and the process can be automated. To see the to3d options check the help file:

 to3d -help | more

For the BCM Siemens Scanners

On the experiment sheet, you should have written down how many different scan series were collected and what each scan series consisted of. Alternately, you can use the ls command.

 ls *ima

The first three digits show the scan series number. The highest number is the last scan series. e.g.

 002-000114-152629.ima	003-000048-152134.ima	003-000165-154108.ima	004-000099-153921.ima	005-000040-154821.ima	005-000157-154821.ima	006-000082-160430.ima

Means there were six scan series. To see how many images are in each scan series use the following commands:

 foreach s (`count -digits 3 1 6`)
 echo -n $s "  "
 ls {$s}*ima | wc
 end

This will output something like

 001          3       3      66
 002        183     183    4026
 003        183     183    4026
 004        176     176    3872
 005        192     192    4224
 006        192     192    4224
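The same counts can be produced under plain sh/bash instead of csh (series numbers hardcoded for illustration):

```shell
# Count .ima files per scan series, sh/bash version of the loop above
for s in 001 002 003 004 005 006; do
  printf '%s  %s\n' "$s" "$(ls ${s}*ima 2>/dev/null | wc -l | tr -d ' ')"
done
```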

Scan series 001 is a localizer (only 3 images). From the image count alone it is ambiguous what the other scan series are (anatomical or functional). You can use to3d to examine a single file to see if it is an EPI:

 to3d 003-000165-154108.ima

This shows, e.g., an EPI image with multiple slices. Compare with an anatomy:

 to3d 005-000040-154821.ima

This shows one high-resolution anatomical image. Then, to3d can be run accordingly:

 foreach s (004 005 006)
 to3d -prefix series{$s} -session $sess -skip_outliers -anat {$s}*ima
 end

for anatomies and

 foreach s (002 003)
 to3d -prefix series{$s} -session $sess -skip_outliers -epan -time:zt 33 183 2000 alt+z {$s}*ima
 end

for EPIs.

This is the older method for using to3d with Philips data

To process an anatomical dataset, to3d is quite simple because there is only one timepoint.

 set session = /Volumes/data1/UT/CD/afni
 to3d -session $session -prefix CDanat  IM_*

This creates an AFNI BRIK/HEAD named CDanat from the anatomical DICOM images in the current directory and places it in the "session" directory. AFNI reads information about the images from the DICOM header, so the voxel size and image origin are automatically correct in the BRIK/HEAD file.

To process a functional dataset, to3d is more complicated because we must tell it how many timepoints there were (this information is NOT in the DICOM header). Here is the to3d command to create a BRIK/HEAD from a functional run of DICOM images.

 to3d -session {$session} -skip_outliers -epan -prefix CDr1 -time:tz 110 33 2750 alt+z  IM_*  00000001/IM_*  00000002/IM_*  

Alternately, we can run to3d on the PAR/REC file.

to3d -skip_outliers -epan -time:tz 60 33 2750 alt+z  3D:0:0:80:80:1980:tms_10_10_1.REC

Where "1980" is the product of the number of time points and the number of slices (60*33). This requires you to manually enter all of the information about the dataset (voxel dimensions, etc.) in the GUI, as described in the next section. Alternately, a parent dataset can be specified that was collected with the same parameters.

 to3d -overwrite -prefix test -skip_outliers -epan -geomparent ~/fMRI_1.nii -time:tz 120 33 2000 alt+z  3D:0:0:80:80:3960:C.B.study_2404_2404_F.REC
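The image count in each 3D: specifier can be sanity-checked as timepoints times slices:

```shell
# timepoints x slices for the two examples above
echo $((60 * 33))    # first example: 1980
echo $((120 * 33))   # geomparent example: 3960
```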

Using to3d from the GUI interface

If all of the necessary arguments to create a BRIK/HEAD file are given on the command line, then to3d will create the BRIK/HEAD file and finish. If some arguments are missing, to3d will display a GUI: simply go to the directory containing the raw DICOM files and type to3d *. Below is a picture of the GUI interface with the most commonly edited options highlighted. Note that the bottom right of the window includes buttons to view the images, save the dataset, and quit. Normally to3d reads the variable information from the DICOM headers and nothing needs to be changed; only the filename prefix needs to be set.


Averaging Anatomical Scans Using AFNI

To register all the anatomicals to the space of the anatomical closest in time to the functional data (generally anatr1), use 3dAllineate. With just two anatomicals:

 3dAllineate -base ${subj}anatr1+orig -source ${subj}anatr2+orig -prefix ${subj}anatr2_2RegTo1 -verb -warp shift_rotate -cost mi -automask -1Dfile ${subj}anatr2toanatr1

Average anatomicals into one dataset:

 3dmerge -gnzmean -nscale -prefix ${subj}anatavg ${subj}anatr1+orig ${subj}anatr2_2RegTo1+orig