About organizing computer-based research
- Motivation: when starting a new project, it is very handy to be able to quickly and easily set up a portable structure that allows the project to be backed up on other machines, shared with collaborators, and the work to be reproduced/replicated by colleagues.
- OS choice: concerning computers, one usually has a preferred operating system. Yet, in scientific projects where computing is an important aspect of the research, the most frequently used one is GNU/Linux. Thus, even though it is always good to know how to find one's way around other operating systems, such as Microsoft Windows and Apple Mac OS X, I will focus on GNU/Linux in what follows.
- I create a set of directories, via
mkdir -p bin include lib share src src_ext src_ext/Rlibs texmf tmp work:
bin: contains executables;
include: contains C/C++ header files;
lib: contains C/C++ shared libraries;
share: contains documentation;
src: contains source code from my own packages;
src_ext: contains source code from external packages;
src_ext/Rlibs: for external R packages;
texmf: contains LaTeX packages;
tmp: contains temporary tasks;
work: contains projects.
- this structure is reflected in my file ~/.bash_profile:
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs
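Concretely, under the "user specific environment" comment, the directories created above can be mapped to the standard search-path variables. The following is a sketch of my own; the exact list of variables is an assumption and should be adapted to the tools actually used:

```shell
# Hypothetical environment setup reflecting the directory structure
# (an assumption; adapt to the tools actually used).
export PATH=$HOME/bin:$PATH                         # executables
export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH   # shared libraries
export CPATH=$HOME/include:$CPATH                   # C/C++ headers
export R_LIBS=$HOME/src_ext/Rlibs                   # external R packages
export TEXMFHOME=$HOME/texmf                        # personal LaTeX packages
```

With --prefix=$HOME (see the install example below for external packages), installations then land exactly where these variables expect them.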
- External packages: for each external package in
src_ext, I create a directory with its usual name, say emacs, in which I create a file
install.bash with the necessary commands to compile and install the package, typically:
tar xzvf emacs-24.3.tar.gz
cd emacs-24.3
./configure --prefix=$HOME --with-x-toolkit=no --with-xpm=no --with-jpeg=no --with-gif=no --with-tiff=no
make && make install
- for each project in
work, I create a set of directories, via
mkdir -p analysis doc download figures preprocessing scripts src:
analysis: contains the outputs (exploratory, temporary, final) of the analyses;
doc: contains the documentation allowing the whole project to be replicated, usually as an org-mode file;
download: contains the data sets obtained externally;
figures: contains all figures, used in the README and the manuscript;
preprocessing: contains the outputs of the preprocessing, which are then used to obtain the outputs in analysis;
scripts: contains all scripts, usually used for preprocessing;
src: contains my own source code, usually used in the analyses and not yet mature enough to be in ~/src.
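To avoid recreating this layout by hand for each new project, the mkdir command above can be wrapped in a small helper; the function name new_project is my own invention:

```shell
# Hypothetical helper: create a new project with the standard layout
# under ~/work (the function name is an assumption of mine).
new_project () {
    mkdir -p "$HOME/work/$1" &&
    cd "$HOME/work/$1" &&
    mkdir -p analysis doc download figures preprocessing scripts src
}
```

For example, "new_project myproject" creates ~/work/myproject with the seven subdirectories and leaves the shell inside it.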
- to share my work with colleagues, I create an archive of the project, excluding the large raw data in download:
tar -czvf project.tar.gz --exclude=project/download project
which they can then extract with tar -xzf project.tar.gz.
- Choices: I strive for freedom protection (à la free software), portability, longevity, robustness and modularity:
- editing text: I use Emacs;
- documenting projects: I use org-mode (major argument for using Emacs);
- writing code: I start from one of my templates for bash, Python, R and C++;
- versioning: I use git;
- developing packages: I use the Autotools;
- presenting: I use LaTeX (for papers) and Beamer (for talks), and occasionally LibreOffice Writer and Impress (for papers and talks, respectively);
- drawing: GIMP and Inkscape.
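For the versioning item above, a first commit of a project might look as follows. This is a sketch under assumptions of mine: project1 is a placeholder name, the identity settings are placeholders, and the choice to track only hand-written material (doc, scripts, src) while keeping data and generated outputs out of version control is my reading of the layout:

```shell
# Hypothetical first commit for a project; project1 is a placeholder.
mkdir -p ~/work/project1/doc ~/work/project1/scripts ~/work/project1/src
cd ~/work/project1
printf '* project1\n' > doc/README.org   # git tracks files, not empty dirs
git init
git config user.name "Your Name"         # placeholder identity
git config user.email "you@example.org"  # placeholder identity
git add doc scripts src
git commit -m "initial import of project skeleton"
```

Keeping download and the generated outputs out of the repository keeps it small and mirrors the tar-based sharing above, where download is also excluded.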
- backup: I use rsync, via a script backup.bash:
#!/usr/bin/env bash
# usage: backup.bash <path_to_backup> >& backup.log &
RSYNC_OPTS=(--compress --recursive --times --perms --links --exclude='*~' --delete --delete-excluded --progress)
rsync "${RSYNC_OPTS[@]}" ~/remote1/work/project1 "$1"
rsync "${RSYNC_OPTS[@]}" ~/remote1/work/project2 "$1"
- My history: in 2006, during an internship in a bioinformatics lab, I discovered GNU/Linux. More specifically, I worked on a Fedora distribution and was able to install it on my laptop. From 2007 to 2010, during my PhD, I switched to Debian and then Ubuntu on my laptop, and I used several computer clusters running Solaris and CentOS.
- This may look like an anthology of weird names but, fundamentally, all these distributions are more or less similar to each other and can be described as Unix-like systems. Note, however, that not all of them are equivalent in terms of protecting your freedom. Michael Kerrisk presents this quite well (pdf). It is indeed important to know about the difference between GNU and Linux and, for those who have read the biography of Steve Jobs, I highly recommend also reading the biography of Richard Stallman (founder of GNU).