# Butlin:Unix for Bioinformatics - advanced tutorial

## Overview

This session will be more challenging if you are still new to Unix. However, its main aim is to demonstrate that Unix is much more than an environment in which you can execute big programmes like bwa or samtools. Rather, with its suite of text searching, extraction and manipulation programmes, it is itself a powerful tool for the many small bioinformatic tasks that you will encounter during everyday work.

This module expects that you have either gone through the Basic Unix module or downloaded the example data into your account, modified some of your environment variables, created a few folders and set up a few aliases:

$ echo "export PATH=~/prog:$PATH" >> ~/.bash_profile
$ echo "export MANPATH=~/man:$MANPATH" >> ~/.bash_profile
$ echo "alias ll='ls -lFh'" >> ~/.bash_profile
$ source ~/.bash_profile
$ mkdir -p ~/src ~/prog ~/man/man1

There are still small (hopefully not large) bugs lurking in this protocol. Please help improve it by correcting those mistakes or adding comments to the talk page. Many thanks.

## Before you start

Log into your iceberg account. On the head node iceberg1, type:

$ qrsh


Then change to the directory NGS_workshop, that you created in the Basic Unix module:

$ cd NGS_workshop/Unix_module

## TASK 3: I have downloaded my Illumina read data. Now I want to know how many reads my sequencing run has yielded.

$ zless Unix_tut_sequences.fastq.gz
$ zcat Unix_tut_sequences.fastq.gz | wc -l
$ man wc


gives you the number of lines in your sequence file, which for fastq format is four times the number of reads. Note, you can usually avoid storing uncompressed data files, which saves disk space.

## TASK 4: I have a .fastq file with raw sequences from a RAD library. How can I find out how many of the reads contain the correct sequence of the remainder of the restriction site?

Let's assume you had 5 bp long barcode sequences incorporated into the single-end adapters, which should show up at the beginning of each sequence read. Let's also assume that you have used the restriction enzyme SbfI for the creation of the library, which has the following recognition sequence: CCTGCAGG. So the correct remainder of the restriction site that you expect to see after the 5 bp barcode is TGCAGG. First have a look at your fastq file again:

$ zcat Unix_tut_sequences.fastq.gz | less -N

Each sequence record contains four lines. The actual sequences are on the 2nd, 6th, 10th, 14th, 18th line and so on. The following gives you the count of reads that contain the correct restriction site remainder:

$ zcat Unix_tut_sequences.fastq.gz | awk '(NR-2)%4==0' | grep -c "^.....TGCAGG"


This is a pipeline which first uncompresses your sequence read file and pipes it into the awk command, which extracts only the DNA sequence part of each fastq record. NR in awk stands for the current line number (or Number of Record), and (NR-2)%4 returns the remainder of dividing the current line number minus 2 by 4. The modulo operator returns 0 if the result of the division is an integer (i.e. if the line number minus 2 is a multiple of 4) and 1, 2 or 3 otherwise. Awk only prints out those lines whose line number minus 2 is divisible by 4 without remainder. Got it?
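You can watch the awk filter work on a tiny fake record (made up here, not from the tutorial data):

```shell
# Two fake fastq records; awk keeps lines 2, 6, 10, ... -- the sequence lines.
printf '@read1\nACGT\n+\nIIII\n@read2\nTTTT\n+\nIIII\n' | awk '(NR-2)%4==0'
# prints ACGT, then TTTT
```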

$ zcat Unix_tut_sequences.fastq.gz | awk '(NR-2)%4==0' | less

Grep searches each line from the output of the awk command for the regular expression given in quotation marks. ^ stands for the beginning of a line. A dot stands for any single character. There are five dots, because the barcodes are all 5 base pairs long. The -c switch makes grep return the number of lines in which it has found the search pattern at least once.

$ man grep
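To see the pattern in action, here it is applied to three made-up sequence lines; the first two carry a 5 bp barcode followed by TGCAGG, the third does not:

```shell
printf 'AACCGTGCAGGTTTT\nGGTTATGCAGGAACC\nAAAAACCCCCGGGGG\n' | grep -c "^.....TGCAGG"
# prints 2
```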

$ zcat Unix_tut_sequences.fastq.gz | awk '(NR-2)%4==0' | grep "^.....TGCAGG" | less

In less, type:

/^.....TGCAGG

then hit enter.

## TASK 5: I have split my reads by barcode and I have quality filtered them. Now I want to know how many reads I have left from each (barcoded) individual. How can I find that out?

Change into the 05_TASK directory and have a look at what's in it. You should find a bunch of gzipped fastq files there. To get a read count for each file, you can use the following bash loop. Note that I have broken a long command line over several lines here. The backslashes are followed immediately by (invisible) RETURN characters. That means, when you type this command, press the RETURN key immediately after each backslash. The backslashes escape the usual meaning of a RETURN character for the shell, i.e. "command line end, start executing".

$ for file in *.fq.gz; \
do echo -n "$file " >> retained; \
zcat $file | awk 'NR%4==0' | wc -l >> retained; \
done &

$ less retained

This bash for loop goes sequentially over each file in the current directory which has the file ending .fq.gz. It prints each file name to the output file retained. The >> redirection operator makes sure that all output is appended to the file retained. Otherwise only the output from the last command in the loop and from the last file in the list of files would be stored in the file retained. The ampersand & at the end of the command line sends it into the background, which means you get your command line prompt back immediately while the process is running in the background.

Troubleshooting: if you get an error message like zcat: can't stat:, then you can try replacing zcat with gunzip -c or with zcat <.

## TASK 6: I have 30 output files from a programme, but their names are not informative. I want to insert the keyword cleaned into their names. How can I do this?

Renaming files is a very common and important task. I'll show you a simple and fast way to do that. There are, as always, many ways to solve a task. First, you could type 30 mv commands: that's the fastest way to rename a single file, but doing that 30 times is very tedious and error prone. Second, you could use a bash for loop as above in combination with echo and the substitution command sed, like this:

$ for file in *.out; do mv $file `echo $file | sed 's|^\(.*\)\(\.out\)$|\1_cleaned\2|'`; done

but that's a bit tricky if you are not using sed every day. Note the two backtick characters: one before echo, the other right before the last semicolon. Everything in backticks will be evaluated by bash and then replaced by its output. So in this case it's the modified file name that is provided as second argument to the mv command. If you want to find out what the sed command does, take a look at the beginning of this sed tutorial.
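Here is what that sed substitution does to a single (hypothetical) file name, without any mv involved:

```shell
# Dry run of the sed part on one made-up file name:
echo test_7.out | sed 's|^\(.*\)\(\.out\)$|\1_cleaned\2|'
# prints test_7_cleaned.out
```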
The best way, however, to solve the task is by downloading the perl rename script (note, this is not the system rename command that comes with Unix, which has very limited capabilities), putting it into a directory that is in your PATH (e.g. ~/prog) and making that text file executable with chmod. You can download Larry Wall's rename.pl script with wget:

$ mkdir ~/src ~/prog
$ cd ~/src
$ wget http://tips.webdesign10.com/files/rename.pl.txt


or use firefox if you are in an interactive session.

$ mv rename.pl.txt ~/prog/rename.pl

Note that the command above cut/pastes and renames at the same time.

$ ll ~/prog


Let’s call the programme:

$ rename.pl


You should get Permission denied. That’s because we first have to tell bash that we permit its execution:

$ chmod u+x ~/prog/rename.pl
$ man chmod


The u stands for user (that’s you) and x stands for executable. Let’s try again:

$ rename.pl

How do you make the documentation in the script file visible?

$ cd ~/prog
$ pod2man rename.pl > rename.pl.1
$ mv rename.pl.1 ~/man/man1


Then look at the documentation with:

$ man rename.pl

First let's create 30 empty files to rename:

$ cd
$ mkdir test
$ cd test
$ for i in {1..30}; do touch test_$i.out; done
$ ll

Now, let's insert the keyword cleaned into their filenames:

$ rename.pl -nv 's/^(.+)(\.out)/$1_cleaned$2/' *.out


rename.pl should have printed out what it would do if you had left out the -n switch. You already know that the shell expands *.out at the end of the command line into a list of all the file names in the current directory that end with .out. So the command does the renaming on all the 30 files we created. Let's look at the stuff in single quotes. s turns on substitution, since we are substituting old file names with new file names. The forward slashes / separate the search and the replacement patterns. The search pattern begins with ^, which stands for the beginning of the file name. The dot stands for any single character. The + means one or more of the preceding character. So .+ will capture everything from the beginning of the file name until .out. In the second pair of brackets we needed to escape the dot with a backslash to indicate that we mean a literal dot. Otherwise, it would have been interpreted as representing any single character. Anything that matches the regular expression in a pair of brackets is automatically saved, so that it can be called in the replacement pattern. Everything that matches the pattern in the first pair of brackets is saved to $1, everything that matches the pattern in the second pair of brackets is saved to $2. For more on regular expressions, look up:

$ man grep
$ man perlrequick


Finally, let's rename for real:

$ rename.pl 's/^(.+)(\.out)/$1_cleaned$2/' *.out
$ ll


## TASK 7: I have got a text file originating from a Windows or an old Mac text editor. But when I open it in less, all lines are concatenated into one line, with the strange ^M character appearing repeatedly. How can I fix this?

$ cd ~/NGS_workshop/Unix_module/07_TASK
$ ll
$ cat -v text_file_from_Windows.txt

Unix uses only linefeeds, old Macs used only carriage returns, and Windows uses both characters together to represent one line ending. The following will remove the carriage return from the end of each line, leaving only linefeeds:

$ tr -d '\r' < text_file_from_Windows.txt > file_from_Windows_fixed.txt
$ cat -v file_from_Windows_fixed.txt
$ man tr
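You can reproduce the whole round trip with a two-line fake "Windows file" made with printf:

```shell
# cat -v renders each carriage return as ^M:
printf 'line1\r\nline2\r\n' | cat -v
# After tr -d '\r' the ^M characters are gone:
printf 'line1\r\nline2\r\n' | tr -d '\r' | cat -v
```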


Note, most Unix programmes print their output to STDOUT, which by default is the screen. If you want to save the output, it needs to be redirected into a new file. Never redirect output back into the input file, like this:

$ tr -d '\r' < text_file_from_Windows.txt > text_file_from_Windows.txt
$ cat text_file_from_Windows.txt


You’ve just clobbered your input file.

$ less text_file_from_Mac.txt


The old Mac text file has just a carriage return instead of a linefeed at the end of each line. That's why all lines of the file are concatenated into one line. To fix this:

$ tr '\r' '\n' < text_file_from_Mac.txt > file_from_Mac_fixed.txt
$ less file_from_Mac_fixed.txt


As usual there are several ways to achieve the same result:

$ sed 's/\r//g' < text_file_from_Windows.txt > file_from_Windows_fixed.txt
$ dos2unix text_file_from_Windows.txt
$ mac2unix text_file_from_Mac.txt

The last two programmes are available on iceberg, but are not included in a Unix system by default. You would have to install them yourself. Note also that dos2unix and mac2unix do in-place editing, i.e. the old version will be overwritten by the new version.

## TASK 8: I have just mapped my reads against the reference sequences and I got a BAM file for each individual. How can I find out the proportion of reads from each individual that got mapped successfully?

For this task you need to have samtools installed and in your PATH. If not, do task 1 from the basic Unix tutorial before you continue. Note: it seems that the samtools view command should be able to do that with the -f and -c switches. However, in my trials it only returned the total number of reads in the BAM file, i.e. including those that did not get mapped (this bug is fixed now). Fortunately, this is a really easy Unix task. First let's have a look at the .bam file:

$ cd ~/NGS_workshop/Unix_module/08_TASK
$ samtools view alignment.bam | less

If that fits on your screen without line wrapping ... lucky bastard! If not, just turn off line wrapping in less:

$ samtools view alignment.bam | less -S


Use the arrow keys on your keyboard to scroll left and right. The second column contains a numerical flag which indicates the result of the mapping for that read. The flag of 0 stands for a successful mapping of the read. A flag of 16 also stands for successful mapping, but of the reverse complement of the read. So in order to get the number of reads that got mapped successfully we need to count the number of lines with zeros or 16's in their second column. Let’s cut out the second column of the tab delimited .bam file:

$ samtools view alignment.bam | cut -f 2 | less
$ man cut


Next, we have to sort this column numerically in order to collapse it into unique numbers:

$ samtools view alignment.bam | cut -f 2 | sort -n | uniq -c
$ man sort


With the -c switch to uniq the output is a table of counts for each flag value from column 2 found in your alignment file. However, some reads with a 0 flag in the second column still have a mapping quality of 0 (don't ask me why), which is in the 5th column. So, in order to get a count of the number of reads that got mapped successfully with a mapping quality above 0, use awk:

$ samtools view alignment.bam | awk '($2==0 || $2==16) && $5>0' | wc -l
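You can check the awk condition on a tiny made-up tab-separated sample (columns: read name, flag, reference, position, mapping quality):

```shell
# Only r1 passes: r2 has mapping quality 0, r3 has flag 4 (unmapped).
printf 'r1\t0\tref\t1\t37\nr2\t16\tref\t5\t0\nr3\t4\t*\t0\t0\n' |
  awk '($2==0 || $2==16) && $5>0' | wc -l
# prints 1
```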
$ man awk

$2 stands for the content of the second column, $5 for the fifth, || means logical OR and && means logical AND. Awk prints only those lines to output which match our conditions. We then simply count those lines.

## TASK 9: I have got a big multi-fasta file with thousands of sequences. How can I extract only those fasta records whose sequences contain at least 3 consecutive repeats of the dinucleotide AG?

Believe it or not, this task can be solved with a combination of Unix commands, i.e. no real programming with a proper programming language like Python, Perl, Ruby etc. is required. Although you certainly could do the following with these languages as well, this task is designed to give you an idea of how to approach a more complex problem and also to give you an idea of the power that lies in Unix when you combine its relatively simple commands together in a pipeline. Let's approach the task step by step. First have a look at the input file:

$ cd ~/NGS_workshop/Unix_module/09_TASK
$ less multi_fasta.fa

We could just use the programme grep to search each line of the fasta file for our pattern (i.e. 3 consecutive AG), but since grep is line based we would lose the fasta headers and some part of the sequence in the process. So, we somehow have to get the fasta headers and their corresponding sequences on the same line.

$ tr '\n' '@' < multi_fasta.fa | less


Everything is on one line now (but don’t do this with really large files as this causes the whole file to be read into memory).

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | less

Note the g at the end of the sed command, which stands for global. With the global option sed does the replacement for every occurrence of a search pattern in a line, not just for the first.

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | tr '#' '\n' | less
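The effect of the g modifier is easy to see on a toy string:

```shell
echo 'a>b>c' | sed 's/>/#>/'     # first occurrence only: a#>b>c
echo 'a>b>c' | sed 's/>/#>/g'    # every occurrence:      a#>b#>c
```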


Ok, we are almost ready to search, but we first need to get rid of the @ sign in the middle of the sequences.

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | tr '#' '\n' | sed 's/\([AGCT]\)@\([AGCT]\)/\1\2/' | less

In the sed command the search and the replacement patterns are enclosed in forward slashes /. Anything that matches the pattern between \( and \) will be stored by sed and can be called again in the replacement pattern. The square brackets [ ] mean match any one character that is enclosed by them. In the replacement pattern \1 stands for the base before the @ sign, \2 stands for the base after the @ sign. Finally, let's search for the microsats:

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | tr '#' '\n' | \
sed 's/\([AGCT]\)@\([AGCT]\)/\1\2/' | egrep "@.*(AG){3,}.*@" | less


Note the use of egrep instead of just grep for extended regular expressions. The search pattern is enclosed in quotation marks. Our sequences are delimited by @'s on each side. The dot stands for any single character. The asterisk * means zero or more of the preceding character. So .* could match exactly nothing or anything else. The {3,} means 3 or more times the preceding pattern. Without the brackets around AG this would only refer to the G in AG. Now that we have all sequences with ≥3x AG microsats, let's get them back into fasta format again:

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | tr '#' '\n' | \
sed 's/\([AGCT]\)@\([AGCT]\)/\1\2/' | egrep "@.*(AG){3,}.*@" | \
tr '@' '\n' | less

And let's get rid of empty lines:

$ tr '\n' '@' < multi_fasta.fa | sed 's/>/#>/g' | tr '#' '\n' | \
sed 's/\([AGCT]\)@\([AGCT]\)/\1\2/' | egrep "@.*(AG){3,}.*@" | \
tr '@' '\n' | grep -v "^$" > AGAGAG.fa

The grep search pattern contains the regular expression symbols for the beginning and the end of a line with nothing in between, i.e. an empty line. The -v switch inverts the matching. Open AGAGAG.fa in less, and type:

/(AG){3,}

and hit Enter. Each sequence should have a highlighted match. The syntax of regular expressions for grep, sed and less (which are very similar) is one of the most useful things you can learn. The most flexible regular expressions, however, are provided by Perl. Finally, let's count how many sequences are in the output file:

$ grep -c "^>" AGAGAG.fa
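Before moving on, the {3,} quantifier from above is worth trying in isolation on made-up strings:

```shell
# Two fake one-line records delimited by @; only the first contains AGAGAG.
printf '@CCAGAGAGTT@\n@CCAGAGTT@\n' | egrep -c "@.*(AG){3,}.*@"
# prints 1
```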


## TASK 10: I have a large table from the output of the programme stacks containing the condensed information for each reference tag. Each column is tab delimited. The first column contains the tag ids. Another column the reference sequence for the tag. How can I create a multi-fasta file from this table with the tag ids as fasta headers?

First, convince yourself that the table is actually tab delimited:

$ cd ~/NGS_workshop/Unix_module/10_TASK
$ cat -T stacks_output.tsv | less -S


Troubleshooting: if cat -T gives you an error message, try cat -t instead.

Tabs will be replaced by ^I. In which column is the Consensus sequence of each tag? We want the Catalog ID as fasta header. Let’s first extract the two columns we need:

$ cut -f 1,5 stacks_output.tsv | less

The first line of the output contains the column headers of the input table. We don't want them in the fasta file. So let's remove this line:

$ cut -f 1,5 stacks_output.tsv | tail -n +2 | less
$man tail  Now let’s insert a > in front of the tag ids in order to mark them as the fasta headers. $ cut -f 1,5 stacks_output.tsv | tail -n +2 | sed 's/$$.*$$/>\1/' | less


In the sed command \(.*\) captures the whole line, which we call in the replacement pattern with \1. Finally we have to replace the tab that separates each header from its sequence by a newline character in order to bring each sequence onto the line below its header.

$ cut -f 1,5 stacks_output.tsv | tail -n +2 | sed 's/\(.*\)/>\1/' | \
tr '\t' '\n' | less

If you're satisfied with the result, then redirect the output of tr into an output file. But what if you also wanted the multi-fasta file to be sorted by tag id?

$ cut -f 1,5 stacks_output.tsv | tail -n +2 | sort -nk 1,1 | \
sed 's/\(.*\)/>\1/' | tr '\t' '\n' | less


With the sort command we turn on numerical sort with -n and the -k switch lets us specify on which column the sorting should be done (by default, sort would use the whole line for sorting).
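The effect of sorting before the sed/tr steps can be seen on a three-row toy table (made-up ids and sequences):

```shell
# Numerical sort puts 2 before 10; a plain lexical sort would not.
printf '10\tAAAA\n2\tCCCC\n1\tGGGG\n' | sort -nk 1,1 | sed 's/\(.*\)/>\1/' | tr '\t' '\n'
# prints >1, GGGG, >2, CCCC, >10, AAAA -- each on its own line
```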

## TASK 11: I want to map the reads from 96 individuals against a reference sequence (e. g. partial or full reference genome or transcriptome, etc.). A mapping programme takes one of the 96 input files and tries to find locations for each read in the reference sequence. How can I parallelise this task and thus get the job done in a fraction of the time?

### If you are working on a multi core machine

We are going to use GNU parallel for this. So if you don't have it installed yet, please install it before continuing. We are also going to need bowtie2 for this task. Please make sure that all executables are in your PATH (e.g. $ which parallel). For a little reminder, recap Task 1 of the basic tutorial.

$ cd ~/NGS_workshop/Unix_module/11_TASK
$ ll ind_seqs

There are 96 fastq files with the reads from 96 individuals in this folder. All input files contain a number from 1 to 96 in their names. Otherwise, their names are identical. We now want to map the reads from those individuals against the reference_seq with bowtie2. First we need to create an index of the reference sequence:

$ bowtie2-build reference_seq.fa.gz bowtie_reference_seq


Let's create an output directory:

$ mkdir BAM

Now, let's get parallel:

$ parallel "bowtie2 -x bowtie_reference_seq -U {} | samtools view -bq 1 - > BAM/{/.}.bam" ::: ind_seqs/*fq


and let's have a look at the output files:

$ ll BAM

This will try to use all available cores. You can limit the maximum number of cores parallel will use with the -j switch. The manual of parallel provides very good documentation with plenty of examples. I hope you will agree that this was really easy. So there is no reason to keep the extra cores on your machine idle when you could be using all of them at once.

### If you have access to a computer cluster

Note, this task makes use of a computer cluster running the job scheduler SGE. It also requires that you have the mapping programme stampy installed in a location that is in your PATH. Check this with:

$ stampy


The Iceberg computer cluster currently contains 3,440 processing cores!!! Let’s be humble and try to use only up to 96 of them at the same time.

$ cd ~/NGS_workshop/Unix_module/11_TASK
$ ll ind_seqs


There are 96 fastq files with the reads from 96 individuals in this folder. All input files contain a number from 1 to 96 in their names. Otherwise, their names are identical. We now want to map the reads from those individuals against the reference_seq. We have already prepared a so-called array job submission script for you. Let’s have a look at it.

$ nano array_job.sh

The lines starting with #$ are specific to the SGE. The first requests 2 gigabytes of memory, the second asks for slightly less than 1 hour to complete each task of the job. Any job or task taking less than 8 hours will be submitted to the short queue by the SGE and there is almost no waiting time for this queue. The next two lines specify that you get an email when each task has begun and ended. -j y saves STDERR and STDOUT from each task in one file. Change those two lines appropriately:

# tell the SGE where to find your executable
export PATH=/usr/local/extras/Genomics/Applications/bin:$PATH

# change into the directory where the input files are
cd /home/your_login_name/NGS_workshop/Unix_module/11_TASK/ind_seqs

The important line is the following:

#$ -t 1-96


This initialises 96 task ids, which we can call in the rest of the array job submission script with $SGE_TASK_ID.

stampy \
-g ../reference_seq \
-h ../reference_seq \
-M ind_$SGE_TASK_ID.fq \
-o ind_$SGE_TASK_ID.sam

This is the actual command line that executes stampy, our mapping programme. The -M switch to stampy takes the input file, the -o switch the output file name. Submitting this array job script to the SGE scheduler is equivalent to submitting 96 different job scripts, each with an explicit number instead of $SGE_TASK_ID. After exiting nano let's submit the job.
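For orientation, here is roughly how the pieces above fit together in one file. This is only a sketch: the resource request lines (#$ -l ...) vary between SGE installations, and the email address and paths are placeholders you must adapt.

```shell
#!/bin/bash
#$ -l h_rt=00:59:00          # just under 1 hour of runtime per task
#$ -l rmem=2G                # memory request; the exact flag name depends on your cluster
#$ -M your.name@example.com  # placeholder address for notifications
#$ -m be                     # email at the beginning and end of each task
#$ -j y                      # merge STDERR and STDOUT of each task into one file
#$ -t 1-96                   # 96 task ids, available as $SGE_TASK_ID

# tell the SGE where to find your executable
export PATH=/usr/local/extras/Genomics/Applications/bin:$PATH

# change into the directory where the input files are
cd /home/your_login_name/NGS_workshop/Unix_module/11_TASK/ind_seqs

stampy \
-g ../reference_seq \
-h ../reference_seq \
-M ind_$SGE_TASK_ID.fq \
-o ind_$SGE_TASK_ID.sam
```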

$ qsub array_job.sh
$ qstat
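As an aside, if neither parallel nor a cluster is available, the same fan-out idea can be imitated with plain bash background jobs and wait. This toy stand-in (touch'ed files and echo instead of real fastq files and a mapping command) lacks parallel's throttling of simultaneous jobs, but shows the pattern:

```shell
# One background job per input file, then wait for all of them to finish.
tmpdir=$(mktemp -d) && cd "$tmpdir"
touch ind_1.fq ind_2.fq ind_3.fq               # stand-ins for the 96 fastq files
for f in ind_*.fq; do
    ( echo "would map $f here" > "$f.log" ) &  # your real mapping command goes here
done
wait                                           # block until every background job is done
ls *.log | wc -l                               # all three jobs have finished
```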


## TASK 12: I need to run programme X that requires 2 input files and cannot read compressed files. My input files are very large and disk space is always precious. How can I avoid first having to uncompress my input files and later recompress them when I have run the programme X?

This is where Unix process substitution comes in handy. Admittedly, the problem presented for this task is not very common anymore, but process substitution is a great feature for many other tasks as well. I'll give some examples below. Process substitution is the ace that you should have up your sleeve when simple piping from STDOUT into STDIN is not enough to do the job.

We'll need starcode for this task. It is our programme X. Please install it and put it in your PATH.

$ git clone https://github.com/gui11aume/starcode.git
$ cd starcode
$ make
$ cp starcode ~/prog
$ cd ..

starcode is a clustering programme. Here we want to use it to collapse (almost) identical read pairs into a set of non-redundant, unique read pairs. Let's have a look at the input files.

$ cd 12_TASK
$ ll

There should be a single-end (SE) and a paired-end (PE) read fastq file for you. Let's check that they are actually paired and not out of phase for some reason (maybe during quality filtering, for instance):

$ paste <(zcat SE.fq.gz | awk '(NR-1)%4==0') <(zcat PE.fq.gz | awk '(NR-1)%4==0') | less -S


You should see the headers of the SE reads in the left column and the headers of the PE reads in the right column. If the read pairs are in phase then the right column should be equal to the left column except for a "1" replaced by a "2" in one position. You can just browse to convince yourself that the two files are in phase, or run the following if you need to be absolutely sure not to have missed something:

$ diff <(zcat SE.fq.gz | awk '(NR-1)%4==0') <(zcat PE.fq.gz | awk '(NR-1)%4==0' | sed 's/ 2/ 1/')

This command should print nothing if everything is ok. Now, let's first create an output directory, then cluster with starcode:

$ mkdir UNIQUE
$ starcode -d 2 -t 4 -c --non-redundant -1 <(zcat SE.fq.gz) -2 <(zcat PE.fq.gz) \
--output1 >(gzip > UNIQUE/SE.uniq.fq.gz) --output2 >(gzip > UNIQUE/PE.uniq.fq.gz)

The starcode flags -1 and -2 as well as --output1 and --output2 usually only accept file names. What we are creating with the <() and >() syntax are temporary FIFO's (first-in-first-out buffers) that look like files that starcode can open. They are like doors through which we can pass the output of another process, <(), or that lead to the input of another process, >(). One thing to note is that all processes are running at the same time: two decompression processes, starcode and two compression processes. If there are multiple cores on your machine, this command automatically makes use of them, but it also works on a single core machine.

Ok, one more example to showcase the use of process substitution. Assume for one moment you wanted to concatenate those SE and PE reads, but mark the junction with a sequence of N's and furthermore reverse complement the PE reads. We will need seqtk for that. So, as usual, please install it before trying to execute the next command.

$ cd ..
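Process substitution is easy to experiment with on toy data, independent of starcode:

```shell
# diff expects two file names; <( ) lets us hand it the output of two pipelines instead.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "in phase"
# prints "in phase"
```

If the two streams differed, diff would print the differences and the message would not appear.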
$ git clone https://github.com/lh3/seqtk.git
$ cd seqtk
$ make
$ cp seqtk ~/prog
$ cd ../12_TASK

Here is one way to do that:

$ paste <(zcat SE.fq.gz | awk '(NR-2)%4==0') <(seqtk seq -r PE.fq.gz | awk '(NR-2)%4==0') | \
sed 's/[[:space:]]/NNNNNNNNNN/' | less -S


## Endnote

This is the end of the advanced Unix session. If you've made it this far, CONGRATULATIONS !!! Reward yourself with a beer or whatever you like while watching this video. Cheers!

I really hope that you could get a bit comfortable with Unix through this tutorial and are now excited to apply your newly learned skills to your own data. Even if you have learned only half of what is in the basic and advanced part of this tutorial, it should help you A LOT with your everyday work on bioinformatic data analyses. After a couple of years and a few more papers published, please remember Arnie's wise words: "None of us can make it alone!". That means it's going to be your turn to give something back to the community and share your knowledge with others!

Please feel free to post comments, questions, or improvements to this protocol. Happy to have your input!