Cowboy


Overview

OSUHPCC's newest supercomputer, Cowboy, was funded by NSF MRI grant “Acquisition of a High Performance Compute Cluster for Multidisciplinary Research,” OCI-1126330, 9/1/11-8/31/14, $908,812, PI Brunson.

Cowboy, from Advanced Clustering Technologies, consists of:

  • 252 standard compute nodes, each with dual Intel Xeon E5-2620 “Sandy Bridge” hex-core 2.0 GHz CPUs and 32 GB of 1333 MHz RAM, and
  • two “fat nodes” each with 256 GB RAM and an NVIDIA Tesla C2075 card.
  • The aggregate peak speed is 48.8 TFLOPS, with 3,048 cores and 8,576 GB of RAM.
  • Cowboy also includes 92 TB of globally accessible high-performance disk provided by three shelves of Panasas ActivStor12. Each shelf includes 20x 2 TB drives and provides peak speeds of 1500 MB/s read and 1600 MB/s write; the total solution provides an aggregate of 4.5 GB/s read and 4.8 GB/s write.
  • The interconnect networks are InfiniBand for message passing, Gigabit Ethernet for I/O, and an Ethernet management network. The InfiniBand message-passing fabric is Mellanox ConnectX-3 QDR in a 2:1 oversubscription: a total of 15x MIS5025Q switches provide both the leaf and spine components, with each leaf connecting to 24 compute nodes and carrying 12x 40 Gb QDR links to the spine. Point-to-point latency is approximately 1 microsecond. The Ethernet network includes 11 leaf gigabit switches that each connect to 24 compute nodes; each leaf is uplinked via 2x 10 G network ports to a 64-port Mellanox MSX1016 10 Gigabit spine switch, giving a 1.2:1 oversubscription.

Quick start for experienced OSUHPCC users

Cowboy is set up very similarly to Pistol Pete. To get started, here are the primary differences:

  1. Please report all questions, comments and support requests to hpcc@okstate.edu.
  2. Your /home/username directory has a 25 GB quota.
  3. The 'default' queue is now called the 'batch' queue and has a 120 hour walltime limit. (for example: #PBS -q batch)
  4. The compute nodes have 12 cores each and 32 GB RAM. You should change your node request to ppn=12 from ppn=8 when using an entire node. (for example: #PBS -l nodes=4:ppn=12) As before, if you want 2 jobs to run per node, use ppn=6 and so on.
  5. The new directory for scratch files is called /scratch/username (from /fasttmp/username on PP).
  6. Cowboy has 2 nodes, each with 256 GB RAM and an NVIDIA Tesla C2075 card. To use these nodes, use the 'bigmem' queue.
  7. We are implementing the "modules" system to handle environment variables - see the software section below for more information.

System Access

To request an account, follow the directions here: Request HPCC account page

Once you have your account, follow the directions for logging in here: Logging in. Cowboy's hostname is cowboy.hpc.okstate.edu
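
For example, from a Linux or macOS terminal you would connect with ssh (here username is a stand-in for your actual username):

 ssh username@cowboy.hpc.okstate.edu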

You may find this new user training page helpful.

File systems

  • Each user has two basic directories for file storage, /home/username and /scratch/username. When you first log in, your current working directory is /home/username. (NB: the word username is a stand-in for your actual username.)
    • /home/username has a quota of 25 GB; store your source code and executables here. This filesystem is NOT BACKED UP, but it does have snapshots. To check your current usage, type du -sh from your home directory.
    • /scratch/username is for large files and large collections of files. This filesystem is NOT BACKED UP.
  • /opt contains primarily applications needed by users. To request that an application be installed, please email hpcc@okstate.edu.
  • If you need archival storage, please contact dana.brunson@okstate.edu for options.
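
A typical workflow is to keep source code under /home and stage large data to /scratch before running jobs. As a minimal sketch (mydata.tar is a hypothetical file name):

 cd /home/username
 du -sh                             # check usage against the 25 GB quota
 cp mydata.tar /scratch/username/   # stage large input data to scratch
 cd /scratch/username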

Job Queues

Cowboy's job queues facilitate fair-share scheduling of the resources. To use a particular queue, put #PBS -q queuename in your submission script.

The queues are:

  • batch: The 'batch' queue is the default queue. The walltime limit is 120 hours (120:00:00). If your job needs to run longer than this and your software does not have checkpoint/restart capabilities, please email hpcc@okstate.edu for assistance as far in advance of your need as possible.
  • express: The 'express' queue is for short jobs and debugging/testing scripts. The express queue contains 2 compute nodes and has a walltime limit of one hour (1:00:00).
  • bigmem: The 'bigmem' queue directs jobs to one of the two compute nodes that have 256 GB RAM and an NVIDIA Tesla C2075 GPU card. The walltime limit is 120 hours (120:00:00).
  • killable: The 'killable' queue is for long running jobs that are unable to use a checkpoint/restart feature. The walltime limit is 504 hours (504:00:00). Jobs in this queue are subject to being killed, at the discretion of HPCC administrators, for hardware and software issues.
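
For example, to send a short test job to the 'express' queue, the top of a submission script might look like this (the walltime request must stay within the one-hour limit):

 #!/bin/bash
 #PBS -q express
 #PBS -l nodes=1:ppn=12
 #PBS -l walltime=00:30:00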

Software on Cowboy

A collection of program libraries and software packages is supported on Cowboy for a variety of disciplines. If you need a particular software package for your work, let us know at hpcc@okstate.edu. Also check the Cowboy software list for currently installed applications and usage tips. (Also check the search box in the left column of this wiki.) To organize the available software and to simplify the use of software packages and versions, we are using the modules utility.

Modules

The Modules utility provides an easy way to set a variety of environment variables, including $PATH, $MANPATH, and $LD_LIBRARY_PATH. Using modules to set environment variables nearly always removes the need to edit your .bashrc file.

To see the modules currently available use the command:

module avail

To load a module type:

module load <module_name>

To see the list of currently loaded modules type:

 module list

To search for a module with keywords type:

 module key <search terms>
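
Putting these together, a typical session might look like the following (python/2.7.3 is one of the modules in the software table below):

 module avail              # list every available module
 module key python         # search for python-related modules
 module load python/2.7.3  # set $PATH and friends for Python 2.7.3
 module list               # confirm which modules are loaded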

Compiling Codes

Cowboy has the GNU compilers and the Intel Cluster Suite installed. MPI implementations installed include mvapich2, openmpi, and mpich2. To see what's available, type:

 module avail

To use OpenMPI version 1.4 with the Intel compilers, type:

module load openmpi-1.4/intel

To switch to mvapich with the gnu 4.6.2 compilers, type:

module swap openmpi-1.4/intel mvapich/gnu-4.6.2

NOTE: To run an application, you must use the mpirun launcher that corresponds to the MPI implementation the code was compiled with. To do this, include the command module load openmpi-1.4/intel inside your submit script, replacing the MPI module with the one your code uses.
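
As a minimal sketch, building an MPI code with the Intel-built OpenMPI might look like this (hello.c is a hypothetical source file; mpicc wraps the underlying compiler selected by the loaded module):

 module load openmpi-1.4/intel
 mpicc hello.c -o hello

The same module must then be loaded in the submit script so that the matching mpirun is used at run time.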


Job submission script examples

For applications using MPI (applications that can use more than one compute node):

 
#!/bin/bash

#PBS -q batch
#PBS -l nodes=16:ppn=12
#PBS -l walltime=24:00:00
#PBS -j oe

module load mvapich2-1.8/intel
cd $PBS_O_WORKDIR
NP=`cat $PBS_NODEFILE | wc -l`
mpirun -np ${NP} ./yourexecutable

Note the use of the backtick (or grave accent) character in the NP line. It is not the apostrophe, but the key in the upper left corner of the keyboard that also has the tilde ~ character.
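
If you prefer to avoid the backtick entirely, bash's $(...) command substitution is equivalent; either form counts the lines of $PBS_NODEFILE to get the process count:

 NP=$(wc -l < $PBS_NODEFILE)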

For applications that run within a single compute node:

Each compute node on Cowboy has 12 cores and 32 GB of RAM. If your code can use all 12 cores efficiently, request all 12 cores in your job script (#PBS -l nodes=1:ppn=12). However, if your code only utilizes a single core, it is better to request only a single core (#PBS -l nodes=1:ppn=1). Here is where it gets a bit tricky: if your application uses more than 2.5 GB RAM, you will need to request enough cores to correspond to the amount of RAM your application uses. For example:

  • If your code needs all 32 GB RAM you want to request all the cores of the node so that your job has exclusive use of the node: #PBS -l nodes=1:ppn=12.
  • If your code needs 16 GB RAM, then you only need to request half the node, or 6 cores: #PBS -l nodes=1:ppn=6. In this situation, if you submit 2 jobs, then both of them may run on the same node.
  • In summary, request 1 core for each 2.5 GB RAM that your application needs.
  • If your application needs more than 32 GB of (shared) RAM, please use the bigmem queue. The bigmem nodes have 256 GB RAM and 12 cores; request 1 core for each 20 GB RAM your code requires.
 
#!/bin/bash

#PBS -q batch
#PBS -l nodes=1:ppn=12
#PBS -l walltime=24:00:00
#PBS -j oe

module load <software_module_name>
cd $PBS_O_WORKDIR

<put the commands to run your application here>
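
For comparison, a minimal sketch of a single-core job script (following the 1-core-per-2.5-GB rule above; ./yourserialapp is a stand-in for your program):

 #!/bin/bash
 #PBS -q batch
 #PBS -l nodes=1:ppn=1
 #PBS -l walltime=24:00:00
 #PBS -j oe

 cd $PBS_O_WORKDIR
 ./yourserialapp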

More information is on the Submitting jobs to the cluster wiki page.

Software installed on Cowboy

Please email hpcc@okstate.edu with questions and requests for other applications.

Compilers:

Software Version(s) Compiler(s) Location Modulefile Dependencies
cuda toolkit-4.2 na /opt/cuda cuda/toolkit-4.2
cuda toolkit-5.5 na /opt/cuda5 cuda/toolkit-5.5
gnu 4.4.6 na /usr/bin none needed
gnu 4.6.2 na /act/gcc-4.6.2/bin gcc-4.6.2
gnu 4.7.2 na /act/gcc-4.7.2/bin gcc-4.7.2
intel 12.1.4 na /opt/intel/composer_xe_2011_sp1.10.319/bin/intel64
intel 13.0.1 na /opt/intel/composer_xe_2013.1.117/bin/intel64 none needed
openmpi 1.4 intel 12.1.4 /act/openmpi-1.4/intel/ openmpi-1.4/intel
openmpi 1.4 gnu 4.4.6 /act/openmpi-1.4/gnu/ openmpi-1.4/gnu
openmpi 1.4 gnu 4.6.2 /act/openmpi-1.4/gnu-4.6.2/ openmpi-1.4/gnu-4.6.2
mvapich2 1.8 intel 12.1.4 /act/mvapich2-1.8/intel/ mvapich2-1.8/intel

Applications:

Software Version(s) Compiler(s) Location Modulefile Dependencies
ABINIT 6.12.3 openmpi-1.4/gcc /opt/abinit/6.4.3/openmpi-1.4-gcc/ abinit/6.12.3-openmpi-gnu
ABINIT 7.0.4 openmpi-1.4/intel /opt/abinit/7.0.4/openmpi-1.4-intel/ abinit/7.0.4-openmpi-intel
ABySS 1.3.6 gcc /opt/abyss/1.3.6/gcc/ abyss/1.3.6
ABySS 1.3.6 openmpi-1.4/intel /opt/abyss/1.3.6/openmpi-1.4-intel/ abyss/1.3.6-openmpi-intel
ActivePerl 5.18.1.1800 gcc /opt/activeperl/5.18.1.1800/gcc/ activeperl/5.18.1.1800
ALLPATHS-LG site 45553 gcc-4.7.2 /opt/allpathsLG/45553/gcc-4.7.2/ allpaths-lg/45553
Amber 12 gcc /opt/amber/amber12 amber/amber12
AMOS 3.0.0 gcc /opt/amos/0.7/gcc amos/0.7
Anaconda 1.6.1 gcc /opt/anaconda/1.6.1/ anaconda/1.6.1
Argtable site 2.13 gcc /opt/argtable/2.13/gcc/ n/a
Augustus site 2.6.1 gcc /opt/augustus/2.6.1/gcc/ augustus/2.6.1
BaTS site 0.9 prebuilt /opt/bats/0.9/prebuilt/ bats/0.9
Beagle 1.0 gcc /opt/beagle/1.0/gcc/ n/a
Beast 1.7.4 pre-built /opt/beast/1.7.4/prebuilt/ tbd
Bioperl 1.6.901 gcc Within Perl 5.16.1 bioperl/1.6.901 Perl
Biopython 1.60 gcc /opt/biopython/1.60/gcc/ biopython/1.60
Blast+ 2.2.23+ gcc /opt/blast+/2.2.23/gcc/ blast+/2.2.23
Blast+ 2.2.26+ pre-built /opt/blast+/2.2.26/prebuilt/ blast+/2.2.26
Blast+ 2.2.27+ pre-built /opt/blast+/2.2.27/prebuilt/ blast+/2.2.27
Blast+ 2.2.28+ pre-built /opt/blast+/2.2.28/prebuilt/ blast+/2.2.28
Blat blat35 gcc /opt/blat/blat35/gcc/ blat/blat35
Boost 1.51.0 gcc /opt/boost/1.51.0/gcc/ boost/1.51.0
Boost 1.54.0 gcc /opt/boost/1.54.0/gcc/ boost/1.54.0
Bowtie 0.12.8 gcc /opt/bowtie/0.12.8/gcc/ bowtie/0.12.8-gcc
Bowtie 0.12.8 pre-built /opt/bowtie/0.12.8/prebuilt/ bowtie/0.12.8
Bowtie2 2.0.2 gcc /opt/bowtie2/2.0.2/gcc/ bowtie2/2.0.2
Bowtie2 2.0.2 pre-built /opt/bowtie2/2.0.2/prebuilt/ tbd
BWA 0.7.5a gcc /opt/bwa/0.7.5a/gcc/ bwa/0.7.5a
CENSOR 4.2.28 gcc /opt/censor/4.2.28/gcc/ censor/4.2.28
Circos site 0.63-4 prebuilt /opt/circos/0.63-4/prebuilt/ circos/0.63-4
Clustal Omega site 1.1.0 gcc /opt/clustal-omega/1.1.0/gcc/ clustal-omega/1.1.0-gcc Argtable
Cufflinks 2.0.2 pre-built /opt/cufflinks/2.0.2/prebuilt/ cufflinks/2.0.2
Cufflinks 2.1.1 pre-built /opt/cufflinks/2.1.1/prebuilt/ cufflinks/2.1.1
EMBOSS site 6.5.7 gcc /opt/emboss/6.5.7/gcc/ emboss/6.5.7
EuGene site 4.0a gcc /opt/eugene/4.0a/gcc/ eugene/4.0a gd
Exonerate 2.2.0 prebuilt /opt/exonerate/2.2.0/prebuilt/ exonerate/2.2.0
Expat 2.1.0 prebuilt /opt/expat/2.1.0/gcc/ expat/2.1.0
FastQC 0.10.1 prebuilt /opt/fastqc/0.10.1 fastqc/0.10.1
Fastx_toolkit 0.0.13.2 gcc /opt/fastx_toolkit/fastx_toolkit-0.0.13.2/ fastx_toolkit/0.0.13.2 libgtextutils-0.6.1
FFTW 2.1.5 openmpi-1.4-intel /opt/fftw/2.1.5/openmpi-1.4-intel/ fftw/2.1.5
FFTW 3.3.2 openmpi-1.4-intel /opt/fftw/3.3.2/openmpi-1.4-intel/ fftw/3.3.2
Gamess mvapich2-1.8-intel /opt/gamess/may2012/mvapich2-1.8-intel/ gamess/may2012-mvapich2-intel
GARM 0.7 pre-built /opt/garm/0.7/perl garm/0.7
Gaussian 03 G03 intel /opt/gaussian/g03/ gaussian/g03
Gaussian 09 G09 C.01 intel /opt/gaussian/g09/ gaussian/g09
GD site mercurial: 2/15/2013 gcc /opt/gd/hg-2-15-2013/
GenomeTools 1.5.1 gcc /opt/genometools/1.5.1/gcc/ genometools/1.5.1
GHC 7.4.1 gcc /opt/ghc/7.4.1/gcc/ ghc/7.4.1
Glimmer 3.02 gcc /opt/glimmer/3.02/gcc/ glimmer/3.02
GlimmerHMM site 3.0.1 gcc /opt/glimmerhmm/3.0.1/gcc/ glimmerhmm/3.0.1
Gromacs 4.5.5 openmpi-1.4-intel /opt/gromacs/4.5.5/openmpi-1.4-intel/ gromacs/4.5.5
Gromacs-XTC 1.1 gcc /opt/gromacs/xtc-1.1/gcc/ gromacs-xtc/1.1
GNU Scientific Library (GSL) 1.16 gcc /opt/gsl/1.16/gcc/ gsl/1.16
Gurobi 5.0.1 pre-built /opt/gurobi/5.0.1/linux64/ gurobi/5.0.1 (Cimarron network license)
Haskell 2012.2.0.0 icc /opt/haskell/2012.2.0.0/icc/ haskell/2012.2.0 GHC
HMMER 2.4i gcc /opt/hmmer/2.4i/gcc/ hmmer/2.4i
HMMER 3.0 gcc /opt/hmmer/3.0/gcc/ hmmer/3.0
HMMER 3.1b1 gcc /opt/hmmer/3.1b1/gcc/ hmmer/3.1b1
khmer site 0.4 gcc /opt/khmer/gcc/0.4 khmer/0.4 python 2.6.6
LAMMPS 4Jul12 openmpi-1.4-intel /opt/lammps/4Jul12/openmpi-1.4-intel/ lammps/4Jul12 FFTW-2.1.5
LAST 320 gcc /opt/last/320/gcc/ last/320
Macaulay2 2.1.6 gcc /opt/macaulay2/2.1.6/gcc/ macaulay2/2.1.6
Maker 2.28 gcc /opt/maker/2.28/gcc/ maker/2.28 perl/5.18.1; augustus; bioperl/1.6.1; blast+; exonerate; repeatmasker; snap
Maker 2.28 openmpi-1.4-intel /opt/maker/2.28/openmpi-1.4-intel/ maker/2.28-openmpi-intel openmpi-1.4/intel; perl/5.18.1; augustus; bioperl/1.6.1; blast+; exonerate; repeatmasker; snap
Mathematica 9.0.1 na /opt/mathematica/9.0/linux mathematica/9.0.1
Matlab R2012b na /opt/matlab/R2012b/ matlab/R2012b
MERLIN 1.1.2 gcc /opt/merlin/1.1.2/gcc/
Metasim 0.9.5 na /opt/metasim/0.9.5/ metasim/0.9.5
MGLTools site 1.5.6 pre-built /opt/mgltools/1.5.6/prebuilt/ mgltools/1.5.6
MGLTools site 1.5.7rc1 python /opt/mgltools/1.5.6/prebuilt/ mgltools/1.5.7rc1 python/2.7.3
Migrate-n site 3.4.4 openmpi-1.4-intel, gcc /opt/migrate-n/3.4.4/ migrate-n/openmpi-1.4-intel
Mira site 3.4.1.1 gcc /opt/mira/3.4.1.1/gcc/ mira/3.4.1.1
Mira site 4.0rc4 gcc /opt/mira/4.0rc4/gcc/ mira/4.0rc4 gcc-4.6.2
Mosaik site 1.1.0021 prebuilt /opt/mosaik/1.1.0021/prebuilt/ mosaik/1.1.0021
Mosaik site 2.1.73 prebuilt /opt/mosaik/2.1.73/prebuilt/ mosaik/2.1.73
Mothur 1.27.0 gcc /opt/mothur/1.27.0/gcc/ mothur/1.27.0
Mothur 1.27.0 openmpi-1.4-gnu /opt/mothur/1.27.0/openmpi-1.4-gnu/ mothur/1.27-0-openmpi-gnu
Mothur 1.27.0 openmpi-1.4-intel /opt/mothur/1.27.0/openmpi-1.4-intel/ mothur/1.27.0-openmpi-intel
Mothur 1.29.2 gcc /opt/mothur/1.29.2/gcc/ mothur/1.29.2
MrBayes 3.1.2 openmpi-1.4-intel /opt/mrbayes/3.1.2/openmpi-1.4-intel/ mrbayes/3.1.2-openmpi-intel Beagle
MrBayes 3.2.1 openmpi-1.4-intel /opt/mrbayes/3.2.1/openmpi-1.4-intel/ mrbayes/3.2.1-openmpi-intel Beagle
mreps 2.5 gcc /opt/mreps/2.5/gcc/ mreps/2.5
MSTMAP na gcc /opt/mstmap/
MUMmer 3.23 gcc /opt/mummer/3.23/gcc/ mummer/3.23
MUSCLE 3.8.31 na /opt/muscle/3.8.31/prebuilt/ muscle/3.8.31
NGOPT 20120518 na /opt/ngopt/ngopt_a5pipeline_linux-x64_20120518 ngopt/20120518 Perl
NMRPipe 7.9 na /opt/nmrpipe/7.9/prebuilt/
NWChem 6.3 openmpi-1.4-intel /opt/nwchem/6.3/openmpi-1.4-intel/ nwchem/6.3-openmpi-intel
OpenCV 2.4.5 gcc /opt/opencv/2.4.5/gcc/ opencv/2.4.5
PASA 2 r20130605p1 gcc /opt/pasa2/r20130605p1/gcc/ pasa2/r20130605p1 activeperl/5.18.1.1800; blat; gmap
Paup 4.0b10 na /opt/paup/4.0b10/prebuilt/ paup/4.0b10
Perl 5.16.1 gcc /opt/perl/5.16.1/gcc/ perl/5.16.1
Perl 5.18.1 gcc /opt/perl/5.18.1/gcc/ perl/5.18.1
Phd2fasta site 0.990622.f gcc 4.4.6 /opt/phd2fasta/0.990622.f/ phd2fasta/0.990622.f
Phrap site 1.090518 gcc 4.4.6 /opt/phrap/1.090518 phrap/1.090518
Phred site 0.020425.c gcc 4.4.6 /opt/phred/0.020425.c phred/0.020425.c
PhyML 3.0 pre-built /opt/phyml/3.0/prebuilt/ phyml/3.0
PILER 1.0 gcc /opt/piler/1.0/gcc/ piler/1.0
Prodigal 2.60 prebuilt /opt/prodigal/prodigal.v2_60/ prodigal/2_60
ProtTest 3.2 pre-built must be placed in user's dir na
Python 2.6.6 gcc /usr/bin/ none needed
Python 2.7.3 gcc /opt/python/2.7.3/gcc/ python/2.7.3
Python 3.2.3 gcc /opt/python/3.2.3/gcc/ python/3.2.3
Quantum-Espresso 5.0.1 openmpi-1.4-intel /opt/espresso/5.0.1/openmpi-1.4-intel/ espresso/5.0.1-openmpi-intel
QTL Cartographer site 1.17j intel /opt/QTLCartUnix/1.17j/intel qtlcart/1.17j-intel
QTL Cartographer site 1.17j gcc /opt/QTLCartUnix/1.17j/gcc qtlcart/1.17j-gcc
R 2.15.1 icc /opt/r/2.15.1/icc/ R/2.15.1
R 2.15.2 gcc /opt/r/2.15.2/gcc/ R/2.15.2
R 3.0.2 gcc /opt/r/3.0.2/gcc-4.6.2/ R/3.0.2 gcc-4.6.2
RECON 1.07 gcc /opt/recon/1.07/gcc/ recon/1.07
RepeatMasker 4.0.1 na /opt/repeatmasker/4.0.1/perl/ repeatmasker/4.0.1 perl; trf/4.07b; rmblast/2.2.7
RepeatModeler 1.0.7 gcc /opt/repeatmodeler/1.0.7/gcc/ repeatmodeler/1.0.7 repeatmasker/4.0.1
RepeatScout 1.0.5 icc /opt/repeatscout/1.0.5/icc/ repeatscout/1.0.5
REPET 2.0 na /opt/repet/2.0/ repet/2.0 python/2.7.3; wublast/2.0a19; blast+/2.2.23; recon/1.07; piler/1.0; censor/4.2.28; repeatmasker/4.0.1; trf/4.07b; mreps/2.5; genometools/1.5.1; hmmer/3.0; mysql; torque
RMBlast 2.2.27 gcc /opt/rmblast/2.2.27/gcc/ rmblast/2.2.27
Samtools 0.1.18 gcc /opt/samtools/0.1.18/gcc/ samtools/0.1.18-gcc
Samtools 0.1.18 icc /opt/samtools/0.1.18/icc/ samtools/0.1.18-icc
screed site 0.7 gcc /opt/screed/0.7/screed/ screed/0.7 python/2.6.6
Scythe 0.981 gcc /opt/scythe/0.981 scythe/0.981
Sickle 1.200 gcc /opt/sickle/1.200 sickle/1.200
Siesta 3.1 openmpi-1.4-intel /opt/siesta/3.1/openmpi-1.4-intel/ siesta/3.1-openmpi-intel
SignalP 4.1 prebuilt /opt/signalp/signalp-4.1/ signalp/4.1
SNAP 0.15.4 prebuilt /opt/snap/0.15.4/prebuilt/ snap/0.15.4
SOAPdenovo r240 gcc /opt/soapdenovo/r240 soapdenovo/r240
SPAGeDi 1.3a pre-built /opt/spagedi/1.3a/prebuilt/ spagedi/1.3a
T-Coffee 9.03 gcc /opt/t-coffee/9.03/gcc/ t-coffee/9.03 Perl
Tandem Repeats Finder 4.07b pre-built /opt/trf/4.07b/prebuilt/ trf/4.07b
TAU 2.21.4 openmpi-1.4-intel /opt/tau/2.21.4/openmpi-1.4-intel/ tau/2.21.4
Tinker 6.1 ifort /opt/tinker/6.1/ifort/ tinker/6.1-ifort
Tinker 6.1 pre-built /opt/tinker/6.1/prebuilt/ tinker/6.1
TMHMM 2.0c prebuilt /opt/tmhmm/tmhmm-2.0c/ tmhmm/2.0c
Tophat 2.0.4 pre-built /opt/tophat/2.0.4/prebuilt/ tophat/2.0.4 bowtie; samtools; python/3.2.3
Tophat 2.0.6 pre-built /opt/tophat/2.0.6/prebuilt/ tophat/2.0.6 bowtie; samtools; python/3.2.3
TrimGalore 0.3.3 pre-built /opt/trimgalore/0.3.3/prebuilt/ trimgalore/0.3.3 FastQC; Cutadapt
Trimmomatic site 0.27 pre-built /opt/trimmomatic/0.27/prebuilt/ trimmomatic/0.27
Trinity site r2013-02-25 gcc /opt/trinity/r20130225/gcc/ trinity/r20130225 bowtie
Trinity site r2013-08-14 gcc /opt/trinity/r20130814/gcc/ trinity/r20130814 bowtie
Velvet 1.2.10 gcc /opt/velvet/1.2.10 velvet/1.2.10
WU-Blast 2.0a19 gcc /opt/wublast/2.0a19/gcc/ wublast/2.0a19
