
NewSoftware


11/02/2016 - Siesta 4.1-b1

<pre>module load siesta</pre>

10/10/2016 - Jython 2.7.0

<pre>module load jython</pre>

Note: it is likely that the default python3 module will not allow the python2 module to load. To work around this, on Kong create a file ~/.modules containing the following two lines:

<pre>module load sge
module load python2</pre>
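Assuming ~/.modules is read at your next login to Kong (as the note above implies), a quick way to verify the workaround is to open a new session and list the loaded modules:

<pre># Should show sge and python2 among the currently loaded modules
module list</pre>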

09/28/2016 - lammps 30Jul16

<pre>module load lammps/[cpu,gpu]/30Jul16</pre>

08/23/2016 - Ansys 17.1 for Linux 6,7

<pre>module load ansys/17.1</pre>

08/09/2016 - FMRIB Software Library (FSL 5.0.9) for Linux 6,7

FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.

<pre>module load fsl/5.0.9</pre>
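As a quick usage sketch (input.nii.gz is a hypothetical input file, not something provided by the installation), FSL's command-line tools become available once the module is loaded; for example, the brain extraction tool:

<pre>module load fsl/5.0.9
# bet extracts the brain from a whole-head image; input.nii.gz is a placeholder
bet input.nii.gz brain_extracted</pre>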

08/02/2016 - Open MPI 2.0.0 for Scientific Linux 6,7

<pre>module load ompi/2.0.0</pre>

06/15/2016 - gcc 6.1.0 for Scientific Linux 6,7

<pre>module load gcc/6.1.0</pre>

06/15/2016 - gcc 5.4.0 for Scientific Linux 6,7

<pre>module load gcc/5.4.0</pre>

06/15/2016 - gcc 5.3.0 for Scientific Linux 6,7

<pre>module load gcc/5.3.0</pre>

05/13/2016 - Anaconda python 3.5.1

<pre>module load python3</pre>

05/13/2016 - Anaconda python 2.7.11

<pre>module load python2</pre>

03/18/2016 - R 3.2.4

<pre>module load R-Project</pre>

02/22/2016 - gcc 5.3.0 for Scientific Linux 7

<pre>module load gcc5/5.3.0-sl7</pre>

02/22/2016 - gcc 5.3.0 for Scientific Linux 6

<pre>module load gcc5</pre>

02/04/2016 - NWchem 6.6

<pre>module load nwchem</pre>

02/03/2016 - Gromacs 5.1.1

<pre>module load gromacs/5.1.1</pre>

10/19/2015 - Intel Parallel Studio XE Cluster Edition

See "Intel" under "Compilers"

09/22/2015 - GNU gcc 5.2.0

GNU compilers. This installation does not include gfortran.

<pre>module load gcc5</pre>

05/14/2015 - mafTools v01

Bioinformatics tools for working with the Multiple Alignment Format (MAF).

<pre>module load maftools</pre>

04/17/2015 - R 3.1.2

<pre>module load R-Project</pre>

03/04/2015 - FreeFem++

FreeFem++ is a partial differential equation solver with its own scripting language. FreeFem++ scripts can solve multiphysics nonlinear systems in 2D and 3D.

On Stheno: <pre>module load freefem++</pre>

On Kong and all SL6 AFS clients: <pre>module load freefem++-sl6</pre>
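As a minimal sketch of running a script (poisson.edp is a hypothetical name, standing in for any FreeFem++ script you have written), pass the .edp file to the FreeFem++ executable after loading the module:

<pre>module load freefem++
# Run the (hypothetical) script poisson.edp through the FreeFem++ interpreter
FreeFem++ poisson.edp</pre>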

02/10/2015 - Anaconda Python

Enterprise-ready Python distribution for large-scale data processing, predictive analytics, and scientific computing.

To use Anaconda:

<pre>module load anaconda</pre>

Note that Anaconda includes Open MPI and the mpi4py Python module.

Example Python script using mpi4py:

<pre>
#!/afs/cad/linux/anaconda-2.1.0/anaconda/bin/python
# Python 2 example: scatter a list from rank 0, increment on each rank, gather back.
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 builds one list element per MPI process.
    data = [(x+1)**x for x in range(size)]
    print 'we will be scattering:', data
else:
    data = None

# Each rank receives one element of the scattered list.
data = comm.scatter(data, root=0)
data += 1
print 'rank', rank, 'has data:', data

# Collect the modified values back on rank 0.
newData = comm.gather(data, root=0)

if rank == 0:
    print 'master:', newData
</pre>
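Before submitting through SGE, the script can be tried interactively on a few ranks. This is a sketch assuming the example above was saved as scatter.py (a placeholder name):

<pre>module load anaconda
# Launch 4 MPI ranks of the example script; scatter.py is a placeholder name
/afs/cad/linux/anaconda-2.1.0/anaconda/bin/mpiexec -np 4 python scatter.py</pre>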

Sample submit script for the mpi4py Python script:

<pre>
#!/bin/sh
#
# EXAMPLE OMPI SCRIPT FOR SGE
# Modified by Gwolosh from Basement Supercomputing 1/2/2006 DJE
# To use, change "OMPI_JOB", "NUMBER_OF_CPUS",
# "OMPI_PROGRAM_NAME" and "UCID" to real values.
#
# Your job name
#$ -N OMPI_JOB
#
# Use current working directory
#$ -cwd
#
# Join stdout and stderr
#$ -j y
#
# pe request for OMPI. Set your number of processors here.
# Make sure you use the "ompi" parallel environment.
#$ -pe ompi NUMBER_OF_CPUS
#
# Email at beginning and end of job
#$ -m be
#$ -M UCID@njit.edu
#
# Run job through bash shell
#$ -S /bin/bash
#
# Generate a machine file
echo "Got $NSLOTS processors."
echo $PE_HOSTFILE
machines=$TMPDIR/machines
touch $machines
cat $PE_HOSTFILE | awk '{print $1}' >> $machines
# For reporting only
cat $machines
#
# Use the full pathname to make sure we are using the right mpiexec
/afs/cad/linux/anaconda-2.1.0/anaconda/bin/mpiexec -f $machines -np $NSLOTS /full/path/to/OMPI_PROGRAM_NAME
</pre>
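To run the job, save the submit script (ompi_job.sh below is a placeholder name), replace the placeholders named in its header, and hand it to SGE:

<pre># Submit the job script, then check its status in the queue
qsub ompi_job.sh
qstat -u $USER</pre>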