
This site is deprecated and will be decommissioned shortly. For current information regarding HPC visit our new site: hpc.njit.edu

UpsalaMPI

From NJIT-ARCS HPC Wiki
Revision as of 16:36, 5 October 2020 by Hpcwiki1 dept.admin (Importing text file)


A comparison of equivalent job scripts in SGE and SLURM.

SGE for an MPI application
#!/bin/bash
#
#$ -N test
#$ -j y
#$ -o test.output
#$ -cwd
#$ -M ucid@njit.edu
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id
#$ -R y
# Parallel environment dmp4 with 16 slots
#$ -pe dmp4 16
#$ -l mem=2G
# memory is counted per process on node
module load module1 module2 ...
mpirun your_application
SLURM for an MPI application

#!/bin/bash -l
# NOTE the -l (login) flag!
#
#SBATCH -J test
#SBATCH -o test.output
#SBATCH -e test.output
# Sending stderr to the same file as stdout is the default
# in SLURM when -e is omitted
#SBATCH --mail-user=ucid@njit.edu
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id
#
# Partition "node", 16 MPI tasks
#SBATCH -p node -n 16
#
module load module1 module2 ...
mpirun your_application
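The directive pairs above map mechanically from one scheduler to the other. As a minimal sketch of that mapping, the hypothetical helper below rewrites a few common SGE directives into their SLURM equivalents with sed; it covers only the options shown in this comparison, not the full SGE directive set.

```shell
#!/bin/bash
# convert_sge_to_slurm (hypothetical helper): rewrite a handful of
# common SGE directives into their SLURM equivalents, based on the
# correspondences shown above. Reads a script on stdin, writes on stdout.
convert_sge_to_slurm() {
  sed -e 's/^#\$ -N /#SBATCH -J /' \
      -e 's/^#\$ -o /#SBATCH -o /' \
      -e 's/^#\$ -M /#SBATCH --mail-user=/' \
      -e 's/^#\$ -l h_rt=/#SBATCH -t /' \
      -e 's/^#\$ -P /#SBATCH -A /'
}

# Example: two SGE directives become their SLURM counterparts.
printf '#$ -N test\n#$ -l h_rt=5:0:0\n' | convert_sge_to_slurm
```

Directives without a one-line equivalent (such as `#$ -pe dmp4 16` versus `#SBATCH -p node -n 16`) depend on site configuration and must be translated by hand.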