This site is deprecated and will be decommissioned shortly. For current information regarding HPC visit our new site: hpc.njit.edu
UpsalaMPI
From NJIT-ARCS HPC Wiki
A comparison of equivalent job scripts in SGE and SLURM
SGE for an MPI application:

```
#!/bin/bash
#
#$ -N test
#$ -j y
#$ -o test.output
#$ -cwd
#$ -M ucid@njit.edu
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id
#$ -R y
#$ -pe dmp4 16
#$ -l mem=2G
# memory is counted per process on node

module load module1 module2 ...

mpirun your_application
```

SLURM for an MPI application:

```
#!/bin/bash -l
# NOTE the -l (login) flag!
#
#SBATCH -J test
#SBATCH -o test.output
#SBATCH -e test.output
# Default in slurm: stderr is merged into stdout
#SBATCH --mail-user ucid@njit.edu
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id
#
#SBATCH -p node -n 16
#
module load module1 module2 ...

mpirun your_application
```