This site is deprecated and will be decommissioned shortly. For current information regarding HPC visit our new site: hpc.njit.edu
Difference between pages "HPCOffPremiseCosts" and "UpsalaMPI"
From NJIT-ARCS HPC Wiki
Latest revision as of 16:36, 5 October 2020
A comparison between job scripts in SLURM and SGE.

SGE for an MPI application:

#!/bin/bash
#
#$ -N test
#$ -j y
#$ -o test.output
#$ -cwd
#$ -M ucid@njit.edu
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id
#$ -R y
#$ -pe dmp4 16
#$ -l mem=2G
# memory is counted per process on node
module load module1 module2 ...
mpirun your_application

SLURM for an MPI application:

#!/bin/bash -l
# NOTE the -l (login) flag!
#
#SBATCH -J test
#SBATCH -o test.output
#SBATCH -e test.output
# Default in slurm
#SBATCH --mail-user ucid@njit.edu
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id
#
#SBATCH -p node -n 16
#
module load module1 module2 ...
mpirun your_application
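As a quick directive-by-directive reference, the correspondence between the two scripts above can be collected into a lookup table. This is a sketch read off the example scripts, not an exhaustive translation: the arguments (`test`, `ucid@njit.edu`, `your_project_id`, the `dmp4` parallel environment, and the `node` partition) are placeholders specific to these examples, and partition/PE names vary by site.

```python
# SGE directives from the example script above, paired with the
# SLURM directives used in the corresponding example script.
SGE_TO_SLURM = {
    "#$ -N test":           "#SBATCH -J test",                 # job name
    "#$ -o test.output":    "#SBATCH -o test.output",          # stdout file
    "#$ -j y":              "#SBATCH -e test.output",          # stderr to the same file
    "#$ -M ucid@njit.edu":  "#SBATCH --mail-user ucid@njit.edu",
    "#$ -m bea":            "#SBATCH --mail-type=ALL",         # mail on begin/end/abort
    "#$ -l h_rt=5:0:0":     "#SBATCH -t 5:0:0",                # wall-time limit
    "#$ -P your_project_id": "#SBATCH -A your_project_id",     # project / account
    "#$ -pe dmp4 16":       "#SBATCH -p node -n 16",           # 16 MPI tasks
}

for sge, slurm in SGE_TO_SLURM.items():
    print(f"{sge:24} -> {slurm}")
```

Note also that the scripts are submitted differently: the SGE script with `qsub`, the SLURM script with `sbatch`.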