
This site is deprecated and will be decommissioned shortly. For current information regarding HPC, visit our new site: hpc.njit.edu

Difference between pages "HPCOffPremiseCosts" and "UpsalaMPI"

From NJIT-ARCS HPC Wiki
<div class="noautonum">__TOC__</div>

== HPC Off-Premise Costs Document ==

=== Summary ===

Cost estimates for hosting the NJIT HPC infrastructure off-premise (in the cloud) for a three-year period are presented for Penguin Computing and are in preparation for Amazon Web Services. Cost estimates for Azure are forthcoming.

=== Purpose of the report ===

This report is the start of a cost-benefit analysis of hosting some portion - as yet to be determined, and likely to be fluid - of NJIT's HPC infrastructure off-premise.

=== Report ===
<ul>
<li> [https://wiki.hpc.arcs.njit.edu/external/off-premise/HPC3year.xlsx Three-year cost estimates, as of 10/31/2017]</li>
<li> [https://wiki.hpc.arcs.njit.edu/external/off-premise/TartanOnSiteCostProjection.xlsx Seven-year HPC expansion cost estimate]</li>
<li> A cost estimate for hosting the HPC cluster infrastructure at Azure for three years is expected by EOD 11/03/2017</li>
<li> Cost estimates for other vendors - Google Cloud Platform, IBM Bluemix, Oracle Cloud - may be obtained later, on an as-needed basis</li>
<li> Big data hardware off-premise cost estimates are not included in the current phase; they will be addressed later</li>
</ul>

=== Proposal ===
Assuming that it is cost-effective to move HPC off-premise, ARCS recommends the Penguin Computing On Demand (POD) service for the following reasons:
<ul>
<li>POD is entirely hardware-based</li>
<li>Penguin Computing provides much or all of the software needed by NJIT, including compilers, utilities, and the scheduler; this is not the case for AWS or Azure</li>
<li>POD includes the Lustre parallel file system on the high-speed Omni-Path node interconnect</li>
<li>POD is flexible with respect to accommodating user needs, e.g., backups</li>
</ul>

A trial project or projects, involving one or more researchers, may be useful in gathering data on the deployment and use of off-premise HPC.

Latest revision as of 16:36, 5 October 2020

A comparison of job scripts in SGE and SLURM for the same MPI application.

<table>
<tr>
<th>SGE for an MPI application</th>
<th>SLURM for an MPI application</th>
</tr>
<tr>
<td>
<div>
<pre>
#!/bin/bash
#
# Job name
#$ -N test
# Merge stderr into stdout
#$ -j y
# Output file
#$ -o test.output
# Run from the current working directory
#$ -cwd
# Email address for job notifications
#$ -M ucid@njit.edu
# Mail at beginning, end, and abort
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id
# Enable resource reservation
#$ -R y
# Parallel environment with 16 slots
#$ -pe dmp4 16
#$ -l mem=2G
# memory is counted per process on node
module load module1 module2 ...
mpirun your_application
</pre>
</div>
</td>
<td>
<div>
<pre>
#!/bin/bash -l
# NOTE the -l (login) flag!
#
# Job name
#SBATCH -J test
# Output file
#SBATCH -o test.output
# Send stderr to the same file
#SBATCH -e test.output
# (joined output is the default in SLURM)
#SBATCH --mail-user ucid@njit.edu
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id
#
# Partition "node", 16 tasks
#SBATCH -p node -n 16
#
module load module1 module2 ...
mpirun your_application
</pre>
</div>
</td>
</tr>
</table>
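The table above maps job-script directives; the everyday commands differ between the two schedulers as well. A rough command-for-command sketch, assuming the standard SGE and SLURM client tools are on your PATH (the script names and job ID below are hypothetical):

```shell
# Submit the job script
qsub test_sge.sh          # SGE
sbatch test_slurm.sh      # SLURM

# List your queued and running jobs
qstat -u $USER            # SGE
squeue -u $USER           # SLURM

# Cancel a job by ID
qdel 12345                # SGE
scancel 12345             # SLURM
```

These commands only work on a node where the corresponding scheduler client is installed; check with your cluster documentation for any site-specific wrappers.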