
This site is deprecated and will be decommissioned shortly. For current information regarding HPC visit our new site: hpc.njit.edu

Stheno

From NJIT-ARCS HPC Wiki

The Stheno.njit.edu high performance computing (HPC) cluster is managed nearly identically to the Kong cluster, so most of Kong's documentation applies to Stheno also.

The Stheno cluster is physically distinct from the Kong cluster. Each cluster has its own headnode, which hosts user disk space accessible from that cluster's compute nodes. Both clusters have access to AFS disk space.

Stheno uses an InfiniBand interconnect between the headnode and compute nodes, whereas Kong uses slower Gigabit Ethernet for the same purpose. Older Stheno nodes (numbered below 15) have 32 Gbit/s QDR InfiniBand; newer nodes have 54.54 Gbit/s FDR. The full hardware specifications of the cluster appear in the HPC Machine Specifications table.
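The QDR and FDR figures above are effective data rates after link-encoding overhead, not raw signaling rates. The arithmetic can be checked with a short sketch; the per-lane signaling rates and encoding schemes below are standard InfiniBand values for 4x links, not details taken from this page:

```python
# Effective data rates for 4x InfiniBand links (standard values,
# not specific to Stheno's hardware):
#   QDR: 10 Gbit/s per lane, 8b/10b encoding (8 data bits per 10 line bits)
#   FDR: 14.0625 Gbit/s per lane, 64b/66b encoding
LANES = 4

qdr_effective = LANES * 10.0 * 8 / 10      # 32.0 Gbit/s
fdr_effective = LANES * 14.0625 * 64 / 66  # ~54.54 Gbit/s

print(f"4x QDR: {qdr_effective:.2f} Gbit/s")
print(f"4x FDR: {fdr_effective:.2f} Gbit/s")
```

This is why FDR's advantage over QDR (about 1.7x) is larger than the raw signaling rates alone (56.25 vs 40 Gbit/s) would suggest: FDR's 64b/66b encoding wastes far less bandwidth than QDR's 8b/10b.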

Whereas Kong is open to all NJIT researchers, Stheno, which is funded by the Department of Mathematical Sciences (DMS), is open only to researchers associated with DMS. DMS faculty can obtain access by emailing Academic & Research Computing Systems (ARCS) from their official @NJIT.EDU address. DMS students, postdocs, and other researchers must have their DMS faculty advisor request access on their behalf.

As of October 2014, Stheno runs SL (Scientific Linux) 5.5 and Kong runs SL 6.2. The difference is apparent mainly to users requiring more recent versions of compilers and specific compiled libraries.