Wulver

Wulver Overview

This cluster was approved for purchase, with delivery expected in December 2022. Dell is providing the hardware, including assembly, and the purchase includes management and support from X-ISS. Because the cluster's power and cooling requirements exceed anything available at NJIT, it will be co-located with a vendor specializing in HPC computational facilities, connected to the campus by dedicated fiber optic networking. In mid-2023, some nodes from Lochness will be moved to Wulver.

Wulver Access

Wulver will be accessible to all faculty and staff by request. Access for students and guests can be requested by faculty. Some nodes will be available for reserved or high-priority access; policy and cost are TBD.

Wulver Specifications

  • Total nodes: 127
  • Total GPUs: 100x NVIDIA A100
  • 100x CPU nodes (total 12,800 cores; 51,200 GB RAM)
    • 2x AMD EPYC 7763 CPUs (2.45 GHz, 64 cores each)
    • 512 GB RAM
  • 2x high-memory CPU nodes (total 256 cores; 4,096 GB RAM)
    • same as above, but with 2,048 GB RAM
  • 25x GPU nodes (total 3,200 cores; 12,800 GB RAM; 100 GPUs)
    • 2x AMD EPYC 7713 CPUs (2.0 GHz, 64 cores each)
    • 512 GB RAM
    • 4x NVIDIA A100 80 GB GPUs
  • Parallel filesystem (multi-process read/write) accessible by all nodes:
    • ArcaStream PixStor storage
    • 1 PB of storage
    • User-transparent migration of data across high-speed, medium-speed, and archive tiers
    • Accessed over InfiniBand HDR100 and 10 GigE
  • All nodes have network accessible storage (same filesystems as on Lochness):
    • /home/: 26 TB
    • /research/: 97 TB
    • AFS /afs/cad/ storage from the NJIT cloud: TBD
  • All nodes have:
    • 10 Gigabit Ethernet network interface
    • HDR100 InfiniBand network interface
  • Management and deployment overseen by NJIT:
    • X-ISS cluster management
    • Slurm scheduler and Bright Cluster Manager via Dell (see the example job script below)
    • Five-year hardware warranty from Dell
  • InfiniBand support via Dell or directly from Mellanox
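
Since jobs on Wulver will be submitted through the Slurm scheduler, a batch script targeting one of the GPU nodes might look like the following minimal sketch. The partition name (gpu) and the application name are placeholders, not confirmed Wulver settings; actual partition names, limits, and module environments will be published once the cluster is in production.

  #!/bin/bash
  #SBATCH --job-name=a100-test     # job name shown in squeue output
  #SBATCH --partition=gpu          # hypothetical partition name; check site documentation
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=16       # each GPU node has 128 cores (2x 64-core EPYC 7713)
  #SBATCH --gres=gpu:2             # up to 4x NVIDIA A100 80GB available per node
  #SBATCH --mem=64G                # out of 512 GB RAM per GPU node
  #SBATCH --time=04:00:00          # walltime limit (HH:MM:SS)

  # Report which GPUs Slurm allocated to this job
  nvidia-smi --query-gpu=name,memory.total --format=csv

  # Launch the (placeholder) application on the allocated resources
  srun ./my_gpu_application

Submit the script with sbatch and check queue status with squeue -u $USER; both are standard Slurm commands and do not depend on Wulver-specific configuration.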