BenchAspNj

Storage and shared HPC policies for researchers at benchmark, aspirational, and representative New Jersey universities

The desired information is difficult to obtain; what has been obtained is given in the following table.

Note: The information listed under "Shared HPC resources / Policy" is meant only to indicate whether the university has some level of shared HPC resources. If it does, the available information about those resources is given.

University | Category | Storage allocation policy / Contact status | Shared HPC resources / Policy
Case Western Reserve University | benchmark | Researchers are allocated 1 TB of storage free of charge. / No contact | Dell PowerEdge servers, Intel Xeon processors, GPU nodes for high-end graphics processing. / Eligible faculty should complete the online application form.
Colorado School of Mines | benchmark | No base storage allocation. / Source: Dr. Timothy Kaiser via HPC Dept., 303-273-3000 | 2,144 processing cores in 268 nodes: 256 nodes with 512 Clovertown E5355 processors (2.67 GHz, quad-core dual-socket); 12 nodes with 48 Xeon 7140M processors (3.4 GHz, quad-socket dual-core), 32 GB each. / Policy not available
Illinois Institute of Technology | benchmark | No information available on website | No information available on website
Louisiana Technological University | benchmark | Refused to provide information. / Source: Fong Chang via HPC, 225-578-0900 | 2.6 GHz 8-core Sandy Bridge Xeon 64-bit processors, 64 GB 1666 MHz RAM, 500 GB HD / Policy not available
Michigan Technological University | benchmark | No information available. / Called Josh Olson, no response; left voice message at 906-487-1217 | 100 traditional CPU compute nodes, each with 16 Intel Sandy Bridge processors at 2.60 GHz and 64 GB RAM; 5 GPU-based compute nodes, each with four NVIDIA Tesla M2090 GPUs / Policy not available
Missouri University of Science and Technology | benchmark | Research groups and users of the General cluster have the option of paying a one-time, per-terabyte charge for storage on the cluster file system. This is particularly important for those who need more than the 50 GB of directory space available to each cluster user. / No contact | 161 compute nodes with 1,438 CPUs; 1 controlling node with 36 TB; 48 TB of high-speed scratch provided by a Terascala Lustre appliance / Policy not available
New Mexico Institute of Mining and Technology | benchmark | No information available on website / Left voicemail for Joe Franlin at his extension, which is not allowed to be distributed; forwarded from 575-835-5700. | No information available on website
Northeastern University | benchmark | No information available on website / Rob is not sure if he can provide this information; contact info given (617-373-4357). Will reach out. 2.28.17: contacted Rob at the Help Desk; he is not sure whether a response will be given. | Resources not available / Shared System: the shared option allows all researchers access to the Discovery cluster and attached storage through fair-share job scheduling. Buy-In: the buy-in option provides compute cycles through a preferential queue commensurate with the researcher's incremental investment. Dedicated: the dedicated option allows equipment purchased by the researcher to be hosted and maintained centrally by Northeastern for use only by the researcher and designees.
University of Maryland-Baltimore County | benchmark | No information available on website / Left voicemail for Dean Drake at 410-455-5642; no response yet. | 324-node distributed-memory cluster maya: 72 nodes with two eight-core 2.6 GHz Intel E5-2650v2 Ivy Bridge CPUs and 64 GB memory, including 19 hybrid nodes with two NVIDIA K20 GPUs; 34 Dell PowerEdge R620 CPU-only compute nodes, each with two Intel Ivy Bridge 2650v2 processors (2.6 GHz, 20 MB cache) with eight cores apiece, for a total of 16 cores per node; 19 Dell PowerEdge R720 CPU/GPU nodes, each with the above CPUs plus two NVIDIA Tesla K20 GPUs. / Policy not available
University of Massachusetts-Lowell | benchmark | No information available on website / Left voicemail for Linda Gladu-Ennis at 978-934-4718; no response yet. | No information available on website
University of Texas at Dallas | benchmark | Application needed to access storage and all other resources (https://www.tacc.utexas.edu/systems/user-services). / Called Frank Feagan at 972-883-6756; no response, left a voicemail. | AMD, ARM, Intel, and SPARC processors; Intel Xeon Phi co-processors; AMD and NVIDIA GPUs; Intel and Mellanox high-performance networks / Policy not available
California Institute of Technology | aspirational | Dependent on cluster; currently a central cluster. / Source: Naveed (naveed@caltech.njit.edu), Director of HPC | HPC clusters cloud service. New T30 queue: 2,800 cores, Intel Haswell Xeon E5-2660 v3 at 2.60 GHz. M40 queue: 1,560 cores, Intel Westmere at 2.9 GHz. H30 queue: 3,200 cores, Intel Sandy Bridge at 2.6 GHz. GPU queue: 7 NVIDIA K40 GPU nodes. / Pay-as-you-go cloud service
Carnegie Mellon University | aspirational | Everything is custom and depends on the type of research. / Source: Rob Jones, 412-268-3425 | 30 nodes with 32 cores; the nodes use AMD processors. / Policy not available
Georgia Institute of Technology | aspirational | No information available on website | PACE Community Cluster (PACE-CC) is a medium-sized computation cluster (approximately 200 CPU cores). / Policy not available
Massachusetts Institute of Technology | aspirational | Private information; need to be an MIT user. / 617-324-2077 | Head node: 2x Intel E5-2660 @ 2.20 GHz (8 cores each). Worker nodes node001-node016: 2x Intel E5-2660 v2 @ 2.20 GHz (10 cores each). Large-memory node node017: 4x Intel E5-4650 @ 2.70 GHz (8 cores each). Worker nodes node018-node030: 2x Intel E5-2660 v2 @ 2.20 GHz (10 cores each). / Policy not available
Rensselaer Polytechnic Institute | aspirational | No information available on website / Call sent to voicemail; message left for Daniele Labrie at 518-276-4373 | No information available on website
Texas Tech University | aspirational | Private information. / Source: Jerry Perez, 806-834-6929 | 640 nodes (7,680 cores); each node has two Westmere 2.8 GHz 6-core processors with 24 GB memory. / Policy not available
Virginia Polytechnic Institute and State University | aspirational | No information available on website; many departments conduct their own computational research. / 540-231-6000 | Xeon E5540 processors, 128 cores total, running at 2.53 GHz with 16 GB main memory. Twenty-three 8-core serial nodes supporting high-energy physics research; these nodes are logically managed by the on-campus cluster Hrothgar. / Policy not available
Rowan University | NJ | No information available on website | Head node CPU: 2 x Intel Xeon E5-2620v3, 2.4 GHz (6-core, HT, 15 MB cache, 85 W), 22 nm processors. Head node RAM: 64 GB (8 x 8 GB DDR4-2133 ECC registered 1R 1.2 V DIMMs) operating at 2133 MT/s max. Storage node CPU: 2 x Intel Xeon E5-2620v3, 2.4 GHz (6-core, HT, 15 MB cache, 85 W), 22 nm processors. Storage node RAM: 64 GB (8 x 8 GB DDR4-2133 ECC registered 1R 1.2 V DIMMs) operating at 2133 MT/s max. Compute nodes with InfiniBand: 28 nodes (672 total cores, 1,792 GB RAM). GPU nodes with InfiniBand: 2 nodes (40 total cores, 128 GB RAM). / Policy not available
Rutgers University | NJ | No information available on website | Intel Pentium D CPU @ 3.20 GHz (dual-core, 64-bit); 2 GB DDR2 240-pin unbuffered RAM @ 667 MHz (PC5300) / Policy not stated
Stevens Institute of Technology | NJ | No information available on website | No information available on website