ResearchComp

From NJIT-ARCS HPC Wiki

Base allocation policies in shared research computing infrastructure: Educause Research Computing Constituent Group

ResearchCompSum Tally for Storage and HPC

For each university, the tally below records its storage policy and cost, HPC policy and cost, big data policy and cost, virtual infrastructure policy and cost, general policy, and notes.

Colorado School of Mines
Storage policy: Currently no base allocation for research/researcher storage. As of 01/01/17, Mines has begun offering this as a pay-as-you-go service at $120/TB annually. I'm aware of other institutions that offer storage at higher and lower $/TB than that, depending on the type (particularly the performance) of storage.
Storage cost: $120/TB/yr
HPC policy: We have two models. The first is a centrally purchased resource providing ~220 Tflops. Usage is requested/obtained by submitting a project proposal and request for cycles; virtually all submissions are approved, and there is no charge to researchers. The second is a "condo" model in which central IT funds the common (shared) infrastructure and researchers purchase nodes. Researchers who own nodes get first (and pre-emptive) priority on the nodes they own. Unused cycles on those nodes are part of a pool available to any researcher who owns at least one node; if you don't own at least one node, you cannot get access to any of those resources.
HPC cost: No cost on the large, centrally funded platform. The cost of the condo model is realized by "buying in" to the cluster with at least one node; most researchers buy multiple nodes at a time.
Big data policy: No policy. Not a service that we currently offer.
Big data cost: N/A
Virtual infrastructure policy: No policy. Not a service that we currently offer.
Virtual infrastructure cost: N/A
General policy: None exists yet. This is one of several tasks as we work to expand our research services and look for a new approach (funding, allocation, charge-back, etc.) to offering those services.
Notes: None

Michigan Technological University
Storage policy: Every researcher gets 600 GB of soft-quota storage on our shared HPC cluster and can request extensions on a per-project basis. The storage space comes from central IT's storage system (research computing at Michigan Tech is an integral part of central IT) and is primarily for storage/archival purposes (i.e., not for running computations). We provide the first TB to a research group for free, and every subsequent TB can be acquired at $250 per TB per year. The cost can be shared amongst multiple research groups; all we need is their index, or a set of them, to be charged.
Storage cost: $250/TB/yr
HPC policy: The PI needs to be a faculty member at Michigan Tech or a staff member with research responsibilities in one of our affiliated institutes. A brief proposal (account request) is necessary to gain access to our shared HPC. Priority for researchers' simulations is derived using an in-house, merit-based algorithm that takes into account their usage and production as well as their willingness to follow policies and procedures.
HPC cost: Researchers can purchase "dedicated" compute nodes and add them to the shared HPC cluster. We do not entertain an exclusive-ownership concept and have an X:Y sharing philosophy: depending on the nature of usage, the research group that acquired the compute nodes has X% of the total cores/processors readily available at any given time, and the remaining Y% is accessible to all users, albeit for short-term simulations (~6 hours). The cost can be shared amongst different PIs.
Big data policy: N/A
Big data cost: N/A
Virtual infrastructure policy: N/A
Virtual infrastructure cost: N/A
General policy: None
Notes: None

NJIT
Storage policy: Base allocation: 500 GB AFS and 500 GB NFS. See DiskAndBackupCost.
Storage cost: $250 or $870/TB/yr depending on disk access speed. Does not include backup costs.
HPC policy: General use is shared by multiple users on a fair-share basis. Users who regularly dominate the use of the resource will be asked to purchase dedicated resources or to use off-premise resources. Researchers may purchase dedicated resources.
HPC cost: $7.8K/CPU node/5-yr, $12K/GPU node/5-yr
Big data policy: Same as for HPC.
Big data cost: Contact arcs@njit.edu
Virtual infrastructure policy: Base allocation: 1 VM with 2 cores, 4 GB RAM, 40 GB disk. See VMCost.
Virtual infrastructure cost: Typical: 8 CPU, 16 GB RAM, $360/yr
General policy: See ResearcherBaseResources.
Notes: None

Northwestern University
Storage policy: We provide 80 GB home directories and 500 GB project directories as part of a base allocation for research projects. Additional storage is a one-time purchase of $355/TB that covers 5 years of access.
Storage cost: $71/TB/yr ($355/TB amortized over 5 years; see the annualization sketch below)
HPC policy: We have an allocation process. Base-level allocations provide 35k compute hours and 500 GB of project storage; these are renewed annually. Researchers may request additional resources, up to 500k compute hours and 2 TB of storage; these expire after 12 months.
HPC cost: $7.8K/node/5-yr
Big data policy: N/A
Big data cost: N/A
Virtual infrastructure policy: Configured depending upon requirements.
Virtual infrastructure cost: No costs for use of virtual infrastructure.
General policy: None
Notes: None

Ohio State University, College of Arts and Sciences
Storage policy: 1 TB network shares at no charge to research groups upon request. We are currently re-evaluating that 1 TB limit (likely to go up). Beyond the free 1 TB, our research shares are $100/TB/year (unreplicated, but with snapshots available for at least seven days), with an optional additional $70/TB/year for replication. We encourage these costs to be written into grants when possible. This is another service we're re-evaluating; many researchers object to a recurring charge (and of course can buy their own consumer-grade NAS for less!).
Storage cost: $100/TB/yr
HPC policy: The college is currently developing a central shared HPC resource with a few college-provided compute nodes and many more researcher-provided compute nodes (although we may not yet have hit the critical point where it makes sense for a researcher to share his or her resources, it still has the benefit of consolidating management). The state of Ohio (not the university or college) supports the Ohio Supercomputer Center (OSC), which is available to all researchers at Ohio State (as well as at all other colleges and universities in Ohio). We do not intend the college's HPC resource as a competitor to OSC; in fact, our environment is purposely similar to OSC's to encourage local development leading to further use of OSC (and perhaps less outlay for local resources).
HPC cost: No charge for the HPC service (researchers can of course provide their own compute and storage); storage similar to the above is available on the cluster.
Big data policy: Nothing specific for big data at the college level. The university has big data as one of its "Discovery Themes" ([1]).
Big data cost: N/A
Virtual infrastructure policy: Researchers can get a small VM (1-2 CPU, 4-8 GB RAM, 100 GB disk) upon request. Again, we encourage these costs to be written into grants when possible. We are also re-evaluating details of this service in light of university security requirements.
Virtual infrastructure cost: $75-180/yr, plus additional storage
General policy: Many researchers buy their own resources (if that's what "dedicated" means), and we're working on fitting those into university security requirements and on developing and refining our services to minimize those purchases.
Notes: None

Princeton University
Storage policy: 500 GB on our parallel file system when a researcher gets an account on one of our computational clusters. Quotas in /home and parallel scratch vary by cluster.
Storage cost: For a song, researchers can request more quota on our parallel filesystem (which was paid for with an NSF MRI grant; the replacement will be centrally funded). We start pushing back in the multiple-tens-of-TB range and push back hard at the 100 TB range. We have had some large projects pay for additions to the capacity of the filesystem.
HPC policy: We do not charge fees for usage of any of our resources. Anyone at the university can get access to our resources (with a faculty sponsor) at a base priority. Any faculty who have contributed funding to our resources are guaranteed their share, averaged over time, provided their researchers are actively using the system (have jobs in the queue). Nodes are not left sitting idle waiting for them to run, nor are jobs pre-empted to make room, but the priority is set such that they will get their fair share averaged over time.
HPC cost: No fees.
Big data policy: Same as HPC resource.
Big data cost: Same as HPC resource.
Virtual infrastructure policy: For low-utilization VMs, anyone at the university can get them for free. We have had researchers with restricted data and high utilization needs purchase their own hardware, which we manage for them to run their VMs.
Virtual infrastructure cost: No ongoing fees.
General policy: Funding contributions guarantee utilization availability commensurate with the contribution amount. This is averaged over time, and you can't get back what you didn't use. In other words, a researcher can't come to us in December and ask for the whole year's allocation that they haven't used at all to be made available in December. If they bought their own resource and didn't use it for 11 months, they would still have only one month of the resource left in the year.
Notes: None

University of Cincinnati
Storage policy: Currently, no base allocation for research/researcher storage. Researchers are charged the same amount as anyone who wants to purchase dedicated storage: high-performance Fibre Channel storage at $0.02/GB/month; file storage on the Isilon (ATA storage) at $0.05/GB/month.
Storage cost: $240/TB/yr Fibre Channel; $600/TB/yr ATA (see the annualization sketch below)
HPC policy: UC does not currently have a central shared HPC. However, we are developing that service and are very interested in others' responses.
HPC cost: N/A
Big data policy: UC does not currently have central shared big data resources. However, we are developing that service and are very interested in others' responses.
Big data cost: N/A
Virtual infrastructure policy: Researchers are not provided a base allocation. Hosting fee for one virtual server with UCIT administrative support (one virtual server can host 25 guests): $52.08/month. Hosting fee for one virtual server behind a BIG-IP (load balanced): $62.54/month.
Virtual infrastructure cost: $625/yr to $750/yr
General policy: None
Notes: [UnivOfCinc]

University of Denver
Storage policy: DU currently doesn't have a "base storage allocation" policy, i.e., a default amount of storage space allocated to researchers who join our campus community. However, when researchers request storage, we accommodate them as best we can. We have had internal discussion of this in the past within our tech unit, but it has not gone beyond discussions.
HPC policy: Any researcher (including students) who can demonstrate that they need to use the HPC for their research is given access. Researchers who contribute funds to purchase nodes for the HPC are given priority access to those nodes and are invited to sit on the HPC Steering Committee, which sets protocols and community standards for cluster use.
HPC cost: Currently, we do not have this implemented, but we are curious about how other institutions have implemented it. Central funds are used for backplane/interconnectivity/storage on the cluster.
Big data policy: We currently utilize our HPC for this function, as our business school has recently started exploring big data research. We're curious to see other answers to this question.
Big data cost: Currently, we do not have this implemented, but we are curious about how other institutions have implemented it. Central funds are used for backplane/interconnectivity/storage on the cluster.
Virtual infrastructure policy: We provide virtual machines to researchers who need them to conduct their research, and we tailor the VM to match the needs of their research.
Virtual infrastructure cost: We don't charge researchers for their use of virtual machines in our VMware-managed environment.
General policy: We currently do not have a policy like this, but we are in discussions regarding creating one.
Notes: [UnivOfDenver]

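The normalized figures in the tally above (Northwestern's $71/TB/yr; Cincinnati's $240 and $600/TB/yr and roughly $625-$750/yr for virtual servers) follow directly from the rates each institution quoted. A minimal sketch of that annualization arithmetic, written here in Python and assuming decimal terabytes (1 TB = 1000 GB) and 12 billing months per year:

# Illustrative sketch of the annualization arithmetic behind the tally figures;
# the helper names below are ours, not part of any institution's tooling.
GB_PER_TB = 1000   # assumption: decimal terabytes
MONTHS = 12

def per_gb_month_to_per_tb_year(rate_usd):
    """Convert a $/GB/month rate into $/TB/yr."""
    return rate_usd * GB_PER_TB * MONTHS

def amortize_per_year(one_time_price_usd, years_covered):
    """Spread a one-time purchase over the years of access it buys."""
    return one_time_price_usd / years_covered

print(amortize_per_year(355, 5))            # 71.0  -> Northwestern, $71/TB/yr
print(per_gb_month_to_per_tb_year(0.02))    # 240.0 -> UC Fibre Channel, $240/TB/yr
print(per_gb_month_to_per_tb_year(0.05))    # 600.0 -> UC ATA, $600/TB/yr
print(round(52.08 * MONTHS))                # 625   -> UC virtual server, ~$625/yr
print(round(62.54 * MONTHS))                # 750   -> UC load-balanced server, ~$750/yr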

[UnivOfAlberta] The model here in Canada is quite different: HPC is a national resource with an annual Resource Allocation Competition (moving to a triennial cycle in 2018). An especially good outcome for a researcher would be 13,000 core-years for a given year. Storage is also available nationally: 500 GB backed up and 1 TB not backed up for each researcher with a Compute Canada account. Some institutions have local storage options available as well. A national cloud environment exists, on which researchers can obtain persistent or transient VMs, depending on their need. UAlberta has VMs available to researchers, as well as to faculties and departments, for whatever purpose is deemed necessary.

[UnivOfCinc] We do not currently offer any "shared infrastructure" for researchers at no cost, except for the use of our NSF-funded Science DMZ, but we are currently developing a service model.

UC is also curious to know what level of human support is provided to researchers and if there is a fee. If there is not a fee, how are those services paid for? F&A, core funding, etc.?

[UnivOfDenver] DU is private and is not an R1 institution, so we are curious to know how our answers compare to others'. We have a great deal of positive energy in support of enhancing services to researchers right now, and these questions are perfect conversation starters. Summary: we have no "baseline" services for researchers but provide resources based upon research needs and our capabilities.