
All researchers have no-charge access to the shared Kong HPC cluster, to the big data and analytics Hadoop and Spark cluster, and to multiple gigabytes of AFS storage space. Researchers who need resources beyond these can purchase their own dedicated resources.

Computational resources

Kong nodes

Researchers can purchase dedicated nodes, both CPU and GPU, on the HPC cluster. Such nodes would be new hardware that would be incorporated into the Kong cluster. Individual owners of Kong nodes can choose to share, at periods of their choosing, the use of those nodes with other researchers. Thus, researchers can lend resources to others during periods of low need, and use others' resources in addition to their own during periods of high need.

Contact for information on supported node configurations and costs.

Babar resources

The Babar Hadoop cluster runs in the VMware virtualized infrastructure environment. Researchers can purchase additional Babar resources for their dedicated use. Contact for information on such purchases.

Computational resources in the virtual infrastructure

There are cases in which a researcher may need dedicated computational resources - cores and/or RAM - that exceed those available on any single Kong node. In such cases, the researcher may purchase a virtual machine (VM) in the VMware virtualized infrastructure. Contact for information on such purchases.

Physical machines

There are cases in which a researcher needs computational resources, possibly with a large amount of attached storage, that cannot be accommodated by Kong nodes or a virtual machine and must instead be a dedicated physical computer. Two hosting options exist:

  • Server to be located in an area controlled by the researcher
  • The researcher is responsible for:

    1. Power
    2. Arranging for networking (network capacity and robustness outside the datacenters is necessarily lower than within them)
    3. HVAC
    4. Rack, if needed
    5. Physical installation
    6. Backups
    7. Physical security

    The researcher may a) self-manage the server, or b) request that the server be managed by Academic and Research Computing Systems (ARCS). In the latter case, the operating system must be Red Hat Linux or a variant thereof - Scientific Linux, CentOS, Fedora.

    If connected to the NJIT network, the computer must be secure.

    Servers located in an area controlled by the researcher are not accessible from outside the NJIT network.

  • Server, and possibly storage, to be located in the GITC 4320 IST datacenter
  • Before purchasing the equipment, the researcher must consult with ARCS on the feasibility of locating it in GITC 4320 and receive acknowledgement that the proposed equipment can be housed there.

      IST is responsible for:
      1. Power, including UPS
      2. Arranging for datacenter-grade networking
      3. HVAC
      4. Rack, if needed
      5. Physical installation
      6. Backups
      7. Physical security
      8. Accessibility from outside the NJIT network (where applicable)

      The researcher is responsible for:

      1. Rack, if needed

      The researcher will have physical access to GITC 4320. However, this access does not extend to students working under the researcher.


Storage located at NJIT (on-premise)

Researchers are allocated a base of multiple gigabytes of home directory and temporary storage (both NFS-mounted) on the HPC clusters, as well as in AFS; the home directory and AFS space are backed up daily.

Storage in addition to the base allocation can be purchased.
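As a quick way to gauge how much of the allocation is in use before requesting more, a researcher can check the filesystem backing the home directory. The following is a minimal sketch using only the Python standard library; the exact mount points and quota limits are site-specific and not assumed here.

```python
import os
import shutil

# Report usage of the filesystem backing the home directory.
# On the HPC clusters this would be the NFS-mounted home area;
# actual mount points and per-user quotas are site-specific.
home = os.path.expanduser("~")
usage = shutil.disk_usage(home)

def gib(n_bytes):
    """Convert a byte count to gibibytes."""
    return n_bytes / 2**30

print(f"Filesystem for {home}:")
print(f"  total: {gib(usage.total):.1f} GiB")
print(f"  used:  {gib(usage.used):.1f} GiB")
print(f"  free:  {gib(usage.free):.1f} GiB")
```

Note that this reports whole-filesystem figures, not a per-user quota; where AFS client tools are installed, the OpenAFS `fs listquota` command reports the quota on an AFS directory.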

Storage located in the cloud (off-premise)

Off-premise storage can be rented from various vendors - e.g., Amazon Web Services, Google Drive, Azure. In some cases, use of such resources in addition to, or instead of, on-premise resources is worth the cost. Researchers are encouraged to consult with ARCS on this subject.

Off-premise computational resources