

HipaaDetail

From NJIT-ARCS HPC Wiki
  1. Encrypt data at rest or in transit
  2. This is addressed in AuriStorFS by the yfs-rxgk security class and by volume- and fileserver-level security policies.

    When "encryption at rest" it is important to identify the threat that must be protected against. Is the requirement "end to end encryption" such that the fileserver never sees unencrypted data or is the requirement simply that the data written to the vice partition be stored encrypted? If the requirement is that the data be stored encrypted, that can be achieved using:

    • Physical disk encryption
    • Vice partition file system encryption (for example, zfs; see the sketch at the end of this item)
    • AuriStorFS writing all data to disk using an encryption key (not yet implemented)

    The same question must be asked of the backups: must the backups be encrypted such that the backup administrators cannot decrypt the data?

    AuriStorFS could easily be modified to store a per-vice-partition or per-volume encryption key in the VLDB.

    What is appropriate depends upon the threat that is being protected against.
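
    As a concrete illustration of the vice partition encryption option above, here is a minimal sketch using OpenZFS native encryption. The pool name, dataset name, mount point, and key file path are placeholders, not NJIT-specific values.

      # Minimal sketch: create an encrypted ZFS dataset to hold a vice partition.
      # This protects data at rest on disk; the fileserver still sees cleartext
      # once the dataset is mounted, so it is not end-to-end encryption.
      import subprocess

      def create_encrypted_vice_partition(pool="tank", dataset="vicepa",
                                          mountpoint="/vicepa",
                                          keyfile="file:///etc/vice-keys/vicepa.key"):
          subprocess.run(
              ["zfs", "create",
               "-o", "encryption=aes-256-gcm",    # encrypt data and metadata at rest
               "-o", "keyformat=raw",             # 32-byte raw key material
               "-o", "keylocation=" + keyfile,    # load the key from a protected file
               "-o", "mountpoint=" + mountpoint,  # where the vice partition is mounted
               pool + "/" + dataset],
              check=True,
          )

      if __name__ == "__main__":
          create_encrypted_vice_partition()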

  3. Auditing of file level access
  4. AFS should really be thought of as an Object Store upon which a file system namespace is implemented. All requests processed by a fileserver are object requests identified by the object's ID (Volume, Vnode, Unique); see the sketch at the end of this item.

    OpenAFS and AuriStorFS both provide auditing of object requests. The AuriStorFS auditing is more comprehensive than that of OpenAFS.

    The AuriStorFS auditing includes:

    • Timestamp
    • Source network endpoint (IPv4 or IPv6 address and port)
    • Authenticated ordered identity list
    • The object id (FileId)
    • The request opcode
    • A subset of request parameters
    • The result code

    There are several reasons that paths are not logged by the file server:

    • The path requested by the client application is only known to the client's operating system. Symlinks, mount points, overlay file systems, etc. can alter the paths. Auditing of file paths must therefore be performed by the client operating system.
    • Volumes are rooted directory trees without a parent. There is no guarantee of a one-to-one association between a mount point and a volume root directory. Therefore, it is not possible to determine from a given object which path was used to access it.
    • Hard links mean that there is not even a single directory-entry-to-object-Id mapping. From an object Id it is not possible to determine which directory entry, or even which directory, was used to identify the object Id.
    • Directories are objects that are read and parsed by the client systems. The lookup of a name in a directory is performed by the client system, not the fileserver.

    CIFS, NFS, Lustre, GlusterFS, and similar network file systems are not Object Stores. They export a local file system via a network share or export name. The audit log entries for these file systems do not log the path names that are accessed by the application on the client. The paths that are logged are the fileserver local paths relative to the share or export root.
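
    To make the object model above concrete, the following minimal sketch models an AFS FileId and an audit record carrying the fields listed above. The field names and types are illustrative, not the actual AuriStorFS audit-log schema.

      # Minimal sketch of the object model: requests address a FileId, never a path.
      from dataclasses import dataclass
      from datetime import datetime

      @dataclass(frozen=True)
      class FileId:
          volume: int   # which volume holds the object
          vnode: int    # object index within the volume
          unique: int   # uniquifier distinguishing reused vnode slots

      @dataclass
      class AuditRecord:
          timestamp: datetime
          source_endpoint: str         # IPv4/IPv6 address and port of the caller
          identities: tuple            # authenticated identity list, in order
          fid: FileId                  # the object the request addressed
          opcode: str                  # the request opcode, e.g. "FetchData"
          parameters: dict             # subset of request parameters
          result: int                  # result code returned to the client

      # Note what is absent: there is no path field. The fileserver only ever
      # sees FileIds; path-to-FileId resolution happens on the client.
      example = AuditRecord(
          timestamp=datetime.now(),
          source_endpoint="[2001:db8::10]:7001",
          identities=("user@EXAMPLE.EDU",),
          fid=FileId(volume=536870918, vnode=42, unique=3),
          opcode="FetchData",
          parameters={"offset": 0, "length": 4096},
          result=0,
      )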

  5. AFS needs to run on a dedicated server for HIPAA
  6. None of OpenAFS, NFS, or CIFS could provide a secure space for HIPAA data without also requiring that the data be isolated to a single machine, or at least a machine that applies the HIPAA data management requirements to all data on the machine. In the design of security for AuriStorFS, a conscious decision was made to require that security policies (security-class authentication, integrity protection, and wire privacy) apply to the entire fileserver and all of the data stored on it. As a result, a volume that permits anonymous, unencrypted access will never be served to an unauthenticated client if it is stored on a fileserver that requires authentication and encryption.
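
    A minimal sketch of that behaviour, assuming the effective policy is simply the stricter of the fileserver-wide policy and the per-volume policy; the level names are illustrative, not AuriStorFS configuration keywords.

      # Minimal sketch: a permissive volume cannot weaken a strict fileserver.
      from enum import IntEnum

      class SecurityLevel(IntEnum):
          CLEAR = 0    # anonymous, unencrypted
          AUTH = 1     # authenticated, integrity-protected
          CRYPT = 2    # authenticated and wire-encrypted

      def effective_policy(fileserver, volume):
          # The fileserver-wide policy is a floor for every volume it hosts.
          return max(fileserver, volume)

      # An anonymous, unencrypted volume hosted on a server that requires
      # authentication and encryption is still only served encrypted.
      assert effective_policy(SecurityLevel.CRYPT, SecurityLevel.CLEAR) == SecurityLevel.CRYPT
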
  7. Needs to be in an AFS-dedicated DMZ protected by a firewall
  8. This is also the case for NFS and CIFS, but not for AuriStorFS.

  9. Weak ACLs mean that clients need to be on a dedicated network so access can be given via firewall rules
  10. This also holds for NFS and CIFS, depending on whether or not the organization's policy permits end users to modify ACLs.

    AuriStorFS reduces this risk by permitting Maximum ACLs to be assigned to each and every AFS volume.
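
    A minimal sketch of the idea, assuming the per-volume Maximum ACL acts as an upper bound that is intersected with whatever rights a directory ACL grants; the exact AuriStorFS evaluation rules may differ.

      # Minimal sketch: a volume Maximum ACL caps what any directory ACL can grant.
      # "rlidwka" are the standard AFS rights abbreviations.
      def effective_rights(dir_rights, max_rights):
          # A directory ACL can never grant more than the volume's maximum allows.
          return set(dir_rights) & set(max_rights)

      dir_acl = "rlidwk"   # rights a user was granted on some directory
      max_acl = "rl"       # maximum rights permitted anywhere in the volume
      print(sorted(effective_rights(dir_acl, max_acl)))   # ['l', 'r'] -- lookup/read only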

  11. AFS backup options are limited; the recovery point objective (RPO) is not considered reliable
  12. Comments:

    • There are many backup solutions for AFS and AuriStorFS. Does the statement mean that the backup solution NJIT currently uses is not directly supported?
    • What is the NJIT Recovery Point Objective time period?
    • Is there confusion here between "backups" and real-time replication?
    • Is the requirement that each and every store operation be committed to two or more physical servers prior to call completion? If so, does this replication have to be performed at the AFS layer or can it be addressed by the layer in which vice partitions are stored?
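
    As a worked illustration of the difference between periodic backups and real-time replication, with hypothetical intervals that are not NJIT's actual schedule:

      # Minimal sketch: the worst-case data loss window implied by a backup interval.
      from datetime import timedelta

      def worst_case_data_loss(backup_interval):
          # Data written since the last completed backup is what can be lost,
          # so the achievable RPO is bounded below by the backup interval.
          return backup_interval

      print(worst_case_data_loss(timedelta(hours=24)))  # nightly backups: up to 24 h lost
      print(worst_case_data_loss(timedelta(0)))         # synchronous replication: ~0
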
  13. No migration mechanisms are in place in case AFS hardware fails (move data from a failed/deprecated system to a new one)
    • In case of a fileserver hardware failure, VMware performs automatic failover to another fileserver
    • OpenAFS and AuriStorFS support live volume movement between servers in order to permit maintenance without service interruption; see the sketch at the end of this list
    • In case of server failure, the underlying vice partitions can be attached to other AFS fileservers and brought online.
    • What are the acceptable migration mechanisms?
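
    A minimal sketch of live volume movement using the standard vos move command (available in both OpenAFS and AuriStorFS); the server, partition, and volume names are placeholders.

      # Minimal sketch: move a volume to another fileserver while it stays online,
      # e.g. to drain a server ahead of maintenance or decommissioning.
      import subprocess

      def move_volume(volume="hipaa.projectX",
                      src_server="afs01.example.edu", src_part="/vicepa",
                      dst_server="afs02.example.edu", dst_part="/vicepa"):
          subprocess.run(
              ["vos", "move",
               "-id", volume,
               "-fromserver", src_server, "-frompartition", src_part,
               "-toserver", dst_server, "-topartition", dst_part,
               "-localauth"],   # run with the server key on an administrative host
              check=True,
          )

      if __name__ == "__main__":
          move_volume()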