HPCandBDSurvey2017.research.committee


Overview: purpose and design of survey

The survey gathers user evaluations of the following resources: HPC hardware, HPC storage, big data Layers 1 and 2, software environment, internet bandwidth, and consultation with ARCS.

Architecture

The survey's architecture allows fine-tuned customization for each user, both in the choice of topics addressed and in the granularity of responses.

The resource topics are presented in discrete sections, and users choose any or all of these topic sections according to their interests. Within each section, users can refer to summary descriptions of relevant terms as they are asked about their level of satisfaction with various aspects of the services they use. Responses indicating any level of dissatisfaction are followed by more detailed queries that let the user give specific reasons for the dissatisfaction and suggest improvements; these suggestions can be made by selecting among pre-set options or by entering unrestricted text.

Questions vary in format as appropriate: multiple choice allowing any combination of answers, multiple choice allowing only a single answer, yes/no, and text boxes for unrestricted write-in responses.
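
As a rough illustration of this branching design, the sketch below models topic sections, question formats, and dissatisfaction-triggered follow-ups. It is a minimal sketch only; the class names, answer labels, and structure are assumptions made for illustration and do not reflect how the survey is actually implemented.

  from dataclasses import dataclass, field
  from enum import Enum, auto
  from typing import List

  class Format(Enum):
      MULTI_SELECT = auto()   # multiple choice, any combination of answers
      SINGLE_SELECT = auto()  # multiple choice, a single answer only
      YES_NO = auto()
      FREE_TEXT = auto()      # unrestricted write-in text box

  @dataclass
  class Question:
      text: str
      fmt: Format
      choices: List[str] = field(default_factory=list)
      # follow-up queries shown only when the response indicates dissatisfaction
      followups: List["Question"] = field(default_factory=list)

  @dataclass
  class Section:
      topic: str  # e.g. "HPC hardware", "Software environment"
      questions: List[Question] = field(default_factory=list)

  def next_questions(question: Question, answer: str) -> List[Question]:
      """Return the detailed follow-up queries triggered by a dissatisfied response."""
      # The exact answer labels here are assumptions, not the survey's wording.
      dissatisfied = answer.lower() in {"somewhat dissatisfied", "very dissatisfied"}
      return question.followups if dissatisfied else []

  # Example: a single-answer satisfaction question with one follow-up query.
  ram_q = Question(
      text="How adequate is the RAM on Kong for your work?",
      fmt=Format.SINGLE_SELECT,
      choices=["very satisfied", "satisfied", "somewhat dissatisfied", "very dissatisfied"],
      followups=[Question("What would address the inadequacy?", Format.FREE_TEXT)],
  )
  assert next_questions(ram_q, "satisfied") == []
  assert len(next_questions(ram_q, "very dissatisfied")) == 1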

Taking the survey

All users initially choose topics from a descriptive list. They then provide information about their NJIT status (i.e., faculty/staff or student) and the nature of their research. After this point, the length and complexity of the survey can vary considerably from user to user, depending on the number of topics chosen, the particular topics chosen (some topics divide further into selectable subtopics), and the nature of the user's responses (e.g., indicating dissatisfaction with a particular resource leads to further queries, while indicating satisfaction does not).

Example

A hypothetical user who selects HPC hardware and software environment as topics would, after providing status and research-interest information, be presented with the HPC hardware section of the survey, where he/she would indicate whether resources are used for parallel or serial computations. On choosing any or all of these subtopics, the user is presented with fine-grained questions only on the chosen subtopics. Thus, a user who chooses serial computations will then choose any or all of seven relevant resources (Kong, Stheno, etc.). These resource choices then lead to specific queries about the adequacy of several aspects, such as cores, nodes, and RAM. Users who indicate any level of dissatisfaction with any of these may then evaluate means of addressing the inadequacies. This hypothetical user would next be presented with the software environment section of the survey, where he/she indicates level of satisfaction with existing software and suggests desired software not currently offered. Any suggestions of new software are followed by questions about cost and intended use. All users end the survey by rating their general satisfaction with IST-managed HPC and/or BD resources, and are given the opportunity to provide unrestricted commentary.

As the above example illustrates, the survey varies from user to user in length, granularity, and topic choice, and question types range from choosing among brief pre-set responses to providing detailed information on multiple topics and subtopics, depending on the user's interests and level of expertise.
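
One purely illustrative way to picture the hypothetical path above is as a nested record of that user's selections and responses. The field names, answer labels, and research area below are assumptions made for illustration, not the survey's actual wording or data format.

  # Hypothetical record of the example path described above; field names and
  # answer labels are assumptions, not the survey's actual data format.
  example_path = {
      "status": "faculty/staff",
      "research": "computational chemistry",       # hypothetical research area
      "topics": ["HPC hardware", "Software environment"],
      "HPC hardware": {
          "computation types": ["serial"],          # parallel and/or serial
          "serial resources": ["Kong"],             # any of seven resources
          "Kong": {
              "cores": "satisfied",
              "nodes": "satisfied",
              "RAM": "somewhat dissatisfied",       # triggers follow-up queries
              "RAM follow-up": "more memory per node",
          },
      },
      "Software environment": {
          "existing software": "satisfied",
          "suggested software": "an example package",  # followed by cost and intended use
          "cost": "open source",
          "intended use": "research-group simulations",
      },
      "overall satisfaction": "satisfied",
      "comments": "",
  }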

Results

The structure of the questions and answer choices provides aggregate data on user assessments of a wide variety of resources, at multiple levels of granularity, supplied exclusively by the faculty, staff, and students who use the resources in question. The collected data include user-suggested solutions to problems, both solutions offered among the pre-set answer choices and unrestricted text answers written by the user. Aggregate data can be analyzed separately for faculty/staff and students, as well as for users with specific research goals. Thus, for example, if an average medium-satisfaction rating for a particular resource resulted from high ratings by researchers in one field and low ratings by researchers in another, analysis of the data would capture the different group responses.
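
The lines below are a minimal sketch of this kind of group-wise analysis, assuming the responses were exported to a CSV file; the file name and column names are hypothetical, and this is not the committee's actual analysis procedure.

  import pandas as pd

  # Hypothetical export of survey responses; column names are assumptions.
  responses = pd.read_csv("survey_responses.csv")

  # Mean satisfaction with one resource, split by status and research area,
  # so that divergent ratings between groups are not hidden in the overall mean.
  by_group = (
      responses
      .groupby(["status", "research_area"])["kong_satisfaction"]
      .agg(["mean", "count"])
  )
  print(by_group)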

Conclusion

The survey contains about 125 items. No one user will see all of these; rather, each user's particular subset is determined by that user's selected path and particular responses. Some users' paths may overlap a great deal, while others' may have virtually no overlap. Some users may see only a dozen questions, while others may see several times that many. Hence, it is not possible to present a "representative" printout of the survey, nor would the logic of the branching architecture be discernible from a printed list of all questions.

Ways to get a meaningful introduction to the survey:

  1. Survey Overview
  2. Research committee members could read this document and then try the survey online, in a version that allows them to explore different path choices
  3. Research committee members could attend a live presentation in which the information in this document accompanies an on-screen demonstration of some survey path choices