
This site is deprecated and will be decommissioned shortly. For current information regarding HPC visit our new site: hpc.njit.edu

AFSStrategy


Summary

OpenAFS (OAFS) is deeply woven, on a very large scale, into NJIT's academic, research, web, database, and other services, and into systems administration.

Replacing the cad.njit.edu OAFS cell with other, non-AFS methods of providing the functionality now provided by the OAFS deployment is justified neither technically nor economically : it is probably not feasible using a variety of methods, and it is not feasible using any single method. Such a replacement would incur many downsides, including :

  • A large increase in systems administration effort, due to :
    • Markedly decreased efficiency of managing a non-AFS environment
    • Attempting to reproduce current functionality in a new environment
  • Loss of single filesystem and namespace across all platforms - Linux, MacOS, Windows
  • The need to educate users regarding the new environment; large-scale documentation changes
  • Loss of the possibility of collaboration via geographically dispersed AFS cells. This has implications re: state-wide collaborations - e.g., research over high-speed networks for data that is too large to move

I. AFS history, deployment at NJIT

The Andrew File System (AFS) was originally part of the Andrew Project at Carnegie Mellon University. The Andrew project was initiated in 1983. Transarc Corporation was formed by members of the Andrew Project to commercialize AFS in June 1989. IBM purchased Transarc in 1994 and in 1999 Transarc was renamed the IBM Pittsburgh Lab. IBM released the source code in 2000 and the OpenAFS (OAFS) project was initiated.

NJIT deployed AFS in 1995. The cost for AFS at this time was approximately $5,000 per year. There has been no cost for AFS since 2000 when the OAFS project was started.

II. Reasons to use OpenAFS (OAFS)

OAFS has a large number of important advantages compared to other filesystems :

  1. More than a 20-year history of working very well and very reliably at NJIT - deeply woven, on a very large scale, into academic, research, web, database, and other services, and into systems administration.
  2. Clients for all platforms - Linux, Mac OSX, Windows, others.
  3. Extremely efficient administration and applications distribution.
  4. Single global namespace for all clients.
  5. Long history of working well and reliably at many institutions and international corporations.
  6. No administrator intervention in making mount points available to all clients, other than creating the mount point - a single command done on any client (see the sketch following this list).
  7. Read-only replication of volumes.
  8. Scalability - number of clients, volumes, users, readily accommodated.
  9. Fine-grained ACLs.
  10. Machine-based ACLs (heavily used in the cad.njit.edu cell).
  11. Native Kerberos integration.
  12. Simple enforcement of quotas.
  13. Reconfigurations with no user impact.
  14. On-line backup volumes.
  15. Client caching.
  16. Collaboration via cells across geographic regions.
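
As a concrete illustration of items 6, 9, and 12, the following is a minimal sketch of day-to-day OAFS administration, runnable from any client in the cell by an administrator holding the appropriate tokens. The server, partition, volume, and path names are hypothetical :

    # Create a volume, mount it, grant an ACL, and set a quota -
    # all from a single client, with no fileserver-side configuration.
    vos create afs-fs1 /vicepa research.smith
    fs mkmount /afs/cad.njit.edu/research/smith research.smith
    fs setacl /afs/cad.njit.edu/research/smith jsmith rlidwk
    fs setquota /afs/cad.njit.edu/research/smith -max 5000000

Once the mount point is created, every client in the cell - Linux, Mac, or Windows - sees the new directory immediately, with no per-client configuration.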

What will be lost if there is a move off of AFS

  • Item 2. Clients for all platforms. Data is served by a single application, rather than by an untried mix of applications. This greatly simplifies the delivery of software, documentation, and user home space.
  • Item 3. Efficiency of administration and applications distribution. OAFS, and by extension AuriStorFS, is simple to administer. An alternative assortment of applications would be far more complex and difficult to administer.

    AFS provides a unique environment in which to distribute software. The ability to install software in its own volume (container), and then manipulate that volume transparently to the user, allows for risk-free software updates and module installations (see the sketch following this list).

  • Item 4. Single global namespace. Different platforms will see different namespaces - i.e., balkanization. The ability to communicate the locations of directories and files common to all platforms would be lost.
  • Item 6. Mount points easily made available. AFS uses server-side (rather than client-side) mounting of filesystems. This is done automatically, with no administrator intervention and no root intervention on clients; a client knows which fileserver holds a file without any administrator intervention. Loss of this capability would have enormous consequences for the efficiency of deploying software (both opensource and commercial) to clients.
  • Item 7. Read-only replication of volumes. Provides file availability during a fileserver outage. Could be used to advantage at a disaster recovery site. Replication is built into AFS; additional software of some kind would be needed to add this capability for non-AFS products.
  • Item 8. Scalability. The scalability of replacement products is unknown, and would be limited to the capabilities of the least scalable product of the replacement set.
  • Item 9. Fine-grained ACLs. Differences between the Posix and Windows ACL implementations mean that the common ACLs now in place across all platforms using OAFS would not be possible.
  • Item 11. Native Kerberos integration. Aligns with IDM, SSO plans.
  • Item 12. Simple enforcement of quotas. The administration of the cad.njit.edu cell relies on enforcement of quotas on many types of directories. Other filesystems may support user and group quotas but, unlike AFS, they do not support quotas at the "data-chunk" (per-volume) level.
  • Item 13. Reconfigurations with no user impact. Routine maintenance would mean filesystem outages, with resultant file unavailability. Furthermore, combined with the loss of the single global namespace, it would be difficult to determine which files will be inaccessible during a fileserver outage.
  • Item 14. On-line backup volumes. Provides users immediate access to the previous day's files, and administrators immediate access to all of the previous day's files. Achieving similar capability with other products would require deployment of additional software of some kind, and twice the disk space OAFS uses. Absent backup volumes, a large increase in the number of restore requests would be expected.
  • Item 15. Client caching. Significantly increases performance. Use of other products would greatly increase network traffic and fileserver disk access.
  • Item 16. Collaboration via cells across geographic regions. AFS was designed with this unique capability, which allows users in several cells to seamlessly access data in other cells. This capability would be lost.
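
The sketch below illustrates items 3, 7, and 14 : risk-free software distribution via replicated volumes, and previous-day file access via backup volumes. Again, the server, volume, and path names are hypothetical :

    # Items 3 and 7 : publish a software update atomically.
    # Update the read-write volume, then release it to its read-only replicas.
    vos addsite afs-fs2 /vicepa sw.matlab
    vos release sw.matlab
    # Item 14 : create the nightly backup (snapshot) volume and mount it
    # read-only in the user's home directory, e.g., as OldFiles.
    vos backup user.jsmith
    fs mkmount /afs/cad.njit.edu/u/j/jsmith/OldFiles user.jsmith.backup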

II.1 Uses of AFS at NJIT : User Services

  • Home directories
    • General-purpose academic data : Web pages, code, executables, applications output, documents, etc.
  • Course directories : hundreds per semester
  • Research directories : thousands, across about 20 departments
  • Research websites
  • Portions of DMS and CCS departmental websites
  • Club websites for clubs not using Google Sites
  • Departmental administrative documents
  • MySQL databases

II.2 Uses of AFS at NJIT : Systems Administration

Enterprise-wide deployment of :

  • Opensource and commercial software
  • Opensource libraries
  • Scripts
  • System-wide utilities
  • System-wide configuration and reference files
  • Administration program output files

III Largest current AFS deployments

III.1 Largest current AFS worldwide commercial deployments

These mission-critical deployments, which are not public (almost always the case with commercial cells), comprise hundreds of cells in billion-dollar corporations. This level of usage makes AFS an industry standard at the highest level.

  • GE Aircraft Systems
  • Goldman Sachs
  • IBM and all of its spinoffs
    • Lexmark
    • Lenovo's Thinkpad division
    • Hitachi's disk manufacturing
    • GlobalFoundries
  • KLM
  • Morgan Stanley : 180,000 Windows 1.7.x clients
  • Qualcomm
  • United Airlines aircraft maintenance

III.2 Largest current AFS academic and research public cells

If a cell is not public, its existence is not externally known. Like most academic cells, the NJIT cad.njit.edu cell is not public.

  • Arizona State Univ.
  • Carnegie Mellon Univ.
  • Deutsches Elektronen-Synchrotron (DESY)
  • MIT
  • North Carolina State University
  • Stanford Univ.
  • Univ of Michigan
  • Univ of North Carolina Chapel Hill - Arts and Sciences
  • Univ of North Carolina Charlotte
  • Univ of Notre Dame CRC (main campus is on autopilot)

ActivePublicCells

IV. What are the current issues

  • Instability in the OAFS community : OAFS "vs" AuriStorFS (commercial, Jeffrey Altman). It is not clear at this point how things will develop.
  • OAFS software has always been free, but the OAFS project may need to start charging institutions for its use.
  • In late 2015, the OAFS client for Mac OSX >= 10.10 stopped working, due to Kerberos-related changes on the Mac side. This was fixed in the AuriStorFS client in 12/2015, and in the OAFS client in 1/2016.
  • Greater granularity is needed in restoring from backups. Restoring even a single small file from a backup requires restoring the entire volume containing that file - a major headache when the volume is large, as is increasingly the case.
  • Security incidents (e.g., UNC) : users misuse AFS ACLs, inappropriately exposing directories.

It should be noted that the cad.njit.edu OAFS cell continues to function normally, with no known problems, and none anticipated. However, AFS software, like any other application, must be maintained to keep pace with operating system changes and security requirements.

V. CSO-reported Problems with OAFS Client Software

The following points have been made by CSO re: OAFS client software. Note that these points are moot once NJIT has AuriStorFS support.

  1. CSO : The freely available OpenAFS clients for Windows and Mac are completely unusable at this point due to the requirement that clients be signed and certified by the OS vendor.
    • MacOSX
      • Clients do not require certification, but they do require the vendor to be approved by Apple.

      • AuriStorFS
        • AuriStor, Inc. is an approved vendor and all of the OSX clients shipped by AuriStor, Inc. are appropriately signed.
      • OAFS
        • Current OAFS client for MacOSX tested at NJIT and elsewhere; it works
    • Windows
      • AuriStorFS
        • The Windows client 1.7.3301 as distributed from the AuriStor, Inc. web site is appropriately signed for all versions of Microsoft Windows through Windows 10 until Dec 31, 2016. After that time the signatures will be invalid and a replacement client package will be required.

          All clients for Windows 10 and Server 2016 signed after October 26, 2015 require co-signatures from Microsoft. Clients for Server 2016 require certification to obtain that co-signature. AuriStor, Inc. does not yet distribute an AuriStorFS-branded client to the public because it has not yet met all of the requirements for certification. A client that supports only the AFS3 protocol (no IPv6) and the rxkad security class (DES and fcrypt security) cannot obtain certification.

      • OAFS
  2. CSO : We are also finding problems where some files cannot be copied from Windows Explorer into AFS. Users of Microsoft Office have also had problems when trying to save documents directly to AFS.

VI. Backup Concerns/Considerations

  • To restore a single file, the entire backup set containing the file has to be restored from a backup tape.

    Veritas NetBackup does not support restoration of a single file from the backup set it is in. Other commercial products, e.g., Teradactyl TiBS and Tivoli Storage Manager, do have this support. A cost-benefit analysis of deploying an additional backup application is needed.

  • vos dump performance is poor.

    AuriStorFS offers greatly improved 'vos dump' performance compared to OAFS :

    There are three components to 'vos dump' performance : 1) the volserver implementation; 2) the Rx implementation; 3) the 'vos dump' command itself. All three have been significantly improved in AuriStorFS. Finally, whereas it is not safe to execute large numbers of "vos dump" operations in parallel with the OpenAFS volserver, each AuriStorFS volserver can support up to 1000 simultaneous volume operations. A minimal sketch of the dump/restore cycle follows.
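
    For reference, this is a minimal sketch of the dump/restore cycle discussed above; the volume, server, and file names are hypothetical :

        # Full dump of a volume to a file (-time 0 dumps from the beginning).
        vos dump -id research.smith -time 0 -file /backups/research.smith.dump
        # Restoring even one file means restoring the whole volume (see above);
        # restore to a scratch volume to avoid overwriting the live one.
        vos restore afs-fs1 /vicepa research.smith.rest -file /backups/research.smith.dump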

VII. AuriStorFS as replacement for OAFS

VII.1 AuriStorFS site status as of 01/19/2017

AuriStorFS is a commercial implementation of AFS with important enhancements relative to OAFS in performance, security, capacities, authorization (including per-file ACLs), and administration. The company behind it was founded in October 2007 as Your File System (YFS). Jeffrey Altman, AuriStor CEO, is very willing to discuss with NJIT his view of the relationship between AuriStorFS and other products.

  • Client : MIT Lincoln Labs
  • Client : Large Wall Street firm (30 cells)
  • Client : North Carolina State Univ. Designing the merger of three OpenAFS cells into one AuriStorFS cell
  • Client : Vanderbilt Univ Advanced Computing Center has committed to purchase. Vanderbilt University, along with Univ of Tennessee Knoxville and UT Memphis, is submitting an NSF Campus Cyberinfrastructure proposal (NSF 16-567 Campus Cyberinfrastructure (CC*)) that uses AuriStorFS to provide namespace and security services on top of a statewide Tennessee Open Research Cloud (TORC)
  • Client : Michigan State Univ., production conversion from OpenAFS to AuriStorFS by late November 2016 - up to 400,000 users
  • Client : SLAC (National Accelerator Laboratory - Stanford Univ).
  • Client : Lulea University of Technology, Sweden.
  • Client : Univ of Maryland College Park (as of 02/17/2017). 65,000 users.
  • In contracting stage with (as of 01/19/2017) :
    • Naval Research Labs
    • University of California, Santa Cruz, 55,000 accounts. Purchase process has begun.
  • In trial at :
    • Large multi-national bank that has not previously used AFS
    • Univ of Notre Dame; will present AuriStorFS use results at Super Computing 16 at UND booth and a BOF
  • In preliminary discussions at :
    • FBI
    • Defense Information Systems Agency (DISA)

VII.2 AuriStorFS security

  1. Security policy (authentication, integrity, privacy) requirements on volumes and file servers. Only a file server with a security policy equal to or stricter than the volume's policy can host the volume. These policies enforce the proper security posture for each connection a client uses when contacting a file server.
  2. Labels. Volumes and File Servers can be assigned arbitrary labels. A volume can only reside on a file server that has a superset of the labels assigned to the volume.
  3. The yfs-rxgk security class permits the use of the AES256-CTS-HMAC-SHA1-96 algorithm for encryption and provides perfect forward secrecy. As soon as the IETF finishes standardization, the AES256-CTS-HMAC-SHA384-192 algorithm will be supported.

In addition, AuriStorFS supports multi-factor access control entries so it is possible to grant different permissions to :

  • anonymous
  • user
  • anonymous @ machine
  • machine
  • user @ machine

where "user" and "machine" are Kerberos identities.

Considerations in deploying AuriStorFS :

  • Licensing costs
  • Converting from OAFS to AuriStorFS is straightforward. However, the process of reverting from AuriStorFS to OAFS may be impractical
  • Viability of AuriStor, Inc. - currently 4 FTEs and 6 contractors

VII.3 AuriStorFS and cloud storage

AuriStor's roadmap includes two-way, whole-file copy-and-sync between AFS directories and cloud-storage (e.g., AWS/DropBox/GoogleDrive/OwnCloud) directories. When the user makes changes in the AFS directory, those changes show up in the cloud directory, and vice versa. This capability has the added benefit that the AFS backup would also back up the cloud directories.

In addition, it will be possible to do filtering :

  • Before a file is synced from AFS to the cloud, it could go through a content filter : e.g., is this information that is not allowed to be shared? (PII, etc)
  • Before a file is synced from the cloud to AFS it could go through a content filter : e.g., virus scanning

The abstraction layer for AFS is the same regardless of the type of back-end vice partition (where data is stored) : clients access and manipulate files in exactly the same way, whether the files are local to the fileserver, stored in an Amazon Web Services S3 back-end, or on any other type of vice partition.

Note that an institutional file system is required for the workflows of a robust, research-focused academic experience : various aspects of course work, web pages querying academic Oracle and MySQL databases, collaboration, data security (outsourced data storage), long-term backups, and core institutional services.

VII.4 AuriStorFS and HPC

AuriStor, with its greatly enhanced performance compared to OAFS, would move NJIT HPC closer to the goals of the Tartan HPC Initiative by replacing the separate NFS-hosted /home directories currently deployed on Kong and Stheno with AFS-mounted directories, thus providing consistent storage for researchers using both clusters.

VII.5 AuriStorFS and NetApp

  • Unlike the NetApp filesystem, and other such hardware-software combinations, OAFS / AuriStorFS is not dependent on any particular hardware; whereas the NetApp filesystem requires NetApp hardware, OAFS / AuriStorFS works on virtually any storage device. Thus, use of OAFS / AuriStorFS avoids hardware vendor lock-in.
  • With NetApp, the design of the structured namespace must be done at the start of the implementation. That design is essentially locked in.
  • NetApp's NFS does not support rpc.rquotad, which means the standard "edquota" command cannot be used to adjust user quotas on Linux systems that mount the filesystem. Instead, administrators need to ssh to the NetApp device (a different system than the one mounting the filesystem) to modify quotas, or create a utility to do so. In contrast, an AFS administrator can adjust quotas from any system in the cell, regardless of which system (or device) the files are actually on - see the sketch below. This is an example of how OAFS / AuriStorFS insulates administration from the particular underlying hardware.
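
    A minimal illustration of the difference; the paths and quota value are hypothetical :

      # AFS : adjust and verify a quota from any client in the cell (values in KB).
      fs setquota /afs/cad.njit.edu/u/j/jsmith -max 2097152
      fs listquota /afs/cad.njit.edu/u/j/jsmith
      # NetApp NFS : rpc.rquotad is unsupported, so "edquota" fails on the client;
      # the administrator must instead log in to the NetApp device and use its
      # own quota tools, or wrap that step in a custom utility.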

VII.6 Considerations of replacement of OAFS with a variety of applications and/or filesystems: Experience at other institutions

  1. SLAC National Accelerator Laboratory : SLAC is migrating to AuriStorFS.

    • Discussion with Renata Dart, Unix Systems Administrator, 5/3/2016
      • SLAC has a large compute grid - accessed globally - with many thousands of nodes each of which has access to AFS, GPFS, and other storage systems
      • SLAC is about 40 days into a 90-day AuriStorFS trial
      • Due to the limited number of threads available in OAFS fileserver thread pools, SLAC instructs researchers not to use AFS to host data sets or result sets. Researchers are supposed to copy data from AFS to one of the cluster file systems (such as GPFS) at the start of a job and copy the results back to AFS at the end. Researchers frequently forget, and kick off jobs across hundreds or thousands of nodes, causing poor response ("meltdown") for all users of that fileserver.
      • A job run across 5000 nodes that had caused meltdown of an OAFS fileserver caused no issues on an AuriStorFS fileserver. As a result, it is no longer necessary for SLAC to recommend that researchers avoid the AFS namespace for long-running multi-node jobs. This has exactly the same implications for HPC at NJIT.
      • Fileservers are all RHEL; clients (thousands of them) are all RHEL5/6/7. Installation of AuriStorFS server and client software was routine. The command set is essentially the same as that of OAFS. There have been no crashes.
      • AuriStorFS support has been extremely responsive
      • AFS database servers are OAFS, for legacy encryption reasons. AuriStorFS clients worked fine with the OAFS dbservers
      • Backups are done via Teradactyl TiBS software (commercial)
      • SLAC has not considered moving off of AFS, and does not plan to consider it
      • Follow-up call is scheduled for early June 2016
  2. Experience at JPL, from the 2015 OAFS and Kerberos Workshop : JPL is staying with OAFS for now.

    • AFS has one or more features or combinations of features that are not available with other available file systems
    • Organizations lose track of what AFS is doing and what AFS can do
    • Believing that AFS does "just X" (shares files) or is "just Y" (cross platform), niche solutions are implemented (SharePoint, other web based tools, drag and drop file transfer, home grown file system synchronization, NFS ...)
    • File systems are not "sexy" and not well understood by management; once implemented and integrated into the IT environment, AFS need rarely be discussed
    • Piecewise replacement of AFS is an inadequate approach; it results in a proliferation of costs due to supporting a variety of special-purpose solutions
    • Some loss of institutional control of data, for a variety of reasons -- historical protections (ACLs) not transferred, different formats, loss of Kerberos, different application controls, etc.
    • The cost, time, and security of moving off AFS are all unknown. The writer, K. Kimball, expects that they will never leave AFS, because no formal cost/risk analysis of use cases was performed
  3. Conference call with Jason Cowart, Stanford Central IT, on 1-Mar-2016
    • Would like to have already migrated to AuriStorFS, except for turnover of about 70% of Central IT services personnel
    • In a holding pattern right now, continuing to use OAFS, with a service contract from Sine Nomine
    • Expecting re-evaluation of the situation in a year
  4. Musings at Stanford
    • A listing of questions, concerns, use cases, etc., intended for discussion, but with no actual responses. The document is useful for our considerations.
  5. Stanford Linux Infrastructure cell
    • Set up of discussion pending
  6. Stanford Department of Computer Science cell
    • Set up of discussion pending

VII.7 Other alternatives to OAFS

  1. NFSv4
    • TCP/IP-based. Can cause scalability problems : fileservers may be unable to handle thousands of clients, because the TCP/IP stack cannot support that many simultaneous connections.
    • The NFSv4 protocol, due to its reliance on TCP and the ability to use TCP hardware accelerators, is able to achieve faster throughput for a single connection than AuriStorFS's Rx UDP-based implementation. AuriStorFS' Rx will improve over time but is better at horizontal scaling for larger numbers of simultaneous clients.
    • AuriStorFS uses the same cache coherency model as AFS3 and therefore gains all of the performance benefits that model provides when compared to optimistic locking based file systems such as NFS* and CIFS.
    • Implemented ACL differences between Posix and Windows means that the common ACLs now in place across all platforms using OAFS will not be possible.
  2. General Parallel File System (GPFS) from IBM
    • Has a somewhat limited MS Windows desktop client that only supports user mapping via a Microsoft add-on. From the previous URL : "Even with NFSv4, which provides features such as ACLs and delegations, the need for a separate security infrastructure (such as Kerberos) can be prohibitive."
    • Licensing estimated at $14K per server socket. We don't know how many servers/sockets would be needed, but given that AuriStorFS would use three servers with two CPUs each (6 sockets), the equivalent licensing for GPFS would be $84K annually.
    • Institutional cost and impact of conversion to GPFS
    • Ongoing maintenance and personnel costs for GPFS
    • Performance versus other replacement filesystems
    • Free 90 day trial available
    • Use as a general-purpose file system at educational institutions unknown
    • IBM is stable; somewhat unlikely they would drop GPFS
  3. OneFS distributed file system from Isilon Systems
    • Need to explore deployment across Windows and Mac OS; it does provide access via NFS, SMB/CIFS, FTP, HDFS.
    • Authentication via MS AD, LDAP, and NIS, but apparently not Kerberos
    • Was quoted at $30K per year in 2013 at UMDNJ, for a much smaller deployment
    • Institutional cost impact of conversion to OneFS
    • Ongoing maintenance and personnel costs for OneFS
    • Performance versus other replacement filesystems
    • Free 90 day trial apparently available
    • Isilon Systems was purchased by EMC, a stable company, in November 2010

VIII. AuriStorFS licensing terms

AuriStorFS Fact Sheet

  • 4 DB servers
  • 4 file servers
  • 1000 user or machine IDs
  • Unlimited support via email and web 9-5 M-F EST with 4-hour response time
  • $21,000 per cell per year
    • Cost for cad.njit.edu cell
      • Between 10,001 and 15,000 PTS entries : $7,100 per year
      • Total cost per year : $28,100
    • Cost for uis.njit.edu cell
      • $21,000 per year

IX. AuriStorFS trial

Skip, not needed.

X. Extension of use of OAFS/AuriStorFS at NJIT - MacOSX and Windows

It is possible and beneficial to replace the current MS DFS deployment with AuriStorFS.

Reasons for extension of use

  • Single file system, accessible from all platforms
  • Very scalable
  • Efficiency of management
  • Local and geographically dispersed collaboration enabler

Immediate benefits

  • The lab image for Windows computers is now 150GB. Serving Windows applications from AFS would significantly reduce the image size and allow flexibility in updating, adding, and removing applications

Roaming profiles stored in AFS

  • The UNC College of Engineering has successfully deployed roaming profiles from AFS on a large scale for years. This practice takes advantage of a unified approach and reduces administrative effort
  • This method of deploying roaming profiles should be tested for feasibility and performance

Feasibility, performance comparisons : Mac OSX, Windows

In the past, serving Windows applications out of AFS was sometimes problematic, due to :

  • Conflict between the application and the Windows registry
  • Inadequate performance of the application - e.g., loading time too long

Similarly, there have been obstacles to running Mac OSX applications out of AFS.

  1. Determine feasibility of serving Mac OSX and Windows applications out of AFS
  2. Determine performance of applications served out of AFS
    • Compared to being served from local disk
    • Compared to being served from MS DFS server (Windows only)

The timeframe for the above testing is difficult to estimate - probably on the order of several months. This task is independent of the AuriStorFS trial.

XI. Next steps

Based on the current state of research, including extensive discussions with AuriStorFS users :

  1. Immediately purchase an AuriStorFS license for at least the cad.njit.edu cell. This will provide solid support in case of security, client, or server problems with OAFS. It will also provide needed capacity enhancements : as of September 2016, there have been three instances of researchers hitting the OAFS limit of about 64K files in a directory, which hampers their work. The AuriStorFS limit is about 20 million files per directory, roughly 310 times as many. This situation is expected to worsen quickly, as researchers generate increasingly large numbers of files.
  2. Test the use of roaming profiles in AFS

XII. Costs

Moving from OAFS to AuriStorFS :

  • AuriStorFS licensing costs are specified under VIII.
  • This move would free up VM resources (28 fileservers -> 3 or 4 fileservers), and would result in an increase in performance
  • Estimated staff-hours : ??

Moving off of OAFS to an assortment of other technologies, not as yet identified :

  • As of 4/7/2016, the cad.njit.edu cell had 32,851 volumes, containing 144,553,379 files, using 25.6TB of disk, woven deeply into academics, research, web, database, system administration, and other services
  • It is not possible to estimate the level of effort that would be required to move off of OAFS, except that it would be many orders of magnitude greater than that required to move from OAFS to AuriStorFS

XIII. Conclusions

  1. AFS is a critical and integral part of NJIT's academic and research infrastructure. It is deeply embedded, with over 20 years of reliable deployment. Trying to replace it with an assortment of other technologies would take an extraordinary amount of effort and money, and would result in a significantly inferior service.
  2. The point has been reached where the licensing of AuriStorFS is the rational and feasible course, in line with the purchase of support for other critical infrastructure applications.

Addenda

A. CST 19 May 2016 Document - Sites

CSTMay19DocSites

B. CST 19 May 2016 Document - Recommendation

CSTMay19DocRecommendation

Filesystem comparison

C. July 22 2016 Meeting

July22-2016Meeting.1

D. August 18 2016 Meeting

18Aug-2016Meeting

E. Merging of cad.njit.edu and uis.njit.edu cells

Merge

F. Indiana University Research Technologies

IUresearch

G. CERN

CERN