SimCenter Research and Education Facilities

SimCenter Facilities

A 31,000 sq. ft. SimCenter research and education facility was officially opened in November 2003. It is located at 701 East M.L. King Boulevard, adjacent to the UTC campus.

The UC Foundation renovated this formerly unoccupied building, which had been donated by the City of Chattanooga.

The SimCenter facility includes faculty offices, student cubicles, a 1,500 sq. ft. computer room, a conference/meeting room accommodating 25 people, an 80-seat auditorium, two secure expandable suites of rooms dedicated to proprietary and/or classified research, a research library, and other work space.

The interior layout is designed to facilitate extensive cross-disciplinary interactions among faculty and students, with student cubicles in large open spaces adjacent to faculty offices.

Computing Facilities

Local Facilities – The SimCenter computing facility includes the following resources:

Compute Servers

The cluster called OneSeventeen consists of 33 compute nodes and a master/login node. The compute nodes are configured as follows:

  • Two (dual-socket) Intel Xeon E5-2680 v4 2.4 GHz chips, each with 14 cores, for a total of 28 cores per compute server.
  • 128 GB of RAM per compute server, i.e., ~4.5 GB of RAM per core.
  • One NVIDIA P100 GPU with 16 GB of memory and 1,792 double-precision cores.
  • Theoretical peak performance of ~1 TFLOPS (CPU only) or ~5.7 TFLOPS (CPU + GPU); see the sketch following this list.
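
As a rough cross-check of the per-node figures above, the short sketch below reproduces them from first principles. It assumes 16 double-precision FLOPs per core per cycle for the E5-2680 v4 (AVX2 with FMA) and a P100 base clock of roughly 1.3 GHz; neither value is stated in the list above, so treat the result as an estimate rather than a vendor figure.

    # Back-of-the-envelope check of the per-node figures (Python).
    # Assumptions not stated above: 16 DP FLOPs/cycle/core on the CPU
    # (AVX2 + FMA) and a ~1.3 GHz base clock on the P100.

    cores_per_node = 2 * 14                  # two 14-core E5-2680 v4 sockets
    ram_gb = 128

    ram_per_core = ram_gb / cores_per_node                   # ~4.6 GB/core (~4.5 in the list above)
    cpu_peak = cores_per_node * 2.4e9 * 16 / 1e12             # ~1.1 TFLOPS
    gpu_peak = 1792 * 2 * 1.3e9 / 1e12                        # ~4.7 TFLOPS (FMA counts as 2 FLOPs)

    print(f"RAM per core:   {ram_per_core:.1f} GB")
    print(f"CPU peak:       {cpu_peak:.1f} TFLOPS")
    print(f"CPU + GPU peak: {cpu_peak + gpu_peak:.1f} TFLOPS")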

The login node is used for compiling codes and launching jobs and is not part of the compute fabric. It is mostly the same as the compute nodes, with the notable exception that it does not have a GPU. The login node is configured as follows:

  • Two (dual-socket) Intel Xeon E5-2650 v4, 2.2GHz chips, each with 12 cores for a total of 24 cores.
  • 128 GB of RAM.

The 'fabric' of the cluster, i.e., how the distributed compute nodes communicate with each other, is EDR InfiniBand running at 100 Gb/s. The peak performance of the cluster as a whole is ~33 TFLOPS (CPU only) or ~190 TFLOPS (CPU/GPU). The cluster contains a total of 924 CPU cores and 59,136 GPU cores.
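
The cluster-wide totals quoted above follow directly from the 33 compute nodes; a minimal sketch, scaling the per-node figures already listed (and the quoted, rounded per-node peaks), is:

    # Cluster-wide totals for OneSeventeen, scaled from the per-node figures.
    nodes = 33

    cpu_cores = nodes * 28           # 924 CPU cores
    gpu_cores = nodes * 1792         # 59,136 double-precision GPU cores

    # Scaling the quoted per-node peaks (~1 TFLOPS CPU-only, ~5.7 TFLOPS CPU+GPU):
    cpu_only_tflops = nodes * 1.0    # ~33 TFLOPS
    cpu_gpu_tflops = nodes * 5.7     # ~188 TFLOPS, quoted above as ~190 TFLOPS

    print(cpu_cores, gpu_cores, cpu_only_tflops, round(cpu_gpu_tflops))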

Papertape is a slightly older cluster, circa 2010, that contains 40 diskless compute nodes each with the following specifications:

  • Dual-socket, 8-core Intel Xeon E5-2670 2.6 GHz CPUs
  • 16 cores per node -> 640 cores in total
  • FDR InfiniBand interconnect (56Gb/s)
  • 32 GB RAM per compute node -> 2GB per core

qbert is a 192-core multi-use cluster comprising 12 compute nodes and a head node, configured as follows:

  • Two eight-core Intel Xeon E5-2650v2 processors per node
  • 32 GB RAM per node
  • 10 Gigabit Ethernet interconnect
  • 216 TB of reconfigurable 10 Gigabit iSCSI storage
  • The head node has 256 GB of RAM, an NVIDIA K20 GPU, and a Xeon Phi 5110P coprocessor

Currently, eight of these nodes are configured for HPC whilst the remaining five are set up for Hadoop.

The SimCenter also has a highly-available VMware cluster made up of three Dell R730s, each configured as follows:

  • VMware vSphere 6.5
  • Two 14-core Intel Xeon E5-2680v4 processors
  • 256 GB RAM
  • Two-tiered storage subsystem with 20 TB of 7.2K 'capacity' HDDs and 2 TB of write-intensive 'fast' SSDs

Legacy Clusters:

  • 1,300-core, 325-node diskless cluster (Dell)
    • Dual-core Intel EM64T 3.0GHz Xeon processors
    • 4 GB RAM per node
    • GigE interconnect (576 port Force10 E1200 switch)
  • 32-core compute server (IBM)
    • Four eight-core IBM POWER7 3.55GHz processors
    • 256GB RAM
  • 32-core compute server (IBM)
    • Four eight-core IBM POWER7 3.55GHz processors
    • 128GB RAM
  • 32-core compute server (Dell)
    • Four eight-core Intel Xeon Nehalem-EX 2.26GHz processors
    • 256 GB RAM

SimCenter Data Storage

The SimCenter's HPC storage infrastructure comprises a two-tiered, self-encrypting, 1.1 PB DDN high-performance parallel file system running IBM GPFS/Spectrum Scale. This DDN SFA14KX storage solution currently occupies less than a single rack and is expandable up to 9 PB without additional rack infrastructure. Our HPC compute nodes communicate with this storage using the native Spectrum Scale protocol over InfiniBand, while desktops and other systems access it via NFS.

  • The fast tier of the storage system consists of 100 TB of 10K RPM SEDs (self-encrypting drives); this tier hosts home directories and can be configured, using Spectrum Scale, to hold other frequently accessed data for improved I/O performance.

  • The capacity tier of this system contains 1 PB of 7.2K RPM SEDs and is mainly used as 'scratch' space for our HPC simulations.