SimCenter Research and Education Facilities

SimCenter Facilities

A 31,000 sq. ft. SimCenter research and education facility was officially opened in November 2003. It is located at 701 East M.L. King Boulevard, adjacent to the UTC campus.

The UC Foundation renovated this formerly unoccupied building, which had been donated by the City of Chattanooga.

The SimCenter facility includes faculty offices, student cubicles, a 1,500 sq. ft. computer room, a conference/meeting room accommodating 25 people, an 80-seat auditorium, two secure expandable suites of rooms dedicated to proprietary and/or classified research, a research library, and other work space.

The interior layout is designed to facilitate extensive cross-disciplinary interactions among faculty and students, with student cubicles in large open spaces adjacent to faculty offices.

Computing Facilities

The SimCenter computing facility includes the following resources:

Compute Servers

The cluster, called Tennessine (or OneSeventeen), consists of 33 compute nodes and a master/login node. The compute nodes are configured as follows:

  • Two (dual-socket) Intel Xeon E5-2680 v4 2.4 GHz chips, each with 14 cores, for a total of 28 cores per compute server.
  • 128 GB of RAM per compute server, i.e., ~4.5 GB of RAM per core.
  • One NVIDIA P100 GPU with 16 GB of memory and 1792 double-precision cores.
  • Theoretical peak performance of ~1 TFLOPs (CPU only) or ~5.7 TFLOPs (CPU/GPU).
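The per-node peak figures above can be sanity-checked with a back-of-the-envelope calculation. The sketch below makes two assumptions not stated in the text: the Broadwell-era Xeon cores sustain 16 double-precision FLOPs per cycle (AVX2 with FMA), and the P100 boosts to roughly 1.3 GHz at 2 FLOPs per cycle per FP64 core.

```python
# Rough peak-performance estimate for one Tennessine compute node.
# Assumed (not in the source text): 16 DP FLOPs/cycle/core via AVX2 FMA,
# ~1.3 GHz P100 boost clock, 2 FLOPs/cycle per FP64 CUDA core.

CPU_CORES, CPU_GHZ, CPU_FLOPS_PER_CYCLE = 28, 2.4, 16
GPU_CORES, GPU_GHZ, GPU_FLOPS_PER_CYCLE = 1792, 1.3, 2

cpu_tflops = CPU_CORES * CPU_GHZ * CPU_FLOPS_PER_CYCLE / 1000
gpu_tflops = GPU_CORES * GPU_GHZ * GPU_FLOPS_PER_CYCLE / 1000

print(f"CPU peak:       ~{cpu_tflops:.2f} TFLOPs")             # ~1.08
print(f"CPU + GPU peak: ~{cpu_tflops + gpu_tflops:.2f} TFLOPs") # ~5.73
print(f"RAM per core:   ~{128 / CPU_CORES:.1f} GB")             # ~4.6
```

Both results agree with the quoted ~1 TFLOPs (CPU only) and ~5.7 TFLOPs (CPU/GPU) figures.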

lookout – 160-core/640-thread (4-node) IBM Power9 (AC922) cluster

  • Dual Power9 20 core/80 thread CPUs (2.4GHz, 3.0 GHz turbo)
  • 4 Nvidia Volta GPUs with 16 GB GPU memory; NVLink provides up to 300GB/s of bi-directional bandwidth GPU-GPU and CPU-GPU
  • 256 GB RAM
  • InfiniBand EDR interconnect (100 Gb/s)

The login node is used for compiling codes and launching jobs and is not included in the compute fabric. It is largely identical to the compute nodes, with the notable exception that it does not have a GPU. The login node is configured as follows:

  • Two (dual-socket) Intel Xeon E5-2650 v4, 2.2GHz chips, each with 12 cores for a total of 24 cores.
  • 128 GB of RAM.

The 'fabric' of the cluster, i.e., how the distributed compute nodes communicate with each other, is EDR InfiniBand, which communicates at 100 Gb/s. The peak performance of the cluster as a whole is ~33 TFLOPs (CPU only) or ~190 TFLOPs (CPU/GPU). The cluster contains a total of 924 CPU cores and 59,136 GPU cores.
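The cluster-wide totals follow directly from the per-node numbers. A small sketch, using the per-node peak figures quoted above:

```python
# Aggregate core counts and peak performance for the 33-node Tennessine cluster.
# Per-node peaks (~1 and ~5.7 TFLOPs) are the figures quoted above.

NODES = 33
cpu_cores = NODES * 28      # 28 CPU cores per node
gpu_cores = NODES * 1792    # 1792 FP64 GPU cores per node

print(cpu_cores)                         # 924
print(gpu_cores)                         # 59136
print(f"~{NODES * 1.0:.0f} TFLOPs CPU-only")
print(f"~{NODES * 5.73:.0f} TFLOPs CPU+GPU")
```

The core counts reproduce the quoted 924 CPU cores and 59,136 GPU cores exactly, and 33 nodes at ~5.7 TFLOPs each gives the ~190 TFLOPs aggregate.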

Papertape is a slightly older cluster, circa 2010, that contains 40 diskless compute nodes each with the following specifications:

  • Dual-socket, 8-core Intel Xeon E5-2670 2.6 GHz CPUs
  • 16 cores per node -> 640 cores in total
  • FDR InfiniBand interconnect (56Gb/s)
  • 32 GB RAM per compute node -> 2GB per core

qbert is a 192-core multi-use cluster comprising 12 compute nodes and a head node, configured as follows:

  • Two eight-core Intel Xeon E5-2650v2 processors per node
  • 32 GB RAM per node
  • 10 Gigabit Ethernet interconnect
  • 216TB of reconfigurable 10 Gigabit iSCSI storage
  • The head node has 256 GB of RAM, an NVIDIA K20 GPU, and an Intel Xeon Phi 5110P coprocessor

Currently, eight of these nodes are configured for HPC while the remaining four are set up for Hadoop.

cerberus – 32 core compute server (Dell)

  • Four 8-core Intel Xeon X7560 2.27 GHz processors
  • 256 GB RAM

Highly-Available VMware Cluster

This cluster is made up of three Dell R730s, each configured as follows:

  • VMware vSphere 6.5
  • Two 14-core Intel Xeon E5-2680v4 processors
  • 256 GB RAM
  • Two-tiered storage subsystem with 20 TB of 7.2K 'capacity' HDDs and 2 TB of write-intensive 'fast' SSDs

SimCenter Data Storage

The SimCenter's HPC storage infrastructure comprises a two-tiered, self-encrypting, 1.1 PB DDN high-performance parallel file system running IBM GPFS/Spectrum Scale. This DDN SFA14KX storage solution is currently contained in less than a single rack and is expandable to up to 9 PB without additional rack infrastructure. Our HPC compute nodes communicate with this storage using the native Spectrum Scale protocol over InfiniBand, while desktops etc. access it via NFS.

  • The fast tier of the storage system is made up of 100 TB of 10K SEDs (self-encrypting drives); this tier is where home directories are stored, and it can be configured, using Spectrum Scale, to hold other frequently accessed data for improved I/O performance.

  • The capacity tier of the system contains 1 PB of 7.2K SEDs and is mainly used as 'scratch' space for our HPC simulations.