SimCenter Research and Education Facilities

SimCenter Facilities

A 31,000 sq. ft. SimCenter research and education facility was officially opened in November 2003. It is located at 701 East M.L. King Boulevard, adjacent to the UTC campus.

The UC Foundation renovated this formerly unoccupied building, which had been donated by the City of Chattanooga.

The SimCenter facility includes faculty offices, student cubicles, a 1,500 sq. ft. computer room, a conference/meeting room accommodating 25 people, an 80-seat auditorium, two secure expandable suites of rooms dedicated to proprietary and/or classified research, a research library, and other work space.

The interior layout is designed to facilitate extensive cross-disciplinary interactions among faculty and students, with student cubicles in large open spaces adjacent to faculty offices.


Computing Facilities

[Photo: Paul with the cluster]

Local Facilities – The SimCenter computing facility includes the following resources:

Compute Servers

The new cluster, called oneseventeen, consists of 33 compute nodes and a master/login node. The compute nodes are configured as follows:

  • Two (dual-socket) Intel Xeon E5-2680 v4 2.4GHz chips, each with 14 cores, for a total of 28 cores per compute server.
  • 128 GB of RAM per compute server, i.e., ~4.5 GB of RAM per core.
  • One NVIDIA Tesla P100 GPU with 16GB of memory and 1792 double-precision cores.
  • Theoretical peak performance of ~1 TFLOPs (CPU only) or ~5.7 TFLOPs (CPU/GPU) per node; a worked check of these figures appears below.

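The per-node figures above can be verified with a few lines of arithmetic. The sketch below (Python) uses two standard published numbers that are not stated on this page: Broadwell-class Xeons such as the E5-2680 v4 retire 16 double-precision FLOPs per cycle per core (AVX2 with FMA), and the 16GB P100 has an FP64 peak of ~4.7 TFLOPs.

    # Per-node arithmetic for the oneseventeen compute nodes.
    cores_per_node = 2 * 14        # two 14-core Xeon E5-2680 v4 chips
    clock_ghz = 2.4                # base clock
    flops_per_cycle = 16           # AVX2 + FMA on Broadwell: 16 DP FLOPs/cycle/core

    ram_per_core_gb = 128 / cores_per_node                                 # ~4.57 GB
    cpu_peak_tflops = cores_per_node * clock_ghz * flops_per_cycle / 1000  # ~1.08
    node_peak_tflops = cpu_peak_tflops + 4.7                               # ~5.8; quoted above as ~5.7

    print(f"{ram_per_core_gb:.2f} GB/core, {cpu_peak_tflops:.2f} TFLOPs CPU, "
          f"{node_peak_tflops:.1f} TFLOPs CPU+GPU")
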
The login node is for compiling codes and launching jobs and is not included in the compute fabric; a short launch example follows the list below. It is largely identical to the compute nodes, with the notable exception that it does not have a GPU. The login node is configured as follows:

  • Two (dual-socket) Intel Xeon E5-2650 v4 2.2GHz chips, each with 12 cores, for a total of 24 cores.
  • 128 GB of RAM.

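As an illustration of the compile-and-launch workflow the login node exists for, here is a minimal MPI hello-world. This is a generic sketch, not SimCenter documentation: it assumes the cluster provides an MPI stack and the mpi4py Python package (an assumption), and the mpiexec line in the comment is the generic launch form; any site-specific scheduler or module setup is not described on this page.

    # hello_mpi.py -- minimal MPI smoke test (assumes mpi4py is installed).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()               # this process's ID within the job
    size = comm.Get_size()               # total number of MPI ranks
    node = MPI.Get_processor_name()      # hostname of the node running this rank

    print(f"rank {rank} of {size} on {node}")

    # Launched from the login node with, for example:
    #   mpiexec -n 56 python hello_mpi.py   # 56 ranks = two 28-core compute nodes
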
The 'fabric' of the cluster, i.e., how the distributed compute nodes communicate with each other, is EDR InfiniBand, which runs at 100Gb/sec. The peak performance of the cluster as a whole is ~33 TFLOPs (CPU only) or ~190 TFLOPs (CPU/GPU). The cluster contains a total of 924 CPU cores and 59,136 GPU cores.
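
The cluster-wide totals follow directly from the per-node numbers; a quick sketch of the arithmetic, with all inputs taken from the description above:

    # Aggregate totals for oneseventeen (33 GPU-equipped compute nodes).
    compute_nodes = 33
    print(compute_nodes * 28)        # 924 CPU cores
    print(compute_nodes * 1792)      # 59,136 GPU cores
    print(compute_nodes * 1.0)       # ~33 TFLOPs, CPU only
    print(compute_nodes * 5.7)       # ~188 TFLOPs with GPUs, quoted as ~190 above
    print(100 / 8)                   # EDR InfiniBand: 100 Gb/sec = 12.5 GB/sec per link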

Previous systems:

  • 1300-core 325-node diskless cluster (Dell)
    • Dual-core Intel EM64T 3.0GHz Xeon processors
    • 4 GB RAM per node
    • GigE interconnect (576 port Force10 E1200 switch)
  • 160-core 80-node diskless cluster (Dell)
    • Intel EM64T 3.2GHz Xeon processors
    • 2 GB RAM per node
    • GigE interconnect
  • 64-core 32-node diskless cluster (Microtronix)
    • Dual Intel 3.2GHz Xeon processors
    • 2 GB RAM per node
    • GigE interconnect (48 port HP ProCurve switch)
  • 32-core compute server (IBM)
    • Four eight-core IBM POWER7 3.55GHz processors
    • 256GB RAM
  • 32-core compute server (IBM)
    • Four eight-core IBM POWER7 3.55GHz processors
    • 128GB RAM
  • 32-core compute server (Dell)
    • Four eight-core Intel Xeon Nehalem-EX 2.26GHz processors
    • 256 GB RAM