High-Performance Computing

What is high-performance computing?
High-performance computing (HPC) is the use of powerful computers and advanced networks to process and analyze very large amounts of data or perform complex calculations much faster than a standard computer can.
High-Performance Computing at UTC
The UTC Research Institute serves as the core facility for High-Performance Computing and storage on campus, supporting faculty, students, and partners in advancing research across disciplines. As part of UTC’s Quantum Center, the Institute provides cutting-edge computational power and secure infrastructure for projects ranging from quantum science and computational engineering to data-intensive analysis in the sciences, mathematics, and beyond.
Housed in the Multidisciplinary Research Building, the Institute gives students and faculty proximity to advanced computing facilities, collaborative office and lab space, and secure suites for proprietary or classified research.
Computing Resources
Our physical data center is equipped with a robust infrastructure designed to handle large-scale computational workloads, virtualized environments, and data storage needs. With high-capacity servers, scalable storage, and high-speed networking hardware, UTC researchers have continuous access to resources tailored for diverse project requirements.
- epyc nodes
  - Total of 18 nodes
  - Total of 2,304 AMD EPYC 7662 CPU cores
  - Open Access
  - Each node has:
    - 128 AMD EPYC 7662 CPU cores
    - 512 GB of RAM
    - EDR InfiniBand interconnect (100 Gb/s)
    - 2x NVIDIA A100 80 GB GPUs (on 16 of the 18 nodes)
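As a quick sanity check of what a job actually landed on, a minimal sketch along the following lines can be run on one of these nodes. It assumes Python 3 and the NVIDIA driver's `nvidia-smi` utility are available in the job environment, which is not something the specifications above guarantee:

```python
import os
import subprocess

# Logical CPUs visible to this process (the scheduler may restrict this
# to the cores actually allocated to the job).
print(f"Logical CPUs visible: {os.cpu_count()}")

# Ask the NVIDIA driver which GPUs are attached; on the 16 GPU-equipped
# epyc nodes this should report two A100 80GB devices.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or "No NVIDIA GPUs reported on this node.")
```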
- firefly nodes
  - Total of 4 nodes
  - Total of 80 Intel Xeon Gold 6148 CPU cores
  - Open Access
  - Each node has:
    - Two Intel Xeon Gold 6148 processors (20 cores / 40 threads each)
    - 192 GB of RAM
    - 4x NVIDIA Tesla V100 SXM2 32 GB GPUs
    - Dual 10 GbE links bonded into a 20 Gb/s interconnect
- tennessine nodes
  - Total of 32 nodes
  - Total of 896 Intel Xeon E5-2680 v4 CPU cores and 57,344 GPU cores
  - Theoretical peak performance per node of ~1 TFLOPS (CPU only) or ~5.7 TFLOPS (CPU and GPU), reproduced in the sketch after this list
  - Open Access
  - Each node has:
    - Two 14-core Intel Xeon E5-2680 v4 processors
    - 128 GB of RAM
    - EDR InfiniBand (100 Gb/s) interconnect
    - One NVIDIA P100 GPU (16 GB) with 1,792 double-precision cores
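The quoted peak figures follow from the per-node hardware above. The back-of-the-envelope sketch below assumes 16 double-precision FLOPs per core per cycle (AVX2 with two FMA units) at the E5-2680 v4's 2.4 GHz base clock, and a nominal 4.7 TFLOPS FP64 rating for the P100:

```python
# Back-of-the-envelope theoretical peak for one tennessine node.
sockets = 2
cores_per_socket = 14
clock_ghz = 2.4        # E5-2680 v4 base clock
flops_per_cycle = 16   # assumed: AVX2 with two FMA units, double precision

cpu_tflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000
gpu_tflops = 4.7       # assumed nominal FP64 peak of one P100

print(f"CPU only : ~{cpu_tflops:.1f} TFLOPS")               # ~1.1 TFLOPS
print(f"CPU + GPU: ~{cpu_tflops + gpu_tflops:.1f} TFLOPS")  # ~5.8 TFLOPS
```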
- lookout nodes
  - Total of 6 nodes
  - Total of 160 cores / 640 threads, IBM POWER9 (AC922)
  - Open Access
  - Each node has:
    - Two 20-core / 80-thread POWER9 CPUs (2.4 GHz base, 3.0 GHz turbo)
    - 4x NVIDIA Volta GPUs with 16 GB of GPU memory each; NVLink provides up to 300 GB/s of bi-directional GPU-GPU and CPU-GPU bandwidth (see the transfer-timing sketch after this list)
    - 256 GB of RAM
    - EDR InfiniBand interconnect (100 Gb/s)
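The CPU-GPU NVLink path is the distinctive feature of these POWER9 nodes, and it can be observed informally by timing a large host-to-device copy. The sketch below assumes a CUDA-enabled PyTorch build is available in the user's environment, which the specifications above do not promise; a measured figure will sit well below the 300 GB/s theoretical peak:

```python
import time
import torch

# Allocate ~1 GiB of pinned host memory so the copy is a true DMA transfer.
x = torch.empty(256 * 1024 * 1024, dtype=torch.float32, pin_memory=True)

torch.cuda.synchronize()
start = time.perf_counter()
y = x.to("cuda", non_blocking=True)  # host-to-device copy over NVLink
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

gib = x.numel() * x.element_size() / 2**30
print(f"Host-to-device bandwidth: {gib / elapsed:.1f} GiB/s")
```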
- papertape nodes
  - Total of 24 nodes
  - Total of 384 Intel Xeon E5-2670 CPU cores
  - Restricted Access / ITAR-compliant
  - Each node has:
    - Two 8-core Intel Xeon E5-2670 CPUs
    - 32 GB of RAM
    - FDR InfiniBand interconnect (2:1 blocking)
- papertape_v2 nodes
  - Total of 11 nodes
  - Total of 1,408 AMD EPYC 7702 CPU cores
  - Restricted Access / ITAR-compliant
  - Each node has:
    - Two 64-core AMD EPYC 7702 processors
    - 512 GB of RAM
    - EDR InfiniBand interconnect
- Infrastructure
  - 20 GbE network backbone
  - Server room network infrastructure capable of up to 100 GbE
- Virtualization
  - Four high-end CPU/GPU Dell R740 compute servers that host the VMs/containers
    - Each with dual Intel Xeon Platinum 8176 CPUs (28 cores / 56 threads each)
    - 384 GB of RAM
    - One NVIDIA Tesla P100 12 GB GPU
    - One NVIDIA Tesla T4 16 GB GPU
  - Three Dell R740XD storage servers for local storage
    - Each with 192 GB of RAM and 168 TB of raw disk space; usable space is roughly half of this due to 2x replication
  - 100 GbE network fabric throughout, with the compute and storage servers on dual 100 GbE links to provide extra network performance for the VMs/containers as well as all storage traffic; this network connects back to the Research Institute core at 100 GbE as well
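From inside a VM or container, the link speeds behind this fabric can be inspected through the Linux sysfs tree. The sketch below is only illustrative: interface names vary by guest, and virtual NICs often report a synthetic or negative speed:

```python
from pathlib import Path

# Report the negotiated speed (in Mb/s) of every interface that exposes one.
for iface in sorted(Path("/sys/class/net").iterdir()):
    try:
        speed = int((iface / "speed").read_text().strip())
    except (OSError, ValueError):
        continue  # interface is down or does not report a speed
    if speed > 0:
        print(f"{iface.name}: {speed} Mb/s")
```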
- Data Storage
  - Highly scalable DDN 14KX GPFS storage system
    - 113 TB of high-speed, tier 1 storage (10K SAS)
    - 1 PB of high-capacity, lower-tier storage
    - Connects to the HPC infrastructure at EDR InfiniBand speeds (100 Gb/s)
    - Scalable to over 1,700 hard drives and 60 GB/s of bandwidth
    - Available via NFS to all desktops in the building (see the capacity-check sketch after this list)
  - Backup system based on ZFS
    - 800 TB maximum capacity
    - Primary target in the MRDB Data Center with a secondary target in Hunter Hall
    - ZFS replication between primary and secondary targets
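Because the GPFS filesystem is exported over NFS, it behaves like an ordinary path from a desktop or compute node. The sketch below checks capacity on a hypothetical mount point; the actual paths used on the UTC systems will differ:

```python
import shutil

# Hypothetical mount point for the DDN/GPFS filesystem; substitute the real path.
mount = "/gpfs/scratch"

usage = shutil.disk_usage(mount)
tb = 1000 ** 4  # decimal terabytes, matching the capacities quoted above
print(f"{mount}: {usage.used / tb:.1f} TB used of {usage.total / tb:.1f} TB "
      f"({usage.free / tb:.1f} TB free)")
```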