Facilities, Equipment, and Other Resources
The Research Institute is a center for multidisciplinary research at UTC. It serves both as an incubator for innovative concepts and laboratories and as UTC’s core facility for High-Performance Computing and Storage. In addition to the initial emphasis in Computational Engineering, UTC’s Computational Science PhD program now offers two further concentrations: Computer Science and Computational Math. All of our PhD students and some of our MS and undergraduate students are now housed in the Multidisciplinary Research Building. The facility includes faculty offices, student cubicles, a 1,500-square-foot computer room, a conference/meeting room that accommodates 25 people, and an 80-seat auditorium. It also hosts two secure, expandable suites of rooms dedicated to proprietary and/or classified research, a research library, and other work space. The interior layout is designed to facilitate extensive interdisciplinary interaction among faculty and students, with student cubicles in large open spaces adjacent to faculty offices. Researchers also have access to the high-performance computing resources at all times.
The Research Institute’s physical data center is equipped with infrastructure to support high-performance computing and physical/virtualized server needs. We offer a range of modern IT equipment, including high-capacity servers, storage solutions, and high-speed networking hardware, which enables us to accommodate diverse project requirements.
Computational & Technology Resources at the Research Institute
Computational Cluster Nodes – Types of Nodes Available:
- epyc nodes – Total of 18 nodes
- Total of 2,304 AMD EPYC 7662 CPU cores
- Open Access
- Each node has:
- Two 64-core AMD EPYC 7662 CPUs (128 cores)
- 512 GB of RAM
- EDR InfiniBand interconnect (100 Gb/s)
- 16 of the 18 nodes have 2x NVIDIA A100 80 GB GPUs
- firefly nodes – Total of 4 nodes
- Total of 80 Intel Xeon Gold 6148 CPU cores
- Open Access
- Each node has:
- Two Intel Xeon Gold 6148 processors (20 cores / 40 threads each)
- 192 GB of RAM
- 4x NVIDIA Tesla V100 SXM2 32 GB GPUs
- Dual 10 Gigabit Ethernet links bonded into a 20 Gb/s interconnect
- tennessine nodes – Total of 32 nodes
- Total of 896 Intel Xeon E5-2680 v4 CPU cores and 57,344 GPU cores
- Theoretical peak performance of ~1 TFLOPS (CPU only) or ~5.7 TFLOPS (CPU and GPU) per node (a worked sketch follows the node list)
- Open Access
- Each node has:
- Two 14-core Intel Xeon E5-2680 v4 processors
- 128 GB RAM
- EDR InfiniBand (100 Gb/s) interconnect
- One NVIDIA P100 GPU (16 GB) with 1,792 double-precision cores
- lookout nodes – Total of 6 nodes
- Total of 160 cores / 640 threads, IBM POWER9 (AC922)
- Open Access
- Each node has:
- Two 20-core / 80-thread POWER9 CPUs (2.4 GHz base, 3.0 GHz turbo)
- Four NVIDIA Volta GPUs with 16 GB of GPU memory each; NVLink provides up to 300 GB/s of bi-directional GPU-GPU and CPU-GPU bandwidth
- 256 GB RAM
- EDR InfiniBand interconnect (100 Gb/s)
- papertape nodes – Total of 24 nodes
- Total of 384 Intel Xeon E5-2670 CPU cores
- Restricted Access/ITAR Compliant
- Each node has:
- Two 8-core Intel Xeon E5-2670 CPUs
- 32 GB RAM
- FDR InfiniBand interconnect (2:1 blocking)
- papertape_v2 nodes – Total of 11 nodes
- Total of 1,408 AMD EPYC 7702 CPU cores
- Restricted Access/ITAR Compliant
- Each node has:
- Two 64-core AMD EPYC 7702 processors
- 512 GB RAM
- EDR InfiniBand interconnect
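For context, the per-node peak-performance figures quoted for the tennessine nodes can be reproduced with a quick back-of-the-envelope calculation. The short sketch below is illustrative only: the 2.4 GHz base clock, the 16 double-precision FLOPs per core per cycle (AVX2 with fused multiply-add), and the ~4.7 TFLOPS double-precision peak of a single NVIDIA P100 are assumptions not stated in this section.

```python
# Back-of-the-envelope check of the tennessine per-node peak figures quoted above.
# Assumptions (not stated in this section): 2.4 GHz base clock for the E5-2680 v4,
# 16 double-precision FLOPs/cycle/core (AVX2 with two FMA units), and ~4.7 TFLOPS
# double-precision peak for a single NVIDIA P100.

CORES_PER_NODE = 2 * 14            # two 14-core Xeon E5-2680 v4 processors
CLOCK_HZ = 2.4e9                   # base clock (assumed)
FLOPS_PER_CYCLE = 16               # DP FLOPs per core per cycle (assumed, AVX2 FMA)
P100_DP_PEAK = 4.7e12              # NVIDIA P100 double-precision peak (assumed)

cpu_peak = CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE
node_peak = cpu_peak + P100_DP_PEAK

print(f"CPU-only peak per node : {cpu_peak / 1e12:.2f} TFLOPS")   # ~1.08 TFLOPS
print(f"CPU + GPU peak per node: {node_peak / 1e12:.2f} TFLOPS")  # ~5.78 TFLOPS
```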
- Infrastructure
- 20 GbE network backbone
- Server room network infrastructure capable of up to 100GbE
- Virtualization:
- Four high-end CPU/GPU Dell R740 compute servers that host VMs/containers
- Each with dual Intel Xeon Platinum 8176 CPUs (28 cores / 56 threads each)
- 384 GB RAM
- One NVIDIA Tesla P100 12 GB GPU
- One NVIDIA Tesla T4 16 GB GPU
- Three Dell R740xd storage servers for local storage
- Each with 192 GB of RAM and 168 TB of raw disk space; usable space is roughly half of this with 2x replication (see the capacity sketch after this list)
- A 100 GbE network fabric is used throughout; the compute and storage servers have dual 100 GbE links to provide additional network performance for the VMs/containers as well as for all storage traffic. This network also connects back to the Research Institute core at 100 GbE.
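As referenced above, the usable capacity of the virtualization storage servers follows directly from the raw capacity and the 2x replication factor. The sketch below is a minimal illustration under the assumption of roughly 168 TB of raw disk per server across all three servers; the actual pool layout is not described in this section.

```python
# Minimal sketch of usable capacity under 2x replication for the
# virtualization storage servers described above (per-server raw
# capacity and server count taken from the list; pool layout assumed).

RAW_TB_PER_SERVER = 168   # raw disk space per Dell R740xd storage server (assumed TB)
NUM_SERVERS = 3
REPLICATION_FACTOR = 2    # each block is stored twice

raw_total = RAW_TB_PER_SERVER * NUM_SERVERS
usable_total = raw_total / REPLICATION_FACTOR

print(f"Raw capacity   : {raw_total} TB")        # 504 TB across the three servers
print(f"Usable capacity: {usable_total:.0f} TB")  # ~252 TB after 2x replication
```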
- Data Storage
- Highly scalable DDN 14KX GPFS storage system
- 113 TB of high-speed, tier-1 storage (10K SAS)
- 1 PB of high-capacity, lower-tier storage
- Connects to the HPC infrastructure at EDR InfiniBand speeds (100 Gb/s)
- Scalable to more than 1,700 hard drives and 60 GB/s of bandwidth
- Available (via NFS) to all desktops in the building
- Backup system based on ZFS
- 800TB maximum capacity
- Primary target in the Multidisciplinary Research Building (MRDB) data center, with a secondary target in Hunter Hall
- ZFS replication between the primary and secondary targets (a minimal replication sketch follows below)
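ZFS replication of the kind noted above is commonly implemented by streaming ZFS snapshots from the primary to the secondary target with zfs send / zfs receive. The section does not specify the actual mechanism, so the sketch below is only an illustrative assumption; the dataset name, snapshot naming scheme, and secondary host alias are hypothetical placeholders.

```python
# Minimal sketch of ZFS snapshot-based replication from the primary backup
# target to the secondary target, assuming passwordless SSH between hosts.
# The dataset name ("backup/projects") and the secondary host alias
# ("zfs-secondary") are hypothetical placeholders, not the actual configuration.

import subprocess
from datetime import datetime, timezone

DATASET = "backup/projects"   # hypothetical dataset name
REMOTE = "zfs-secondary"      # hypothetical secondary host
SNAP = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# 1) Take a point-in-time snapshot on the primary target.
subprocess.run(["zfs", "snapshot", SNAP], check=True)

# 2) Stream the snapshot to the secondary target with zfs send | zfs receive.
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
send.wait()
```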