Computational Resources
Computational Clusters (Open access)
- EPYC – 2176 core (128 cores/node; 17 nodes) AMD EPYC 7662
- 512 GB of RAM/node
- InfiniBand EDR interconnect (100 Gb/s)
- 2x NVIDIA A100 80 GB GPUs per node
- Firefly – 400 cores (80 cores/node; 5 nodes) Xeon 6148 CPU cluster
- Head node with 3 TB RAM
- 192 GB of RAM per node
- 4x Tesla V100 SXM2 32GB GPUs per node
- Dual 10 Gigabit Ethernet links bonded for 20 Gb/s of interconnect
- Lookout – 160 core/640 thread (6 node) IBM Power9 (AC922) cluster
- Dual Power9 20 core/80 thread CPUs (2.4 GHz, 3.0 GHz turbo)
- 4 NVIDIA Volta GPUs with 16 GB of GPU memory; NVLink provides up to 300 GB/s of bidirectional bandwidth GPU-to-GPU and CPU-to-GPU
- 256 GB RAM
- InfiniBand EDR interconnect (100 Gb/s)
- Research as a Service (RaaS) cluster – 224 core/448 thread OpenStack local cloud
- One user/admin-facing management server that provides web-based user dashboards for on-demand research computing
- Three back end OpenStack management servers used to facilitate VM provisioning, management, storage I/O, etc.
- Four high-end CPU/GPU Dell R740 compute servers where VMs/containers run
- Each with dual Intel Xeon Platinum 8176 28C/56T CPUs
- 384GB RAM
- One NVIDIA Tesla P100 12 GB GPU
- One NVIDIA Tesla T4 16 GB GPU
- Three Dell R740XD storage servers for local RaaS storage
- Each with 192GB RAM
- 168 GB raw disk space; usable space is roughly half of this due to a replication factor of 2 (see the capacity sketch below)
- This RaaS implements a 100 GbE network fabric throughout, with the compute and storage servers having dual 100 GbE links to provide extra network performance for the VMs/containers as well as all storage traffic. This network also connects back to the SimCenter core at 100 GbE.
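The usable-space figure above follows directly from the replication factor; a minimal sketch of that arithmetic (variable names are illustrative, and the raw-capacity value is simply the figure quoted in the listing):

```python
# Usable RaaS storage under 2-factor replication: every block is written twice,
# so usable capacity is roughly half of the raw capacity.
raw_capacity = 168        # raw disk space, as quoted above (same unit as the listing)
replication_factor = 2    # two copies of every block

usable_capacity = raw_capacity / replication_factor
print(f"Usable capacity: {usable_capacity:.0f} of {raw_capacity} raw")
```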
- Tennessine – 924 CPU core/59,136 GPU core (33 node) diskless cluster (Dell)
- Two 14-core Intel Xeon E5-2680 v4 processors
- 128 GB RAM per node
- EDR InfiniBand (100 Gb/s) interconnect
- One Nvidia P100 GPU (16 GB) with 1792 double precision cores
- 400 GB SSD per node for local application caching (pending installation)
- Theoretical peak performance of ~1 TFLOPS (CPU only) or ~5.7 TFLOPS (CPU and GPU) per node (see the sketch below)
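For context, a rough sketch of where the per-node peak figures come from; the clock speeds and FLOPs-per-cycle values below are the commonly published specifications for these parts, used here as assumptions rather than figures from this listing:

```python
# Approximate double-precision peak for one Tennessine node.
cores_per_cpu   = 14      # Intel Xeon E5-2680 v4
cpus_per_node   = 2
cpu_clock_ghz   = 2.4     # base clock (published spec)
flops_per_cycle = 16      # AVX2: two 256-bit FMA units, double precision

cpu_peak_tflops = cores_per_cpu * cpus_per_node * cpu_clock_ghz * flops_per_cycle / 1000
# ~1.08 TFLOPS -> the "~1 TFLOPS (CPU only)" figure

gpu_dp_cores  = 1792      # Tesla P100 double-precision cores, as listed
gpu_boost_ghz = 1.30      # approximate boost clock (published spec)
gpu_peak_tflops = gpu_dp_cores * 2 * gpu_boost_ghz / 1000   # 2 FLOPs per FMA per cycle
# ~4.7 TFLOPS -> CPU + GPU gives the "~5.7 TFLOPS" figure

print(f"CPU only: {cpu_peak_tflops:.2f} TFLOPS; CPU+GPU: {cpu_peak_tflops + gpu_peak_tflops:.2f} TFLOPS")
```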
Computational Clusters (Restricted access; ITAR compliant)
- Cerberus – 32 core compute server (Dell)
- Four 8-core Intel Xeon X7560 2.27 GHz processors
- 256 GB RAM
- Papertape – 512 core (32 node) diskless cluster (Dell)
- Two 8-core Intel Xeon E5-2670 processors
- 32 GB RAM per node
- FDR InfiniBand interconnect (2:1 blocking)
- ~12 TB of usable storage (BeeGFS)
- Papertape_v2 – 512 core (4 node) AMD EPYC 7702
- Two 64-core AMD EPYC 7702 processors
- 512 GB RAM/node
- EDR InfiniBand interconnect
Infrastructure
- 20 GbE network backbone
- New air conditioning units
- New server room network infrastructure capable of up to 100 GbE
- Dell R730 VMware cluster which runs all system-critical services
- Three node cluster
- Highly redundant hardware
- Critical services configured for high availability to minimize unplanned downtime
- Dynamic load balancing
Data Storage
- Highly scalable DDN 14KX GPFS storage system
- 113 TB (10K SAS) of high-speed, tier-1 storage
- 1 PB of high-capacity, lower-tier storage
- Connects to the HPC infrastructure at EDR InfiniBand speeds (100 Gb/s)
- Scalable to more than 1,700 hard drives and 60 GB/s of bandwidth
- Available (via NFS) to all desktops in the SimCenter
- Backup system based on ZFS
- 800 TB maximum capacity
- Primary target in the MRDB Data Center with a secondary target in Hunter Hall
- ZFS replication between the primary and secondary targets
- Expandable Dell PowerVault LTO7 tape backup system
- LTO7 tape media can store between 6 and 15 TB (depending on compression)
- Can back up or archive between 240 and 600 TB of data without expansion
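The 240–600 TB range is consistent with roughly 40 LTO7 cartridges in the unexpanded library; the slot count below is inferred from the quoted figures, not a stated specification:

```python
# LTO7 media: 6 TB native, up to ~15 TB with compression.
slots = 40                    # inferred: 240 TB / 6 TB = 600 TB / 15 TB = 40 cartridges
native_tb, compressed_tb = 6, 15

print(f"Native capacity:     {slots * native_tb} TB")      # 240 TB
print(f"Compressed capacity: {slots * compressed_tb} TB")  # 600 TB
```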