Advanced Research Computing offers several distinct high-performance computing systems to Virginia Tech researchers and collaborators, listed below. Click on a link to read more about a given resource.
NAME | TINKERCLIFFS | INFER | HUCKLEBERRY | CASCADES | DRAGONSTOOTH | NEWRIVER |
---|---|---|---|---|---|---|
Vendor/Model | HPE/Cray | HPE/Cray | IBM | HP | Lenovo | Dell |
Key Features, Uses | Large-scale CPU | Machine learning/inference | Deep learning applications | Data-intensive problems | Single-node, long jobs | Data-intensive problems |
Login Node (xxx.arc.vt.edu) | tinkercliffs1 or tinkercliffs2 | infer1 | huckleberry1 | cascades1 or cascades2 | dragonstooth1 | newriver1 or newriver2 |
Available | October 2020 | January 2021 | August 2018 | October 2016, expanded April 2018 | August 2016 | August 2015, expanded June 2017 |
Operating System | CentOS Linux 7 | CentOS Linux 7 | CentOS Linux 7 | CentOS Linux 7 | CentOS Linux 7 | CentOS Linux 7 |
Theoretical Peak (TFlop/s) | 1,392.6 | | | 820.6 | 38.7 | 536.1 |
Nodes | 332 | 18 | 14 | 236 | 48 | 165 |
Cores | 41,984 | 576 | 280 | 7,288 | 1,152 | 4,380 |
Cores/Node | 96-128 | 32 | 20 | 24-72 | 24 | 24-60 |
CPU Model | AMD EPYC 7702, Intel Xeon Platinum 9242 | Intel Xeon Gold 6130 | IBM POWER8 | Varies by node type | Intel Xeon E5-2680v3 (Haswell) | Varies by node type |
CPU Speed | 2.0-2.3 GHz | 2.1 GHz | 3.26 GHz | 2.1-3.0 GHz | 2.50 GHz | 2.4-2.8 GHz |
Accelerators/Coprocessors | N/A | 18 | 56 | 88 | N/A | 86 |
Accelerator Model | N/A | NVIDIA T4 | NVIDIA P100 | NVIDIA V100, NVIDIA Tesla K80 | N/A | NVIDIA P100,* NVIDIA Tesla K80** |
Accelerators/Node | N/A | 1 | 4 | 2* | N/A | 2*, 1** |
Memory Size | 91.0 TB | 3.5 TB | | 44.6 TB | 12.0 TB | 41.3 TB |
Memory/Core | 2.0-8.0 GB | 6 GB | | 4.0-42.7 GB | 10.6 GB | 5.3-51.2 GB |
Memory/Node | 256-1,024 GB | 192 GB | 256 GB | 128 GB** | 256 GB | 128 GB*** |
Interconnect | HDR InfiniBand | EDR InfiniBand | EDR InfiniBand | EDR InfiniBand | 10 GbE | EDR InfiniBand |
Notes | | | | *For GPU-enabled nodes. **2 nodes have 3 TB; 40 V100 nodes have 376 GB. All nodes have locally mounted SSDs. | | *For 39 GPU nodes. **For 8 vis nodes. ***63 nodes have 512 GB & 2 nodes have 3 TB. |
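Each system is reached over SSH through its login node, whose fully qualified name follows the xxx.arc.vt.edu pattern shown in the table. A minimal sketch of building and using those hostnames (the `login_host` helper and the `yourpid` placeholder are illustrative, not ARC-provided tools):

```shell
#!/bin/sh
# Expand a short login-node name from the table (e.g. "tinkercliffs1")
# into its fully qualified hostname under the arc.vt.edu domain.
login_host() {
    printf '%s.arc.vt.edu\n' "$1"
}

# Connect with your VT credentials (replace "yourpid" with your own PID):
#   ssh "yourpid@$(login_host tinkercliffs1)"
login_host cascades2   # prints cascades2.arc.vt.edu
```

Systems with two login nodes (TinkerCliffs, Cascades, NewRiver) accept connections on either name; pick one and reconnect to the other if it is unavailable.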