Providing high performance computing systems and services for Virginia Tech students and faculty
What is ARC?
Advanced Research Computing (ARC) provides centralized research computing infrastructure and support for the Virginia Tech research community. ARC's resources include high-performance computing systems, large-scale data storage, visualization facilities, software, and consulting services, and are available to faculty and students across all disciplines. We welcome researchers of all experience levels.
How can ARC help me achieve my research goals?
ARC's high-performance computing (HPC) systems can help researchers process big data sets, run deep learning applications, conduct visualization, store data sets, and perform other tasks that are too large or complex to do on a local computer.
For example, researchers often come to ARC if they have:
- Very large jobs to run (e.g., high resolution or large-scale datasets)
- Many jobs to run (parameter sweeps, many datasets)
- A need for specialty hardware (e.g., GPUs, large memory, high-bandwidth storage, a fast network, or large scale)
Additionally, ARC's resources for data storage and sharing can make it much easier for researchers from different disciplines or in different locations to collaborate on a project.
How to get started with ARC
If you are interested in using ARC's resources for your current or future projects, or if you would just like to learn more about our computing systems and services, please request a consultation or drop by our office hours.
You do not need to have any prior experience with high-performance computing — our team can assist you in determining the right system for your project. We also offer introductory training sessions throughout the year via the Professional Development Network, and our computational scientists are available for classroom presentations on HPC.
Is there a cost to use ARC's systems?
ARC's systems and services are available to all Virginia Tech faculty at no cost. However, researchers and groups have the option to add compute costs to grants or contracts through ARC's Cost Center, which allows usage of additional compute or storage resources for a fee. Departments and faculty can also purchase priority access to an ARC system for up to five years through ARC's Investment Computing Program.
Helpful resources for new ARC users:
ARC is proud to announce the release of a new cluster called Infer, which provides 18 Intel Skylake nodes, each equipped with an Nvidia T4 GPU. The cluster's name, "Infer," alludes to the AI/ML inference capabilities of the tensor cores on the T4 devices. We think the cluster will also be a great all-purpose resource for researchers making their first forays into GPU-enabled computation of any type. For more information about the T4 architecture and its performance relative to, e.g., V100 GPUs, see the following pages:
Cluster details and examples are provided on our Infer page. Users may log in at infer1.arc.vt.edu and try it out at their earliest convenience. For now, software installs are mostly limited to CUDA and associated toolchains; users with additional requests may submit them via a help ticket.
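As a starting point, a batch job on a GPU cluster typically requests one GPU, loads the CUDA toolchain, and verifies that the device is visible before doing real work. Below is a minimal sketch of such a script; the scheduler directives, queue name, and module name are assumptions for illustration, so check the Infer page for the actual values before submitting.

```shell
#!/bin/bash
# Hypothetical single-GPU batch script sketch for a cluster like Infer.
# The #PBS directives and the "cuda" module name are assumptions, not
# ARC-confirmed values.
#PBS -l nodes=1:ppn=1:gpus=1
#PBS -q normal_q

module load cuda 2>/dev/null || true          # load the CUDA toolchain where modules exist
nvidia-smi 2>/dev/null || echo "no GPU visible here"  # confirm the T4 is visible to the job
echo "finished" > job_status.txt              # marker line a real job would replace with output
```

On a machine without a GPU (or outside the cluster), the script still runs to completion and simply reports that no GPU is visible, which makes it easy to test the staging logic locally before submitting.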
ARC is happy to announce the addition of 39 new GPU nodes to the NewRiver cluster. Each of these nodes is equipped with two Intel Xeon E5-2680v4 (Broadwell) 2.4 GHz CPUs (28 cores per node), 512 GB of memory, and two NVIDIA P100 GPUs. Each GPU is capable of up to 4.7 TeraFLOPS of double-precision performance, so counting both CPUs and GPUs these nodes add over 400 TFLOPS of peak double-precision throughput to ARC's resources.
Continue reading P100 GPU Nodes added to NewRiver
ARC is happy to announce the release of a new cluster, named Cascades, available at cascades1.arc.vt.edu and cascades2.arc.vt.edu. Cascades is a 196-node system capable of tackling the full spectrum of computational workloads, from problems requiring hundreds of compute cores to data-intensive problems requiring large amounts of memory and storage. Cascades contains three compute engines designed for distinct workloads:
- General – Distributed, scalable workloads. With two 16-core Intel Broadwell processors and 128 GB of memory on each node, this 190-node compute engine is suited to traditional HPC jobs and large codes using MPI.
- GPU – Data visualization and code acceleration. Each of the four nodes in this compute engine has two Nvidia K80 GPUs, 512 GB of memory, and one 2 TB NVMe PCIe flash card.
- Very Large Memory – Graph analytics and very large datasets. With 3 TB (3,072 GB) of memory, four 18-core processors, six 1.8 TB direct-attached SAS hard drives, one 400 GB SAS SSD, and one 2 TB NVMe PCIe flash card, each of these two servers will enable analysis of large, highly connected datasets, in-memory database applications, and faster solution of other large problems.
Continue reading New ARC Cluster: Cascades
ARC is happy to announce the release of a new cluster, named DragonsTooth, available at dragonstooth1.arc.vt.edu. DragonsTooth is made up of 48 nodes, each equipped with:
- 2 x Intel Xeon E5-2680v3 (Haswell) 2.5 GHz 12-core CPUs (the same CPUs as NewRiver)
- 256 GB 2133 MHz DDR4 memory for large-memory problems
- 4 x 480 GB SSDs for fast local I/O ($TMPDIR)
- 806 GFLOPS theoretical double-precision peak per node
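The node-local SSDs behind $TMPDIR are typically used as scratch space: a job stages its input onto the fast local disk, computes there, and copies results back to shared storage before exiting. A minimal sketch of that pattern follows; the file names and the trivial "computation" are made up for illustration, and the script falls back to /tmp when run off-cluster.

```shell
#!/bin/bash
# Illustrative $TMPDIR staging pattern for a cluster like DragonsTooth.
# File names here are hypothetical placeholders, not ARC conventions.
scratch="${TMPDIR:-/tmp}/myjob_scratch"    # use node-local SSD scratch; fall back to /tmp
mkdir -p "$scratch"

echo "input data" > "$scratch/input.dat"   # stand-in for copying a real dataset in
tr 'a-z' 'A-Z' < "$scratch/input.dat" > "$scratch/output.dat"  # stand-in "computation"

cp "$scratch/output.dat" ./results.dat     # copy results back to shared storage
rm -rf "$scratch"                          # clean up node-local scratch before the job ends
```

Staging I/O-heavy work this way keeps repeated small reads and writes off the shared file system, which is usually the slowest and most contended resource in a cluster job.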
Continue reading New ARC Cluster: DragonsTooth
ARC HPC systems will undergo maintenance beginning at midnight on the morning of Tuesday, March 29, 2016. The purpose of this maintenance is to migrate to a shared home directory hosted on the file system that currently provides home directories for NewRiver. This will provide two key benefits to users:
- All files in your Home directory will be visible from all clusters. For example, you will see the same files in $HOME from both NewRiver and BlueRidge. This will make it easier to migrate work between clusters based on which hardware is best suited to the task or which resource is less busy.
- The maximum Home directory size will be increased from 100 GB to 500 GB per user.
Continue reading ARC Migrating to Shared Home Directories, 29 Mar 2016