Available Resources
The resources provided by OIT HPSC include three clusters: two for open research and one for processing sensitive information. Each cluster has a Lustre file system with petabyte-scale storage capacity. These resources are available to faculty and student researchers and for academic coursework.
To obtain access to these resources, claim your account at the ISAAC User Portal by choosing Request an ISAAC Account in the menu on the left.
ISAAC Legacy cluster for Open Research
- 232 compute nodes of various types, including the following noteworthy configurations:
- 1 Intel Xeon E5-2687W v4 processor (24 cores) with 1 TB of memory
- 6 GPU nodes
- 2 Skylake nodes with 1x NVIDIA V100S GPU (UT)
- 4 beacon nodes with 1x NVIDIA M60 GPU (UTK/UTHSC)
- 6,584 CPU cores
- Nodes with 32 GB to 1 TB of memory
- 2.7 petabytes of Lustre storage in /lustre/haven
- 4 Data Transfer Nodes with 10 Gb/s bandwidth and Globus
ISAAC Next Generation cluster for Open Research
- 209 compute nodes
- 118 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 192 GB memory (UT/UTK/private)
- 38 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 256 GB memory (UT/UTK/private)
- 16 nodes with Intel Gold 6148 (Sky Lake) processors (40 cores) and 192 GB memory (UT/UTK)
- 17 nodes with AMD Bergamo 9734 processors (224 cores) and 1 TB memory (UT/UTK/private)
- 6 nodes with AMD Milan 7713 processors (128 cores) and 512 GB memory (private)
- 4 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 1.5 TB memory (UTK)
- 3 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 2 TB memory (UTK/private)
- 2 nodes with Intel Platinum 6348 (Ice Lake Platinum) processors (64 cores) and 256 GB memory (private)
- 2 nodes with AMD Genoa 9534 processors (128 cores) and 512 GB memory (UT/UTK)
- 1 node with AMD Genoa 9734 processor (224 cores) and 1 TB memory (private)
- 1 node with AMD Milan 7763 processor (64 cores) and 256 GB memory (private)
- 1 node with AMD Rome 7542 processors (32 cores) and 128 GB memory (private)
- 31 GPU nodes
- 10 GPU nodes with 1x NVIDIA V100S GPU (UTK)
- 10 GPU nodes with 4x NVIDIA H100 GPUs (private)
- 7 GPU nodes with 2x NVIDIA V100S GPUs (UT/private)
- 3 GPU nodes with 2x NVIDIA A16 GPUs (private)
- 1 node with 4x NVIDIA A40 GPUs, 54 TB of NVMe scratch space/burst buffer (UTK)
- 14,064 total CPU cores
- 3.6 petabytes of Lustre storage in /lustre/isaac
- 2 Data Transfer Nodes with 40 Gb/s bandwidth and Globus
ISAAC Secure Enclave cluster
- 49 compute nodes, including 2 nodes with 2x NVIDIA V100S GPUs
- 2,384 CPU cores
- Nodes with 192 GB of memory
- 1.7 petabytes of Lustre storage in /lustre/sip
- 1 Data Transfer Node with 10 Gb/s bandwidth and Globus
Details of the compute node types that make up each cluster are listed on each cluster's System Overview page (see the menu on the left).
Node Naming Conventions
The following table summarizes the meaning of the alphabetic portion of the node names, which is generally an abbreviation of the processor architecture codename, plus an additional letter in some cases for special node types such as GPU or big-memory (Bigmem) nodes:
| Node Prefix | Architecture |
| --- | --- |
| ber | AMD Bergamo |
| clr | Intel Cascade Lake Refresh |
| clrm | Intel Cascade Lake Refresh Bigmem |
| clrv | Intel Cascade Lake Refresh with NVIDIA Volta 100 GPU |
| gen | AMD Genoa |
| il | Intel Ice Lake |
| ila | Intel Ice Lake with NVIDIA Ampere 16 GPU |
| ilm | Intel Ice Lake Bigmem |
| ilp | Intel Ice Lake Platinum |
| ilpa | Intel Ice Lake Platinum with NVIDIA Ampere 40 GPU |
| mil | AMD Milan |
| rome | AMD Rome |
| sk | Intel Sky Lake |
| sv | Intel Sky Lake with NVIDIA Volta 100 GPU |
| srph | Intel Sapphire Rapids Platinum with NVIDIA Hopper 100 GPU |
The meaning of the numerical portion of the node names varies by cluster. On the ISAAC Legacy cluster (3-digit suffix), the number is incremented and assigned to consecutive nodes of the same type as they are deployed. Consecutively numbered nodes are in many cases located in the same rack and share a common InfiniBand switch, but this is not guaranteed. On the Secure Enclave (SIP) and Next Generation (NG) clusters (4-digit suffix), the first digit is a row number, the second is a rack number within the row, and the final two digits are a rack unit number within the rack. On these two clusters, nodes are closest together in the InfiniBand fabric when their node numbers are most similar.
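To make the convention concrete, the following minimal Python sketch splits a node name into its alphabetic prefix and numeric suffix and, for the 4-digit NG/SIP form, decodes the row, rack, and rack unit. The function name and the example node names are illustrative assumptions, not actual ISAAC hostnames.

```python
import re

def parse_node_name(name: str) -> dict:
    """Split a node name into its architecture prefix and numeric suffix.

    Minimal sketch of the naming convention described above; hypothetical
    helper, not part of any ISAAC-provided tooling.
    """
    match = re.fullmatch(r"([a-z]+)(\d+)", name)
    if match is None:
        raise ValueError(f"unrecognized node name: {name}")
    prefix, digits = match.groups()
    info = {"prefix": prefix, "number": int(digits)}
    if len(digits) == 4:
        # NG/SIP convention: row, rack within the row, rack unit within the rack.
        info["row"] = int(digits[0])
        info["rack"] = int(digits[1])
        info["unit"] = int(digits[2:])
    return info

# Hypothetical example node names:
print(parse_node_name("clr1234"))  # Cascade Lake Refresh node: row 1, rack 2, unit 34
print(parse_node_name("sv123"))    # Legacy Sky Lake + V100 node: sequential number 123
```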