Available Resources
The resources provided by OIT HPSC include three clusters: two for open research and one for processing sensitive information. Each cluster has a Lustre file system with petabyte-scale storage capacity. These resources are available to faculty and student researchers and for academic coursework.
To obtain access to these resources, claim your account at the ISAAC User Portal by choosing Request an ISAAC Account in the menu on the left.
ISAAC ORI cluster for Open Research
- 33 compute nodes of various types, including these noteworthy types:
    - 25 nodes with Intel Skylake processors (40 cores) and 190 GB memory (UT/UTK/shared)
    - 3 nodes with Intel Skylake processors (40 cores) and 190 GB memory (private condos)
    - 1 node with AMD Rome processors (64 cores) and 512 GB memory (private condo)
    - 1 node with Intel Skylake processors (40 cores) (private condo)
    - 3 nodes with Intel Skylake processors (40 cores) (private condo)
- 4 GPU nodes
    - 1 node with 1x NVIDIA P100 GPU (private condo)
    - 3 nodes with 2x NVIDIA V100S GPUs (private condo)
- 1,280 CPU cores
ISAAC Next Generation cluster for Open Research
- 208 compute nodes
    - 118 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 192 GB memory (UT/UTK/private)
    - 38 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 256 GB memory (UT/UTK/private)
    - 17 nodes with AMD Bergamo 9734 processors (224 cores) and 1 TB memory (UT/UTK/private)
    - 8 nodes with Intel Gold 6148 (Sky Lake) processors (40 cores) and 192 GB memory (UT/UTK)
    - 7 nodes with AMD Milan 7713 processors (128 cores) and 512 GB memory (private)
    - 5 nodes with Intel Gold 6438M (Sapphire Rapids) processors (64 cores) and 768 GB memory (private)
    - 4 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 1.5 TB memory (UTK)
    - 3 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 2 TB memory (UTK/private)
    - 2 nodes with Intel Platinum 6348 (Ice Lake Platinum) processors (64 cores) and 256 GB memory (private)
    - 2 nodes with AMD Genoa 9534 processors (128 cores) and 512 GB memory (UT/UTK)
    - 1 node with AMD Genoa 9734 processors (128 cores) and 1.5 TB memory
    - 1 node with AMD Genoa 9734 processors (64 cores) and 1 TB memory (private)
    - 1 node with AMD Milan 7763 processors (64 cores) and 256 GB memory (private)
    - 1 node with AMD Rome 7542 processors (32 cores) and 128 GB memory (private)
- 32 GPU nodes
    - 15 GPU nodes with 2x NVIDIA V100S GPUs (UT/private)
    - 10 GPU nodes with 4x NVIDIA H100 GPUs (private)
    - 3 GPU nodes with 2x NVIDIA A16 GPUs (private)
    - 1 GPU node with 4x NVIDIA A40 GPUs and 54 TB of NVMe scratch space (UT)
    - 1 GPU node with 2x NVIDIA L40 GPUs (private)
    - 1 GPU node with 1x NVIDIA V100S GPU (private)
    - 1 GPU node with 4x NVIDIA T4 GPUs (UT)
- 14,064 total CPU cores
- 3.6 petabytes of Lustre storage in /lustre/isaac
- 2 Data Transfer Nodes with 40 Gb/s bandwidth and Globus
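The Data Transfer Nodes expose the Lustre file systems through Globus for moving data in and out of the clusters. As a minimal sketch of driving such a transfer programmatically, the Python example below uses the globus-sdk package; the client ID, collection UUIDs, and paths are hypothetical placeholders, and the actual ISAAC collections must be looked up in the Globus web app.

```python
# Minimal sketch of submitting a Globus transfer with the globus-sdk package.
# All IDs and paths below are hypothetical placeholders, not real ISAAC values.
import globus_sdk

CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical: register your own native app

# Interactive login: open the printed URL, then paste the authorization code.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Authorization code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

SRC = "11111111-1111-1111-1111-111111111111"  # hypothetical ISAAC collection UUID
DST = "22222222-2222-2222-2222-222222222222"  # hypothetical destination collection UUID

# Queue one file for transfer out of the (hypothetical) Lustre path.
task = globus_sdk.TransferData(tc, SRC, DST, label="ISAAC Lustre example")
task.add_item("/lustre/isaac/scratch/myuser/results.tar", "/home/me/results.tar")
print("Submitted task:", tc.submit_transfer(task)["task_id"])
```

The same pattern would apply to the Secure Enclave's Data Transfer Node below, subject to that cluster's access controls.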
ISAAC Secure Enclave cluster
- 49 compute nodes
- 2 nodes with 2x NVIDIA V100S GPUs
- 2,384 CPU cores
- Nodes with 192 GB of memory
- 1.7 petabytes of Lustre storage in /lustre/sip
- 1 Data Transfer Node with 10 Gb/s bandwidth and Globus
Details of the compute node types that make up each cluster are listed on the System Overview page for each cluster (see the menus on the left).
Node Naming Conventions
The following table summarizes the meaning of the alphabetic portion of the node names, which is generally the manufacturer codename, plus an additional letter in some instances for special node types such as GPU or Bigmem nodes:
| Node Prefix | Architecture |
| --- | --- |
| ber | AMD Bergamo |
| clr | Intel Cascade Lake Refresh |
| clrm | Intel Cascade Lake Refresh Bigmem |
| clrv | Intel Cascade Lake Refresh with NVIDIA Volta 100 GPU |
| gen | AMD Genoa |
| il | Intel Ice Lake |
| ila | Intel Ice Lake with NVIDIA Ampere 16 GPU |
| ilm | Intel Ice Lake Bigmem |
| ilp | Intel Ice Lake Platinum |
| ilpa | Intel Ice Lake Platinum with NVIDIA Ampere 40 GPU |
| mil | AMD Milan |
| rome | AMD Rome |
| sk | Intel Sky Lake |
| srph | Intel Sapphire Rapids Platinum with NVIDIA Hopper 100 GPU |
| sv | Intel Sky Lake with NVIDIA Volta 100 GPU |
The meaning of the numerical portion of the node names varies by cluster. On the ISAAC Legacy (ORI) cluster, which uses a 3-digit suffix, the number is incremented and assigned to consecutive nodes of the same type as they are deployed. Consecutively numbered nodes are often located in the same rack and share a common InfiniBand switch, but this is not guaranteed. On the SIP and NG clusters, which use a 4-digit suffix, the first digit is a row number, the second is a rack number within the row, and the final two digits are a rack unit number within the rack. On these two clusters, nodes are closest together in the InfiniBand fabric when their node numbers are most similar.
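To make the convention concrete, here is a short, unofficial Python sketch that decodes a node name using the prefix table and suffix rules above; the example node names at the bottom are hypothetical, not actual hosts.

```python
# Unofficial illustration: decode ISAAC node names per the conventions above.
import re

ARCHITECTURES = {
    "ber": "AMD Bergamo",
    "clr": "Intel Cascade Lake Refresh",
    "clrm": "Intel Cascade Lake Refresh Bigmem",
    "clrv": "Intel Cascade Lake Refresh with NVIDIA Volta 100 GPU",
    "gen": "AMD Genoa",
    "il": "Intel Ice Lake",
    "ila": "Intel Ice Lake with NVIDIA Ampere 16 GPU",
    "ilm": "Intel Ice Lake Bigmem",
    "ilp": "Intel Ice Lake Platinum",
    "ilpa": "Intel Ice Lake Platinum with NVIDIA Ampere 40 GPU",
    "mil": "AMD Milan",
    "rome": "AMD Rome",
    "sk": "Intel Sky Lake",
    "srph": "Intel Sapphire Rapids Platinum with NVIDIA Hopper 100 GPU",
    "sv": "Intel Sky Lake with NVIDIA Volta 100 GPU",
}

def decode(node: str) -> str:
    match = re.fullmatch(r"([a-z]+)(\d+)", node)
    if not match:
        raise ValueError(f"unrecognized node name: {node!r}")
    prefix, digits = match.groups()
    arch = ARCHITECTURES.get(prefix, "unknown architecture")
    if len(digits) == 3:
        # Legacy (ORI): sequential number among nodes of the same type.
        return f"{node}: {arch}; node number {int(digits)} of its type"
    if len(digits) == 4:
        # NG and SIP: row, rack within row, and rack unit within rack.
        return (f"{node}: {arch}; row {digits[0]}, rack {digits[1]}, "
                f"rack unit {digits[2:]}")
    return f"{node}: {arch}; unexpected suffix length"

for name in ("sk102", "ber1203", "clrv2115"):  # hypothetical node names
    print(decode(name))
```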