High Performance & Scientific Computing

Available Resources



The resources provided by OIT HPSC include three clusters: two for open research and one for processing sensitive information. Each cluster has a Lustre file system with petabyte-scale storage capacity. These resources are available to faculty and student researchers and for academic coursework.

To obtain access to these resources, claim your account at the ISAAC User Portal by choosing Request an ISAAC Account in the menu on the left.

ISAAC Legacy cluster for Open Research

  • 228 compute nodes of various types, including the following noteworthy configuration:
    • 1 Intel Xeon E5-2687Wv4 processor (24 cores) with 1 TB memory
  • 6 GPU nodes
    • 2 Skylake nodes with 1x NVIDIA V100S GPUs (UT)
    • 4 beacon nodes with 1x NVIDIA M60 GPUs (UTK/UTHSC)
  • 6,480 CPU cores
  • Nodes with 32 gigabytes to 1 terabyte of memory
  • 2.7 petabytes of Lustre storage in /lustre/haven
  • 4 Data Transfer Nodes with 10 Gb/s bandwidth and Globus

ISAAC Next Generation cluster for Open Research

  • 181 compute nodes
    • 118 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 192 GB memory (UT/UTK/private)
    • 38 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 256 GB memory (UT/UTK/private)
    • 6 nodes with AMD Genoa 9734 processors (224 cores) and 1 TB memory (UT/UTK)
    • 6 nodes with AMD Milan 7713 processors (128 cores) and 512 GB memory (private)
    • 4 nodes with Intel Gold 6248R (Cascade Lake Refresh) processors (48 cores) and 1.5 TB memory (UTK)
    • 3 nodes with Intel Gold 6348 (Ice Lake) processors (56 cores) and 2 TB memory (UTK/private)
    • 2 nodes with Intel Platinum 6348 (Ice Lake Platinum) processors (64 cores) and 256 GB memory (private)
    • 2 nodes with AMD Genoa 9534 processors (128 cores) and 512 GB memory (UT/UTK)
    • 1 node with AMD Milan 7763 processor (64 cores) and 256 GB memory (private)
    • 1 node with AMD Rome 7542 processors (32 cores) and 128 GB memory (private)
  • 29 GPU nodes
    • 10 GPU nodes with 1x NVIDIA V100S GPUs (UTK)
    • 8 GPU nodes with 4x NVIDIA H100 GPUs (private)
    • 7 GPU nodes with 2x NVIDIA V100S GPUs (UT/private)
    • 3 GPU nodes with 2x NVIDIA A16 GPUs (private)
    • 1 node with 4x NVIDIA A40 GPUs and 54 TB of NVMe scratch space/burst buffer (UTK)
  • 12,384 total CPU cores
  • 3.6 petabytes of Lustre storage in /lustre/isaac
  • 2 Data Transfer Nodes with 40 Gb/s bandwidth and Globus

ISAAC Secure Enclave cluster

  • 60 compute nodes, including 2 nodes with 2x NVIDIA V100S GPUs
  • 2,848 CPU cores
  • Nodes with 192 gigabytes of memory
  • 1.7 petabytes of Lustre storage in /lustre/sip
  • 1 Data Transfer Node with 10 Gb/s bandwidth and Globus

Details of the compute node types that make up each cluster are listed on that cluster's System Overview page (see the menus on the left).
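
Each cluster's Data Transfer Nodes support Globus. The following Python sketch shows one way a transfer could be scripted with the globus-sdk package; it is an illustration rather than an official OIT HPSC workflow, and the endpoint UUIDs, paths, and access token below are placeholders that must be replaced with your own values.

# Illustrative only: submit a Globus transfer into a cluster's Lustre file system.
# SOURCE_ENDPOINT, DEST_ENDPOINT, TOKEN, and all paths are placeholders.
import globus_sdk

SOURCE_ENDPOINT = "00000000-0000-0000-0000-000000000000"  # placeholder source endpoint UUID
DEST_ENDPOINT = "11111111-1111-1111-1111-111111111111"    # placeholder destination endpoint UUID
TOKEN = "REPLACE_WITH_TRANSFER_ACCESS_TOKEN"              # assumed to be obtained separately via Globus Auth

# Authenticate the Transfer API client with an existing access token.
tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))

# Describe the transfer: recursively copy a directory into Lustre (paths are examples).
tdata = globus_sdk.TransferData(tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
                                label="example transfer to /lustre/isaac")
tdata.add_item("/home/user/dataset", "/lustre/isaac/scratch/user/dataset",
               recursive=True)

# Submit the task and print its id for later status checks.
task = tc.submit_transfer(tdata)
print("Submitted Globus task:", task["task_id"])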

Node Naming Conventions

The following table summarizes the meaning of the alphabetic portion of the node names, which generally consists of the manufacturer's codename for the processor architecture, plus an additional letter in some cases for special node types such as GPU or Bigmem nodes:

Node Prefix   Architecture
ber           AMD Bergamo
clr           Intel Cascade Lake Refresh
clrm          Intel Cascade Lake Refresh Bigmem
clrv          Intel Cascade Lake Refresh with NVIDIA Volta 100 GPU
gen           AMD Genoa
il            Intel Ice Lake
ila           Intel Ice Lake with NVIDIA Ampere 16 GPU
ilm           Intel Ice Lake Bigmem
ilp           Intel Ice Lake Platinum
ilpa          Intel Ice Lake Platinum with NVIDIA Ampere 40 GPU
mil           AMD Milan
rome          AMD Rome
sk            Intel Sky Lake
sv            Intel Sky Lake with NVIDIA Volta 100 GPU
srph          Intel Sapphire Rapids Platinum with NVIDIA Hopper 100 GPU
Table 1: Nodename Prefixes

The meaning of the numerical portion of the node names varies by cluster. For the ISAAC Legacy cluster (3-digit suffix), the number is incremented and assigned to consecutive nodes of the same type as they are deployed. Consecutively numbered nodes are in many cases located in the same rack and share a common InfiniBand switch, but this is not guaranteed. For the SIP and NG clusters (4-digit suffix), the first digit is the row number, the second is the rack number within the row, and the final two digits are the rack unit number within the rack. For these two clusters, nodes with the most similar node numbers are always closest together in the InfiniBand fabric.
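
To illustrate these conventions, here is a short Python sketch (an illustration only, not an official OIT HPSC utility; the example node names are hypothetical) that splits a node name into its alphabetic prefix and numeric portion, looks the prefix up in Table 1, and decodes a 4-digit suffix into row, rack, and rack unit.

# Decode an ISAAC node name using the conventions described above.
import re

# Prefix table, mirroring Table 1.
NODE_PREFIXES = {
    "ber": "AMD Bergamo",
    "clr": "Intel Cascade Lake Refresh",
    "clrm": "Intel Cascade Lake Refresh Bigmem",
    "clrv": "Intel Cascade Lake Refresh with NVIDIA Volta 100 GPU",
    "gen": "AMD Genoa",
    "il": "Intel Ice Lake",
    "ila": "Intel Ice Lake with NVIDIA Ampere 16 GPU",
    "ilm": "Intel Ice Lake Bigmem",
    "ilp": "Intel Ice Lake Platinum",
    "ilpa": "Intel Ice Lake Platinum with NVIDIA Ampere 40 GPU",
    "mil": "AMD Milan",
    "rome": "AMD Rome",
    "sk": "Intel Sky Lake",
    "sv": "Intel Sky Lake with NVIDIA Volta 100 GPU",
    "srph": "Intel Sapphire Rapids Platinum with NVIDIA Hopper 100 GPU",
}

def decode_nodename(name: str) -> dict:
    """Split a node name into prefix and number, then interpret the digits."""
    m = re.fullmatch(r"([a-z]+)(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized node name: {name}")
    prefix, digits = m.groups()
    info = {
        "node": name,
        "architecture": NODE_PREFIXES.get(prefix, "unknown prefix"),
    }
    if len(digits) == 4:
        # 4-digit suffix (NG and Secure Enclave): row, rack within row, rack unit within rack.
        info.update(row=int(digits[0]), rack=int(digits[1]), rack_unit=int(digits[2:]))
    else:
        # 3-digit suffix (Legacy): sequence number assigned as nodes of a type were deployed.
        info["sequence"] = int(digits)
    return info

if __name__ == "__main__":
    print(decode_nodename("clrv0801"))  # hypothetical NG-style node name
    print(decode_nodename("sv123"))     # hypothetical Legacy-style node name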