
Filesystem Retirement



The /lustre/isaac filesystem is mounted only on the data transfer nodes (DTNs). Users will need to log in to the DTNs in order to transfer files off /lustre/isaac.
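
For example, to log in to one of the DTNs (a minimal sketch; dtn1.isaac.utk.edu and dtn2.isaac.utk.edu are the two ISAAC Next Gen DTNs named below, and <username> is your own username):

# Log in to an ISAAC Next Gen data transfer node
ssh <username>@dtn1.isaac.utk.edu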

Summary

Two new Lustre storage subsystems have been acquired to replace two Lustre storage subsystems that reach their end of support life in 2025. The file systems to be retired in 2025 are /lustre/haven, the file system on the ISAAC Legacy cluster, and /lustre/isaac, the file system on the ISAAC Next Generation cluster (also known as ISAAC Next Gen or ISAAC NG). See the table below for details.

Cluster         Old Filesystem          New Filesystem
ISAAC NG        /lustre/isaac (4 PB)    /lustre/isaac24 (5 PB)
ISAAC ORI       /lustre/haven (3 PB)    /lustre/ori (1 PB)
ISAAC Legacy    /lustre/haven           RETIRED

Access to ISAAC ORI and /lustre/ori is limited to researchers who are members of projects with private condos on ISAAC ORI, plus a very limited set of others approved by the HPSC Director.

Important Dates

  • 13 Jan 2025: /lustre/haven to be mounted read-only everywhere it is mounted
  • 30 Jan 2025: ISAAC Legacy cluster to be retired
  • 30 Apr 2025: /lustre/haven to be retired
  • 1 May 2025: /lustre/isaac to be mounted read-only everywhere it is mounted
  • 31 Jul 2025: /lustre/isaac to be retired

Directories on the new file systems

New File Systems: Default Quotas and Purge Policy

File System & Purpose                        Path                                  Default Quota        Purged
ISAAC NG scratch directory (5 PB total)      /lustre/isaac24/scratch/<username>    10 TB, 5M files      Purged only when needed, after notification
ISAAC NG project directory (5 PB total)      /lustre/isaac24/proj/<project>        1 TB, no file limit  Not purged
ISAAC ORI scratch directory (1 PB total)     /lustre/ori/scratch/<username>        10 TB, 5M files      Purged only when needed, after notification
ISAAC ORI project directory (1 PB total)     /lustre/ori/proj/<project>            1 TB, no file limit  Not purged

Default quotas and the purge policy may change following adequate user notification as noted, and all quotas may be extended by user request, subject to the availability of system resources.

Commands to Determine User and Project Quotas

To determine your user or project quota on any of these file systems, use a variant of the lfs command. A good place to perform Lustre quota inquiries is either of the two ISAAC Next Gen DTNs (dtn1.isaac.utk.edu or dtn2.isaac.utk.edu), since all of the Lustre file systems below are mounted there. Run the following commands to determine your user storage and file quotas:

lfs quota -h -u <username> /lustre/haven
lfs quota -h -u <username> /lustre/isaac
lfs quota -h -u <username> /lustre/isaac24
lfs quota -h -u <username> /lustre/ori

Example output is shown below:

[username@dtn2 ~]$ lfs quota -h -u username /lustre/isaac24
Disk quotas for usr username (uid 10308):
    Filesystem     used  quota  limit  grace  files     quota     limit  grace
    /lustre/isaac24  28k  100T   106T      -      7  45000000  50000000      -

Determining a project quota is a little more complicated: you will need the numeric group ID of the group associated with your project. The easiest way to find it is with a variant of the ls command that shows the numeric group ID of the project directory. For example, if your project directory is /lustre/isaac24/proj/UTK0009, run this command and note the group ID number (the fourth field) in the output.

$ ls -ldn /lustre/isaac24/proj/UTK0009
drwxrws--- 2 10879 3308 4096 Nov 7 17:58 /lustre/isaac24/proj/UTK0009

Then you can use this command to see the project storage and file quota.

$ lfs quota -h -p 3308 /lustre/isaac24
Disk quotas for prj 3308 (pid 3308):
    Filesystem     used  quota  limit  grace  files  quota  limit  grace
    /lustre/isaac24   4k     0k    45T      -      1      0      0      -

You can run the following commands to determine your project storage and file quotas:

lfs quota -h -p <project group number> /lustre/haven
lfs quota -h -p <project group number> /lustre/isaac
lfs quota -h -p <project group number> /lustre/isaac24
lfs quota -h -p <project group number> /lustre/ori
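
The two steps can also be combined into one small snippet. This is a minimal sketch that assumes the GNU stat command is available (as on typical Linux systems) and reuses the example UTK0009 project directory from above; substitute your own project directory:

# Read the numeric group ID of the project directory, then query its quota
gid=$(stat -c %g /lustre/isaac24/proj/UTK0009)
lfs quota -h -p "$gid" /lustre/isaac24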

Moving Data from /lustre/haven to /lustre/isaac24

Migrating data from /lustre/haven to /lustre/isaac24 requires either Globus, a web-based data transfer tool, or one of the ISAAC Next Generation (ISAAC Next Gen) cluster's data transfer nodes (DTNs). Each ISAAC Next Gen DTN mounts /lustre/haven, /lustre/isaac, and /lustre/isaac24, so both Globus and LNET-based transfers are available. For more details on how to transfer files using Globus or the LNET method, see the Data Transfer page under the ISAAC Next Gen menu.
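
For instance, a transfer run directly on one of the DTNs might look like the following. This is a minimal sketch using rsync rather than the Globus or LNET procedures; the UTK0009 project paths are illustrative, and a reasonably recent rsync is assumed for the --info=progress2 option:

# On dtn1 or dtn2: copy a project directory from the old file system to the new one.
# rsync can be safely re-run to resume an interrupted transfer.
rsync -a --info=progress2 /lustre/haven/proj/UTK0009/ /lustre/isaac24/proj/UTK0009/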

Moving Data from /lustre/isaac to /lustre/isaac24

The /lustre/isaac24 file system is mounted everywhere on the ISAAC NG cluster, so any node on the cluster can be used to transfer data, in addition to the options described previously. We recommend using Globus, the LNET method from the DTNs, or the transfer_lustre_mpi tool developed by the HPSC staff. The transfer_lustre_mpi script is provided in the default path to facilitate this method of transfer. Please check transfer_lustre_mpi --help for usage.
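
For example, from any ISAAC NG node (the tool's options are site-specific, so rely on its own help text rather than any sketch here):

# transfer_lustre_mpi is in the default path on ISAAC NG
transfer_lustre_mpi --help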

Moving Data from /lustre/haven to /lustre/ori

If your project has access to ISAAC ORI, moving data from /lustre/haven to /lustre/ori can be performed with Globus, the LNET method from the ISAAC Next Gen DTNs, or the transfer_lustre_mpi method described above.

IMPORTANT NOTE: Because /lustre/ori is only 1 PB in size, project and user quotas are smaller than they were on /lustre/haven. Please check your Lustre user and project quotas and plan your transfers accordingly.
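
A quick pre-check before transferring, built from the lfs quota commands shown earlier and run on a DTN where both file systems are mounted:

# Compare current usage on /lustre/haven against the quota on /lustre/ori
lfs quota -h -u <username> /lustre/haven
lfs quota -h -u <username> /lustre/ori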

Moving Data from Any Filesystem to UT-StorR Archival Storage

It is important for all researchers to manage their data. Types of data include initial primary data, intermediate data, and results data. Frequently, users do not clean up intermediate or results data, so research storage usage always increases and never decreases, and initial primary data and results data are often left on the ISAAC clusters after they are no longer actively used.

With the introduction of the UT-StorR archival storage system, there is now a solution: any data that is not actively used can be moved to UT-StorR for long-term archival storage. Access to UT-StorR is very easy to request and to use. Please see the UT-StorR information in the menu to the left for more details and to request access and training.
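
One way to identify candidates for archiving is to list files that have not been read recently. This is a minimal sketch using standard find options; the 180-day threshold and the scratch path are illustrative assumptions, not HPSC policy:

# List files under your scratch directory not accessed in roughly the last 180 days
find /lustre/isaac24/scratch/<username> -type f -atime +180 -ls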