Investment criteria and standard node configurations are reviewed annually at the beginning of each fiscal year by OIT. This page has been updated for FY21.
Faculty and other University researchers may purchase compute nodes to be placed in the ISAAC HPC cluster as a private condo, using sponsored research funds, startup funds, or other funds. These private condo nodes are exclusive to the investors and the project(s) to which they belong, and they can participate in the responsible node sharing program. If you wish to investigate purchasing nodes for a research project, or are ready to purchase nodes for use in a private condo in the cluster, please submit an HPSC Service Request (ticket) at this link. A staff member will contact you to discuss details of the purchase. Once you decide to commit to a purchase, you will work with the Associate CIO for Service Level and Capacity Management to develop a service level agreement (SLA) that will apply to the node(s). For FY21, the standard SLA period for a compute or GPU node investment is three years. Other periods, including a proposal performance period, will be considered on a case-by-case basis.
Note that the costs below are estimates; final costs will be determined by the equipment configuration, usually in the form of a vendor quote or an OIT Service Level Agreement document corresponding to the funding source. Federal research R accounts can only be charged the direct equipment cost, since the facility is not a cost center. A vendor quote would be provided for inclusion in a proposal. After an award, the vendor quote would be requested again, as vendor quotes are usually only valid for 30 days. A facilities document for inclusion in a proposal is available at this link (University authentication required for access).
The compute node costs below are estimated one-time costs for the three-year period.
With the standard compute node, investors will obtain a powerful compute node that can handle significant HPC workloads. The pertinent technical specifications are listed below.
The estimated cost for a standard compute node is $10,000. Modifications can be made to the above configuration on a case-by-case basis.
With the standard GPU node, investors will obtain a compute node that can handle standard HPC workloads and has significant GPU performance for GPU-intensive applications. The pertinent technical specifications are listed below.
The estimated cost for a single standard GPU node is $23,000. Modifications can be made to this configuration on a case-by-case basis.
Unpurged Lustre project storage space (/lustre/haven/proj/…) is available, as well as purged Lustre scratch space (/lustre/haven/user/…). Project storage spaces have quotas enabled; Lustre scratch does not impose quotas. University of Tennessee projects receive 1 terabyte (TB) of Lustre project space at no additional cost to the research project. Any project storage beyond 1 TB for University of Tennessee sponsored projects and other funded projects must be purchased. External organizations and University faculty leading sponsored projects or other funded projects can purchase storage resources on the high-performance Lustre storage system for their research projects. The FY21 cost per terabyte is $59 per year. Also, Lustre file systems can hold only a finite number of files across the entire file system. Any project or researcher that uses more than 10 million files will also be charged $59 per year for every 10 million files over the first 10 million.
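As an illustration of the FY21 rates above, the yearly storage charge for a project can be estimated as follows. This is a sketch, not an official billing tool: the function name is hypothetical, and it assumes the file-count charge rounds up to the next full 10-million-file block over the first 10 million.

```python
import math

LUSTRE_RATE = 59          # FY21 cost in USD per TB per year (and per 10M-file block)
FREE_TB = 1               # first TB of project space is at no additional cost
FILE_BLOCK = 10_000_000   # charged per 10 million files over the first 10 million

def annual_storage_cost(tb_used, file_count):
    """Estimate the yearly Lustre project-storage charge under FY21 rates."""
    storage_charge = LUSTRE_RATE * max(0, tb_used - FREE_TB)
    # Assumption: partial blocks over the first 10M files are rounded up.
    extra_blocks = max(0, math.ceil((file_count - FILE_BLOCK) / FILE_BLOCK))
    file_charge = LUSTRE_RATE * extra_blocks
    return storage_charge + file_charge

# Example: 5 TB and 25 million files.
# Storage: (5 - 1) * $59 = $236; files: 15M over -> 2 blocks * $59 = $118; total $354.
```

For instance, a project holding 5 TB and 25 million files would be charged an estimated $354 per year under these assumptions.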
Please note that the previously listed storage costs include only the storage equipment, administration, and maintenance of the file system, and do not include backups. Backups of Lustre storage are the responsibility of the end users. On a case-by-case basis, arrangements can be made for limited backups, such as a duplicate copy or a snapshot; these should be discussed and included in any investment Service Level Agreement. For more information on the available file systems, please refer to the File Systems document.