Investment criteria and standard node configurations are reviewed annually at the beginning of each fiscal year by OIT. This page has been updated for FY22.
If you are working on a proposal and need the ISAAC facilities document, it is available here (UT authentication required).
Faculty and other University researchers may purchase compute nodes to be placed in the ISAAC HPC cluster as a private condo, using sponsored research funds, startup funds, or other funds. These private condo nodes are exclusive to investors, their project(s), and project members. In addition to private condo exclusive use (the default), private condo resources can participate in the responsible node sharing program.
If you wish to investigate purchasing nodes, or know what you want and are ready to purchase nodes for use in a private condo on the cluster, please submit an HPSC Service Request (ticket) at this link. A staff member will contact you to discuss details regarding the purchase. Once you commit to a purchase, you will work with the Associate CIO for Service Level and Capacity Management to develop a service level agreement (SLA) that will apply to the purchase (equipment, cost, service period, etc.).
For FY22, the standard SLA period for a compute or GPU node investment is three years. Other periods, such as a proposal or award performance period, will be considered on a case-by-case basis.
Note: Costs below are estimates; final costs will be determined based upon equipment configuration (usually in the form of a vendor quote or an OIT Service Level Agreement document corresponding to the funding source).
Federal research R accounts can only be charged the direct equipment cost, since our facility is not a cost center. A vendor quote would be provided for inclusion in a proposal. After an award, the vendor quote would need to be requested again, since vendor quotes are usually valid for only 30 days.
Intel, AMD, and Dell have released new-generation technology for FY22, which Dell calls its 15th generation and which is based upon Intel Xeon Ice Lake processors and AMD Milan processors.
For a period of time, 14th generation Dell technology will remain available; at some point, however, it will no longer be available and Dell will no longer provide quotes for the previous generation's technology. HPSC is already unable to order AMD Rome processor-based technology.
The compute node costs below are estimated one-time costs for a three-year period.
Note: All prices below are estimates, and pricing changes approximately monthly. Dell advised HPSC in July 2021 that, due to the industry-wide shortage of chips, glass, and other parts, the price of memory and required order lead times are increasing.
While 14th generation compute node technology is still available, the configuration below specifies a powerful Intel-based compute node that can handle significant HPC workloads. The pertinent technical specifications are listed below.
The estimated cost for this 14th generation standard compute node is now just over $10,000. Modifications can be made to the above configuration on a case-by-case basis while the technology remains available.
With the 15th generation standard compute node, investors will obtain a powerful compute node based on Intel Xeon Ice Lake processors or AMD Milan processors that can handle significant HPC workloads. The pertinent technical specifications are listed below.
The estimated cost for a standard Intel processor compute node is approximately $12,700 ($226 per core). Modifications can be made to the above configuration on a case-by-case basis.
The estimated cost for a Dell PowerEdge R6525 is approximately $22,135 ($173 per core). Modifications can be made to the above configuration on a case-by-case basis.
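The per-core figures above follow directly from the node price divided by the core count. As a rough sanity check, the sketch below assumes core counts that are not stated in this document (a dual-socket 28-core Ice Lake node with 56 cores, and a dual-socket 64-core Milan node with 128 cores); actual configurations may differ.

```python
# Rough per-core cost check for the estimated node prices above.
# Core counts are assumptions (dual-socket 28-core Ice Lake = 56 cores;
# dual-socket 64-core Milan = 128 cores), not confirmed configurations.

def per_core_cost(node_cost: float, cores: int) -> int:
    """Return the estimated cost per core, rounded to the nearest dollar."""
    return round(node_cost / cores)

intel_per_core = per_core_cost(12_700, 56)   # close to the quoted $226
amd_per_core = per_core_cost(22_135, 128)    # matches the quoted $173

print(intel_per_core, amd_per_core)
```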
While it remains available, the standard 14th generation GPU node provides investors a compute node that can handle standard HPC workloads and has significant GPU performance for GPU-intensive applications. The pertinent technical specifications are listed below.
The estimated cost for a single standard GPU node is approximately $24,500 depending on the type and quantity of GPUs. Modifications can be made to this configuration on a case-by-case basis.
With the standard GPU node, investors will obtain a compute node that can handle standard HPC workloads and has significant GPU performance for GPU-intensive applications. The pertinent technical specifications are listed below.
The NVIDIA V100S GPU is not available in the 15th generation. HPSC will obtain sample quotes for Ampere GPUs, of which there is now an entire product line (A100, A40, A30, A10, and also the T4). The estimated cost for a 15th generation GPU node with two T4 GPUs is $15,020, and with two A100 GPUs approximately $28,000. Modifications can be made to this configuration on a case-by-case basis.
The estimated cost for a Dell PowerEdge R6525 is approximately $39,100. Modifications can be made to the above configuration on a case-by-case basis.
Unpurged Lustre project storage space (/lustre/haven/proj/…) is available, as well as purged Lustre scratch space (/lustre/haven/user/…). Purged storage space contents are deleted with notice as part of routine cluster maintenance. Unpurged storage is not routinely purged during cluster maintenance, and users are required to manage their own data. Project storage spaces (unpurged) have quotas enabled, while Lustre scratch (purged) does not impose quotas.
University of Tennessee projects receive 1 terabyte (TB) of Lustre project space at no additional cost to the research project. Any project storage beyond 1 TB for University of Tennessee sponsored projects and other funded projects must be purchased.
External organizations and University faculty leading sponsored projects or other funded projects can purchase storage resources on the high-performance Lustre storage system for their research projects. The FY21 cost per terabyte was $59, and the FY22 cost per terabyte is approximately $87. Also, Lustre file systems have a finite number of files that the entire file system can contain. Any project or researcher that uses more than 10 million files will also be charged $100 per year for every 1 million files over the first 10 million.
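The storage rules above combine a per-terabyte charge with a file-count surcharge. The sketch below is one reading of those rules; the billing granularity (rounding of partial terabytes or partial millions of files) and the assumption that the per-TB rate is annual are interpretations, not documented policy.

```python
import math

# Sketch of the FY22 Lustre storage cost rules described above.
# Rounding behavior and the annual per-TB rate are assumptions.
FREE_TB = 1                    # first terabyte at no cost to UT projects
COST_PER_TB = 87               # FY22 estimate, dollars per TB
FREE_FILES = 10_000_000        # first 10 million files carry no surcharge
COST_PER_MILLION_FILES = 100   # dollars per year per extra million files

def annual_storage_cost(terabytes: float, files: int) -> int:
    """Estimated annual cost for a project's Lustre project space."""
    billable_tb = max(0, math.ceil(terabytes) - FREE_TB)
    extra_millions = max(0, math.ceil((files - FREE_FILES) / 1_000_000))
    return billable_tb * COST_PER_TB + extra_millions * COST_PER_MILLION_FILES

# Example: 5 TB and 14 million files -> 4 * $87 + 4 * $100 = $748
print(annual_storage_cost(5, 14_000_000))
```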
Please note that the previously listed storage costs cover only the storage equipment and the administration and maintenance of the file system; they do not include back-ups. Back-ups of Lustre storage are the responsibility of the end users.
Note: VM (virtual machine) storage is handled on a case-by-case basis. Please contact HPSC directly via email or help request ticket if your project has questions about VM storage.
On a case-by-case basis, arrangements may be made for some varieties of back-ups of limited size, such as a duplicate copy or snapshot. Projects that want such back-ups should discuss them with HPSC staff and include the requirements in any investment Service Level Agreement. For more information on the available file systems, please refer to the File Systems document.