Investment criteria and standard node configurations are reviewed annually by OIT at the beginning of each fiscal year. This page was updated for FY23 on August 15, 2022.
If you are working on a proposal and need the ISAAC facilities document, it is available here (UT authentication required).
Faculty and other University researchers may purchase compute nodes to be placed in the ISAAC HPC cluster as a private condo, using sponsored research funds, startup funds, or other funds. These private condo nodes are exclusive to the investors, their project(s), and project members. In addition to exclusive use (the default), private condo resources can participate in the responsible node sharing program.
If you wish to investigate purchasing nodes, or know what you want and are ready to purchase nodes for use in a private condo on the cluster, please submit an HPSC Service Request (ticket) at this link. A staff member will contact you to discuss details regarding the purchase. Once you are ready to commit to a purchase, you will work with the Associate CIO for Service Level and Capacity Management to develop a service level agreement (SLA) covering the purchase (equipment, cost, service period, etc.).
For FY23, the standard SLA period for a compute or GPU node investment will be three years. Other periods, such as a proposal or award performance period, will be considered on a case-by-case basis.
Note: Costs below are estimates; final costs will be determined based upon equipment configuration (usually in the form of a vendor quote or an OIT Service Level Agreement document corresponding to the funding source).
Federal research R accounts can only be charged the direct equipment cost, since our facility is not a cost center. A vendor quote would be provided for inclusion in a proposal. After an award, the vendor quote would be requested again, since vendor quotes are usually only valid for 30 days.
Intel, AMD, and Dell released new server technology in 2021. Dell calls this its 15th-generation technology, which is based upon Intel Xeon Ice Lake processors and AMD EPYC processors.
The compute node costs below are estimated one-time costs for a three-year period.
Note: All prices below are estimates, and pricing changes approximately monthly. Dell advised HPSC in August 2022 that prices were rising and that equipment lead times can be up to six months, due to industry-wide shortages of chips, glass, memory, power units, and other specialized parts.
With the 15th generation standard compute node, investors will obtain a powerful compute node based on Intel Xeon Ice Lake processors or AMD Milan processors that can handle significant HPC workloads. The pertinent technical specifications are listed below.
The estimated cost for a standard Intel processor compute node is approximately $13,740 ($245.35 per core). Modifications can be made to the above configuration on a case-by-case basis.
The estimated cost for a standard AMD processor compute node (Dell PowerEdge R6525) is approximately $21,000 ($164 per core). Modifications can be made to the above configuration on a case-by-case basis.
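The per-core figures above follow directly from the estimated node prices. The sketch below is illustrative only: the dollar figures are the FY23 estimates quoted above, while the derived core counts are inferred by simple division, not taken from any vendor quote.

```python
# Illustrative sketch: derive the implied core count from the estimated
# node price and per-core price quoted above. Dollar figures are the
# FY23 estimates from this page; the derived core counts are inferred,
# not part of any quote.

def implied_cores(node_cost: float, cost_per_core: float) -> int:
    """Round node_cost / cost_per_core to the nearest whole core."""
    return round(node_cost / cost_per_core)

# Standard Intel compute node: ~$13,740 at $245.35 per core.
intel_cores = implied_cores(13_740, 245.35)

# Dell PowerEdge R6525 (AMD): ~$21,000 at $164 per core.
amd_cores = implied_cores(21_000, 164)

print(intel_cores, amd_cores)  # 56 128
```

Actual core counts depend on the processors in the final vendor configuration; the division simply recovers what the estimates assume.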
With the standard GPU node, investors will obtain a compute node that can handle standard HPC workloads and has significant GPU performance for GPU-intensive applications. The pertinent technical specifications are listed below.
The estimated cost for a 15th generation GPU node with one NVIDIA 48GB A40 GPU is $23,952. Modifications can be made to this configuration on a case-by-case basis.
The estimated cost for a Dell PowerEdge R6525 with one NVIDIA 48GB A40 GPU is $30,473. Modifications can be made to the above configuration on a case-by-case basis.
Lustre project storage space is available on each cluster and is managed with quotas. Project space is not purged. Lustre scratch space is also available on each cluster: it is managed with quotas on ISAAC NG and the Secure Enclave cluster, and is available without quotas on ISAAC Legacy. As needed, Lustre scratch storage will be purged with a minimum of two weeks' notice as part of routine cluster maintenance. We have not had to purge space on any cluster's Lustre file system since late 2021. OIT HPSC tries to maintain enough storage to meet the demands of the research community and drastically reduce the need to purge. However, we reserve the right to coordinate purging of any Lustre scratch space in order to keep enough free space to maintain a healthy file system.
University of Tennessee projects receive 1 terabyte (TB) of Lustre project space at no additional cost to the research project. Additional project space up to 10 TB can be requested at any time by submitting an HPSC Service Request (see Submit HPSC Service Request in the menu to the left of this page); the OIT HPSC staff can approve any reasonable request up to 10 TB. Requests for project storage between 10 and 100 TB should be brought to the attention of the HPSC Director, also via an HPSC Service Request. For requests beyond 100 TB, whether for an existing project or a sponsored research proposal, request a quote for the purchase of additional storage using research, department, or faculty project funds.
External organizations and University faculty leading sponsored projects or other funded projects can purchase storage resources on the high-performance Lustre storage system for their research projects. The FY22 and FY23 cost per terabyte is $87, or by quote when directly purchasing drives of the requested capacity to be added to the storage subsystem. Also, Lustre file systems can contain only a finite number of files. Any project or researcher that uses more than 10 million files will be charged $100 per year for every 1 million files over the first 10 million.
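The pricing just described reduces to simple arithmetic: $87 per TB at the FY22/FY23 rate, plus $100 per year for each million files beyond the first 10 million. A minimal sketch follows; the function name and example numbers are hypothetical, and whether a partial million of excess files is rounded up is an assumption, not stated policy.

```python
import math

# Illustrative sketch of the FY22/FY23 Lustre storage pricing described
# above: $87 per TB, plus $100 per year per 1 million files over the
# first 10 million. Function name and example inputs are hypothetical.

TB_RATE = 87          # dollars per terabyte (FY22/FY23 rate)
FILE_SURCHARGE = 100  # dollars per year per 1M files over the first 10M
FREE_FILES = 10_000_000

def estimated_storage_cost(terabytes: int, file_count: int) -> tuple[int, int]:
    """Return (capacity cost, annual file-count surcharge) in dollars."""
    capacity_cost = terabytes * TB_RATE
    excess_files = max(0, file_count - FREE_FILES)
    # Assumption: a partial million of excess files is charged as a full million.
    file_surcharge = math.ceil(excess_files / 1_000_000) * FILE_SURCHARGE
    return capacity_cost, file_surcharge

# Example: 20 TB of project storage holding 15 million files.
capacity, surcharge = estimated_storage_cost(20, 15_000_000)
print(capacity, surcharge)  # 1740 500
```

For actual pricing, request a quote through an HPSC Service Request; this sketch only restates the published rates.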
Please note that the storage costs listed above include only the storage equipment, administration, and maintenance of the file system; they do not include backups.
Backups of Lustre storage are the responsibility of the end users. The three Lustre file systems are 1.3 petabytes, 2.9 petabytes, and 3.6 petabytes, and we do not have the capacity to back up that much space.
Note: VM (virtual machine) storage is handled on a case-by-case basis. Please contact HPSC directly via email or help request ticket if your project has questions about VM storage.
On a case-by-case basis, arrangements may be made for some varieties of backups of limited size, such as a duplicate copy or snapshot. Projects that will need such backups should discuss them with HPSC staff and include the requirements in any investment Service Level Agreement. For more information on the available file systems, please refer to the File Systems document.