Purchasing additional resources

The Hoffman2 Cluster is open, free of charge, to the entire UCLA campus with a base amount of computational and storage resources. Researchers can purchase additional computational and storage resources to increase their computing capacity and satisfy their storage needs. Computational resources (i.e., compute nodes and GPU nodes) owned by research groups can be accessed in a preferential mode, in which each group accesses its own resources with higher priority and for extended run times, or in a shared mode, in which momentarily unused computational resources from any group can be accessed by any other contributing group for short run times (up to 24 hours). The advantage of the shared model is that researchers can access a much wider set of resources than they have contributed.

Cost for storage and compute resources

  • Storage (per TB per year): $120.78. Effective March 4, 2024, this is the cost of one terabyte of HPC storage for a period of one year (based on a daily rate of $0.33 for prorating purposes; billed monthly; see the sketch after this list). For details on what the rate includes, see How to purchase storage.

  • Compute node: $8,200.86. For the current standard node configuration, see the Compute node specifications table.
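
As a rough illustration of the prorated, monthly billing, the short Python sketch below estimates a one-month charge from the published daily rate of $0.33 per TB. The function name and the rounding are illustrative assumptions; actual invoices may prorate and round differently.

    # Illustrative estimate of a prorated monthly storage charge.
    # Assumes the published daily rate of $0.33 per TB; actual invoices
    # may prorate and round differently.
    DAILY_RATE_PER_TB = 0.33  # USD per TB per day

    def monthly_storage_charge(terabytes: float, days_in_month: int) -> float:
        """Estimated charge for one month of storage, prorated by day."""
        return round(terabytes * DAILY_RATE_PER_TB * days_in_month, 2)

    # Example: 5 TB held for all 31 days of a month
    print(monthly_storage_charge(5, 31))  # 51.15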

Please refer to:

  • How to purchase storage

  • How to purchase compute nodes

How to purchase storage

On the Hoffman2 Cluster, each user is provided with a backed-up 40 GB $HOME directory.

To order dedicated storage for your group, place your order through the IDRE Store.
The IDRE Store uses the UCLA Federated Authentication Service to authenticate access; therefore, you need a UCLA Logon ID to access the site. The UCLA Logon ID is not a service of the IDRE Research Technology Group; if you have difficulty with it, consider contacting the UCLA IT Support Center.

Important

Please note that if you are using federal funds, these charges will incur facilities and administrative (F&A) costs (56% as of July 1, 2018) as they are considered a service and not equipment.
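
For example, the following Python sketch estimates the yearly cost of storage charged to federal funds with the 56% F&A rate applied on top of the service charge; confirm the current rate and whether it applies with your fund manager.

    # Illustrative estimate of the F&A surcharge on storage purchased
    # with federal funds; confirm the current rate with your fund manager.
    FA_RATE = 0.56                # F&A rate (56% as of July 1, 2018)
    STORAGE_PER_TB_YEAR = 120.78  # USD per TB per year

    def yearly_cost_with_fa(terabytes: float) -> float:
        """Yearly storage cost including the F&A surcharge."""
        base = terabytes * STORAGE_PER_TB_YEAR
        return round(base * (1 + FA_RATE), 2)

    print(yearly_cost_with_fa(1))  # 188.42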

The storage rate includes:

  • Backups

  • The physical storage space

  • The infrastructure to support it

  • Administration of the users of your storage space

  • Hardware and software upgrades

  • Problem fixes

This rate will remain in effect until it is renewed through the campus rate process.

How to purchase compute nodes

The Hoffman2 Cluster includes both a general campus-use section, which is freely available to all interested researchers, and a condo-style section in which researchers with significant computing needs can purchase and contribute nodes for their own priority use. Please refer to the Compute node specifications table for the current standard node configuration.

Compute node specifications

  • CPU: 2 x 24-core Intel Xeon Gold 6342 (36 MB cache, 2.8 GHz)

  • Memory: 512 GB

  • Local Storage: 1.92 TB (SSD)

  • Network: EDR InfiniBand

  • Warranty: 5-year

To order compute node(s):

  • For standard compute nodes, please open a request ticket via our online help desk and let us know how many nodes you would like to purchase.

  • For GPU nodes, please open a request ticket via our online help desk.

  • For memory upgrades, please open a request ticket via our online help desk.

  • For larger disk capacities, please open a request ticket via our online help desk.

In all cases, one of the IDRE RTG technologists will work with you directly to discuss options.

Note

For standard compute node purchases, once a node purchase agreement has been signed, the node(s) are generally assigned and available to your group within 2 business days.

Your purchase includes:

  • Full access to your cores through the Hoffman2 shared cluster queuing system

  • Administration of the group and users of your compute resources

  • Complete support of the hardware and the operating system

  • Connection to the cluster’s InfiniBand fabric

  • Access to OARC RTG's consultation services

  • Access to the software and compilers available on the cluster