High Performance Computing (HPC)

Shared supercomputing resources for Columbia researchers

Also known as HPC and Shared HPC.

CUIT’s High Performance Computing service provides a cluster of computing resources that supports 31 research groups and departments at the University, as well as additional projects and initiatives as demand and resources allow. The Shared Research Computing Policy Advisory Committee (SRCPAC) oversees the operation of existing HPC clusters through faculty-led subcommittees. SRCPAC also governs the Shared Research Computing Facility (SRCF) and makes policy recommendations for shared research computing at the University.

The HPC service is available 24x7. Maintenance downtime may be scheduled every three months; these planned outages typically last less than a day and are announced to users in advance.

Habanero Shared HPC Cluster

The Habanero cluster was launched in November 2016 and is housed in the Jerome L. Greene Science Center in Manhattanville. It is available for annual purchase cycles, rental, or free shares, and is also available for classroom teaching. The cluster is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s Research Computing Services (RCS) team.

222 nodes with a total of 5328 cores (24 cores per node):

  • 208 HP ProLiant XL170r Gen9 nodes with dual Intel E5-2650v4 Processors (2.2 GHz):
    • 176 standard memory nodes (128 GB)
    • 32 high memory nodes (512 GB)
  • 14 HP DL380 Gen9 nodes with dual Intel E5-2650v4 Processors (2.2 GHz) and NVIDIA K80 GPU (2 per node) supplying ~140,000 GPU cores
  • 407 TB DDN GS7K GPFS storage
  • EDR Infiniband (FDR to storage)
  • Red Hat Enterprise Linux 7
  • Slurm job scheduler (see the sample batch script below)
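
Purely for orientation, here is a minimal sketch of a Slurm batch script for a cluster like Habanero. The account name, module name, resource requests, and program are placeholders rather than actual Habanero settings; consult the RCS documentation and your group’s allocation for the correct values.

    #!/bin/sh
    # Minimal Slurm batch script (illustrative sketch only).
    #SBATCH --account=ACCOUNT        # hypothetical account/group name
    #SBATCH --job-name=test_job      # name shown in the queue
    #SBATCH --nodes=1                # request one node
    #SBATCH --ntasks-per-node=24     # Habanero nodes have 24 cores
    #SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)
    #SBATCH --mem=120G               # per-node memory (standard nodes have 128 GB)

    module load anaconda             # software is loaded via environment modules; module names vary
    python my_analysis.py            # replace with your own program

A script like this would be submitted with sbatch myjob.sh and monitored with squeue -u $USER.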

Yeti Shared HPC Cluster

Columbia’s previous generation HPC cluster, Yeti, is located in the Shared Research Computing Facility (SRCF), a dedicated portion of the university data center on the Morningside campus, and continues to provide a powerful resource for researchers. Although no additional permanent shares in Yeti are available, faculty, research staff, and sponsored students may rent shares of Yeti by submitting a rental request below.

167 nodes with a total of 2672 cores (16 cores per node):

  • 61 HP SL230 Gen8 nodes with Dual Intel E5-2650v2 Processors (2.6 GHz):
    • 10 standard memory nodes (64 GB)
    • 3 high memory nodes (256 GB)
    • 48 FDR Infiniband nodes (64 GB)
  • 5 HP SL250 Gen8 nodes (64 GB) with Dual Intel E5-2650v2 Processors (2.6 GHz) and NVIDIA K40 GPU (2 per node) supplying 28,800 GPU cores
  • 97 HP SL230 Gen8 nodes with Dual Intel E5-2650L Processors (1.8 GHz):
    • 38 standard memory nodes (64 GB)
    • 8 medium memory nodes (128 GB)
    • 35 high memory nodes (256 GB)
    • 16 FDR Infiniband nodes (64 GB)
  • 4 HP SL250 Gen8 nodes (64 GB) with Dual Intel E5-2650L Processors (1.8 GHz) and NVIDIA K20 GPU (2 per node) supplying ~20,000 GPU cores
  • 160 TB NetApp FAS6220 scratch storage
  • Red Hat Enterprise Linux 6
  • Torque/Moab job scheduler (see the sample job script below)
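
Because Yeti uses Torque/Moab rather than Slurm, jobs are submitted with PBS-style directives. The sketch below is illustrative only; the group, module, resource requests, and program names are placeholders, not actual Yeti settings.

    #!/bin/sh
    # Minimal Torque/Moab (PBS) batch script (illustrative sketch only).
    #PBS -N test_job                 # job name
    #PBS -W group_list=GROUP         # hypothetical group/allocation name
    #PBS -l nodes=1:ppn=16           # one node, 16 cores (Yeti nodes have 16 cores)
    #PBS -l walltime=01:00:00        # wall-clock limit (HH:MM:SS)
    #PBS -l mem=60gb                 # memory request (standard nodes have 64 GB)
    #PBS -V                          # export the submission environment to the job

    cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
    module load anaconda             # module names vary by cluster
    python my_analysis.py            # replace with your own program

Such a script would be submitted with qsub myjob.sh and monitored with qstat -u $USER.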

Hotfoot Shared HPC Cluster

Hotfoot, now retired, was launched in 2009 as a partnership among the departments of Astronomy & Astrophysics, Statistics, and Economics, plus other groups represented in the Social Science Computing Committee (SSCC); the Stockwell Laboratory; CUIT; the Office of the Executive Vice President for Research; and Arts & Sciences.

Columbia faculty, research staff and students used Hotfoot to pursue research in diverse areas.

In later years the cluster ran the Torque/Moab resource manager/scheduler software and consisted of 32 nodes which provided 384 cores for running jobs. The system also included a 72 TB array of scratch storage.

CUIT offers four ways to access the computing power of its High Performance Computing resources: purchase, rental, free access, and education access.

Please note: Morningside, Lamont, and Nevis faculty and research staff are eligible for the Purchase option. Morningside, Lamont, and Nevis faculty, research staff, and sponsored students are eligible for the Rent and Free options.

Purchase

Researchers may purchase servers and storage during periodic purchase opportunities scheduled and approved by faculty and administrative governance committees. A variety of purchasing options are available, with pricing tiers that reflect the level of computing capability purchased. Purchasers receive higher scheduling priority than other users of the HPC clusters.

For more information on this option, please email rcs@columbia.edu.

Rent

An individual researcher may pay a set fee for a one-year share of the system as a single user, with the ability to use additional computing capacity as it becomes available, subject to system policies. The current price is $1,000 per year.

Submit an HPC rental request form now.

Free

Researchers, including graduate students, postdocs, and sponsored undergraduates, may use the system on a low-priority, as-available basis. User support for this option is provided on a best-effort basis only, and use of the online documentation is strongly encouraged.

Submit a request form for free HPC access now.

Education

Instructors teaching a course or workshop that addresses an aspect of computational research may request temporary access for their students. Access is typically arranged in conjunction with a class project or assignment.

Submit a request form for HPC Education access now.

Other Requests

Current HPC customers can request access to their HPC group for a new user. This option is available to current authorized users only.

XSEDE HPC Access

All Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) can contact RCS to inquire about joining our XSEDE national HPC test allocation as a first step to obtaining their own allocation. See http://www.xsede.org/ for more information and email rcs@columbia.edu for inquiries.

FAQ