Shared Research Computing Facility
Our High Performance Computing (HPC) service provides a cluster of computing resources that powers research for 31 research groups and departments across the University, as well as additional projects and initiatives as demand and resources allow.
CUIT centrally manages our HPC resources on the Morningside campus in the Shared Research Computing Facility (SRCF), which consists of a dedicated portion of the university data center.
Our HPC cluster uses energy-use measurement and monitoring, with a focus on maximizing computing capacity per watt and thereby increasing energy efficiency. These efforts help Columbia meet its local and national commitments to reduce the University's carbon footprint.
Accessing Our High Performance Computing Environment
CUIT allows University researchers and departments to purchase shares in our HPC cluster, providing access to our infrastructure—with each share providing additional processing capabilities and higher priority for scheduling jobs.
Our New HPC Cluster - HABANERO!
CUIT is proud to announce the go-live of our next-generation HPC cluster, Habanero! The cluster went live in Fall 2016 and is available now. Faculty, research staff, and sponsored students who were not part of the initial purchase may pay a fee to rent a share of the cluster. See High Performance Computing Renter Service for more information, including rental terms and a rental request form. If you would like more information, reach out to us at firstname.lastname@example.org.
Our Other HPC Cluster - Yeti
Meanwhile, Yeti, our previous-generation HPC cluster, continues to provide a powerful resource for researchers.
Getting Access to the Extreme Science and Engineering Discovery Environment (XSEDE)
We are happy to help Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) obtain an allocation on XSEDE, and to provide support in getting started on that cluster.
Email us at email@example.com for more information.