The Deans and faculties of KSAS and WSE have partnered to create the Homewood High Performance Cluster (HHPC). The HHPC pools the resources of many PIs into a powerful and adaptive shared facility designed to support large-scale computations on the Homewood campus.
The HHPC is managed under the aegis of IDIES and operates as a co-op: hardware is contributed by users, who in return receive a proportional share of the pooled resources. The networking infrastructure and a systems administrator are provided by the Deans of the Whiting School of Engineering and the Krieger School of Arts and Sciences.
The first cluster, HHPCv1, came online in December 2008 and was retired 8 years later. It contained almost 200 compute nodes connected by a DDR InfiniBand switch. A new cluster, HHPCv2, came online in the newly renovated Bloomberg 156 Data Center in November 2011. It currently has over 350 nodes, each with 12 cores and 48 GB of RAM; there is no plan for expansion given the newer hardware available at MARCC. HHPCv2 is connected by high-speed links to MARCC and to other clusters in Bloomberg 156, including the Datascope and the 100 TFlop Graphics Processor Laboratory.
Under the HHPC management plan, 10 percent of the compute time can be allocated by the Deans to faculty on the Homewood campus. The priorities for this time are to meet temporary surges in compute demand for research projects at Homewood, to give new hires and new contributors access before the nodes they have ordered arrive, and to let potential members of the HHPC “kick the tires”. For more information, and to apply for time on the HHPC, see the HHPC application below.
To access the HHPC cluster, you will first need an HHPC account. Please get approval from your PI and then send an account request email to hhpc@jhu.edu.
The HHPC login node, login.hhpc.jhu.edu, is where all HHPC users log in to submit their jobs to the HHPC compute nodes. login.hhpc.jhu.edu is not a compute node, so no compute jobs should be run on it.
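For example, assuming your cluster username matches your JHED ID (an assumption; your actual username may differ), you would connect from a terminal with:

    ssh your_jhed_id@login.hhpc.jhu.edu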
The job queue is managed by the Slurm scheduler. Because the HHPC shares similar settings with the MARCC cluster, please refer to the MARCC SLURM documentation to learn how to submit jobs with Slurm: MARCC SLURM.
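As a minimal sketch (not official HHPC documentation), a Slurm batch script for a single-node job might look like the following. The job name, resource requests, time limit, and program name are placeholders; the actual partition names and limits on the HHPC should be confirmed against the MARCC SLURM documentation.

    #!/bin/bash
    #SBATCH --job-name=example_job        # name shown in the queue (placeholder)
    #SBATCH --nodes=1                     # run on a single node
    #SBATCH --ntasks=1                    # one task
    #SBATCH --cpus-per-task=12            # nodes have 12 cores each
    #SBATCH --mem=40G                     # memory request (placeholder; nodes have 48 GB)
    #SBATCH --time=01:00:00               # wall-clock limit (placeholder)
    #SBATCH --output=example_job_%j.out   # output file; %j expands to the job ID

    # Replace with your own program or script.
    ./my_program

Save the script as, for example, myjob.sh and submit it from the login node with sbatch myjob.sh. squeue -u $USER lists your queued and running jobs, and scancel followed by the job ID cancels a job.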
If you have a question, please email hhpc@jhu.edu.
Please attach a .pdf file addressing the following questions: