
Member Resources

IDIES members receive access to the following unique big data resources, as well as grant-submission support for proposals that align with the mission of the institute.

Grant Submissions

IDIES faculty and affiliate members (Co-I status only) can submit sponsored funding applications through the institute. All projects submitted through IDIES must have characteristics and/or objectives that align with our mission.

Maryland Advanced Research Computing Center (MARCC)

The Maryland Advanced Research Computing Center (MARCC) is a shared computing facility located on the Bayview Campus of JHU. Shared with the University of Maryland, College Park, and made possible by generous funding from the State of Maryland, MARCC provides stable, efficient, and easily expandable housing for high-performance clusters.

For more information, please contact Jaime Combariza.

SciServer

SciServer brings the analysis to the data with accessible cloud-based research and education tools that are co-located with big datasets.

Homewood High-Performance Cluster

The deans and faculty of KSAS and WSE have partnered to create the Homewood High-Performance Cluster (HHPC). The HHPC pools the resources of many PIs to create a powerful, adaptive shared facility designed to support large-scale computations on the Homewood Campus.

Bloomberg Data Center

The Bloomberg Data Center is home to a number of special computing facilities run by IDIES members, and hosts its own data-intensive project, with sensors monitoring temperature, power consumption, airflow, and other operating conditions.

Coursera | Johns Hopkins University

Johns Hopkins has partnered with Coursera to deliver free online education. JHU faculty, including several IDIES affiliates, lead a variety of courses, including many relevant to data scientists and researchers meeting the challenge of Big Data. IDIES recommends The Data Scientist’s Toolbox, Statistical Inference, and the ever-popular R Programming.


Data-Scope

The Data-Scope endeavors to overcome the challenges big data poses for traditional HPC by storing data local to the compute nodes, mapping users one-to-one to nodes, leveraging GPUs for computation, and eliminating bottlenecks.