High Performance Computing Working Group

Working Group Leads:
 
Ravi Radhakrishnan (rradhak@seas.upenn.edu)
Former Leads: Yufang Jin (Yufang.Jin@utsa.edu) and Pratul Agarwal (agarwalpk@ornl.gov)
 
Goals and Objectives:

The goal of this working group is to initiate discussion and to foster collaboration and data sharing in areas including (but not limited to):

  1. Computational resources including high performance computing.
  2. Multi-scale modeling and simulations on emerging computer architectures (e.g. GPUs).
  3. Computational algorithms, libraries, tool-kits and software.
  4. Infrastructure and strategies to handle big-data problems.
 
High-performance computing (HPC) broadly involves the use of new architectures (such as GPUs), distributed systems, cloud-based computing, and parallel computing on massively parallel platforms or extreme hardware architectures to run computational models. The term applies especially to systems that operate above 10^12 floating-point operations per second (a teraflop), and even in the petaflop (10^15 floating-point operations per second) regime, or to systems that require very large memory. HPC has remained central to applications in science, engineering, and medicine. Challenges for researchers using HPC platforms and infrastructure range from understanding emerging platforms, to optimizing algorithms for massively parallel architectures, to accessing and handling data efficiently at large scale. The high performance computing working group is focused on addressing contemporary HPC-related issues faced by biomedical researchers.
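
To put the teraflop and petaflop figures in perspective, the short sketch below (illustrative only; it assumes NumPy is installed and linked against an optimized BLAS, and the matrix size N = 2048 is arbitrary) estimates the floating-point throughput of a single node from a dense matrix multiply, which performs roughly 2*N^3 floating-point operations. HPC systems aggregate many such nodes to reach the teraflop-to-petaflop regime.

    # Rough single-node flop-rate estimate (illustrative sketch only).
    # A dense N x N matrix multiply performs ~2*N**3 floating-point operations.
    import time
    import numpy as np

    N = 2048                        # arbitrary problem size for the estimate
    A = np.random.rand(N, N)
    B = np.random.rand(N, N)

    t0 = time.perf_counter()
    C = A @ B
    elapsed = time.perf_counter() - t0

    flops = 2.0 * N**3 / elapsed    # achieved floating-point operations per second
    print(f"~{flops / 1e9:.1f} GFLOP/s on this node "
          f"(for comparison, a petaflop system sustains ~10^15 FLOP/s)")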
 
HPC technology is rapidly evolving and is synergistic with, yet complementary to, the development of scientific computing. To help young scholars get started, we recommend the following review articles and resources as starting points.
 
Review Articles

GPU computing for systems biology, Lorenzo Dematté and Davide Prandi, Briefings in Bioinformatics (2010) 11(3): 323-333.

DOI: 10.1093/bib/bbq006

 

Mesoscale computational studies of membrane bilayer remodeling by curvature-inducing proteins, N. Ramakrishnan, P.B. Sunil Kumar, and Ravi Radhakrishnan, Physics Reports, 2014, 543, 1-60.

DOI: 10.1016/j.physrep.2014.05.001

 

111 years of Brownian motion, Xin Bian, Changho Kim, and George Em Karniadakis, Soft Matter, 2016, 12, 6331-6346.

DOI: 10.1039/C6SM01153E

 

Molecular dynamics simulations of large macromolecular complexes. Juan R. Perilla, Boon Chong Goh, C. Keith Cassidy, Bo Liu, Rafael C. Bernardi, Till Rudack, Hang Yu, Zhe Wu, and Klaus Schulten. Current Opinion in Structural Biology, 31:64-74, 2015.

DOI: 10.1016/j.sbi.2015.03.007

 

Journals:

Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal https://www.siam.org/journals/mms.php

PLOS Computational Biology http://journals.plos.org/ploscompbiol/

 
 
Participants
  • Pratul Agarwal
  • Mark Alber
  • Gary An
  • Portonovo Ayyaswamy
  • Keith Bisset
  • Danny Bluestein
  • Chase Cockrell
  • Scott Diamond
  • Brian Drawert
  • Salvador Dura-Bernal
  • David Eckmann
  • Ahmet Erdemir
  • Mohsin Jafri
  • Yufang Jin
  • Malgorzata Klosek
  • Gianluca Lazzi
  • Nicole Li
  • Felise Lightstone
  • Elebeoba May
  • James Olds
  • Andre Pereira
  • Ravi Radhakrishnan (Lead)
  • Herbert Sauro
  • Maciek Swat
  • Emad Tajkhorshid
  • Jared Weis
  • Zhiliang Xu
  • Alireza Yazdani
  • Gene Yu

 

MSM 2017 (10th Anniversary) meeting Discussion Items

File: HPC-WG-IMAG-2017-final.pptx

 

Past Presentations: 

Symposium on Computational Science at the University of Pennsylvania, Philadelphia PA, 2015

Title: D^3 Deformation, Defects, Diagnosis
Presenters: 
Eduardo Glandt, Eitan Grinspun, Ladislav Kavan, Arup Chakraborty, Zachary Ives, Eric Bradlow, George Biros, Christos Davatzikos, Dmitri Chklovskii, Danielle Bassett, Christopher Rycroft, Talid Sinno, Andrea Liu, Stefano Curtarolo, Bryan Chen, Tal Arbel, Joseph Subotnik, Elliot Lipeles

Program Booklet

 

Also see the list of webinars below.

 

Additional Information: 

Data Science and HPC Symposium: Emerging Paradigms in Scientific Discovery


October 5-6, 2016, University of Pennsylvania, Philadelphia, Pennsylvania


Registration is free but required. Register at: https://pics.upenn.edu/content/emerging-paradigms-scientific-discovery-registration


Organizers: Ravi Radhakrishnan, Zack Ives, Rob Riggleman


URL: http://www.pics.upenn.edu/events/emerging-paradigms-scientific-discovery


Program Booklet: Emerging Paradigms Booklet Layout.docx; Program Flyer: 11x17_EmergingParadigms_Symp_final.pdf


 


XSEDE Scholars Program


Supercomputers, data collections, new tools, digital services, increased productivity for thousands of scientists around the world. Sound exciting? These are some of the topics you can learn more about through the XSEDE Scholars Program. The XSEDE Scholars Program (XSP) is a program for U.S. students from underrepresented groups in the area of computational sciences. As a Scholar, you will learn more about high performance computing and XSEDE resources, network with cutting-edge researchers and professional leaders, and belong to a cohort of student peers to establish a community of academic leaders. In particular, the focus is on the following underrepresented groups: African Americans, Hispanics, Native Americans, and women.


Apply at https://www.xsede.org/xsede-scholars-program


 


Training on XSEDE Platforms


XSEDE offers training classes to teach users how to maximize their productivity and impact in using the XSEDE services. The training classes focus on systems and software supported by the XSEDE Service Providers, covering programming principles and techniques for using resources and services effectively. Training classes are offered in high performance computing, visualization, data management, distributed and grid computing, science gateways, and more.


Current and potential XSEDE users should review the XSEDE Training Course Listing and browse the current Course Calendar for a list of upcoming training courses at XSEDE sites. XSEDE also maintains a list of Online Training materials of relevance to XSEDE users. The list of online training materials will be expanded as new materials are developed; suggestions for additions can also be submitted via the feedback form.


See: https://www.xsede.org/training1


 


Parallel Processing Using Python: This class will cover the Python module mpi4py, which provides a Message Passing Interface (MPI) implementation and enables multi-node computation. In addition, the multiprocessing module, which provides general multi-process computation, will be covered. Both topics will be illustrated through hands-on examples. This course is intended for students who are proficient with the Python language and the Linux command line. In order to participate in the lab exercises, students' machines must have ssh/terminal capability, and students must have logged into TACC HPC systems. https://portal.xsede.org/web/xup/course-calendar
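
For a taste of the two approaches named in the course description, the sketch below (an illustrative example, not course material) shows a minimal mpi4py reduction across ranks followed by a multiprocessing pool on a single node. The file name demo.py and the choice of four processes are arbitrary, and the mpi4py part assumes an MPI library and the mpi4py package are installed.

    # Minimal sketch of mpi4py (multi-node) and multiprocessing (single-node).
    # Run the MPI part as, e.g.:  mpirun -n 4 python demo.py
    from multiprocessing import Pool


    def square(x):
        # Toy work unit for the multiprocessing example.
        return x * x


    if __name__ == "__main__":
        from mpi4py import MPI     # assumes an MPI library and mpi4py are installed

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()     # this process's ID (0 .. size-1)
        size = comm.Get_size()     # total number of MPI processes

        # Each rank contributes one value; the sum is collected on rank 0.
        total = comm.reduce(rank, op=MPI.SUM, root=0)
        if rank == 0:
            print(f"{size} MPI ranks, sum of ranks = {total}")

        # multiprocessing: parallelism across local CPU cores, from the
        # Python standard library, with no MPI required.
        if rank == 0:
            with Pool(processes=4) as pool:
                print(pool.map(square, range(8)))  # [0, 1, 4, ..., 49]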


High-performance computing software development workshop http://www.dehpc.org


Virtual High-Performance Computing Workshop (do this from the comfort of your laptop/couch!): https://cvw.cac.cornell.edu/


HPC in life sciences: http://www.cbi.utsa.edu/workshops


Extreme Science and Engineering Discovery Environment (XSEDE): https://www.xsede.org/using-xsede


XSEDE User Portal (XUP): https://portal.xsede.org


 

Working Group Activities: 


  • Organizing webinars on High Performance Computing topics

  • Conducting workshops (local, national, international) on High Performance Computing topics and publicizing them within the MSM consortium

  • Fostering and nurturing collaborations on research using High Performance Computing

----------------------------------------------------------------------------------------------------------------------


Join NVIDIA for a free online course featuring the OpenACC Toolkit


http://info.nvidianews.com/index.php/email/emailWebview?mkt_tok=3RkMMJWW...


----------------------------------------------------------------------------------------------------------------------


HPC Virtual Workshop: sharpen (or acquire) your high-performance computing skills from the convenience of your own laptop/couch at any time:


https://cvw.cac.cornell.edu/


----------------------------------------------------------------------------------------------------------------------


A Summary of the Working Group Break-Out Discussion from the 2017 MSM Meeting is attached:


File: HPC-WG-IMAG-2017-final.pptx


The 2017 break-out discussion was led by Prof. Portonovo Ayyaswamy.


A Summary of the Working Group Break-Out Discussion from the 2015 MSM Meeting is attached:


File: HPC-WG-IMAG-2015v2.pptx


We are still seeking your feedback on the following topics. Please send your responses by e-mail to the working group leads.


 


Topic: Computing Hardware Resources



  1. Some researchers already have access to what they need (computers, university clusters, storage space), but not everybody does. Do you need access to computing resources that are not currently available for your multi-scale modeling needs?

  2. Questions were raised at previous meetings about access to supercomputers. Would it be beneficial to coordinate access for the MSM Consortium to obtain group time on various hardware as needed, particularly resources at the DOE and NSF centers?

  3. GPUs and other non-standard computer hardware are being used to solve multi-scale problems. Would you be willing to share your experiences and code to benefit members who do not have the expertise or human resources to develop the code on their own?

  4. Would cloud computing benefit your investigations? Would you like to access a cloud environment such as Amazon EC2?

Topic: Computational Algorithms



  1. Do you see benefits in sharing tool-kits, code, and data associated with multi-scale modeling on HPC resources? Would you be willing to share your algorithms and code?

  2. Do you have a wish-list of optimized libraries for the HPC environment that would improve your productivity while using HPC resources? Could you specify which ones?

  3. If someone wants to seek advice on solving a computational problem, do you see the benefit of having a group/forum discussion? These could be simple questions, such as which compiler, hardware, or libraries to use, or something more complex.

Topic: Collaborate, define the needs of the community



  1. It has been discussed that 20% of all computing time on NSF supercomputers and 10% of the time on DOE supercomputers is used by NIH-funded PIs. However, each PI has to write separate (peer-reviewed) proposals, and only big users are typically able to get access. Would it be beneficial to coordinate access to supercomputers, especially for small to medium-sized or beginning users?

  2. Do you see the benefit of organizing focused workshops to discuss and exchange ideas related to HPC for multi-scale modeling, either through sessions at existing conferences and meetings or by organizing a new workshop?

New Topic: What other questions do you have, or what other topics would you suggest?


----------------------------------------------------------------------------------------------------------------------
