THEME 2 - PDE Breakout - Human Safety

Session Lead: Adrian Buganza Tepole

IMAG Moderators: Laurel Kuxhaus (NSF), Virginia Pasour (ARL)

 

Breakout Session Notes:

  • Introductions (name and interest):
    • Session 1:
    • Session 2:
  • Build on current state of the art PDE models
    • Transfer learning and domain adaptation for PDEs + ML 
    • For multiscale models, a key issue is coupling models across different scales:
      • How to propagate uncertainties across scales using ML? 
      • How far can we push the coupling of different scales, i.e., what are the limits on building a tower of models?
  • Build on current state of the art models for Human Safety
    • Predictability is an issue: we need to rely on available data and models because it is unethical to run experiments on humans
    • What about synthetic data? Combining “what if” scenarios with real data is desirable. Also, real data may be skewed
      • How to fuse real and synthetic data?
    • Predicting long-term outcomes from short-term responses raises issues with multiple time scales, not just spatial scales
    • Use ML for sensitivity analyses to determine which variables to measure and which simplified models may be used for decisions when there is limited time to act (see the surrogate-based sketch at the end of these notes)
    • Issues of data privacy
  • ML-MSM integration opportunities
    • Integrate the physics into the machine learning models: there is no perfect PDE or model to describe the biological process, but there might be some data and some known physics (see the physics-informed loss sketch at the end of these notes)
  • Challenges ML-MSM modelers should address
    • How to determine trust in the PDE? How to know that your PDE actually models your biological system? 
    • Other issues of trust and trustworthiness: list and quantify the different uncertainties, e.g., model uncertainty, parametric uncertainty, measurement uncertainty, etc.
      • Trust the data? Maybe also model the measurement process, although this may not always be possible due to the complexity of what is measured, especially in biological systems. Additionally, be aware that large variance does not mean noise. 
    • ML-MSM: reproducible research computing is important! It will enable novices (e.g., undergraduates) to work in this space and help bring on new graduate students; being able to see each other’s code and infrastructure builds trust in each other and in the work, which helps dissemination. (It doesn’t have to be a GUI, but a standardized language would be helpful.) What are best practices? What are benchmark problems?
    • Identify limitations: neural networks can be brittle; there are issues of explainability; correlation does not imply causation
    • Multiple sources of data: having multisource data does not mean the data are incorrect
    • Improve robustness: train with noise; adversarial networks (see the noise-injection sketch at the end of these notes).
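
A minimal sketch of the surrogate-based sensitivity analysis mentioned above, assuming a stand-in simulation and made-up input names (stiffness, load, strain_rate, temperature): a cheap surrogate is fit to input/output samples of the expensive model, and its feature importances give a rough ranking of which variables are worth measuring. The random-forest choice is one option among many.

```python
# Hypothetical sketch: rank the inputs of a (placeholder) simulation by
# fitting a cheap surrogate and reading off its feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def tissue_response(x):
    """Placeholder for an expensive PDE/multiscale simulation.
    Columns of x: [stiffness, load, strain_rate, temperature]."""
    return x[:, 0] * x[:, 1] + 0.1 * np.sin(x[:, 2]) + 0.01 * x[:, 3]

# Sample the input space and run the stand-in "simulation".
X = rng.uniform(0.0, 1.0, size=(500, 4))
y = tissue_response(X)

# Fit the surrogate and use its importances as a crude sensitivity ranking.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, score in zip(["stiffness", "load", "strain_rate", "temperature"],
                       surrogate.feature_importances_):
    print(f"{name:12s} {score:.3f}")
```

In practice, variance-based indices or other global sensitivity measures could replace the importance scores; the point is that a fast surrogate makes such screening affordable when decisions must be made quickly.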
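A minimal sketch of combining "some data and some known physics" in one training objective, in the spirit of physics-informed learning. The toy 1D equation u''(x) = f(x), the network size, the boundary conditions, and the equal loss weights are illustrative assumptions, not something discussed in the session.

```python
# Hypothetical sketch: fit a small network to a few noisy measurements while
# also penalizing the residual of an assumed PDE u''(x) = f(x) on [0, 1].
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pde_residual(x):
    # Residual u''(x) - f(x) with f(x) = -pi^2 sin(pi x), so the true u = sin(pi x).
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -(torch.pi ** 2) * torch.sin(torch.pi * x)
    return d2u - f

# A handful of noisy "measurements" plus collocation points for the physics term.
x_data = torch.rand(20, 1)
u_data = torch.sin(torch.pi * x_data) + 0.01 * torch.randn(20, 1)
x_col = torch.rand(200, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss_data = ((net(x_data) - u_data) ** 2).mean()            # fit the data
    loss_phys = (pde_residual(x_col) ** 2).mean()               # respect the assumed PDE
    loss_bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()   # u(0) = u(1) = 0
    (loss_data + loss_phys + loss_bc).backward()
    opt.step()
```

The relative weighting of the data, physics, and boundary terms is itself a modeling choice and ties back to the uncertainty discussion above.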
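A minimal sketch of the "train with noise" suggestion for robustness: Gaussian noise is injected into the inputs at every training step so the model cannot rely on exact feature values. The data, architecture, and noise level are placeholders; adversarial training would instead perturb inputs in the direction that most increases the loss.

```python
# Hypothetical sketch: noise-injection training on stand-in data.
import torch

torch.manual_seed(0)
X = torch.randn(256, 8)                                   # stand-in features
y = (X[:, :2].sum(dim=1, keepdim=True) > 0).float()       # stand-in labels

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))
loss_fn = torch.nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    X_noisy = X + 0.1 * torch.randn_like(X)               # perturb inputs each step
    loss = loss_fn(model(X_noisy), y)
    loss.backward()
    opt.step()
```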