THEME 1- ODE Breakout - Human Safety

Session Lead: Krishna Garikipati

IMAG Moderators:  Jennifer Couch (NCI), Junping Wang (NSF)

 

Breakout Session Notes:

  • Introductions (name and interest):
    • Session 1:  Krishna Garikipati, Junping Wang, Simon Rose, Jennifer Couch, Jerry Myers, Mona Matar, Lauren McIntyre
    • Session 2: includes Session 1 participants plus Suvranu De, Jorg Peters, Rachel Slayton
  • Build on current state of the art ODE Models
    • Reliability: mathematical accuracy is reasonably well understood, but for human safety we also need to quantify uncertainty. How does uncertainty in the inputs propagate to uncertainty in the model? In classical ODEs we understand this propagation reasonably well.
    • How well do we understand the starting uncertainty (uncertainty in the priors): its distribution, and our confidence in the value of a particular parameter?
    • How does the uncertainty in the model translate into human safety?  (e.g., does the patient live or die?)
    • Uncertainty propagates through the model inputs/outputs to the layers of the model: if we know the uncertainty of the input values, we can propagate it through, model to model (the output of model 1 is the input of model 2, etc.), and understand how uncertainty propagates through the cascade (see Sketch 1 after these notes).
    • model credibility = uncertainty propagation and sensitivity; model "validation" often ignores uncertainty, particularly when validating outside the original regime
    • Sensitivity: which parameters, when perturbed, make the difference?  (See Sketch 2 after these notes.)
    • How expensive is sensitivity analysis for ODEs?  Most of the expense is in the uncertainty analysis.
    • Sensitivity through the cascade?  Monte Carlo (generally cheap)... eFAST (uncertainty represented by a frequency band): very expensive but best.
    • Can take a design-of-experiments approach to focus on an area of interest; can also be fully Bayesian.
  • Build on current state of the art models for Human Safety
    • NASA has an ODE model that propagates uncertainty through to human safety, but hospitals and large centers do not do this with their models.
    • What about data/measurements that can't be made?  The model infers these things.  Validation must take this into account (that some data/info is inferred).
    • Most health models are not ODEs, they're empirical.
    • One method would be replacement with ML models; this doesn't really gain you much.
    • What about models across populations?  (vs model of individual, e.g. astronaut)
    • Inference methods (for the measurements we can't make directly):  
    • The number of parameters in human safety models may exceed the number of parameters in other systems (because they are systems of systems).  For ODE models, must we worry about all interactions?  ODEs give us structure, but an ODE for every pair, every interaction?  Is there a way to reduce those down?
    • For disease propagation, ODEs are well established, as is network modeling, e.g., predicting where a disease will spread (some uncertainty; uncertainty doesn't entirely propagate through the model).  Models are confounded by where we choose to put our limited resources (observations).
  • ML-MSM integration opportunities
    • Integrate via code platforms (those that work with current state-of-the-art models, e.g., ODEs).
    • How do we steer the ODE models with ML?
    • ODE models are already multi-scale; can we steer them using machine learning, using ML frameworks?  E.g., sensitivity analysis and system identification: can we wrap ML frameworks around those?  (See Sketch 3 after these notes.)
    • ML is used when we don't have an ODE (the physics or biology is not known, or we can't make the measurement needed to answer the question).
    • ML allows for a response function whose complexity we don't understand (ML can get us past this).
    • Can we build an ensemble of high-fidelity approaches, and how do we put them together to solve a large, complex system?  (ML can get us there.)
  • Challenges ML-MSM modelers should address
    • Trust in the ML models is always an issue
    • System identification: we may need to estimate a very large number of parameters.  (See Sketch 4 after these notes.)
    • Can we make a general platform for assessing the credibility of ML models (or of hybrid ODE-ML models)?
    • Can we make the ML models interpretable?  (Can we embed a layer of interpretability in the models?)
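
Sketch 1 (uncertainty propagation): a minimal, hypothetical example of the Monte Carlo propagation idea noted above. It samples assumed priors on the inputs of a toy two-compartment ODE, feeds the ODE output into a second downstream model, and summarizes the resulting distribution of a safety-relevant quantity. The ODE, priors, and dose-response map are illustrative placeholders, not any model discussed in the session.

```python
# Hypothetical Monte Carlo uncertainty propagation through a model cascade.
# All parameter names, priors, and the downstream risk map are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def model1_ode(t, y, k_clear, k_uptake):
    # Toy two-compartment kinetics: uptake into tissue, clearance from plasma.
    plasma, tissue = y
    return [-(k_clear + k_uptake) * plasma, k_uptake * plasma - 0.1 * tissue]

def model1(k_clear, k_uptake):
    sol = solve_ivp(model1_ode, (0.0, 24.0), [1.0, 0.0], args=(k_clear, k_uptake))
    return sol.y[1].max()  # peak tissue exposure becomes the input of model 2

def model2(peak_exposure, threshold):
    # Downstream dose-response surrogate mapping exposure to a risk score in [0, 1].
    return 1.0 / (1.0 + np.exp(-(peak_exposure - threshold) / 0.05))

# Assumed priors on the uncertain inputs (lognormal rates, normal threshold).
n = 5000
k_clear = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)
k_uptake = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)
threshold = rng.normal(0.6, 0.05, size=n)

risk = np.array([model2(model1(kc, ku), th)
                 for kc, ku, th in zip(k_clear, k_uptake, threshold)])
print(f"risk score: mean={risk.mean():.3f}, "
      f"95% interval=({np.percentile(risk, 2.5):.3f}, {np.percentile(risk, 97.5):.3f})")
```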
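
Sketch 2 (sensitivity): a minimal example of "which parameters, when perturbed, make the difference", using one-at-a-time perturbations and a normalized finite-difference sensitivity on the same kind of toy ODE. Global, variance-based methods (e.g., eFAST or Sobol indices) would be the heavier-weight alternative; everything here is a placeholder.

```python
# Hypothetical one-at-a-time sensitivity: perturb each parameter of a toy ODE
# and measure how much the quantity of interest (QoI) moves.
import numpy as np
from scipy.integrate import solve_ivp

def qoi(params):
    k_clear, k_uptake = params
    def rhs(t, y):
        plasma, tissue = y
        return [-(k_clear + k_uptake) * plasma, k_uptake * plasma - 0.1 * tissue]
    sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0])
    return sol.y[1].max()  # peak tissue exposure

nominal = np.array([0.2, 0.5])  # placeholder nominal parameter values
base = qoi(nominal)
for i, name in enumerate(["k_clear", "k_uptake"]):
    perturbed = nominal.copy()
    perturbed[i] *= 1.01  # 1% perturbation
    # Normalized sensitivity: relative change in QoI per relative change in parameter.
    sens = (qoi(perturbed) - base) / base / 0.01
    print(f"{name}: normalized sensitivity ~ {sens:+.2f}")
```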
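
Sketch 3 (ML wrapped around an ODE model): fit a Gaussian-process surrogate to a modest number of ODE solves, then query the cheap surrogate many times for uncertainty or sensitivity studies. The toy ODE, parameter ranges, and kernel choice are assumptions for illustration only.

```python
# Hypothetical ML surrogate wrapped around an ODE model: fit a Gaussian process
# to ~50 ODE solves, then query it cheaply many times.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_ode(k_clear, k_uptake):
    def rhs(t, y):
        plasma, tissue = y
        return [-(k_clear + k_uptake) * plasma, k_uptake * plasma - 0.1 * tissue]
    return solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0]).y[1].max()

# Train the surrogate on a modest number of ODE solves over assumed parameter ranges...
X_train = rng.uniform([0.05, 0.1], [0.5, 1.0], size=(50, 2))
y_train = np.array([expensive_ode(*x) for x in X_train])
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                     normalize_y=True).fit(X_train, y_train)

# ...then query it thousands of times at negligible cost (e.g., for Monte Carlo).
X_query = rng.uniform([0.05, 0.1], [0.5, 1.0], size=(10_000, 2))
pred, std = surrogate.predict(X_query, return_std=True)
print(f"surrogate prediction: mean={pred.mean():.3f}, max predictive std={std.max():.3f}")
```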
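
Sketch 4 (system identification): estimate two rate constants of a toy ODE from synthetic noisy observations by nonlinear least squares. Real human-safety models may have far more parameters, so identifiability analysis, regularization, or Bayesian inference would likely be needed; everything here is a placeholder.

```python
# Hypothetical system identification: estimate ODE rate constants from synthetic
# noisy measurements by nonlinear least squares. All values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 24.0, 13)       # observation times
true_params = np.array([0.2, 0.5])       # "unknown" rates used to generate data

def simulate(params):
    k_clear, k_uptake = params
    def rhs(t, y):
        plasma, tissue = y
        return [-(k_clear + k_uptake) * plasma, k_uptake * plasma - 0.1 * tissue]
    sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0], t_eval=t_obs)
    return sol.y[1]                       # tissue concentration at t_obs

rng = np.random.default_rng(2)
data = simulate(true_params) + rng.normal(0.0, 0.01, size=t_obs.size)

fit = least_squares(lambda p: simulate(p) - data, x0=[0.1, 0.1],
                    bounds=([1e-3, 1e-3], [2.0, 2.0]))
print("estimated parameters:", fit.x)     # should land near [0.2, 0.5]
```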

 
