Session Lead: Linda Petzold
IMAG Moderators: Steven Lee (DOE), Jerry Myers (NASA)
Breakout Session Notes:
- Introductions (name and interest):
- Linda Petzold
- Jerry Myers
- Amanda Minnich
- Mona Matar
- Lauren McIntyre
- Simon Rose
- Sanjay Purushotham
- Casey Hanley
- Parya Aghasafari
- Jessica Zhang
- Ashlee Ford Versypt
- Ahmet Erdemir
- Amy Gryshuk
- Julie Leonard-Duke
- Shantenu Jha
- Ken Wilkins
Potential Applications
* Recovery from instantaneous injuries (e.g., crashes), with an ML model serving as a surrogate for the human: the LSI model (short-term) and LS-DYNA finite element code/models.
* Psychiatric drug selection.
* Chiropractic care
* Spine models. Combine simulation & experiments to improve accuracy. How can the (simplified) model be improved?
* Personalize a model: predict which drug to prescribe and how long it needs to be taken.
* Traumatic brain injury (TBI)
* ...and many more
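The surrogate idea in the first bullet above - fit a cheap ML model to input/output pairs from an expensive simulation, then query the surrogate instead of rerunning the solver - can be sketched minimally as follows. The `expensive_simulation` function and its quadratic form are purely illustrative stand-ins, not an actual LS-DYNA or LSI model:

```python
import numpy as np

def expensive_simulation(impact_speed):
    # Stand-in for a costly finite-element run (hypothetical physics):
    # peak load grows nonlinearly with impact speed.
    return 0.5 * impact_speed**2 + 3.0 * impact_speed

# 1. Sample the simulator at a handful of design points.
speeds = np.linspace(1.0, 10.0, 8)
loads = np.array([expensive_simulation(s) for s in speeds])

# 2. Fit a cheap surrogate (here, a quadratic polynomial) to the samples.
coeffs = np.polyfit(speeds, loads, deg=2)
surrogate = np.poly1d(coeffs)

# 3. Query the surrogate at a new point instead of rerunning the solver.
peak_load = float(surrogate(5.5))
```

In practice the surrogate would be a richer model (Gaussian process, neural network) fit to many high-dimensional finite-element runs, but the train-then-query workflow is the same.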
Challenges
* Two sources of data: real data, which is noisy, and simulated data, whose quality is unknown. How do we incorporate physics into the digital/simulated data?
* How much patient data is needed? It depends on the model. What is the right balance? We need enough of the right kinds of data.
* Model design and uncertainty propagation. Integrated quantities - use the model for the quantity of interest. Example: Jerry's model of urine crystal formation. Representing the next level of physics (multi-scale).
* What machine learning architecture to use: how many layers, nodes per layer, what connectivity, etc.? It depends on what question you are asking. See Rule 1 - Context.
* What can we do better? Can multi-modal data be combined well?
* Does the data suit the purpose? How to fuse the data?
* Guard against using ML as a hammer when the problem calls for a screwdriver.
* Multi-scale processes - we cannot always measure what we need. We need guidance on how to interpret results and how to architect the framework.
* How to make models credible? Do we have the best data? How to validate? What are the uncertainties, and how are they propagated? How do we document the linkages? What level of confidence is there in the result?
* Probabilistic models (emphasizing uncertainty and sensitivity). What are the advantages of Bayesian methods? What to do with small data sets?
* Not modeling for the model's sake. We need a testable hypothesis and a check of the model's accuracy.
* Probabilistic Risk Assessment - not all models are testable; there are situations where we cannot test directly.
* Lots of potential for ML: what should be tested next? Can the model help decide that?
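One way to see what the Bayesian bullets above are asking for - honest uncertainty from a small data set - is a minimal grid-based posterior update. The beta-binomial setup here (7 patients, 5 responders, flat prior on the response rate) is a generic illustration, not tied to any model from the session:

```python
import numpy as np

# Hypothetical small data set: 7 patients, 5 responded to treatment.
n_trials, n_successes = 7, 5

# Grid over the unknown response rate theta.
theta = np.linspace(0.001, 0.999, 999)

# Flat prior and binomial likelihood (the conjugate beta-binomial case).
prior = np.ones_like(theta)
likelihood = theta**n_successes * (1 - theta)**(n_trials - n_successes)

# Posterior via Bayes' rule, normalized on the grid.
posterior = prior * likelihood
posterior /= posterior.sum()

# With so little data the posterior stays wide: report the mean
# and a 95% credible interval rather than a single point estimate.
mean = float(np.sum(theta * posterior))
cdf = np.cumsum(posterior)
lo = float(theta[np.searchsorted(cdf, 0.025)])
hi = float(theta[np.searchsorted(cdf, 0.975)])
```

The wide credible interval is the point: a Bayesian treatment does not make small data less small, but it makes the resulting uncertainty explicit so it can be propagated downstream.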
Obstacles
* Linking of model and data
* Hidden biases in the data
* Workflow: Data collection, Model updating, Does it converge?
* Our credibility
* Lack of expertise, background, and programming skills. Educating the workforce - is there a need for intensive, focused workshops?
* Risk and consequences of mistakes can be high. Who is liable?
* Too much specialization? Image processing vs. natural language processing, etc.; Matlab vs. Python vs. Julia.
* Confusion about the nature, scope, and limitations of AI/ML. Need to manage expectations and guard against over-confidence.
* Need to understand the application (domain knowledge).
* Education: Apprenticeship - learning on the job. Mentors.
* Need to have stakeholders in the discussion. What is the right level of interaction with end-users (user expectations; do they need specifics or have broader concerns)?