Panel 4 -- How do we incorporate verification, validation and uncertainty quantification in MSM for precision healthcare?


Panel Members:

  1. Tony Hunt
  2. Wing Kam Liu
  3. Lealem Mulugeta (scribe)
  4. Dalin Tang 

 

Discussion

The following is from Ch. 1 of the NRC 2012 document: Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification.

For purposes of this report the committee adopts the following definitions:

  • Verification. The process of determining how accurately a computer program (“code”) correctly solves the equations of the mathematical model. This includes code verification (determining whether the code correctly implements the intended algorithms) and solution verification (determining the accuracy with which the algorithms solve the mathematical model’s equations for specified quantities of interest).
  • Validation. The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model (taken from AIAA, 1998).
  • Uncertainty quantification (UQ). The process of quantifying uncertainties associated with model calculations of true, physical quantities of interest, with the goals of accounting for all sources of uncertainty and quantifying the contributions of specific sources to the overall uncertainty.

These definitions are intended for the “Grand Unified Model” concept illustrated in Fig. 1.1 of the report.

VV & uncertainty characterizations (U) must be discussed in the context of specified model uses. The meeting focus is MSM and Precision Medicine. In the attached document (Visions for How MSM Enables Individualized/Precision Medicine/Healthcare) I offer two model-use visions. Both visions center on use of “virtual patients” (VPs) that are composites of MSMs. My comments below are based on VP use cases described under Vision 2.

My position: the preceding definitions are not useful for the future context of using VPs to enable and facilitate precision (individualized) healthcare decisions under Vision 2. I envision VPs being a necessary and essential part of the Precision Medicine process. In that context, validation is the process that answers this question: is this an acceptable/trustable VP for the individualized use cases under consideration? Verification will be the process and information that enables answering this question: can we trust the origins, organization, and evolution of this VP, and further, can we trust that this VP is what the user thinks it is (which is beyond simply trusting that it is what it claims to be)?

Clearly software issues are critical (especially the mappings from software components & mechanisms to human counterparts in the specified individual). However, the origins, organization, and evolution of the components assembled to create a VP (or a set of VP variants) will go far beyond the software. We must consider the conceptual models on which the VP (& components) was based. Are there several? Are they stable or evolving? To what extent have they survived rounds of experimental challenge (wet-lab, clinical, translational, & in silico)? We must also consider the wet-lab/clinical data on which conceptual models were based (and may have been used in earlier VP simulation validations). How do we characterize the experimental, mechanistic, and translational uncertainties? What can we say about sources of biological and experimental variability?

For a specified medical context, each layer (wet-lab, clinical, conceptual, translation, specification, VP, use cases) is important. Because they are interconnected, if any one layer is untrustworthy, then the trustworthiness of each is in question. Uncertainty characterization applies in all layers: U(wet-lab), U(clinical), U(conceptual), U(trans), U(spec), U(VP), and U(use cases [different individuals]). For example, if we learn that there was an experimental design/procedure flaw (an oversight, an unjustified assumption, …) in the wet-lab models from which the conceptual mechanisms (for the particular disease focus) were induced, then U(VP) is at least as complex as U(wet-lab).

Having acceptable/trustable VP components means that a biomedical domain expert must be able to examine a VP (or a cadre of VPs) and their simulation events (for the medical intervention under consideration) under different scenarios without needing computational expertise. To do that, the issues, information, and documentation on which each of the above Us is based need to be accessible from within the VP and its framework. If that is the case, then the VP also serves a larger purpose: it is a knowledge embodiment.
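To make this concrete, below is a minimal sketch (in Python, with hypothetical record and field names; it is not a proposed standard or an existing framework) of how per-layer provenance and uncertainty notes might travel with a VP so that a domain expert can inspect them:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record types; the names are illustrative, not a proposed standard.

@dataclass
class LayerRecord:
    """Documentation and uncertainty notes for one layer (wet-lab, clinical, ...)."""
    layer: str                    # e.g. "wet-lab", "clinical", "conceptual"
    sources: List[str]            # citations, datasets, protocols
    uncertainty_notes: List[str]  # known flaws, assumptions, variability
    trusted: bool = True          # flagged False if a flaw is later discovered

@dataclass
class VirtualPatient:
    """A VP bundles its components with the provenance of every layer."""
    identifier: str
    layers: List[LayerRecord] = field(default_factory=list)

    def trustworthy(self) -> bool:
        # If any one layer is untrustworthy, the VP as a whole is in question.
        return all(layer.trusted for layer in self.layers)

    def open_questions(self) -> List[str]:
        # Collect the notes a domain-expert reviewer would need to examine.
        return [f"{l.layer}: {note}" for l in self.layers for note in l.uncertainty_notes]

# Example: a flaw discovered later in one layer puts the whole VP in question.
vp = VirtualPatient("VP-001", [
    LayerRecord("wet-lab", ["assay protocol X"], ["small sample size"]),
    LayerRecord("clinical", ["cohort Y"], []),
])
vp.layers[0].trusted = False   # an experimental design flaw is found in the wet-lab layer
print(vp.trustworthy())        # False
```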


MSM Meeting Panel Discussion Materials

Panel 4 discussion agenda - Final: Media:MSM_2012_-_Panel_4_agenda_Final.doc

VV&UQ Multiscale IVP (Liu): Media:Final_VV&UQ_Multiscale_IVP_Suv-Panel_Oct_22-23_2012.ppt

NASA's DAP M&S Credibility Assessment Process (Mulugeta): File:NASA's DAP M&S Credibility Assessment Process- MSM Meeting Wiki.pdf

(Hunt): Media:Fig1_Panel_4.jpg


CHALLENGES FOR ENGINEERS IN BIOMEDICAL AND CLINICAL SCIENCES: File:Final LetterheadASME NEMB DC Workshop 2012 White Paper 080712.pdf

This figure illustrates multiscale, multilevel influences in cancer prevention and control. It can also serve to illustrate features of the envisioned individualized virtual patients (IVPs) that will enable Precision (individualized) Medicine: Media:Multiscale,_multilevel_influences_17Oct12.pdf

Figure only Media:Fig1_Panel_4.jpg

NASA-STD-7009: https://standards.nasa.gov/documents/detail/3315599

Visions for how MSM enables individualized/precision medicine/healthcare: Media:CommentsPrecisionMed12Oct12.doc

VV & UQ ideas for Panel 4 (& MSM meeting) discussion: Media:VV_&_UQ_discussion_points_10Oct12.doc

Short document that includes MSM-focused commentary on the six conclusions and six recommendations of the “Toward Precision Medicine” document: Media:CommentsPrecisionMed10Oct12.doc

This sketch and legend are for use in support of Panel 4 discussions: Media:VV&U_DiscSupportFig17Oct12.doc

MSM 2012 - Panel 4 agenda Revision A: Media:MSM_2012_-_Panel_4_agenda_Revision_A.doc


Notes from the Post-panel Breakout Discussion

Attendees:

Pras Pathmanathan: prasanna.Pathmanathan@fda.hhs.gov

Wing Liu: w-liu@northwestern.edu

Thomas Russell: trussell@nsf.gov

Lealem Mulugeta: Mulugeta@dsls.usra.edu or lealem.mulugeta@nasa.gov

Jacob Barhak: jacob.barhak@gmail.com


Notes:

It is clear that we need to first establish definitions of key terms and concepts in order to be able to establish systems and methods for vetting computational models developed by the MSM community. One concept that was highlighted during the breakout discussion is:

What does “precision medicine” mean?

  • From a clinician’s point of view: a diagnostic or procedure that can help the patient
  • From a scientist’s perspective: this is based on the complexity of the problem and the data available
  • Although the above two definitions are a good attempt at defining what “precision medicine” means to clinicians and scientists, we need to engage clinicians and the MSM research community to establish a clear definition of “precision medicine”.
  • In addition, although “precision medicine” was the theme of the MSM Consortium Meeting this year, are all the models developed by the MSM community for precision medicine? Is it possible that some of these models are intended for broader clinical investigations? If the latter case is true, then it may be more appropriate to not restrict ourselves to the concept of “precision medicine”.

How should we define “below the skin” models? – Wing Liu will draft a proposed definition

How should we define “above the skin” models? – Lealem Mulugeta will draft a proposed definition

Regardless of the type of model that is under consideration (e.g. above or below the skin), the processes involved in understanding the behavior of a model and ensuring it is clinically believable will likely consist of some common elements/factors. Some of the elements highlighted by the meeting participants include:

  • Verification
  • Validation
  • Uncertainty Quantification
  • Uncertainty Propagation
  • Quality Control – Documentation, traceability of the origins of the model, version control, etc.

There may be more elements/factors, but it is important to establish clear definitions of the elements/factors as they relate to the MSM community’s work. During the panel discussion and the breakout session, it was apparent that different researchers and practitioners have different definitions of what each of the terms means. The differences in definitions are likely due to the researchers’ or practitioners’ specific fields and technical focus within those fields.

In addition, the methods and challenges of implementing each element/factor may differ depending on the complexity or type of models. In some cases it may be appropriate to vet models using benchmark problems (e.g. comparison with a calibrated model) in combination with some or all of the elements/factors listed.
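As one illustration of the benchmark idea (a sketch only, with a made-up tolerance and data; in practice the acceptance criterion would come from the intended use of the model), comparing a model's outputs against a calibrated reference might look like:

```python
import numpy as np

def passes_benchmark(model_output, reference_output, rel_tol=0.05):
    """Return True if the model matches a calibrated benchmark within rel_tol.

    model_output, reference_output: arrays of the quantity of interest.
    rel_tol: illustrative acceptance tolerance (5% here); a real threshold
             must be justified by the model's intended use.
    """
    model_output = np.asarray(model_output, dtype=float)
    reference_output = np.asarray(reference_output, dtype=float)
    rel_error = np.abs(model_output - reference_output) / np.maximum(np.abs(reference_output), 1e-12)
    return bool(np.all(rel_error <= rel_tol))

# Example: predicted values vs. a calibrated model's values for the same scenario.
print(passes_benchmark([1.02, 0.98, 1.10], [1.00, 1.00, 1.08]))  # True at 5% tolerance
```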

This work will be carried forward over the following months. This will likely be done through a working group or committee dedicated to advocating and defining methods the MSM community can leverage to build the credibility of their models and simulations to inform clinical events and interventions.

As we continue this discussion forward, we encourage anyone from the MSM community to contribute. If you are interested in participating as part of the working group/committee which will be formed in the near future, please let us know either by posting to this wiki page or emailing Lealem Mulugeta (see above).


4-Dec-2012 Comment by Jacob Barhak 

To improve model credibility, it is important to admit mistakes as fast as possible and investigate them quickly. This is fundamental for long term credibility. If people know that mistakes are being caught and dealt with, then the system is considered maintained and therefore more credible than a system that is not maintained. And beyond the psychological aspect, the constant improvement with each version makes the system more reliable. 

Therefore it is recommended that models maintain a bug list / errata / wanted features list.

 

12-Dec-2012 Comment by Jacob Barhak 

The literature provides several guidelines for modeling that discuss uncertainty. [1] provides general guidelines for disease modeling while [2] provides more specific guidelines for diabetes modeling. 

Bibliography

[1] Weinstein MC, O’Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices – Modeling Studies. Value Health 2003;6:9–17.

[2] American Diabetes Association Consensus Panel. Guidelines for computer modeling of diabetes and its complications (Consensus Statement). Diabetes Care 2004;27:2262–5.

MSM Meeting Attendee Questions and Comments

We welcome the attendees to contribute their thoughts, questions, and/or ideas below to help us consolidate notes for a possible breakout discussion.

Suvranu De: Is there a good example of uncertainty quantification and propagation (in the context of MSM) actually leading to better clinical outcomes, or to understanding of a disease state that would not have been possible otherwise?

Question from Louis Gross: I find it surprising how little emphasis there is on defining evaluation criteria for models prior to constructing the model. Such criteria need to be guided by the purposes for which the model is being constructed. How do we more effectively encourage our own modeling community to carefully specify objectives and evaluation criteria prior to model development?

Question from Jacob Barhak: Can you elaborate on Model versioning? Are older version results compared to results from newer model versions? Are there multiple versions used in parallel? How much does it complicate the modeling work?

Question from Louis Gross: Have there been any examples in the biomedical community of what the environmental community calls "adaptive management", in which the data collection procedures are specifically developed to reduce model uncertainty in an active manner? That is, specifically choosing the "control" to force the system into regimes that would most effectively reduce uncertainty?
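One simple instance of such "active" data collection, sketched below with a hypothetical toy model and made-up numbers, is to measure next wherever an ensemble of currently plausible model calibrations disagrees the most:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: several plausible calibrations of the same model survive the existing data.
def make_member(slope, curvature):
    return lambda x: slope * x + curvature * x**2

ensemble = [make_member(s, c) for s, c in rng.normal([1.0, 0.2], [0.2, 0.1], size=(20, 2))]

candidate_controls = np.linspace(0.0, 10.0, 101)   # control settings we could measure next

# Predict with every ensemble member at every candidate setting.
predictions = np.array([[m(x) for x in candidate_controls] for m in ensemble])

# "Adaptive" choice: measure where the ensemble disagrees the most,
# i.e. where a new observation would discriminate between members best.
spread = predictions.std(axis=0)
best = candidate_controls[np.argmax(spread)]
print(f"next measurement at control setting x = {best:.1f}")
```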

Raj Vadigepalli: Maybe changing the terminology to "Model/Response Distribution Quantification" instead of "Model/Response Uncertainty Quantification" would help in framing the scope and conceptual approach to devising the strategy/guidelines?

Madhav Marathe: -- It is important to go beyond predictive validity. -- The notion of active learning (what Louis Gross has referred to as adaptive management) can improve model quality. -- Robustness of predictions is important.

Key Themes/Topics Raised by the MSM Community

The required level of validation needs to be tied to the level of decision the model will be used for (e.g. the criticality of the decision on the health/life of the patient)

It is important to get into a culture where we start by first writing out what our criteria for evaluating our models are prior to doing any math or any aspects of code development.

It is important to go beyond predictive validity, and adopt the notion of active learning to improve model quality.

It is important to capture the robustness of model predictions.

What are the key considerations we need to think about when we are developing models that are intended for research vs clinical purposes?

Does the community need to agree on definitions of specific terms? If so, which terms? - Two highlighted by the audience include: Validation and uncertainty quantification.

Do modelers know what data they need for validating their models? If not, why not and what can we do to improve this?

What about taking the approach of model “accreditation” instead of V&V?

Is there a concrete example that shows that quantifying uncertainty and uncertainty propagation has improved the outcome of a decision or helped advance the development of treatments?

How can we encourage theoretical mathematicians to be more willing to work with modelers to help with V&V and UQ of biomedical models?

We are planning to have a focused breakout meeting on day 2 to address the above items as input towards "good practice" guidelines that will be proposed at a later date (3:30-4:30 in the cafeteria).

19 October 2012 10:57(EDT) Tang, Dalin [dtang@WPI.EDU]

I am closer to the beginning steps of verification and validation. For most of us, these are what we do in grant applications and research activities in the conventional sense. Those VV concepts are not trivial, at least to me:

1. Verification of a model (a narrowly defined math model) is to see if the computational procedures did find the solution of the model, which may or may not even be a good representation of the physical problem we are modeling;

2. Validation has several layers:

2.1 Validation of the model: is the model a good representation of the physical problem? It is worth noting that most biological models have to omit many aspects and factors. They need to be validated.

2.2 Validation of model predictions of the variables they are calculating;

2.3 Clinical validation: when we link computational outputs such as flow shear stress and structural stress to clinical events (quantified) and to treatment recommendations (again, quantified), we want to validate those linkages and recommendations (currently with patient or animal studies).

We need vision and plans as long-term goals. However, the long-term goals are closely linked to current research and practice, which in turn serve as beginning steps toward and support for long-term goals, such as the individualized virtual patient (IVP).

User:SauroH: Validation is an ill-defined word associated with modeling in biology; there is no measure of what the 'validatedness' of a model is, and, as someone said in the audience, validation means something completely different to the general public. We should avoid using it.

18 October 2012 20:24 (EDT) Hunt, Tony [a.hunt@ucsf.edu]

Re: Wing’s 18 October 2012 6:05am post:

From my perspective, the above(below)-the-skin illustration is the same type found in many multi-scale ecology modeling papers & textbooks. In that context, good practices are increasingly well established (see Media:EcolMod202.385'07.pdf). Within such nested, multi-models, it is not uncommon to use three model types, and within each type, uncertainty (and credibility) may be addressed differently. It is further complicated by interface uncertainty issues that are different depending on the direction of communication between modules (e.g., a module that uses continuous math and one that is agent-based). In that context "uncertainty quantification" is not a useful phrase (it is useful when talking about the continuous math model, but not the agent-based model). Note that in the linked paper, experimental frames are used as a means to help address and control (and document) uncertainty issues. Note also that when the use case changes enough to alter any one experimental frame, then (good practice) all uncertainties are revisited.

Analogous to the above(below)-the-skin illustration, as we move toward IVPs using modeling methods similar to those used in ecology, we can expect a large set of issues that influence credibility. Classical VV & UQ, as applied to absolutely grounded, equation-based models, will be members of the set. So, I disagree with Wing’s statement that “VV & UQ ideas explains how to apply VV&UQ ideas to virtual patients (VPs).”

Given that, it may be wise to revise and rephrase the panel’s question.

18 October 2012 22:00 (CST) Liu, Wing[w-liu@northwestern.edu]

Tony: Talk to you all tomorrow, it will be an interesting discussion as we do not agree on UQ which has been supported by many researchers!

18 October 2012 6:05am (CST) Liu, Wing[w-liu@northwestern.edu]

To Tony, Lealem, Dalin, and all (I will also input this into the Wiki after sending the email).

I assume we are set for the conference call on Friday, October 19: 7:45 am PDT, which will be 9:45 am CST.

I have been reading some of the documents, and below is my preliminary assessment of the proposed agenda. I will continue to read and contribute.

Looking forward to the Friday Conference call.

PS: I agree with Dalin that the majority of the participants are still working on "under the skin" multiscale modeling and V&V and UQ, though I do not see much on UQ. I also know of someone working on UQ based on a data-driven approach, and I have a project with the Goodyear Tire and Rubber Company on materials design along this line. I am consulting with my colleagues on UQ based on a data-driven approach. Hence, I think we should not ignore this group of "scientists and engineers". I believe Tony has agreed to show the figure on the multiscale, multilevel slide (below skin and above skin); we could 1) have a very brief discussion on "below the skin", 2) spend some time on "above the skin" to educate the audience and explain to them why it is important, and 3) perhaps make one focus area how to link the two together, which will be a team effort, as not that many people work on large projects like this. I spoke with Lealem yesterday and I can see why NASA wishes to establish guidelines. We should educate the audience, though I am not sure if that is the only focus. I have been working with the American Society of Mechanical Engineers for the past five years on the so-called transfer of technology and then on setting Codes and Standards. It is interesting, but I am not sure whether this group of researchers will be interested in setting up Codes and Standards; most work on fundamental science and engineering and the transfer of technology. I also recall from talking to NIST that they wished to develop Codes and Standards and "guidelines" for nanomaterials, and it seemed to us at that time (10 years ago or so) that we did not understand enough to do the job. In our last ASME NanoEngineering for Medicine and Biology workshop in April 2012 (see attached), a number of researchers from the National Cancer Institute (NCI) commented that there are thousands of nano-materials for drug delivery, and each one of them takes at least 4 phases of trial before it might be approved!

Comments and suggestions on the agenda, along with some of my understanding:

Precision Medicine: Most of the dialogue is carried on around taxonomy, by defining a disease database (a knowledge network at the international level). It is suggested that the taxonomy should be both modular and dynamic, which ultimately stems from patient-specific MMSMs. MMSMs can be molecular, environmental, and phenotypic, and should be constructed based on already established clinical and laboratory data. Furthermore, it is said that MMSMs embedded in individualizable virtual patients (IVPs) that link to a particular morbidity/disease should undergo VV and be screened for falsification. We can suggest that, although this is important, we should also integrate UQ analysis during IVP investigations, so we can provide a prediction within a given confidence interval. The predictions and confidence intervals will be improved if either more data is supplied or more statistical sampling studies are done (e.g., Monte Carlo).
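As a deliberately simplified illustration of the Monte Carlo idea (the toy model, dose, and clearance distribution below are placeholders, not an actual IVP component), uncertain inputs can be sampled and propagated through a model to give a prediction with an interval:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_response(dose, clearance):
    """Placeholder model: steady-state exposure for a given dose and clearance."""
    return dose / clearance

# Illustrative input uncertainty: dose assumed fixed, clearance varies across individuals.
n_samples = 10_000
dose = 100.0                                                          # mg, assumed fixed
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_samples)  # L/h, assumed spread

exposure = toy_response(dose, clearance)

point_estimate = np.median(exposure)
lower, upper = np.percentile(exposure, [2.5, 97.5])   # 95% interval from the samples
print(f"exposure ~ {point_estimate:.1f} (95% interval {lower:.1f}-{upper:.1f})")
```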

VV & UQ ideas: This document explains how to apply VV&UQ ideas to virtual patients (VPs). I would make it more consistent with the earlier document, and I would change VPs to IVPs, especially when referring to Precision Medicine. One important remark that was raised is the failure of closed-form mathematical models in therapeutic interventions, since response to a treatment may be uncorrelated between individuals. Hence, at the IVP scale, VV&UQ seems somewhat arbitrary when applied to equation-free methods. By the way, it is unclear if the so-called equation-free approach will work. Working on both “above skin” and “under skin” seems overly ambitious to me, as you will deal with both physics-based and behavioral models. I hope in the workshop we will have a wide range of attendees who can contribute to the discussions in both domains. I suggest that we have one panelist for each submodel of the “big elephant”, and correspondingly have subgroups in the breakout session that focus on each submodel.

VV&UQ is more applicable to the “under the skin” models, where the model is physics-based and experimental data can be collected (this is the domain that I am familiar with). I do not think the VV&UQ principles in the literature (that I am familiar with) are very useful for validating the “above the skin” models, even though such models are needed. In the proposed agenda, should we allocate time for both the modeling and the experimental (data collection) aspects?

UQ is not mentioned in the agenda at all, and it should be a thrust that I can handle as a panelist. At the very least we should discuss what the sources of uncertainty are and how to reduce them, etc.

The language used in the current agenda under Part C is very vague to me. “Requirements” and “model credibility” do not sound scientific. I recommend one report to the participants: Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification http://www.nap.edu/catalog.php?record_id=13395.

17 October 2012 3:06 (CST) Mulugeta, Lealem [mulugeta@dsls.usra.edu]

I have made suggested changes and comments to the agenda Tony drafted. You can download it as a word document above (MSM 2012 - Panel 4 agenda Revision A).

I also think we need to have one more telecon to iron out final details. Wing mentioned that he can meet Friday morning. That works for me as well. Tony and Dalin, can you specify a time that works best for you? Once we have a time to meet I will email you all a telecon number we can use.

16 October 2012 22:05 (CST) Liu, Wing [w-liu@northwestern.edu]

All: We have very little time to cover a lot of ground in one hour. I suggest we come up with a few new initiatives and focus on those that we agree upon. I believe that we all agree that we need to expand the scope; however, I wish to see if we can come up with a more general class of MSM for precision healthcare.

Below is what I will post; let us see if we can expand from there. The topics are very general and important.

Topic: Importance of Uncertainty

USES
  • Uncertainty quantification (UQ): model uncertainty with interpretable, “samplable” models
  • Uncertainty propagation (UP): model the cause-effect relationship between input and output uncertainty for (1) physical experiments, (2) simulations
  • Validation: determine if a model is any good
  • Experimental design: choose experiments (physical, numerical) that minimize prediction uncertainty
  • Design under uncertainty: choose optimal design while considering design variable/parameter randomness

METHODS
  • Quadrature-based collocation (UP)
  • Polynomial chaos expansion (UP, UQ)
  • Principal component analysis (UQ)
  • Robust, reliability-based design
  • Bayesian model updating (UQ, prediction)
  • Any others?

APPLICATIONS: the above have been applied to:
  • Geophysics: reservoir engineering and fossil fuel identification
  • Infrastructure: naval and aerospace health monitoring
  • Materials science: microstructure randomness
  • Biology: patient-specific diagnostics, sick/healthy classification, probability of success
  • Finance: risk modeling, investment performance prediction, asset-price correlations
  • Energy: wind farm scheduling based on weather predictions
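As a minimal illustration of the Bayesian model updating listed above (a toy one-parameter forward model with synthetic data, not a recommended workflow), a grid-based posterior update looks like this:

```python
import numpy as np

# Toy example: update belief about a single model parameter theta from noisy data.
theta = np.linspace(0.0, 2.0, 401)           # candidate parameter values
d_theta = theta[1] - theta[0]
prior = np.ones_like(theta)                   # flat prior over the grid
prior /= prior.sum() * d_theta

def forward_model(t):
    return 3.0 * t                            # placeholder forward model

observations = np.array([2.9, 3.2, 3.1])      # synthetic measurements
noise_sd = 0.2

# Gaussian log-likelihood of the data at each candidate theta
log_like = np.zeros_like(theta)
for y in observations:
    log_like += -0.5 * ((y - forward_model(theta)) / noise_sd) ** 2

posterior = prior * np.exp(log_like - log_like.max())   # numerically stable, unnormalized
posterior /= posterior.sum() * d_theta                    # normalize on the grid

mean = (theta * posterior).sum() * d_theta
sd = np.sqrt(((theta - mean) ** 2 * posterior).sum() * d_theta)
print(f"posterior theta: {mean:.3f} +/- {sd:.3f}")
```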

16 October 2012 11:52(CST) Liu, Wing [w-liu@northwestern.edu]

All

One hour is a short time

I thought that we agreed to use the attached slide to introduce multiscale and multilevel influences in cancer prevention and control. This concept is new to much of the audience, including me, and I think the objective of Panel 4 is to expand the "scientific multiscale." Since there are a lot of people working on the lower part of the figure (multiscale biological influences under the skin), I think discussion of the upper part of the figure (multilevel socioecological influences "above the skin") will be more valuable to the general audience.

Finally, the merger of the two.

15 October 2012 21:50(CST) Liu, Wing [w-liu@northwestern.edu]

All

I am also fine with the proposed program, though I do not think we can complete it in one hour; it is just too much. However, I do not have a better plan. Let it be.

Thanks

 

15 October 2012 17:12(EDT) Hunt, Tony C. [a.hunt@ucsf.edu]

Precision medicine [healthcare]: refers to the tailoring of medical treatment to the individual characteristics of each patient.

MSM for precision healthcare: refers to 1) use of future computational models to enable tailoring of medical treatment options to the individual; and 2) the ongoing refinement of those future computational models, based on knowledge changes and changes in individual characteristics &/or influential environment changes, including outcome measures, to improve patient health.

Suggestion: moving forward we refer to such a future computational model as an Individualized Virtual Patient (IVP).

15 October 2012 12:59(EDT) Mulugeta, Lealem [mulugeta@dsls.usra.edu]

I think everyone has touched on very important areas regarding vetting models for precision healthcare. In order to ensure the confidence of healthcare providers in using computational models for therapeutics or clinical research, rigorous V&V and uncertainty quantification are imperative. I would also stress that we need to pay attention to model sensitivity and data pedigree. I know these might seem obvious to some of us, and we therefore may not pay much attention to them, but these two elements play a direct role in the stability/response of a given model to various perturbations/parameters, and in the overall fidelity of the model for a given application.

As some of you might already know, NASA uses a systematic process outlined in “NASA-STD-7009: Standard for Models and Simulations” to vet computational models that are intended for research and operations. This is a very rigorous and systematic process to assess the credibility of computational models. Although the standard was originally established for engineering systems, the NASA Human Research Program has adapted it further for biomedical models intended for research and clinical/operational applications. You can download the standard from: https://standards.nasa.gov/documents/detail/3315599 (also posted above).

This is not to advocate for NASA-STD-7009, but rather to say that I think it is important to view the vetting of models and simulation as a much larger process than just V&V and uncertainty quantification. From personal experience of working with researchers and clinicians to implement models and simulation to inform research and operational events, I’ve learned that it will take more than these three elements to help them gain confidence in models.

Anyhow, to summarize NASA standard 7009, we take the approach of “model credibility” which is assessed based on:

  1. Verification
  2. Validation
  3. Input pedigree (how good is the data used to develop the model)
  4. Uncertainty
  5. Results robustness (sensitivity of the model)
  6. Use history (heritage of the model for decision making for the intended use/application)
  7. People qualification (what is the expertise of the developers, users, and analysts with respect to the field of interest, e.g. modeling of cancer, cerebrovascular disease, etc.)
  8. M&S management (how well is the model development and implementation controlled?)

All of these elements or factors are evaluated on a scale of 0 to 4 based on fidelity or rigor, where 0 indicates insufficient evidence and 4 the highest fidelity possible (i.e. an exact or near-exact representation of the “real world system”). For example, a cancer model validated against rat experimental data would only reach a validation credibility of roughly 1-2, whereas a model validated against human data could reach a credibility of 3 to 4.

In conjunction with the scaled scoring system, the first five factors include technical/peer review process to ensure the modeling and simulation (M&S) approach used is appropriate. The technical review is also scored on a scaled system of 0 to 4. Logically, 0 indicates no technical/peer review was performed, and a score of 4 means independent external peer review of the model, including evaluation of the underlying code.

Prior to starting the credibility assessment process, however, we perform a thorough risk assessment of the model or simulation. This basically helps us determine, in the context of research or operational/clinical objectives, the impact the model will have on the human subject, patient, or, in our case, astronauts. This information, in conjunction with available data, is then used to set appropriate threshold scores for the eight factors and the technical review, so that models are appropriately vetted for research or operations for the conditions of interest. It is not always necessary, desirable, or possible to have models that are an accurate representation of the real world.
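For illustration only, the sketch below compares assessed factor scores against risk-derived thresholds; the factor names follow the summary above, but the scores, thresholds, and the pass/fail rule are made up and are not taken from NASA-STD-7009:

```python
# Illustrative sketch only: the factor list mirrors the summary above,
# but the scores and thresholds here are invented for demonstration.

FACTORS = [
    "verification", "validation", "input_pedigree", "uncertainty",
    "results_robustness", "use_history", "people_qualification", "ms_management",
]

def meets_thresholds(scores: dict, thresholds: dict) -> bool:
    """Each factor is scored 0-4; thresholds come from the use-case risk assessment."""
    return all(scores.get(f, 0) >= thresholds.get(f, 0) for f in FACTORS)

# Hypothetical assessment of a model intended for a low-criticality research use.
thresholds = {f: 2 for f in FACTORS}            # modest bar set by the risk assessment
scores = {"verification": 3, "validation": 2, "input_pedigree": 2, "uncertainty": 2,
          "results_robustness": 3, "use_history": 1, "people_qualification": 4,
          "ms_management": 3}

print(meets_thresholds(scores, thresholds))      # False: use_history is below its threshold
```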

This goes much deeper, but this gives you a glimpse of how we approach things when it comes to vetting biomedical models for research or operational applications. (NOTE: operations refers to space mission operations, not surgical procedures)

Lealem

Sun, Oct 14, 2012 19:39(CST) Liu, Wing [w-liu@northwestern.edu]

Tony

I did not read the complete report, but I went quickly through Ch. 1 of the NRC 2011 report: Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease.

I do not recall that you have sent us the NRC 2012 report. If you have it, please send it to us, as my team is eager to read it.

I do find the attached article.

In summary, the methods that we proposed are based on Bayesian statistics, and the programming of these methods needs work to make them useful.

There are many other approaches that we used in Engineering that can be applied to medicine and biology.

For example, we are looking into the modeling and simulation of tumor growth. The current practice is not good enough for prediction, though it is a good start. However, how good a “good” model is remains questionable at this time. To be fair, it is not realistic to come up with a predictive model at this time. In engineering, reliability analysis has been well established in the electronics industry since the 70's. The geometry and physics are a lot simpler, and there are enough samples (experiments) that prediction and precision are very well established, except for new materials. Bayesian statistics can be a good workhorse.

As for aerospace and mechanical systems, we have done a lot of work in this area for linear systems. I was one of the first to work on nonlinear systems, in work entitled "Random Field Finite Elements": Liu, W. K., Belytschko, T. and Mani, A. (1986), Random field finite elements. Int. J. Numer. Meth. Engng., 23: 1831–1845. doi: 10.1002/nme.1620231004. The work was initiated for the design of the space shuttle engines. The whole engine worked for only about 20 minutes (30 seconds up and 30 seconds coming down), for 20 uses. That was the reliability at that time, as they did not want any failure. The same principle applies to fighter planes, which are replaced every 6,000 hours! Note that various names (health monitoring, ..., and now "prognostics") have been coined. Not much of this work has been applied to small-scale systems, in which all failures are initiated at smaller scales.

We can borrow many of these engineering and mathematical approaches for biological systems, and that is what a few groups have been working on, though it is just the beginning, and we need to "design" both numerical and physical experiments based on our limited understanding.

Drug delivery is simpler than whole biological systems, though targeted drug delivery (into a specific class of cells) can be extremely complex (aims 2 and 3).

We should have an exciting conference call tomorrow, and look forward to the workshop.

 

Sun, Oct 14, 2012 11:04(CST) Liu, Wing [w-liu@northwestern.edu]

Dear Dalin, Tony, and All

Sorry that I did not engage in the email discussions, as I just came back from a trip last night.

My background is in modeling and simulations and computational materials design and devices. I also develop computer software for various applications.

Recently, I have been working on nanoparticle design for drug delivery under uncertainty.

I founded the American Society of Mechanical Engineers (ASME) NanoEngineering Council in 2008-2009. The first activity was the first Global NanoEngineering for Medicine and Biology (NEMB) conference, held in Houston in February 2010. (http://www.asmeconferences.org/nemb2010/)

It was followed by the ASME NEMB Workshop held in Washington in April 2012 (see the attached "Final_LetterheadASME_NEMB_DC_Workshop_2012_White_Paper"). Note that using V&V and Uncertainty Quantification in medicine and biology is new, and I was the only speaker on this subject, though V&V has been established for engineering applications for more than three decades, and I know of the AIAA, ASME, and United States Association for Computational Mechanics (USACM) activities (as well as those of the international counterparts) in V&V and UQ. There is little work in the area of UQ in medicine and biology. My presentation (2012_ASME_thematic.pdf) is also attached.

The second ASME NEMB workshop (or the first Venice NEMB) was held just last week in Venice, Italy (agenda attached). The participants were mostly European. We had more than 90 participants, with 18 lectures and 40 posters. The Venice workshop final program is attached. Again, I was the only speaker on V&V and UQ; a shortened version of my presentation focusing on V&V and UQ is also attached.

I look forward to the Monday (tomorrow) conference call.
