Multiscale modeling for population studies

Relevant Articles: Personalizing medicine: a systems biology perspective


070808 Paolo Vicini:

Regarding the TRIPP concept, I think that the complexity of the question being asked (how to translate scientific data and conclusions into interventions) relates mostly to the fact that we do not know exactly how to perform the translation. Specifically, each patient or individual responds differently to a drug dose or any other kind of therapeutic intervention. Thus, the difficulty of linking basic scientific findings to practical policy suggestions lies in the clinical translation and the overarching issue of between-subject variability (as you know, an important issue for me).


Now, there are two ways to deal with variability. This is a bit of a generalization, but it may help clarify what I think is the issue. One approach mainly comes from engineering, where, as long as the true mechanism of the intervention is known and can be represented mathematically, variability can be reduced to a minimum. The other one comes from statistics, where variability is intrinsic to the system being studied and can be measured. This is the main reason why (bio)engineering models tend to be fantastically detailed (mechanism-driven) while statistical models are often so simplified (data-driven) as to be mechanistically irrelevant.


To me, the winning approach is… neither. The idea (new in some quarters) is that the precise definition of relevant mechanisms of health and disease (and thus the basis for translation in individuals and populations) has to come through the reduction of observed variability. This is not new for some mathematical modelers: after all, regression methods are nothing but an attempt to reduce observed variability (by postulating certain trends that, by "fitting the data", account for variation in the observations and partition it between deterministic and stochastic components). The new part is that scientists, basic or clinical, need to be aware that variability can be quantified, thus providing a natural approach for intelligent model selection and deployment (a good model is one that reduces unexplained variability).
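To make the regression point concrete, here is a minimal sketch (Python, synthetic data, purely illustrative) of how fitting a postulated trend partitions observed variability into a deterministic, explained part and a stochastic, unexplained residual:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=3.0, size=x.size)  # trend + noise

# Least-squares fit of the postulated trend y ~ a*x + b
a, b = np.polyfit(x, y, deg=1)
fitted = a * x + b

total_var = np.var(y)              # observed variability
residual_var = np.var(y - fitted)  # left unexplained by the trend
explained = 1.0 - residual_var / total_var  # the familiar R^2

print(f"fraction of variance explained (R^2): {explained:.2f}")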


When looking at phenomena this way, the purpose of modeling and simulation is the reduction of unexplained variability, through which putative mechanisms can be postulated and tested, and thus accepted or rejected on the basis of their relationship with prior knowledge and observation. So one should go from the statistical model to the mechanistic model; there is no preferred approach, rather one builds on the other.


As we also discussed some time ago, the development of mechanistic models from "first principles", on the other hand, involves little in the way of rigorous testing of added complexity, so that (as happens in so much of the toxicology modeling literature, for example) models tend to be unwieldy and too detailed to be of use with the data at hand.
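One way to make the testing of added complexity rigorous is to penalize extra parameters with an information criterion. The sketch below (synthetic data, assumed setup) compares nested polynomial fits by AIC; the more complex models win only if they reduce residual variance enough to pay for their extra parameters:

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
y = 1.5 * x + rng.normal(scale=0.5, size=x.size)  # the truth is linear

def aic(y, fitted, k):
    """Gaussian AIC (up to a constant): n*log(RSS/n) + 2k, for k parameters."""
    n = y.size
    rss = np.sum((y - fitted) ** 2)
    return n * np.log(rss / n) + 2 * k

for deg in (1, 2, 5):
    coef = np.polyfit(x, y, deg)
    print(f"degree {deg}: AIC = {aic(y, np.polyval(coef, x), deg + 1):.1f}")
# Lower AIC is better; the extra terms of the higher-degree fits are
# penalized unless they genuinely reduce unexplained variability.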


The interesting part of the TRIPP concept was the attempt to formalize the utility of these models in the decision-making process. This is already happening in some quarters. For example, clinical trial simulation has now been integrated as a decision support tool in drug development. It only makes sense that decisions are made on the basis of quantitative models. Where most modelers would object is that "no model is right". This is obviously true, but something not widely appreciated by the (bio)engineering community is that a natural way to deal with models' lack of detail is through the incorporation of uncertain knowledge. The definition of a precise, rigorous way to do this is where the statisticians' and engineers' worlds come together.
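As a generic illustration of incorporating uncertain knowledge into a model-based prediction (a sketch with assumed numbers, not the TRIPP methodology): rather than reporting a single point estimate, one can sample plausible parameter values and report the spread of the predicted outcome.

import numpy as np

rng = np.random.default_rng(2)

def drug_conc(t, dose, k_elim):
    """One-compartment first-order elimination: C(t) = dose * exp(-k_elim * t)."""
    return dose * np.exp(-k_elim * t)

# Uncertain elimination rate: point estimate 0.3/h, roughly 20% CV (assumed).
k_samples = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=10_000)
conc_at_8h = drug_conc(8.0, dose=100.0, k_elim=k_samples)

lo, med, hi = np.percentile(conc_at_8h, [5, 50, 95])
print(f"concentration at 8 h: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")

This is the kernel of a clinical trial simulation: the decision rests on the whole predictive distribution, not on a single "right" model output.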


In short, there is a need for programs like TRIPP, but in my opinion there are some barriers to overcome before tools like the ones proposed by this conceptual program gain universal acceptance.


030308 Russ Altman:

I find these two paragraphs to be very insightful. Simbios is generally creating physics-based models that can explain mechanisms; however, we are also creating some "knowledge-based" models that are more phenomenological or "inductive" (like the RNA coarse-grain model). The problem, of course, is that inductive models can be much less expensive and have very good performance, at the expense of mechanistic understanding. There is a role for both, but I agree with the author (Marco) that it is really critical to be clear about which one you are trying to do. Our use of knowledge-based "pseudo-energies" is particularly misleading because we use a physical paradigm, but insert forces that are induced from data and not from first principles.
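For readers unfamiliar with the idea, here is a rough sketch of such a knowledge-based "pseudo-energy" obtained by Boltzmann inversion of observed frequencies (the numbers are invented for illustration and are not Simbios parameters):

import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Frequencies of some structural feature (e.g., a base-pair geometry bin)
# in a database of known structures, versus a uniform reference (assumed values).
p_observed = np.array([0.50, 0.30, 0.15, 0.05])
p_reference = np.array([0.25, 0.25, 0.25, 0.25])

pseudo_energy = -kT * np.log(p_observed / p_reference)
print(pseudo_energy)  # over-represented states get low "energy"

The result wears the clothing of physics (units of energy, a Boltzmann form) but is induced from data, which is exactly the source of the confusion noted above.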


021408 Grace Peng:

I think your response brings up very important and fundamental points that differentiate the types of modeling being done at different biological scales. The idea of this initiative is to interface the mechanistic (deductive) and the phenomenological (inductive) models, and that will indeed be very challenging. Will the deductive models truly be able to logically inform the inductive models? How can we make that happen?


021108 Rod Smallwood:

Your observation is very well made, and more elegantly than I would make it - I refer to phenomenological and mechanistic models to make exactly the same point.


021108 Marco Viceconti:

I read this draft of the TRIPP roadmap with great interest. I found the general attitude of the stakeholders behind it very positive and in line with the perspectives that are emerging in the context of integrative research.

However, I have a comment of a general nature. Since I am not deeply familiar with the US system, I leave it to you to decide if and how to relay it to the TRIPP officers.

The document refers generically to models. Now this could be dangerous. Indeed, there are two types of models, quite different in nature. If I make some experimental observations and plot the results on a diagram, and the trend is sufficiently regular, I can consider making a mathematical regression of the experimental data, and eventually use it to extrapolate (predict) another observation. While this looks like a model, and can be used in some cases like a model, it is not a true model. Or better, it is an "inductive" model. Quite different is when, from the interpolation, we derive a physics-based explanation of why the observations occur with that pattern, and make a "deductive" model.

An inductive model might correctly predict a future observation, but it will never tell you why that observation occurs. A deductive model is epistemologically superior, because it also contains an explanation of why these observations occur. Inductive models can be very useful and powerful, but they can also be very dangerous, especially when they are confused with their superior cousins and used to make fake deductions.
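A small, entirely synthetic illustration of that danger: an inductive polynomial and a deductive first-order decay model can fit the same observations comparably well, yet extrapolate very differently beyond the data.

import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 2, 20)
y = 10.0 * np.exp(-1.2 * t) + rng.normal(scale=0.1, size=t.size)

# Inductive: a quadratic regression, with no mechanism implied.
poly = np.polyfit(t, y, deg=2)

# Deductive: y = y0 * exp(-k*t), the mechanism we believe generates the data,
# fitted by linear regression on log(y) (valid here since y > 0).
slope, intercept = np.polyfit(t, np.log(y), deg=1)
y0, k = np.exp(intercept), -slope

t_new = 5.0  # well beyond the observed range
print(f"inductive prediction at t = {t_new}: {np.polyval(poly, t_new):.2f}")
print(f"deductive prediction at t = {t_new}: {y0 * np.exp(-k * t_new):.2f}")
# The quadratic eventually bends away from the data with no physical meaning;
# the decay model keeps extrapolating sensibly because it encodes the "why".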

I admit this is a can of worms, and there are probably better qualified persons in the IMAG Friends group to discuss this point. Still, I think we should try to reach some consensus on this point and then formulate it, so as to make sure that a deductive, physics-based, multi-scale model is not confused with some sort of statistical regression of experimental observations. We can use both, but we should recognise them as different, and the former as superior to the latter.
