Draft Final Report -- uploaded August 4, 2010
IMAG Futures Meeting Feedback
- This page will be used to post all IMAG Futures Meeting written reports
- The Final report is compiled from the individual scale Chair reports, Discussion notes and Public Commentary below.
- Individual scale Chair reports are also informed from the slides linked from the speaker name on the IFM Agenda page
- Please post your edits here or email them to email@example.com
--updated 7/1/10 (Peng, Bolser, Germain, Plevritis, Sabourin, Gregurick, McCulloch, German)
Pathways and Networks Scale
- Meeting Notes (Grace Peng)
- If it is not already known, I wanted to make the group aware of the FDA's approval of an in silico model for the pre-clinical evaluation of automated insulin delivery algorithms for the treatment of diabetes. More information is available at http://www.jdrf.org/files/General_Files/Emerging_Technologies/ET_May08.pdf
- Brian Hipszer, Ph.D.
- Assistant Research Professor, Department of Anesthesiology, Artificial Pancreas Center at Thomas Jefferson University
- Much of the discussion I've seen so far centers on serious matters regarding "what varies, and what is fixed. Where are the sources of randomness?" These matters have great bearing on the generalizability of any conclusions, and it's difficult to make the discussion precise without detailed examples. Statistical sample reuse methods such as Bradley Efron's bootstrap technique can be adapted to specific situations, but the devil is always in the details. To put it into the context of discussion I've seen thus far, "simulation" is a fine notion so long as the simulation is faithful to the sources of randomness in the populations to which any found conclusions apply. Issues involve specification of parameters, conditional distributions of features given parameters, and so on. These get to issues of stratification, instrumental variables, conditional distributions, and what influences what through what mechanism(s).
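The bootstrap idea mentioned above can be sketched concretely. This is a minimal illustration only: the statistic (a sample mean), the sample values, and the resample count are all made-up choices for demonstration, not anything from the discussion.

```python
import random

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Draws n_resamples resamples of the data with replacement, recomputes
    the statistic on each, and reads the interval off the empirical quantiles.
    """
    rng = random.Random(seed)
    n = len(data)
    replicates = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = replicates[int((alpha / 2) * n_resamples)]
    hi = replicates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Made-up sample values, purely for illustration.
sample = [2.1, 3.4, 2.9, 5.0, 4.2, 3.7, 2.8, 4.9, 3.1, 3.6]
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(sample, mean)
```

The point of the comment stands, though: the resampling scheme has to mirror the actual sources of randomness in the population, and the devil is in exactly those details.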
As one example involving a combination of technologies, I point to joint efforts with Sangho Yoon (now at Google), Tom Quertermous (Chief of Research in the Division of Cardiology at Stanford), and others (now including my young Stanford statistical colleague Bala Rajaratnam). We try to define "insulin resistance" in a group of Chinese women, and to predict it from SNPs in candidate genes and functions of other simple features (age and BMI). Insulin resistance leads pretty obviously to type 2 diabetes, and almost as obviously to hypertension. Although it's not all that difficult to tease apart two groups, one "insulin resistant" and one not, it's extremely difficult to predict to which group an individual belongs. It seems that models with only "main effects" are not useful, but a model with them and "interactions" is. Further, with a gold standard "clamp measurement," women who are naturally hypoglycemic can appear to be insulin resistant, even though they are the farthest thing in the world from being so.
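The contrast between main-effects-only models and models with interactions can be made concrete with a toy example. The data below are synthetic and deliberately extreme (the label depends only on the product of two features), and the fitting procedure is a plain logistic regression trained by stochastic gradient descent; none of this is the actual analysis described above.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Logistic regression by stochastic gradient descent; returns [bias, w1, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            z = max(-30.0, min(30.0, z))      # clamp to keep exp() in range
            p = 1.0 / (1.0 + math.exp(-z))    # predicted probability of class 1
            g = p - yi                        # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def accuracy(w, X, y):
    hits = 0
    for xi, yi in zip(X, y):
        z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        hits += (z > 0) == (yi == 1)
    return hits / len(y)

# Synthetic XOR-like data: the label depends only on the *product* of the two
# features, so no main-effects-only linear model can predict it.
rng = random.Random(1)
X = [[rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])] for _ in range(200)]
y = [1 if x1 * x2 > 0 else 0 for x1, x2 in X]

w_main = fit_logistic(X, y)                      # main effects only
X_int = [[x1, x2, x1 * x2] for x1, x2 in X]      # add the interaction feature
w_int = fit_logistic(X_int, y)
```

With these data, `accuracy(w_main, X, y)` stays near chance while `accuracy(w_int, X_int, y)` is essentially perfect, mirroring the observation that main effects alone were not useful but main effects plus interactions were.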
- Richard Olshen, Ph.D.
- Professor of Biostatistics, Department of Health Research and Policy, Stanford University School of Medicine
- Richard also sent this very interesting New York Times article on the acceptance of out-of-the-box ideas, which actually links to a mathematical model: http://www.nytimes.com/2009/12/29/health/research/29cancer.html?_r=1&ref=todayspaper
- I was very interested in the fact that people use models to evaluate the effectiveness of screening and therapy for breast cancer, for example, and in the fact that people develop policies based on what they see with these models. I did not get a feel for what the inputs and the outputs of these models were, but in the process of developing policies based on these models, the issue of controlling the behavior of a system arises. Instead of developing policies intuitively, might we ask what our target goals are with these models, and then examine what specific policies might be best suited to achieving those goals most precisely? I did not see this issue addressed in the discussion, and so I wanted to offer it here. Control is a modeling issue I have not seen specifically addressed in such sessions, and it might be useful to think about it. Just as we develop drug dosage regimens to achieve desired therapeutic targets most precisely, using multiple model Bayesian adaptive control, might one ask what specific policies (inputs to the models) might achieve desired social and societal goals in the same way? But I wish I knew more of what I am talking about.
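The multiple-model Bayesian idea can be sketched in miniature (this is a toy illustration, not Dr. Jelliffe's actual method, and all numbers are made up, not clinical). A handful of candidate models each propose a linear dose-response slope; each observed response updates the probability of each candidate, and the next dose is chosen so the posterior-expected response hits the target.

```python
import math

# Hypothetical candidate models: response = slope * dose, observed with
# Gaussian noise (sd = 1). All numbers here are illustrative, not clinical.
slopes = [0.5, 1.0, 2.0]     # candidate dose-response slopes
target = 10.0                # desired therapeutic response
sd = 1.0

def update(post, dose, obs):
    """Bayes update of the model probabilities after observing a response."""
    weights = [
        p * math.exp(-((obs - s * dose) ** 2) / (2 * sd * sd))
        for p, s in zip(post, slopes)
    ]
    total = sum(weights)
    return [w / total for w in weights]

def next_dose(post):
    """Dose whose posterior-expected response equals the target."""
    expected_slope = sum(p * s for p, s in zip(post, slopes))
    return target / expected_slope

post = [1 / 3, 1 / 3, 1 / 3]   # uniform prior over the three models
dose = next_dose(post)          # first dose hedges across all candidates
obs = 2.0 * dose                # suppose the true slope is 2.0 (noise-free here)
post = update(post, dose, obs)
dose = next_dose(post)          # second dose is tailored to the learned model
```

After one observation the posterior concentrates on the correct model and the second dose lands on target; the same machinery, with policies as inputs and societal goals as targets, is the analogy being suggested.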
- Roger Jelliffe, M.D.
- Professor of Medicine, Keck School of Medicine, University of Southern California
- After looking at the materials on the website, I was wondering whether there would be a place to look at dynamics at the level of networks. This is an intermediate step between many scales: for example, in the brain (cellular mechanisms -> network structure and dynamics -> behavior) or in populations (dynamics of individuals -> interactions of individuals -> social behavior). This approach could also be applied to the spread of diseases and/or any system whose function is based on the interaction of many units. It is also clear that the feedback between these different scales is highly nonlinear.
Examples of the work we do focusing on these issues, trying to link cellular dynamics to network dynamics and then to actual brain function, can be found below.
- Michal Zochowski
- Associate Professor, Department of Physics, Biophysics Program, University of Michigan
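The idea of unit-level dynamics giving rise to network-level behavior can be sketched with the classic Kuramoto model of coupled oscillators (a standard textbook example, not Dr. Zochowski's own work, with illustrative parameter values): each oscillator follows its own natural frequency but is pulled toward the phases of the others, and with strong enough coupling the population synchronizes.

```python
import math
import random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model: each oscillator advances at its
    natural frequency plus a coupling term pulling it toward the others."""
    n = len(theta)
    return [
        theta[i]
        + dt * (omega[i] + K * sum(math.sin(t - theta[i]) for t in theta) / n)
        for i in range(n)
    ]

def order_parameter(theta):
    """Synchrony measure r in [0, 1]: 0 = incoherent phases, 1 = full synchrony."""
    c = sum(math.cos(t) for t in theta)
    s = sum(math.sin(t) for t in theta)
    return math.hypot(c, s) / len(theta)

rng = random.Random(0)
n = 30
theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]   # random initial phases
omega = [rng.gauss(0.0, 0.2) for _ in range(n)]           # heterogeneous frequencies

r_start = order_parameter(theta)
for _ in range(1000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.05)
r_end = order_parameter(theta)
```

Here the coupling K = 2.0 is well above the synchronization threshold for this frequency spread, so `r_end` approaches 1 even though the random initial phases give a small `r_start`: a network-level property (synchrony) emerging from unit-level dynamics, with the feedback between the two scales visibly nonlinear.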
- I was delighted to read that several IMAG conference recommendations include a mention of “temporality” in biology and disease as a potentially important key to understanding systems even though examples were not prominent among the presentations.
- Michael Twery
- The 2009 JASON report on Rare Events has some relevant discussions on modeling and model evaluation (PDF pages 35-44), http://www.fas.org/irp/agency/dod/jason/rare.pdf
- An excerpt: "From a programmatic standpoint of funding research, the main problem with standalone research projects that aim to create new (insight) models is that they separate the model’s creator from the model’s user community, so they tend to face an adoption barrier. Experts are rightly skeptical of new tools developed by non-experts, especially if a model appears complex, mathematical, and highly abstracted rather than hewing closely to real-world data analysis needs. Success of an insight tool should ultimately be judged by how many experts use it and find it indispensable in their work. “Useful to experts” necessarily includes many factors that become just as important as the scientific validity of the model – issues such as software quality and usability, in the case of computer models. Therefore an important part of any research plan to develop new models is the researchers’ plan for collaboration and adoption by experts. Will the tool be used and evaluated by real-world analysts? Do they find it useful? Will it spread to other analysts if it is successful?"
- Zohara Cohen