Special Session: Bridging Mechanistic, Causal and Predictive Models – what can machine learning do for the MSM?



1:00-1:20 - Dr. Olaf Dammann, Tufts University



“Nested Boxes” - Causal-Mechanical Explanation in Multi-Scale Modeling of Disease Occurrence


Multiscale modeling (MSM) is a research technique of growing importance in biology and medicine. The goal of MSM is to explain differences among patients regarding disease occurrence and/or therapy by opening the "black box of disease." Causal inference and mechanistic reasoning are at the core of systems medicine. Therefore, one of the major questions in this field is: how should one transition from mechanistic to causal and ultimately to predictive models in MSM? Writing from my personal perspective as a perinatal neuroepidemiologist, I will briefly review recent developments in philosophy of the health sciences. Most pertinent to the question at hand is the debate unleashed by what has come to be called the Russo-Williamson Thesis (RWT), which states that the health sciences make claims based on evidence of both physical mechanisms and probabilistic dependencies. The scales (i.e., levels of evidence generated in support of causal hypotheses) in the health sciences range from molecular to societal. As an epidemiologist interested in population modeling, I suggest taking the etiological stance, which calls for association-based evidence to be explained by some sort of mechanistic hypothesis that in turn explains overall causal claims, as per W. Salmon's causal-mechanical explanation (1984). I suggest using the biostatistical model of capturing the strength of an association between two phenomena by calculating odds ratios (ORs), which are dimensionless, do not refer to a time frame, and can be adjusted for confounders. These ORs can be used to capture, in a single number, association-based mechanistic evidence. As such, they can be incorporated as co-variables in nested regression models, depicting nested black boxes. They may also be used in mediation analysis, designed to identify mediators between cause and effect.
The concept will need to be embedded in an explanatory coherentist framework, for which Paul Thagard's ECHO system appears to be a reasonable candidate.
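The OR bookkeeping described above can be illustrated with a minimal sketch. The function names and counts below are hypothetical, chosen for illustration only; in practice one would adjust for confounders via logistic regression rather than use a raw 2x2 table.

```python
import math

# Minimal sketch: odds ratio (OR) from a 2x2 exposure-outcome table, the
# dimensionless association measure described in the abstract.
def odds_ratio(a, b, c, d):
    """a: exposed cases, b: exposed non-cases,
       c: unexposed cases, d: unexposed non-cases."""
    return (a * d) / (b * c)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Approximate 95% confidence interval via the log-OR standard error."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Illustrative counts: 20 exposed cases, 80 exposed non-cases,
# 10 unexposed cases, 90 unexposed non-cases.
print(odds_ratio(20, 80, 10, 90))     # 2.25
print(odds_ratio_ci(20, 80, 10, 90))  # interval around 2.25
```

A single number of this kind is what the abstract proposes carrying forward as a co-variable in nested regression models.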



Dr. Olaf Dammann, M.D. (U Hamburg, ’90), SM Epidemiology (Harvard, ’97) is Professor and Vice-Chair of Public Health and Community Medicine, with secondary appointments in Pediatrics and Ophthalmology at Tufts University School of Medicine in Boston, USA.  He is also Professor and Director of the Perinatal Neuroepidemiology Unit in the Dept. of Obstetrics and Gynecology at Hannover Medical School, Germany.

In the spirit of lifelong learning, Olaf is registered as a PhD student in the Department of Philosophy at the University of Johannesburg, South Africa. He is working on his thesis entitled “Etiological Explanations” under the supervision of Professor Alex Broadbent, Dean of Humanities at UJ and author of “Philosophy of Epidemiology” (Palgrave, 2013). 

Olaf’s research interests include the elucidation of risk factors for brain damage and retinopathy in preterm newborns, the theory of risk and causation in population health research, and the development of computational population models of disease occurrence.

His current and recent grant support is from the National Institutes of Health and the European Union. His bibliography lists more than 200 publications.


1:20-1:40 - Dr. Todd Coleman, UCSD

Talk Title:  Statistical Measures of Causality: Challenges and Opportunities


Dr. Todd P. Coleman is currently Professor of Bioengineering and Affiliate Professor of Electrical and Computer Engineering at the University of California, San Diego.  He joined the Jacobs School of Engineering in 2011 as an associate professor in the Department of Bioengineering. He received bachelor's degrees in electrical engineering and computer engineering (both summa cum laude) from the University of Michigan, Ann Arbor, in 2000, along with master's and doctoral degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, in 2002 and 2005, respectively. During the 2005-06 academic year, he was a postdoctoral scholar in computational neuroscience at MIT and Massachusetts General Hospital. From fall 2006 until June 2011, he was an assistant professor of Electrical & Computer Engineering and Neuroscience at the University of Illinois at Urbana-Champaign.

Professor Coleman’s research is multi-disciplinary at its core. His main goal is to use tools from information theory, neuroscience, machine learning and bioelectronics to understand – and control – interacting systems with biological and computer parts. His research in developing multi-functional, flexible bio-electronics is enabling wireless health applications that are minimally observable to the user. His brain-machine interface research uses information theory, control theory and neuroscience to interpret – and design – systems from the viewpoint of multiple agents cooperating to achieve a common goal. The benefits of this research include helping subjects with disabilities as well as enabling all members of society to enhance capabilities in many daily activities. His research on causal inference uses information theory and machine learning to understand causal relationships in time-series data. Within the context of neuroscience, it is being used to understand dynamical aspects of brain function. The approach is applicable to arbitrary modalities and to a variety of applications, including financial networks, social networks and network security.

Comment from Gary An: Personally, I would like to move away from discussions of "causality" (as presented in these talks) toward the idea of "generative" models. The key point is that the inferred causal structure (which remains a hypothesis) needs to be turned into a generative computational model/simulation that can not only reproduce the data from which the statistical models were derived, but also evaluate "new" configurations incorporating perturbations to the system not previously tried (i.e., the concept of experimental testing of a hypothesis). For example, I cannot see how these models would be able to evaluate/predict the effect of a new intervention (e.g., a new drug) on the system. Perhaps the alluded-to "interventionist" approaches might address this.


1:40-2:00 - Dr. Timothy Lillicrap, Google DeepMind

Title: Recent advances in model-free and model-based reinforcement learning


Abstract: Recent work in machine learning has led to rapid progress in solving difficult problems such as playing video games from only raw pixels and score, controlling high-dimensional motor systems, and winning at the games of Go, Chess, Poker, and Shogi. This progress has been made possible by combining reinforcement learning with deep neural network function approximators.  Taking a synoptic view of these recent results, interesting questions emerge about the best way to merge model-free and model-based reinforcement learning approaches to succeed in difficult domains.  Model-based approaches aim to use "causal" models of the environment to improve exploration and planning, but for real-world problems, models of environment dynamics must be learned from data.  Using learned models (as opposed to the perfect models available for board games such as Chess and Go) to improve performance on tasks has been more difficult than anticipated.  I will explore this issue and point to work in the literature aimed at resolving the problem.
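The model-free side of the contrast drawn in this abstract can be sketched with tabular Q-learning on a toy problem. The five-state chain below is hypothetical, purely for illustration; in the deep RL systems the talk describes, a neural network replaces the Q-table as a function approximator.

```python
import random

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reward 1.0 only on reaching state 4; every episode starts at state 0.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Model-free control: learn action values directly from sampled
    transitions, with no explicit model of the environment dynamics."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # one-step temporal-difference update
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # greedy policy should move right, toward the goal
```

A model-based variant would instead learn `step` itself from data and plan against that learned model; as the abstract notes, errors in such learned models are what make this harder than planning with the perfect simulators available for board games.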


Dr. Timothy Lillicrap is currently a Staff Research Scientist at Google DeepMind and an Adjunct Professor at University College London.  He received an Hon. B.Sc. in Cognitive Science & Artificial Intelligence from the University of Toronto and a Ph.D. in Systems Neuroscience from Queen’s University in Canada.  He moved to the University of Oxford in 2012 where he worked as a Postdoctoral Research Fellow.  In 2014 he joined Google DeepMind as a Research Scientist and became a Senior Research Scientist in 2015.  His research focuses on machine learning for optimal control and decision making, as well as using these mathematical frameworks to understand how the brain learns.  He has developed new algorithms for exploiting deep neural networks in the context of reinforcement learning, and new recurrent memory architectures for one-shot learning problems.  His recent projects have included applications of deep learning to robotics and solving games such as Go.


Comment from Gary An: Exactly on board with this, with extension to a detailed environment model given the limitations of data scarcity: see our preprint on Deep Reinforcement Learning applied to our Sepsis ABM: https://arxiv.org/abs/1802.10440 . Also, Poster 80.



2:00-2:30 Discussion


Question for Dr. Coleman from Saleet Jafri:  I was wondering if you have thought about how your approaches can be applied to stochastic PDE models to look at causality?   Can you post some references for those of us interested in applying these methods?


Question for Dr. Lillicrap (from Raj Vadigepalli): What happens if the objective were changed from "run as fast as you can" to "get from here to there within X time"? Do new movement/planning/jumping/evading patterns emerge? Can you comment on the inverse problem of trying to figure out the objective, even if categorically, based on the behavior of the model? Is there value in a catalog of objectives X behaviors X models that could help? Or is it so many-to-many that it would not be helpful? This is in relation to trying to understand how a biological system works, by first trying to figure out "what is the system doing, or optimized to do?"

Bill Cannon for Dr. Lillicrap: Reinforcement Learning seems very similar to Evolutionary Algorithms. How are they different and how are they similar?


Comment from James A Glazier: Depends on what "the model" is. The methods in the first two talks assume causal networks are static, but interaction networks may be dynamic. The interactions may occur as a result of the dynamics of the model and not be predictable a priori except by running the model. The third talk can address this with extensions, but even there it is not included as presented.

Submitted by conference_guest on Wed, 03/21/2018 - 14:09
