BRAIN Theories, Models and Methods



The following BRAIN TMM Awardees have expressed interest in presenting their tools during the April 2019 PI meeting (BOLD indicates confirmed presenters at the Friday, April 12, 2019, 9:30 am-12:30 pm poster session):

Emery Brown, MIT

Moo K. Chung, University of Wisconsin-Madison

Carina Curto, Pennsylvania State University-Univ Park

Brent Doiron, U Pittsburgh BRAIN Math Project - DoironSmithYu.pptx

Tatiana Engel, Cold Spring Harbor Laboratory BRAIN Math Project - Engel.pptx, BRAIN Initiative Alliance video - July 2019: BRAIN Initiative Funding is Advancing Theoretical & Computational Research

Bei Wang & Tom Fletcher, Utah BRAIN Math Project - Fletcher.pptx

Kathleen Gates, University of North Carolina BRAIN Math Project - Gates.pptx

Joshua Gold, U Pennsylvania  BRAIN Math Project - Gold.pptx

Steve Hanson, Rutgers BRAIN Math Project - Hanson.pptx

Marc Howard, Boston University BRAIN Math Project - Howard.pdf

Stephanie Jones, Brown University BRAIN Math Project - Jones.pptx

Mark Kramer, Boston University NIBIB Math Project Kramer R01EB026938.pptx

Xi (Rossi) Luo, UT Health BRAIN Math Project - Luo.pptx

Bill Lytton, SUNY Downstate Medical Center eee.pdf

Hernan Makse and Andrei Holodny, City College of New York Makse-Holodny_NIH_NIBIB_2019.03.06.ppt

Vinod Menon, Stanford University BRAIN Math Project - Menon.pptx

Partha Mitra, Cold Spring Harbor Laboratory

Ilya Nemenman, Emory University BRAIN Math Project - Nemenman.pptx

Il Memming Park, SUNY Stony Brook BRAIN Math Project - Park.pptx

Ashish Raj, Weill Medical Coll of Cornell Univ

Dario L Ringach, University of California Los Angeles BRAIN Math Project - Ringach.pptx

Harel Shouval, University of Texas Hlth Sci Ctr Houston BRAIN Math Project - Shouval.pptx

Vikas Singh, University of Wisconsin-Madison (BRAIN) NIBIB Math Project - Singh.pptx

Fritz Sommer, UC Berkeley

Michael Buice & Daniela Witten, University of Washington, BRAIN Initiative Alliance video - February 2019: A Statistician and A Neuroscientist Walk into a BRAIN Grant

DRAFT BRAIN Website Descriptions for BRAIN TMM

Moo K. Chung, University of Wisconsin-Madison

The permutation test is the most widely used nonparametric test procedure in brain imaging. It is known in statistics as an exact test, since the distribution of the test statistic under the null hypothesis can be computed exactly by evaluating the statistic under every possible permutation. Unfortunately, generating every possible permutation for large-scale brain image datasets such as HCP and ADNI, with thousands of images, is not practical. Many previous attempts at speeding up the permutation test rely on approximation strategies, such as estimating the tail of the null distribution with a known distribution. Our tool, Rapid Acceleration, speeds up the permutation test without any approximation by exploiting the algebraic structure of the permutation test. The method has been applied to the large-scale brain imaging database HCP to localize group differences as well as to estimate twin correlations in the ACE model more accurately. Datatype: Brain images (fMRI, DTI, MRI). Website:
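For context, the brute-force Monte Carlo version of the test that Rapid Acceleration is designed to replace can be sketched in a few lines. This is only an illustration of the standard approach, not the tool's algebraic algorithm; the statistic (difference of means) and sample sizes are illustrative.

```python
import numpy as np

def permutation_test(x, y, n_perm=10000, seed=None):
    """Monte Carlo two-sample permutation test on the difference
    of means. Enumerating all permutations is what makes the exact
    test infeasible at HCP/ADNI scale; this sketch samples random
    permutations instead."""
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    n_exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(stat) >= abs(observed):
            n_exceed += 1
    return (n_exceed + 1) / (n_perm + 1)   # add-one-smoothed p-value
```

Even at 10,000 random permutations per voxel, scanning a whole-brain image over thousands of subjects is expensive, which is the bottleneck the tool addresses.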

Marc Howard, Boston University

Effective cognition requires us to orient ourselves in space and time.  A great deal of neural evidence suggests that the hippocampus and related brain structures maintain a cognitive map of the world using spatial and temporal coordinates.  We describe a computational theory for estimating continuous variables such as space and time from the cooperative activity of many neurons.  According to this hypothesis, functions over space and time are not estimated directly but via the Laplace transform of those functions.  The inverse transform can be computed by a well-known neural circuit, yielding a close correspondence with well-established neural findings from place cells and time cells.  Recent rodent and monkey recordings provide dramatic evidence that the entorhinal cortex maintains an estimate of the Laplace transform of functions of time, confirming a unique prediction of this computational approach.
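The encoding step of this idea can be illustrated with a toy simulation: a bank of leaky integrators, each with its own decay rate s, holds the Laplace transform of the stimulus history. This is a minimal sketch, not the published model; the rate constants, Euler integration, and time step are illustrative.

```python
import numpy as np

def laplace_history(f, s_values, dt):
    """Bank of leaky integrators. The unit with rate s obeys
    dF/dt = -s * F + f(t), so at time t it holds
    F_s(t) = integral of f(t - tau) * exp(-s * tau) d tau,
    i.e. the Laplace transform of the stimulus history."""
    F = np.zeros(len(s_values))
    out = []
    for ft in f:
        F = F + dt * (-s_values * F + ft)   # forward-Euler update
        out.append(F.copy())
    return np.array(out)
```

After a unit impulse at t = 0, the unit with rate s decays as exp(-s*t), so the population jointly encodes how long ago the event occurred, which is the signature attributed to entorhinal "temporal context" cells above.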

Stephanie Jones, Brown University

Human Neocortical Neurosolver (HNN): Electro- and magneto-encephalography (EEG/MEG) are among the most powerful technologies to non-invasively record human brain activity with millisecond resolution. They provide reliable markers of healthy brain function and disease states. A major limitation is that it is often difficult to connect the macroscopically measured signals to the underlying cellular- and circuit-level neural generators. This difficulty limits the translation of EEG/MEG studies into novel principles of information processing, or into new treatment modalities for neural pathologies. HNN is a user-friendly software tool that provides a novel solution to this challenge. HNN gives researchers and clinicians the ability to test and develop hypotheses on the circuit mechanisms underlying their EEG/MEG data in an easy-to-use environment. The foundation of HNN is a computational neural model that simulates, based on known biophysics, the electrical activity of the neocortical cells and circuits that generate the primary electrical currents underlying EEG/MEG recordings. We provide tutorials on how to import your data and begin to understand the underlying circuit mechanisms, including layer-specific responses, cell spiking activity, somatic voltages, and both time- and frequency-domain signals. Data types: EEG/MEG, layer-specific responses, cell spiking, somatic voltages, time- and frequency-domain signals. Weblink:

Mark Kramer, Boston University

Cross frequency coupling (CFC) is emerging as a fundamental feature of brain activity, correlated with brain function and dysfunction. Analysis of CFC focuses on relationships between the amplitude, phase, and frequency of two rhythms from different frequency bands. We propose a new statistical modeling framework to estimate CFC. This framework provides a principled approach to assess multiple types of CFC, and is easily extendable to study additional relationships that may impact CFC. The method is broadly applicable to field data such as the electroencephalogram (EEG), magnetoencephalogram (MEG), and local field potential (LFP). Weblink to access tool:
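For readers unfamiliar with CFC, one standard phase-amplitude coupling measure (a Tort-style modulation index, which is not the statistical modeling framework described above) can be computed as below; the frequency bands, filter order, and bin count are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(x, fs, phase_band, amp_band, n_bins=18):
    """Phase-amplitude coupling: bin the high-frequency amplitude
    envelope by low-frequency phase and measure how far the binned
    means deviate from uniform (normalised KL divergence)."""
    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)
    phase = np.angle(hilbert(bandpass(x, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, *amp_band)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    means = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                      for i in range(n_bins)])
    p = means / means.sum()
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)
```

A statistical modeling framework like the one proposed here goes further, by jointly assessing phase-amplitude, amplitude-amplitude, and other relationships with formal inference rather than a single descriptive index.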

Xi (Rossi) Luo, UT Health

Functional MRI (fMRI) is a popular approach to investigating brain connections and activations while human subjects perform tasks. Because fMRI measures indirect and convolved signals of brain activity at a low temporal resolution, complex differential equation modeling methods (e.g., Dynamic Causal Modeling) are usually employed to infer the underlying neuronal processes. However, ODE modeling is computationally expensive and remains a confirmatory, hypothesis-driven approach. The critical challenge is to infer the underlying differential equation models from fMRI data in a data-driven fashion. To address this challenge, we developed a causal dynamic network (CDN) framework to estimate brain activations and connections simultaneously without prespecified ODE models. Built on machine learning principles and optimization theory, we developed fast algorithms to fit large-scale ODE network models with up to hundreds of nodes. Compared with various effective connectivity methods, our method achieves higher estimation accuracy while improving computational speed by factors ranging from tens to thousands. Our method applies to both resting-state and task fMRI experiments. A Python implementation of our method is publicly available on PyPI at
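The core idea of fitting dynamics directly from data can be illustrated on a linear system dx/dt = A x + B u, which is recoverable by least squares on finite differences. This is a minimal sketch under strong simplifying assumptions (no hemodynamic convolution, no noise model); CDN itself handles the convolved fMRI signal and networks with hundreds of nodes.

```python
import numpy as np

def fit_linear_dynamics(X, U, dt):
    """Least-squares fit of dx/dt = A x + B u from a sampled
    trajectory X (T x n) and inputs U (T x m). The derivative is
    approximated by a forward difference, and A, B are read off a
    single linear regression."""
    dX = np.diff(X, axis=0) / dt              # finite-difference derivative
    Z = np.hstack([X[:-1], U[:-1]])           # regressors [x, u]
    coef, *_ = np.linalg.lstsq(Z, dX, rcond=None)
    n = X.shape[1]
    return coef[:n].T, coef[n:].T             # A (n x n), B (n x m)
```

The same "regress derivatives on states and inputs" principle, generalized with optimization machinery, is what makes a data-driven alternative to prespecified DCM-style models feasible.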

Partha Mitra, Cold Spring Harbor Laboratory

Software to compute Persistence Vectors from SWC files is available open source on GitHub. The tools have been deployed on . Details can be found in the associated publication.

This tool uses topological methods to characterize and classify neuronal shape, i.e. the morphologies of the dendritic and axonal arbors of a given neuron. Previous methods rely on ad-hoc, handcrafted feature vectors to characterize this tree shape. We use the Persistent Homology of descriptor functions defined on the neurons, to provide a principled method that respects the intrinsic characteristics of neuronal shape without having to hand-craft feature vectors.
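The elder-rule branch decomposition underlying such persistence summaries can be sketched for a rooted tree with a descriptor function. This is a minimal illustration of the idea (one barcode interval per branch, with the longest-lived branch surviving each merge); the released software operates on full SWC reconstructions.

```python
def tree_persistence(parent, f):
    """Persistence barcode of a rooted tree under the elder rule.
    parent[i] is the parent of node i (root has parent -1);
    f[i] is a descriptor value, e.g. path distance from the soma.
    Returns a list of (birth, death) pairs, one per branch."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p < 0:
            root = i
        else:
            children[p].append(i)
    pairs = []
    def visit(node):
        if not children[node]:
            return f[node]                   # leaf: a branch is born here
        vals = sorted((visit(c) for c in children[node]), reverse=True)
        for v in vals[1:]:
            pairs.append((v, f[node]))       # younger branches die at the merge
        return vals[0]                       # the elder branch survives
    pairs.append((visit(root), f[root]))     # surviving branch dies at the root
    return pairs
```

Because the pairs depend only on the tree topology and the descriptor, no hand-crafted feature vector is needed: two arbors can be compared directly through their barcodes.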

Ilya Nemenman, Emory University

We are interested in understanding how animals acquire complex motor skills. We proposed and validated a theory positing that animals control the entire distribution of motor commands they generate and update it using Bayesian principles. The distribution of motor commands has long, non-Gaussian tails, which explains why many animals do not respond to large sensory errors but correct small ones. The data analyzed include pitch sung by birds after a sensorimotor perturbation, and additional data sets are currently being analyzed. A numerical implementation of the model can be found at
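The qualitative prediction (small errors corrected, large errors largely ignored) falls out of Bayesian inference with a heavy-tailed likelihood, as in this grid-based sketch. All parameter values here are illustrative placeholders, not values fitted to the birdsong data.

```python
import numpy as np

def posterior_correction(err, lik_tail="t", prior_sd=2.0, lik_sd=0.5, nu=1.5):
    """Posterior-mean estimate of the underlying perturbation mu
    given one sensory error, computed on a grid. With a heavy-tailed
    (Student-t) likelihood, a large error is attributed to the tail
    and mostly ignored; with a Gaussian likelihood the inferred
    perturbation, and hence the correction, grows with the error."""
    mu = np.linspace(-20, 20, 4001)
    prior = np.exp(-0.5 * (mu / prior_sd) ** 2)
    z = (err - mu) / lik_sd
    if lik_tail == "t":
        lik = (1 + z ** 2 / nu) ** (-(nu + 1) / 2)   # unnormalised Student-t
    else:
        lik = np.exp(-0.5 * z ** 2)                  # Gaussian
    post = prior * lik
    post /= post.sum()
    return float(np.sum(mu * post))
```

Under the heavy-tailed likelihood, the fractional correction shrinks as the error grows, matching the observation that birds respond to small pitch perturbations but not to large ones.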

William Lytton, SUNY Downstate; Srdjan Antic, UCHC: Embedded Ensemble Encoding Theory

We are developing a novel theory of excitatory postsynaptic potential (EPSP) integration at the subcellular scale of networks, with implications for understanding the cell and network scales. Intense glutamate activation produces NMDA-dependent plateau potentials which can profoundly change neuronal state: a plateau potential triggered in one basal dendrite will depolarize the soma and shorten the membrane time constant, making the cell more susceptible to firing triggered by other inputs. Our simulation makes predictions about the manner in which plateaus are triggered and how these plateaus interact with back-propagating signals. At the network level, plateaus across multiple cells would provide an activated ensemble lasting 200-500 ms, within which synchronous spiking could readily occur, forming embedded ensembles.
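The somatic effect of a plateau can be caricatured with a point-neuron sketch: during the plateau the cell sits closer to threshold with a shorter time constant, so a weak input that is subthreshold at rest now fires the cell. All voltages, time constants, and the plateau window are illustrative placeholders; the actual simulations use detailed multicompartment models.

```python
def spikes_with_test_input(plateau_on, t_test, dt=0.1):
    """Toy leaky point neuron. A plateau (100-400 ms) depolarises the
    resting drive by 15 mV and shortens the membrane time constant;
    a weak 8 mV synaptic input arrives at t_test. Returns True if
    the cell crosses spike threshold."""
    rest, thresh = -70.0, -50.0
    tau_rest, tau_plateau = 20.0, 8.0
    v = rest
    fired = False
    t = 0.0
    while t < 600.0:
        in_plateau = plateau_on and 100.0 <= t < 400.0
        drive = rest + (15.0 if in_plateau else 0.0)
        tau = tau_plateau if in_plateau else tau_rest
        v += dt / tau * (drive - v)          # leaky relaxation toward drive
        if abs(t - t_test) < dt / 2:
            v += 8.0                         # weak synaptic test input
        if v >= thresh:
            fired = True
        t += dt
    return fired
```

Note that the plateau alone does not fire the cell; it only gates firing by other inputs, which is the ensemble-forming mechanism described above.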

Dario L Ringach, University of California Los Angeles

Active Learning of Cortical Connectivity: Applications to Two-Photon Imaging

(Martin Bertran, Natalia Martinez, Ye Wang, David Dunson, Guillermo Sapiro, Dario Ringach)

Understanding how groups of neurons interact within a network is a fundamental question in systems neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer connectivity graphs from in vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this “active learning” method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while at the same time providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the methods and apply the method to real two-photon data to infer cortical connectivity in the visual system. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model. The data and code are available at
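A generic uncertainty-guided perturbation loop, which is not the recommended-distribution method described above, can be sketched on a linear toy network, where perturbing neuron j reveals a noisy copy of its outgoing weights W[:, j]. Everything here (the linear response model, the noise level, the standard-error targeting rule) is an illustrative assumption.

```python
import numpy as np

def active_infer(W, n_trials=200, noise=0.1, seed=0):
    """Toy active-learning loop for connectivity. Each trial perturbs
    one neuron j and observes the other neurons' responses, modelled
    as W[:, j] plus Gaussian noise; the next target is the column
    whose running estimate has the largest standard error."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    sums = np.zeros((n, n))
    counts = np.zeros(n)
    for _ in range(n_trials):
        sem = np.full(n, np.inf)             # unprobed columns first
        seen = counts > 0
        sem[seen] = noise / np.sqrt(counts[seen])
        j = int(np.argmax(sem))              # most uncertain column
        r = W[:, j] + rng.normal(0.0, noise, n)
        sums[:, j] += r
        counts[j] += 1
    return sums / counts                     # column-wise mean responses
```

In this trivial setting the uncertainty rule reduces to balanced sampling; the paper's contribution is precisely that a learned recommended distribution does better than such generic variance-driven targeting in realistic networks.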


Harel Shouval, University of Texas Hlth Sci Ctr Houston

  • Networks with fixed connectivity, storing fixed point attractors: A firing rate model with a learning rule that is constrained by in vivo data in inferior temporal cortex and produces attractor dynamics.
  • Networks with fixed connectivity, storing sequences: Firing rate and spike-based models with fixed connectivity that can store sequences of activity.
  • Networks with plastic synapses that store the order of sequences: A firing rate model that stores sequences of activity through a simple temporally asymmetric unsupervised learning rule. Weblink: in preparation
  • Network models that can learn the order and duration of sequences using a columnar architecture:  A network model with a pre-specified columnar architecture in which different layers of a micro-column have different dynamics can learn to represent sequences of inputs as well as the duration of each element.
  • Calcium-based synaptic plasticity model: A calcium-based synaptic plasticity model that fits hippocampal slice data for various concentrations of extracellular calcium, time differences between pre and post-synaptic firing, firing frequency, and number of spikes in a burst.
  • Unified calcium-based model for unsupervised and reinforcement learning: A calcium-based plasticity model that also incorporates a role for neuro-modulators in synaptic plasticity. This model can unify previously proposed calcium-based models of unsupervised learning and also reinforcement-based models with eligibility traces. Such a model fits data of cortical slices with and without the application of neuromodulators.
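The threshold structure shared by the calcium-based models above can be sketched as follows. The thresholds, learning rate, and linear dependence on calcium are illustrative placeholders, not the values fitted to the hippocampal or cortical slice data.

```python
import numpy as np

def weight_change(ca, theta_d=1.0, theta_p=1.3, eta=0.01):
    """Toy calcium-threshold plasticity rule: calcium above the
    potentiation threshold theta_p strengthens the synapse,
    intermediate calcium (between theta_d and theta_p) depresses it,
    and low calcium leaves it unchanged. ca is the calcium trace
    sampled over time; the total weight change is returned."""
    ca = np.asarray(ca, dtype=float)
    dw = np.zeros_like(ca)
    pot = ca >= theta_p
    dep = (ca >= theta_d) & ~pot
    dw[pot] = eta * (ca[pot] - theta_p)          # potentiation
    dw[dep] = -0.5 * eta * (ca[dep] - theta_d)   # depression
    return float(dw.sum())
```

Because pre/post timing, firing frequency, and burst size all shape the calcium trace, a single rule of this form can reproduce a wide range of plasticity protocols, and adding a neuromodulator-gated eligibility trace extends it to reinforcement learning as described above.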


Vikas Singh, University of Wisconsin-Madison


Daniela Witten, University of Washington

Fast non-convex deconvolution of calcium imaging data, applied to the Allen Brain Observatory
PIs: Daniela Witten (University of Washington) and Michael Buice (Allen Institute for Brain Science)

The Allen Brain Observatory is an unprecedented survey of neural activity in the mouse visual cortex, recorded using high-throughput two-photon calcium imaging in the awake mouse. It consists of data from nearly 60,000 cells from six areas of the mouse visual cortex, 13 Cre lines, and 4 layers, recorded from over 200 mice in 432 sets of three experimental sessions, as the mice were exposed to artificial (static and drifting gratings, locally sparse noise) and natural (images and movies) stimuli. The data are publicly available at . We have developed a fast new algorithm for deconvolving this calcium imaging data (that is, estimating the times at which each neuron spikes) using an l0 penalization approach. Our publicly available software is described at . The results of applying our deconvolution algorithm to all 60,000 cells in the Allen Brain Observatory can easily be accessed through the Allen Software Development Kit, described here: . Together, the Allen Brain Observatory and our new software for spike deconvolution constitute a valuable, publicly available resource for the study of visual coding, as well as a generalizable tool that can be applied to other calcium imaging data sets.
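The l0 objective can be illustrated with a small O(T^2) dynamic program over spike times: between spikes the calcium estimate decays geometrically, and each spike costs a penalty lam. This sketch only illustrates the objective (and ignores any positivity constraint on spikes); the released software solves this class of problem far faster.

```python
import numpy as np

def l0_deconvolve(y, gamma, lam):
    """L0-penalised deconvolution by dynamic programming: minimise
    sum_t (y_t - c_t)^2 + lam * (#spikes), where c_t = gamma * c_{t-1}
    between spikes. Returns the estimated spike (segment-start) times."""
    T = len(y)
    def seg_cost(a, b):
        # best squared error fitting one decaying segment to y[a..b]
        w = gamma ** np.arange(b - a + 1)     # decay profile
        ys = y[a:b + 1]
        c0 = ys @ w / (w @ w)                 # optimal initial level
        resid = ys - c0 * w
        return resid @ resid
    F = np.zeros(T + 1)                       # F[t]: best cost of y[:t]
    last = np.zeros(T + 1, dtype=int)         # start of final segment
    for t in range(1, T + 1):
        best, arg = np.inf, 0
        for a in range(t):
            cost = F[a] + seg_cost(a, t - 1) + (lam if a > 0 else 0.0)
            if cost < best:
                best, arg = cost, a
        F[t], last[t] = best, arg
    spikes, t = [], T                         # backtrack the changepoints
    while t > 0:
        a = last[t]
        if a > 0:
            spikes.append(a)
        t = a
    return sorted(spikes)
```

The quadratic cost of this naive dynamic program is exactly why a fast solver matters when the method is run on all 60,000 Observatory traces.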
