Data Reuse Effort Abstracts

BRAIN Initiative Data Reuse Abstracts


U19 Abstracts

OXT - Oxytocin Group. Richard Tsien. Oxytocin Modulation of Neural Circuit Function and Behavior

Abstract 1:

Title: Oxytocin modulation of neural circuit function and behavior

Abstract: We are studying how oxytocin signaling and synaptic plasticity across multiple brain systems enable socio-spatial behavior in mice: how animals recognize and remember each other, with a particular focus on parenting.

We hypothesize that social stimuli are arousing and activate a number of modulatory systems, including oxytocin centers in the hypothalamus. Oxytocin is released in target areas such as the lateral septum, hippocampal CA2, and auditory cortex, where it affects responses to social stimuli and thus changes behavior in C57BL/6 mice. Our data collection spans a wide range of techniques, from molecular profiling of oxytocin receptor expression throughout the brain, to the biophysical and biochemical signals and effectors directly and indirectly downstream of oxytocin receptor activation (immunocytochemistry, mass spectrometry, western blot), to cellular and synaptic responses to socio-spatial stimuli and oxytocin signaling (electrophysiological recordings in vitro and in vivo, including whole-cell and single-unit recordings), population-level responses to the same stimuli (large-scale array recordings and 2-photon imaging), and behavioral data (documentary film-type footage, thousands of hours of continuously recorded mouse homecage life, and behavioral testing data).

Open questions include: What are the molecular effectors of oxytocin receptor signaling in different brain areas? What are the large-scale, multi-areal dynamics when animals interact socially or parentally, and how plastic are these responses? And what are the multi-modal receptive fields of oxytocin neurons in terms of the high-resolution, moment-to-moment social interactions that occur during complex social encounters or the parental experience?


Abstract 2:

Title: Theta rhythm perturbation by focal cooling of the septal pacemaker in awake rats

Abstract: Hippocampal theta oscillations coordinate neuronal firing to support memory and spatial navigation. The medial septum (MS) is critical in theta generation by two possible mechanisms: either a unitary “pacemaker” timing signal is imposed on the hippocampal system, or it may assist in organizing target subcircuits within the phase space of theta oscillations. We used temperature manipulation of the MS to test these models. Cooling of the MS reduced both theta frequency and power and was associated with an enhanced incidence of errors in a spatial navigation task, but it did not affect spatial correlates of neurons. MS cooling decreased theta frequency oscillations of place cells and reduced distance-time compression but preserved distance-phase compression of place field sequences within the theta cycle. Thus, the septum is critical for sustaining precise theta phase coordination of cell assemblies in the hippocampal system, a mechanism needed for spatial memory.

Highlights

  • Cooling the medial septum slowed down theta oscillations in the hippocampus
  • The spatial representation in the hippocampus remained intact
  • Choice errors increased in a spatial task
  • Distance-time, but not distance-theta phase, compression was altered

Dataset details: This dataset is unique in that the animals had brain temperature manipulations and temperature probes implanted together with bilateral silicon probes while they performed behavioral/spatial tasks.

A thermometer was implanted in the medial septum together with a thermal perturbation probe in freely behaving, awake Long-Evans rats. 2,000 single cells were spike sorted, with up to 150 cells recorded simultaneously and bilaterally from CA1; all sessions include behavior.

Behaviors

  • Circular track: up to 180 alternation trials, always with 40 control trials
  • Linear track
  • Wheel running

Most sessions were recorded with OptiTrack, a 3D tracking system (120 Hz), together with a ceiling-mounted video camera recording at 10 Hz; the animal's position was determined from the OptiTrack data.

All data collected for this paper: Petersen, P.C., and Buzsáki, G. (2020). Cooling of Medial Septum Reveals Theta Phase Lag Coordination of Hippocampal Cell Assemblies. Neuron, June 2020.

Further information in our databank with links for downloading: https://buzsakilab.com/wp/projects/entry/4919/


Abstract 3:

Title: Population Ca2+ activity across the limbic system during social behaviors

Abstract: Oxytocin is an important neuropeptide for promoting the formation of social bonds in animals of various species, including humans. Interestingly, oxytocin has also been implicated in promoting aggressive behaviors. A key site through which oxytocin can act to increase aggression is the ventrolateral part of the ventromedial hypothalamus (VMHvl). The VMHvl is an essential locus for male and female aggression and is enriched for oxytocin receptors. Furthermore, a cluster of oxytocin neurons is found right next to the VMHvl and may provide a specialized “local” source of oxytocin to the VMHvl. Thus, the overall goal of the project is to understand oxytocin signaling in the VMHvl during aggressive encounters and its potential role in modulating aggression. To achieve this goal, we have performed a series of in vivo optical recordings from the oxytocin receptor-expressing neurons in the VMHvl and the oxytocin neurons neighboring the VMHvl during social behaviors. Additionally, we have recorded cell activity from multiple regions connected to the VMHvl during social behaviors using multi-channel fiber photometry systems. Thus, the main forms of our data are (1) behavioral data of mice during freely moving social interaction and (2) bulk Ca2+ activity from multiple limbic regions during social interaction. The behavioral data are both annotated manually and tracked with DeepLabCut, while the recording data are processed and analyzed mainly using custom scripts written in MATLAB. We are interested in understanding the relationship between activity in different brain regions and how they collectively determine the timing of social behaviors. A second question we would be interested in exploring is how network activity in the limbic system varies with behavioral state (e.g., aggressive vs. non-aggressive) and how oxytocin may contribute to changes in the functional connectivity of the network.


SCC. Martyn D. Goulding. Spinal Circuits for the Control of Dexterous Movement

Title: Spinal Circuits for the Control of Dexterous Movement

Abstract: Local networks within the spinal cord represent an essential computational layer for the control of limb-driven motor behaviors, integrating descending and sensory inputs to coordinate dexterous motor output. Significant advances have been made in characterizing the developmental programs that specify the core cardinal interneuron types that make up these motor networks. This knowledge has been used to develop a battery of mouse genetic reagents, which have been primarily used to study locomotion and spinal reflexes in the lumbar spinal cord.

Given the wider range of dexterous motor behaviors that are produced by cervical circuits and their modulation by descending motor pathways, the mouse cervical spinal cord provides a unique and tractable mammalian model system for understanding how coordinated movements are generated by local motor networks and how these motor behaviors are regulated by the brain. The functional interrogation and modeling of these circuits, based on real behavioral outcomes and detailed information about the cell types that generate these behaviors, will ensure that the overall project is greater than the sum of its parts. Specifically, we will address two overarching questions: 1) How do rhythmic spinal networks control non-rhythmic movements, which represent the majority of forelimb motor behaviors, and 2) How are these spinal circuits modified to control more complex joint movements to achieve forelimb dexterity? To address these questions, we will generate: (a) a pre-motor interneuron connectome that includes information on cell positions and synaptic weightings, (b) a comprehensive index of the physiological properties and molecular identities of genetically distinct neuronal subtypes within each cardinal interneuron class, (c) a functional description of spinal circuit control of natural forelimb motor behaviors, and (d) a working model of the motor network that describes how circuit connectivity and dynamics give rise to key elements of forelimb behavior. Ultimately, these data will be used to generate a searchable web-based portal with 3D visualization tools linked to the molecular, electrophysiological, functional, and network model databases. Together, this work will lead to a deeper understanding of the organization and function of cervical circuitry, which will be of great value to groups that are grappling with the issue of how motor centers in the brain communicate with spinal sensorimotor circuits to control movement.


Ripple - hREM, hippocampal Ripple related Episodic Memory. Ivan Soltesz. Towards a Complete Description of the Circuitry Underlying Sharp Wave-Mediated Memory Replay

Title: Cellular mechanisms of memory consolidation

Abstract: Our U19 group studies the cellular bases of memory consolidation, particularly how sharp wave ripples serve as a transfer mechanism between hippocampus and neocortex. We use cutting-edge large-scale electrophysiology and optophysiological recording technologies to study and manipulate identified cell types in behaving animals, coupled with data-driven simulations. Our goal is to elucidate the cellular mechanisms responsible for memory replay and its role in memory transfer and consolidation. The methods used by our group can be applied to many situations in which brain mechanisms of behavior and cognition are explored.

Our data spans several neurophysiological techniques that provide insights into the mechanisms of sharp wave ripples:

  • Large-scale electrophysiological recordings of freely moving mice and rats in various learning and navigational tasks.
  • Behavioral measures span from classic position-and-heading-direction measures to high-resolution continuous monitoring of the entire movement repertoire of the animal.
  • Calcium imaging data and extraction of sharp wave ripples from the optical signal.
  • Novel fiber photometry and voltage imaging techniques that provide unprecedented information about cell type-specific contributions to population cooperativity.

We are particularly interested in the application of analytical tools that could help determine specific cellular connectivity and contributions to the sharp wave ripple processes, and that could assist our large-scale computational model building. Related analytical topics could include:

  • Tuning properties of hippocampal cells in various tasks
  • Examination and reliability of sharp wave ripples extracted from calcium imaging data
  • Novel analysis or visualization techniques applied to the simulated neural activity output by our large-scale computational model

ABC - Anything But Cortex. David Kleinfeld. Reverse Engineering the Brain Stem Circuits that Govern Exploratory Behavior

Title: The adaptive atlas

Abstract:

Scientific Topic: The anatomy and function of nerve circuits in the brain stem of the mouse.

Types of data: Image stacks from a scanning microscope, including both brightfield and fluorescence imaging, with multiple fluorescence channels. We are developing a computation and visualization pipeline for aligning an atlas with a stack of brain sections; the result is a standardized coordinate system overlaid on the stack, which enables experimental results from multiple brains to be related to each other. Our current focus is on projections traced with rabies virus from muscles to locations in the mouse brain stem.

Open questions: What textures (microarchitectures) can be identified reliably in the brain? How can we efficiently find the best diffeomorphism to map the atlas to a brain?
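As a rough, hedged illustration of this kind of alignment problem, the sketch below performs simple intensity-based affine registration of an atlas plane to a brain section with NumPy/SciPy. It is a simplified stand-in (affine rather than diffeomorphic, mean-squared-error cost, synthetic images), not the project's actual pipeline.

    # Minimal sketch (not the project's pipeline): intensity-based affine alignment of an
    # atlas plane to a brain section, as a starting point before diffeomorphic refinement.
    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import minimize

    def warp(atlas, params):
        """Apply a 2D affine transform parameterized as [a11, a12, a21, a22, ty, tx]."""
        A = np.array(params[:4]).reshape(2, 2)
        t = np.array(params[4:])
        return affine_transform(atlas, A, offset=t, order=1)

    def mse(params, atlas, section):
        """Mean squared intensity difference between the warped atlas and the section."""
        return np.mean((warp(atlas, params) - section) ** 2)

    def register_affine(atlas, section):
        """Fit affine parameters by Nelder-Mead, starting from the identity transform."""
        x0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
        res = minimize(mse, x0, args=(atlas, section), method="Nelder-Mead",
                       options={"maxiter": 2000, "xatol": 1e-4})
        return res.x, res.fun

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        section = rng.random((128, 128))
        atlas = affine_transform(section, np.eye(2), offset=(3.0, -2.0), order=1)  # shifted copy
        params, err = register_affine(atlas, section)
        print("estimated translation:", params[4:], "final MSE:", err)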


FlyLoops - feedback loops of flies. Michael Dickinson. A Brain Circuit Program for Understanding the Sensorimotor Basis of Behavior

Title: Leg motor control in Drosophila

Abstract: To move the body, the brain must precisely coordinate patterns of activity among diverse populations of motor neurons. We used in vivo calcium imaging, electrophysiology, and behavior to understand how genetically-identified motor neurons control flexion of the fruit fly tibia. We found that leg motor neurons exhibit a coordinated gradient of anatomical, physiological, and functional properties. Large, fast motor neurons control high force, ballistic movements while small, slow motor neurons control low force, postural movements. Intermediate neurons fall between these two extremes. This hierarchical organization resembles the size principle, first proposed as a mechanism for establishing recruitment order among vertebrate motor neurons. Recordings in behaving flies confirmed that motor neurons are typically recruited in order from slow to fast. However, we also find that fast, intermediate, and slow motor neurons receive distinct proprioceptive feedback signals, suggesting that the size principle is not the only mechanism that dictates motor neuron recruitment.

Given the conservation of the size principle across species, one open theoretical question is whether hierarchical motor recruitment represents an optimal organization for neuromuscular control. If so, can violations of the recruitment order, seen in both vertebrate and invertebrate motor systems, help us understand the limits of hierarchical recruitment as a coding and control scheme? What can we infer about the structure of premotor networks from the coordinated activity of motor neurons controlling a particular joint?

We recently made the data from this project publicly available: https://doi.org/10.5061/dryad.76hdr7stb


Learning2Learn. Elizabeth Buffalo. Computational and Circuit Mechanisms Underlying Rapid Learning

Title: Learning2learn: Rapid learning in humans and non-human primates

Abstract:

Scientific topic: Our U19 collaboration studies rapid learning in primates (both human and non-human). We are interested in the ways in which flexible behavior arises from the apparent ability of primates to very rapidly learn complex tasks and rules, and to adjust these rules as the environment changes. This topic is particularly interesting and important because the ability to learn how to learn seems to be a hallmark of human behavioral flexibility and high-level cognitive ability. Moreover, while artificially intelligent systems have evolved to perform many kinds of learning tasks at near-human performance levels, this kind of flexibility still largely eludes these systems.

The types of data you collect (subjects, modalities, tasks): Multi-channel electrophysiological recordings of brain activity are obtained in awake, behaving humans and non-human primates while they perform complex behavioral tasks. In non-human primates, recordings are conducted using novel multi-channel drives that provide dense, chronic, high-quality recordings from multiple regions, including areas of the prefrontal cortex and the hippocampus. In human epilepsy patients, intracranial electrocorticography (ECoG) recordings are conducted in multiple areas on the cortical surface, and stereoelectroencephalography (SEEG) recordings are conducted in the temporal and frontal cortices as well as the hippocampus and amygdala. The recordings from deep structures capture both the local field potential and the activity of single neurons. The behavioral tasks used include the classic Wisconsin Card Sorting Task (WCST) as well as a novel stimulus-association task. One of the remarkable strengths of these data is that very similar tasks were performed in both the human and non-human primate experiments.

Open questions that a theorist might be able to help you answer: We are interested in answering a host of questions about the representations of task information in brain circuits. For example, open questions include how rule representations emerge as a function of task performance, how rapid rule switching is implemented, and how these flexible representations relate to the coding of task information in distributed circuits. It would be potentially interesting to implement artificially intelligent systems that can learn how to learn in ways that accurately emulate the behaviors exhibited in these tasks by biological systems.


brainCOGS - circuits of COGnitive Systems. Carlos Brody. Mechanisms of neural circuit dynamics in working memory and decision-making

Abstract 1:

Title: Multi-region calcium imaging during two decision making tasks

Abstract: I am using a two-photon mesoscope to record multi-region, single-cell-resolution calcium signals from three cortical regions while mice perform a decision-making task in virtual reality. In this ‘Accumulating Towers’ task, head-fixed mice are required to gradually accumulate visual evidence as they navigate a virtual T-maze. The side on which the majority of the evidence appears informs them which maze arm the reward is located in. In an alternate version of the task (the ‘Visually Guided’ task), mice navigate the same virtual T-maze and receive the same visual evidence cues, but they do not have to accumulate evidence; rather, they simply have to turn in the direction of a large visual guide. While mice perform these tasks, we record calcium signals simultaneously from the secondary motor area, retrosplenial cortex, and anterolateral visual cortex. This work provides a rare opportunity to explore how the neural underpinnings of decision making emerge at every level, from single-cell responses to between-region interactions.

Previous work from our lab comparing these tasks has shown that optogenetic inhibition of nearly any dorsal cortical region impairs performance in the Accumulating Towers task, whereas only inhibition of visual regions impairs performance in the Visually Guided task. Additionally, widefield calcium signals across cortical regions are less correlated in the Accumulating Towers task. Lower correlations were also observed during more difficult trials and more difficult task epochs. This seems to indicate that higher cognitive loads are supported by decreased neural correlations. A current objective is to replicate this last result with single-cell-resolution data.

Questions that would benefit from theoretical modeling: If higher cognitive loads are indeed supported by decreased correlations, why are neuron-neuron correlations lower in the Accumulating Towers task? Is task-related information shared between brain regions (e.g., via a communication subspace)? How does neural activity coordinate between regions to produce behavior? In these tasks, an individual’s performance can vary from session to session and even between blocks of the same session. Can we identify performance-related neural correlates?


Abstract 2:

Title: The role of the hippocampus in context-dependent decision-making

Abstract: Many decisions depend on context; for example, “which shirt should I wear?” depends on whether you are going to work or a party. Decision-making (DM) thus often requires individuals to evaluate the consequences of multiple actions based on stored contextual memories. How does the mammalian brain make such context-dependent decisions? On the one hand, cellular recordings in rodents have characterized the neural circuit mechanisms involved in DM in multiple frontoparietal brain regions. On the other hand, lesion and inactivation studies have shown that the hippocampus (HC) is necessary for context-specific memory retrieval. Yet, the role of the HC in guiding context-dependent DM is unknown. We developed a virtual-reality T-maze navigation task in which head-fixed mice are required to make context-dependent spatial decisions. In one T-maze context, mice are trained to turn toward a visible turn guide; in the other context, mice are trained to turn away from a turn guide. This task is decomposable into sensory (e.g., visual cues), behavioral (e.g., running speed) and cognitive (e.g., context) components. While mice perform this task, we plan to conduct i) cellular-resolution two-photon imaging of the dentate gyrus (DG) HC subfield, ii) time-dependent optogenetic inactivation of the DG, iii) activity-dependent optogenetic reactivation of context-specific DG neural populations, and iv) electrophysiological recordings of prefrontal brain regions known to be involved in DM (e.g., premotor cortex) during simultaneous optogenetic inactivation or reactivation of DG. We hope these data could contribute to:

  1. Models of HC contextual separation, such as attractor neural networks or autoassociative memory models.
  2. The development of models characterizing how HC contextual separation might drive DM-related activity in cortical brain regions.

With regard to the second point, for instance, HC context representations might gate or modulate the entire DM process itself, such that all DM computations occur in a context-dependent manner. Alternatively, HC contextual separation might gate or modulate only the output of DM-related processes, to decide the most suitable action given the current context. Adjudicating between these alternatives would strongly benefit from a flexible theoretical model that links HC contextual separation to DM computations at different stages of the DM process, which could then be constrained through targeted experimental manipulations.


Abstract 3:

Title: Participation of the Edinger-Westphal nucleus as an attentional gate for accumulating visual evidence in head-fixed mice

Abstract: Many decisions require individuals to accumulate evidence and preserve it in memory to guide their choices toward desirable outcomes. This evidence accumulation process requires multiple components: a gate that tells the brain when to start and stop accumulating evidence, a mechanism to update the information, and a mechanism to retain it in memory until a decision is made. Despite extensive prior research on the neural correlates of visual evidence accumulation, no consensus has been reached about which brain region supports the mechanisms that initiate or stop accumulation. Visual information from the optic nerve reaches the Edinger-Westphal nucleus (EWN), a midbrain region whose activation and inhibition improve and impair attention, respectively. However, the role of the EWN as a gate that starts visual evidence accumulation remains unknown. To address this issue, we will optogenetically silence the EWN while already-trained, head-fixed mice perform a flash accumulation task. During the task, flashes appear on either lateral side and, after a delay epoch, the mice are required to respond to the side with the higher number of flashes. EWN optosilencing will be performed during the whole trial, the cue epoch (accumulation), or the delay epoch (memory), or starting at different times during the cue epoch (gate), to assess whether evidence leakage (an open gate) can be induced. Pupillometry, a measure of attention, will be tracked to serve as an indicator of gate closing, i.e., when accumulation starts. Together with a passive version of the task, in which reward is delivered at either lateral spout regardless of the evidence, this will allow us to dissect the contribution of motor versus attentional pupil diameter changes. Electrophysiological recordings of the EWN, as well as simultaneous EWN optoinactivation with electrophysiological recordings of the visual cortex, will be obtained.

Data type: Mice, behavior, optogenetics, electrophysiology.

Open questions: Identify the components (gate, accumulation, memory) of the accumulation process that are affected during optogenetic perturbations, via modeling of behavior. Implement a drift diffusion model that incorporates an attentional open-close gate to account for choice and bias. Determine whether these components could be encoded by population dynamics (e.g., manifolds). Quantify the contribution of pupil diameter changes to single-unit and population neural activity encoding of accumulation and gate closing.
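As a minimal, hedged sketch of the kind of gated drift-diffusion model mentioned above (all parameters, the gating window, and the leak term are illustrative assumptions, not fitted values), the following Python code simulates single trials in which evidence accumulates only while a hypothetical attentional gate is open.

    # Minimal sketch (not the project's model): a drift-diffusion accumulator whose drift is
    # gated by a hypothetical attentional window [gate_open, gate_close]; evidence outside
    # the window leaks away and does not drive the decision.
    import numpy as np

    def gated_ddm_trial(drift, gate_open, gate_close, t_max=2.0, dt=0.001,
                        noise_sd=1.0, bound=1.0, leak=0.0, bias=0.0, rng=None):
        """Simulate one trial; returns (choice, decision_time, trajectory)."""
        rng = np.random.default_rng() if rng is None else rng
        n = int(t_max / dt)
        x = np.zeros(n)
        x[0] = bias
        for i in range(1, n):
            t = i * dt
            gate = 1.0 if gate_open <= t <= gate_close else 0.0   # attentional gate
            dx = (gate * drift - leak * x[i - 1]) * dt \
                 + gate * noise_sd * np.sqrt(dt) * rng.standard_normal()
            x[i] = x[i - 1] + dx
            if abs(x[i]) >= bound:                                  # bound crossing = commitment
                return np.sign(x[i]), t, x[: i + 1]
        return np.sign(x[-1]), t_max, x                              # otherwise choose by sign

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        choices = [gated_ddm_trial(drift=0.8, gate_open=0.2, gate_close=1.2, rng=rng)[0]
                   for _ in range(500)]
        print("P(rightward choice):", np.mean(np.array(choices) > 0))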


Sensation - coding, sensation, behavior. John Maunsell. Readout and control of spatiotemporal neuronal codes for behavior

Title: Sound encoding of neurons in input and associative layers of auditory cortex

Abstract:

Scientific Topic: Sound encoding of neurons in input and associative layers of auditory cortex

Data Types: 2-photon calcium imaging (GCaMP6s) in L2/3 and L4 of auditory cortex in awake, head-fixed mice.

Open Questions: How is sound information encoded after stimulus offset? How is sound reliably encoded when trial-to-trial variability is so prevalent in these neuronal responses? What are the differences in population encoding from L4 to L2/3 of auditory cortex? The primary auditory cortex processes acoustic sequences for the perception of behaviorally meaningful sounds such as speech. Sound information arrives at its input layer 4, from where activity propagates to associative layer 2/3. It is currently not known whether there is a characteristic organization of neuronal population activity across layers and sound levels during sound processing. Here, we identify neuronal avalanches, which in theory and experiments have been shown to maximize dynamic range and optimize information transfer within and across networks, in primary auditory cortex. We used in vivo 2-photon imaging of pyramidal neurons in cortical layers L4 and L2/3 of mouse A1 to characterize the populations of neurons that were active spontaneously, i.e., in the absence of a sound stimulus, and those recruited by single-frequency tonal stimuli at different sound levels. This dataset allows the calculation of robust receptive fields for each neuron and observation of the moment-to-moment activity of hundreds of neurons in two distinct areas of the auditory cortical circuit (L4 and L2/3). The dataset lets theorists explore the differences in population encoding from L4 to L2/3 of auditory cortex, how sound information is encoded over time, and how sound is reliably encoded in the presence of overt trial-to-trial variability in responses.
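As a minimal sketch of how neuronal avalanches can be extracted from such population imaging data (the threshold, binning, and surrogate data are assumptions, not the authors' pipeline), the following Python code defines avalanches as runs of consecutive time bins with supra-threshold summed activity.

    # Minimal sketch (illustrative assumptions, not the authors' pipeline): detect neuronal
    # avalanches in binned population activity as runs of consecutive time bins whose summed
    # activity exceeds a threshold; avalanche size = total activity within the run.
    import numpy as np

    def detect_avalanches(activity, threshold=0.0):
        """activity: (n_neurons, n_bins) binary or deconvolved events.
        Returns a list of (start_bin, end_bin, size) for each avalanche."""
        pop = activity.sum(axis=0)                # summed population activity per bin
        active = pop > threshold                  # bins that belong to some avalanche
        avalanches, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i
            elif not a and start is not None:
                avalanches.append((start, i - 1, float(pop[start:i].sum())))
                start = None
        if start is not None:
            avalanches.append((start, len(pop) - 1, float(pop[start:].sum())))
        return avalanches

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        events = (rng.random((200, 5000)) < 0.002).astype(float)   # surrogate population raster
        sizes = np.array([s for _, _, s in detect_avalanches(events)])
        print("n avalanches:", len(sizes), "mean size:", sizes.mean())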

Prior analysis:

  • https://www.frontiersin.org/articles/10.3389/fnsys.2019.00045/full — relies entirely on this dataset
  • https://www.nature.com/articles/s41598-020-67819-4 — uses the transgenic portion of the dataset as the C57BL/6 group


MSCZ - MultiScale Circuits of Zebrafish. Florian Engert. Sensorimotor processing, decision making, and internal states: towards a realistic multiscale circuit model of the larval zebrafish brain

Title: The connectome of the larval zebrafish brain

Abstract: While the neural circuits that underlie behavior are of interest to a substantial part of the neuroscience community, there have been very few technical approaches that actually provide this kind of information across all levels at which circuits function, including the level of synaptic connections. We have established an electron microscopy core that is explicitly designed to provide the “wiring diagrams” of neural circuits in an efficient way. Much of our effort over the past 5 years has been to transform serial electron microscopy of large volumes (such as the fish nervous system) from a heroic to a more mundane enterprise. This transformation required innovations in hardware and software to abbreviate all the time-consuming steps in the connectomic pipeline. In particular, we: 1) automated ultra-thin sectioning (using a tape-based approach), 2) automated image acquisition (using a custom multibeam serial electron microscope), 3) automated stitching and registration of the image data on high-performance computing clusters, 4) automated segmentation of neurons and synapses on a GPU cluster, and 5) semi-automated proofreading and rendering of the neural circuits with custom software. Using this infrastructure we have collected tens of thousands of sections losslessly at 30 nm thickness and acquired images of them at lateral resolutions of 4 x 4 nanometers. This voxel size (480 nm³) provides enough detail for human or machine vision methods to trace out the finest aspects of neural connectivity. Acquiring these circuits is most valuable when neuronal connectivity can be associated with cells of particular types; hence the significant benefit of analyzing cell types that have been defined in the fish atlas associated with our overall project. Importantly, these circuit diagrams provide ground truth for testing and refining computational theories of brain function, and are therefore of obvious interest to theorists working on questions and constraints of circuit function.


Osmonauts - Dmitry Rinberg. Cracking the Olfactory Code

Abstract 1:

Title: High-speed volumetric imaging of piriform cortex during odor stimulation

Abstract: High-speed volumetric multiphoton imaging data from piriform cortex layers 2 and 3 were collected using a 16 kHz resonant galvo in paralyzed but awake mice exposed to different odor sets, in which each set parametrically varied odor distance at a particular scale (as measured in an odor space defined by more than 5,000 known odorants, using a PCA reduction of a set of more than 4,000 physicochemical features). These three sets were defined as “global,” “tiled,” and “clustered,” depending upon inter-odor distances. Acquisition volumes spanned 210 µm in the Z axis across PCx L2 and L3. Volumes were split into 6 optical slices, each spanning 35 µm of cortex. Volumes were positioned such that 2 slices resided in L2 and 4 slices resided in L3. This allowed us to monitor similarly sized populations of neurons in L2 and L3, given the approximately 3-fold lower cell density of L3 in posterior PCx. For experiments involving the global, clustered, and tiled odor sets in odor-naïve animals, data were analyzed from 3 animals per odor set. In independent experiments, olfactory bulb inputs to the piriform cortex (which reside in layer 1) were imaged in the same configuration (via homogeneous viral delivery of GCaMP6s to bulb projection neurons); these experiments yielded >500 boutons per imaging field (x 3 mice) for the tiled odor set only.


Abstract 2:

Title: Automated segmentation of ROIs in odor-evoked glomerular imaging

Abstract: We collected spatiotemporal patterns of activity in olfactory bulb glomeruli across multiple odors with one-photon calcium imaging. The data are stacks of 256 x 256 pixel fluorescence images collected with a CCD or CMOS camera at 100 Hz. The odors used for a single set of experiments comprise 8-10 monomolecular odors and 16-25 binary odor mixtures at two different concentration levels. Although multiple algorithms exist for automated segmentation of ROIs in calcium imaging data, applying those algorithms is challenging due to multiple factors that are unique to our experimental preparation. The challenges include the densely packed spatial organization of glomeruli, scattering of fluorescence to neighboring ROIs, and a strong hemodynamic signal that contaminates the neuronal activity-dependent fluorescence changes.


DOPE. Bernardo Sabatini. Towards a unified framework for dopamine signaling in the striatum

Title: Dopamine recordings during a self-timed behavior

Abstract: 

GCaMP6f fiber photometry recordings from genetically defined dopaminergic cell bodies in the substantia nigra pars compacta (SNc), the ventral tegmental area (VTA), and/or dopaminergic axon terminals in the dorsolateral striatum (DLS) were collected from water-deprived, head-fixed mice as they executed a self-timed movement task (n=12 mice). In a separate cohort of animals, dopamine release in the DLS was monitored during the self-timed movement task by fluorescence of one of two novel dopamine indicators, dLight1.1 (n=5 mice) or DA2m (n=4 mice). We additionally co-expressed tdTomato as a control fluorophore to detect optical artifacts. Ongoing body movements were monitored by neck EMG, high-speed video, and a back-mounted accelerometer. Mice were given a 5 µL juice reward if the first lick following a start-timing cue occurred within a reward window (3.333-7 s after the cue). If a mouse first licked before or after the reward window, it was not rewarded for that trial and had to wait the full trial duration before entering a 10 s intertrial interval. Each animal completed up to 26 behavioral sessions with 400-1,500 trials each. Mice learned to target their licking toward the reward window, and we sought to relate the natural variability in the timing of these licks to the dopaminergic signal unfolding during the timing interval. Dopaminergic signals were analyzed by aligning to the cue and first-lick events. For analyses of averaged data, trials were pooled based on first-lick time, and we also repeated these analyses on single-trial data. Two features were apparent in the data: a baseline offset in the dopaminergic signal that was predictive of single-trial movement timing, and a ramping signal reminiscent of a threshold process, in which the rate of ramping toward an apparent threshold level was likewise predictive of single-trial movement time. We quantified the relationship between the dopaminergic signal, ongoing body movements/artifacts, and movement timing with a generalized linear encoding model. We quantified the predictive power of dopaminergic signals on movement timing with two complementary decoding models: 1) a single-trial threshold model, and 2) a generalized linear decoding model whose predictors included the dopaminergic signal as well as other task variables and movement signals.
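As a hedged illustration of the decoding approach described above (the trial-level features, their names, and the surrogate data are hypothetical, not the authors' regressors), the following Python sketch uses scikit-learn to predict single-trial first-lick time from summaries of the dopaminergic signal and a movement covariate.

    # Minimal sketch of a linear decoding model in the spirit described above (feature names
    # are hypothetical, not the authors' regressors): predict single-trial first-lick time
    # from trial-level summaries of the dopaminergic signal and movement covariates.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials = 800
    baseline_dff = rng.normal(0.0, 1.0, n_trials)      # pre-cue dopaminergic offset (a.u.)
    ramp_slope = rng.normal(0.0, 1.0, n_trials)        # rate of rise toward apparent threshold
    emg_power = rng.normal(0.0, 1.0, n_trials)         # ongoing body-movement covariate
    # Surrogate ground truth: earlier licks with higher baseline and steeper ramps.
    first_lick_time = 5.0 - 0.6 * baseline_dff - 0.9 * ramp_slope + rng.normal(0, 0.5, n_trials)

    X = np.column_stack([baseline_dff, ramp_slope, emg_power])
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    r2 = cross_val_score(model, X, first_lick_time, cv=5, scoring="r2")
    print("cross-validated R^2:", r2.mean())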


MouseV1. Kenneth Miller. Understanding V1 circuit dynamics and computations

Title: Fine spatial organization of orientation tuning in mouse visual cortex

Abstract: 

The absence of spatial organization in orientation tuning had long been considered a major feature of rodent primary visual cortex (V1). However, recent experimental discoveries have been revisiting and challenging this view. Population imaging studies have suggested that nearby neurons in layer 2/3 (L2/3) of mouse V1 tend to have stronger tuning similarity than distant neuron pairs, indicating localized spatial clustering of stimulus feature preference (Ringach et al. 2016, Jimenez et al. 2018, Kondo et al. 2016). However, the spatial scale of this clustering is still under debate: it is either spread over hundreds of microns (Ringach et al. 2016) or limited to the scale of tens of microns (Kondo et al. 2016). These differences could reflect distinct scales of local feedforward/recurrent cortical connectivity, so an accurate measurement of the spatial profile of local clustering will shed light on the underlying neuronal circuits and point the way to circuit-based mechanisms of visual processing in rodent V1. Here, using two-photon calcium imaging, we measured the orientation tuning properties of L2/3 neurons in mouse V1. We found significant spatial clustering of tuning, but localized horizontally to only approximately 20 µm, which is typically the average distance between horizontally neighboring neurons. To understand this narrow clustering, we explored a spiking neural network model of L2/3 and L4 of mouse V1. Building on past models with broad recurrent wiring over 200 µm (Rosenbaum et al., 2017; Huang et al., 2019), we additionally considered an excess connection probability over a narrow 20 µm range. A spatially narrow local tuning similarity matching our data emerges even for weak narrow connectivity, effectively adding only a few extra local connections per neuron. Our combined experimental and modeling work argues for a fine spatial scale of wiring between adjacent neurons in mouse V1.
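As a minimal sketch of the connectivity assumption described above (a broad ~200 µm recurrent footprint plus a weak ~20 µm excess; all parameter values are illustrative, not the fitted model), the following Python code samples a random adjacency matrix from such a distance-dependent connection probability.

    # Minimal sketch of the connectivity assumption described above (parameters are
    # illustrative, not fitted values): connection probability combines a broad Gaussian
    # footprint (~200 um) with a weak narrow excess (~20 um) between nearby neurons.
    import numpy as np

    def connection_prob(d_um, p_broad=0.1, sigma_broad=200.0, p_narrow=0.05, sigma_narrow=20.0):
        """Probability of a connection between two neurons separated by d_um microns."""
        broad = p_broad * np.exp(-d_um**2 / (2 * sigma_broad**2))
        narrow = p_narrow * np.exp(-d_um**2 / (2 * sigma_narrow**2))
        return np.clip(broad + narrow, 0.0, 1.0)

    def sample_connectivity(positions_um, rng=None):
        """positions_um: (n, 2) horizontal positions; returns a binary adjacency matrix."""
        rng = np.random.default_rng() if rng is None else rng
        d = np.linalg.norm(positions_um[:, None, :] - positions_um[None, :, :], axis=-1)
        p = connection_prob(d)
        np.fill_diagonal(p, 0.0)                       # no self-connections
        return (rng.random(p.shape) < p).astype(int)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.uniform(0, 400, size=(500, 2))        # 400 x 400 um patch of L2/3
        A = sample_connectivity(pos, rng)
        print("mean out-degree:", A.sum(axis=1).mean())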


MoC3. Rui Costa. Computational and circuit mechanisms underlying motor control

Title: Striatal correlates of locomotion

Abstract:

Our U19 group studies the functional and computational logic of connectivity between motor control centers and the spinal cord and muscle. We are anatomically and functionally characterizing the role of projection-specific populations of corticospinal neurons during particular modes of motor control, based on cell-type-specific connectivity between brain and spinal cord, employing novel imaging and electrophysiological techniques to measure and manipulate functionally and genetically defined neural populations, together with state-of-the-art computational tools. Because even the simplest motor program requires the activation of many neuronal populations across multiple brain areas, we are investigating the contribution of cortical and subcortical areas to the spinal cord and to muscle activity. We aim to dissect the contributions of activity in specific neural populations using closed-loop optogenetic manipulations and to implement a dynamic back and forth between anatomical and functional mapping experiments, computational and conceptual models, and causal testing of predictions.
Previous work from our lab and others indicates that direct and indirect striatal projection pathways are concurrently active during movement initiation, that this activity is action-specific, and that it is needed for proper movement. However, this work was done by measuring activity in each pathway independently. As part of optimizing several imaging parameters, we have collected calcium imaging of striatal projection neurons during spontaneous locomotion in the mouse:

  • Dual-color 2-photon imaging of both direct- and indirect-pathway neurons in dorsolateral striatum
  • Spontaneous locomotion on a running wheel with an encoder for speed output
  • Simultaneous video of the mouse on the wheel

These data are amenable to the application of computational models to help determine the neural dynamics and the relationship between neurons of the two pathways during spontaneous locomotion.



TMM Abstracts

PARK, IL MEMMING (contact); PILLOW, JONATHAN WILLIAM Real-time statistical algorithms for controlling neural dynamics and behavior EB026946

  1. Method for inferring a latent dynamical system and neural state trajectory from spike trains; a real-time machine learning tool for time-series visualization and dimensionality reduction (a rough illustrative sketch follows this list).
  2. Do low-dimensional continuous trajectories explain the spatiotemporal structure in the neural recordings? If so, what is the underlying dynamical system that governs the neural recording? How are the various task variables related to the neural recording via the low-dimensional manifold?
  3. High-dimensional neural time series recorded while the animal is engaged in a simple behavior with a simple stimulus and a small number of randomized task variables.
  4. Continuous recordings at high sampling frequency (500 Hz or higher), sorted spike trains, and multi-unit activity. Our method can run in real time while the recording is happening.
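A rough, hedged illustration of the kind of latent-dynamics question in item 2 is sketched below; it substitutes PCA plus a least-squares linear dynamics fit for the actual real-time inference method, and runs on surrogate Poisson spike counts.

    # Rough illustration only (PCA + linear dynamics, not the authors' real-time method):
    # project binned spike counts to a low-dimensional trajectory and fit x_{t+1} = A x_t.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    counts = rng.poisson(2.0, size=(100, 3000)).astype(float)   # (neurons, time bins) surrogate
    rates = gaussian_filter1d(counts, sigma=5, axis=1)           # smooth to estimate rates

    pca = PCA(n_components=3)
    x = pca.fit_transform(rates.T)                               # (time, latent_dim) trajectory

    # Least-squares fit of the linear dynamics x_{t+1} ~= A x_t.
    X_past, X_next = x[:-1], x[1:]
    M, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
    A = M.T
    print("latent dimensionality:", x.shape[1])
    print("dynamics eigenvalue magnitudes:", np.abs(np.linalg.eigvals(A)))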

DOIRON, BRENT D (contact); SMITH, MATTHEW A; YU, BYRON M Neuronal population dynamics within and across cortical areas EB026953

A major goal of theoretical neuroscience is to develop mechanistic network models that can reproduce key aspects of neuronal activity recorded in the brain. There are two key parts of fitting a network model to neuronal recordings: 1) incisive measures to compare neuronal recordings with the activity produced by network models, and 2) automatic methods to efficiently fit network model parameters to data. We propose a systematic framework using population activity statistics based on dimensionality reduction and a Bayesian optimization algorithm to efficiently fit model parameters to data. The proposed population statistics go beyond the commonly used single-neuron and pairwise statistics and raise the bar for comparing models to data. The Bayesian optimization algorithm fits the parameters using fewer iterations than brute-force methods. We emphasize limits of model capacity, where a given model reproduces some, but not all, of the desired features of neuronal recordings. We used our algorithm to study which aspects of neuronal activity recorded in macaque V4 can be reproduced by classical balanced networks (CBNs) and spatial balanced networks (SBNs). We found that SBNs have better capacity than CBNs in fitting V4 data and discovered interesting trade-offs between different types of activity statistics, thereby revealing limits of model capacity. These insights can be used to guide the development of future network models whose activity resembles neuronal recordings even more closely.
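A minimal sketch of this fitting loop is given below, with a toy stand-in for the network simulation and for the population statistics (the real statistics are dimensionality-reduction-based summaries of V4 activity); it assumes the scikit-optimize package (skopt) is installed for Bayesian optimization.

    # Minimal sketch of the fitting loop described above (toy model and toy statistics, not
    # the actual balanced-network simulations or V4 summary statistics); assumes the
    # scikit-optimize package is installed.
    import numpy as np
    from skopt import gp_minimize

    TARGET_STATS = np.array([0.15, 0.8])   # e.g., mean pairwise correlation, shared dimensionality (toy)

    def simulate_network(params):
        """Stand-in for a spiking-network simulation: map parameters to summary statistics."""
        coupling, input_rate = params
        corr = 0.3 * np.tanh(coupling) + 0.01 * np.random.randn()
        dim = 1.5 * np.exp(-coupling) * input_rate + 0.05 * np.random.randn()
        return np.array([corr, dim])

    def loss(params):
        """Distance between simulated and target population statistics."""
        return float(np.sum((simulate_network(params) - TARGET_STATS) ** 2))

    result = gp_minimize(loss,
                         dimensions=[(0.1, 3.0),   # coupling strength
                                     (0.1, 2.0)],  # external input rate
                         n_calls=40, random_state=0)
    print("best parameters:", result.x, "loss:", result.fun)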


Modeling the dynamics of history dependent neuronal systems at all scales

EB026939

Fidel Santamaria, PI

My lab has developed a mathematical and computational framework to model and analyze history-dependent processes, from the diffusion of molecules inside, on the surface of, and around neurons, to electrical network activity. What I can offer experimentalists are unique tools to study scale-free processes. These are not the traditional scale-free statistical studies that look at long-tailed probability distributions; instead, we can write and solve differential equations that give access to the dynamics of the problem. Our advantage is that we use fractional-order integro-differential equations. These mathematical objects are the natural tool for studying history-dependent phenomena. The types of data I like are long-term recordings, in either resting or active states. These recordings can be fluorescence traces from synaptic or neuronal activity, EEG, or single- or multi-unit recordings. As an example, my collaborator on this grant, Maurice Chacron at McGill, records from weakly electric fish as they receive natural stimuli. We have been able to replicate the nonlinear responses of the neurons he records from using models and also, recently, by implementing neuromorphic circuits.
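As a small, hedged illustration of the fractional-calculus machinery involved (a textbook Grünwald-Letnikov approximation, not the lab's framework), the following Python code computes a fractional derivative of order 0.5 and checks it against the known analytic result for f(t) = t.

    # Minimal sketch (illustrative only, not the lab's framework): Grunwald-Letnikov
    # approximation of a fractional derivative of order alpha, checked against the known
    # result D^alpha[t] = t^(1-alpha) / Gamma(2-alpha).
    import numpy as np
    from scipy.special import gamma

    def gl_fractional_derivative(x, alpha, dt):
        """Grunwald-Letnikov fractional derivative of a sampled signal x(t) with step dt."""
        n = len(x)
        c = np.empty(n)
        c[0] = 1.0
        for j in range(1, n):                      # recursive GL binomial coefficients
            c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
        d = np.array([np.dot(c[: k + 1], x[k::-1]) for k in range(n)])
        return d / dt**alpha

    if __name__ == "__main__":
        dt, alpha = 0.001, 0.5
        t = np.arange(0, 2, dt)
        numeric = gl_fractional_derivative(t, alpha, dt)       # D^0.5 of f(t) = t
        analytic = t**(1 - alpha) / gamma(2 - alpha)
        print("max abs error (excluding first bins):", np.max(np.abs(numeric - analytic)[10:]))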

 

 


HOWARD, MARC W Toward a Theory for Macroscopic Neural Computation Based on Laplace Transform EB022864

We have worked on a theory for how populations of neurons represent and manipulate information. The theory interfaces well with cognitive models, especially for working memory tasks. The theory predicts that neural populations come in pairs. The optimal data for us would be spikes from many simultaneously recorded neurons from more than one brain region during a complex behavior that we can analyze. The model is scale-invariant, so very "slow" tasks (extended over more than a minute per trial) are of special interest.
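A minimal sketch of the core encoding idea, under the assumption that it can be reduced to a bank of leaky integrators whose rate constants tile a logarithmic axis (this is an illustration, not the published model or its inverse-transform step), is given below.

    # Minimal sketch (a sketch under stated assumptions, not the published model): a bank of
    # leaky integrators with rate constants s encodes the Laplace transform of the input
    # history, F(s, t) = integral of f(t') * exp(-s (t - t')) dt'.
    import numpy as np

    def laplace_memory(f, dt, s_values):
        """Integrate dF/dt = -s F + f(t) for each decay rate s; returns (len(s), len(f))."""
        F = np.zeros((len(s_values), len(f)))
        for i, s in enumerate(s_values):
            for t in range(1, len(f)):
                F[i, t] = F[i, t - 1] + dt * (-s * F[i, t - 1] + f[t])
        return F

    if __name__ == "__main__":
        dt = 0.01
        t = np.arange(0, 20, dt)
        f = np.zeros_like(t)
        f[(t > 2.0) & (t < 2.2)] = 1.0                 # a brief input pulse at t ~ 2 s
        s_values = np.geomspace(0.1, 10.0, 16)          # log-spaced rate constants (scale-invariant)
        F = laplace_memory(f, dt, s_values)
        # At the end of the simulation, slowly decaying units still "remember" the pulse.
        print("F(s, T) across the population:", np.round(F[:, -1], 4))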


LYTTON, WILLIAM W (contact); ANTIC, SRDJAN D Embedded Ensemble Encoding EB022903

We have developed a detailed model of signaling within a rodent Layer 5 neocortical pyramidal cell showing NMDA-mediated plateau potentials lasting 100-400 ms with forward- and back-propagating action potentials. These simulations offer the opportunity to reconceptualize the role of the pyramidal neurons as having complex spatiotemporal dendritic properties with localized UP-states that spread to the soma.


DRUCKMANN, SHAUL Dissecting distributed representations by advanced population activity analysis methods and modeling EB028171

The tool we are developing aims to distill simultaneous recordings from neural populations (e.g., from two brain areas) into a spatio-temporal profile of strength of influence. This is the first year of the TMM grant, and accordingly we are very much in the development phase. Our approach defines influence as the ability to predict unexpected deviations in the dynamics of one brain area, the modeled area, from the just-preceding activity in another brain area, the influencing area. In more detail, we first predict the dynamics of the modeled area from its own past history. We then detect deviations from the predicted dynamics and determine whether these deviations can themselves be reliably predicted from the state of the influencing area. There are numerous variants of this general description, such as using linear vs. non-linear predictive models, or modeling the full activity space vs. inferred subsets. We will complement it with non-dynamical approaches that optimize over both populations to find components of maximal correlation, such as canonical correlation analysis. Our current use involves electrophysiology data, though we would like to extend our approach in the future to calcium recordings. In terms of data requirements: signal-to-noise ratios vary a lot between tasks and brain areas, making it hard to define a specific minimal number of neurons, but this method is meant for population recordings, e.g., more than ten neurons per population to be modeled. As it is a sub-single-trial method, repetitions are required, e.g., 50 repetitions of a task condition, or extended recordings in a non-task-based structure.
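A rough sketch of this two-stage logic is shown below; ridge regressions stand in for the actual predictive models, the surrogate data are synthetic, and all variable names are illustrative.

    # Rough sketch of the two-stage logic described above (ridge regressions stand in for
    # the actual predictive models): first predict the modeled area from its own recent
    # history, then ask whether the residual deviations are predictable from the
    # influencing area's just-preceding state.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    def lagged(X, n_lags):
        """Stack n_lags past samples of X as features for predicting the next sample."""
        feats = np.hstack([X[i:len(X) - n_lags + i] for i in range(n_lags)])
        target = X[n_lags:]
        return feats, target

    rng = np.random.default_rng(0)
    T, lags = 5000, 5
    influencer = rng.standard_normal((T, 8))                     # population in area B
    modeled = np.zeros((T, 6))                                    # population in area A
    for t in range(1, T):                                         # A has its own dynamics plus a drive from B
        modeled[t] = 0.8 * modeled[t - 1] + 0.3 * influencer[t - 1, :6] + 0.1 * rng.standard_normal(6)

    # Stage 1: predict area A from its own past, take residuals ("unexpected deviations").
    X_self, y = lagged(modeled, lags)
    self_model = Ridge(alpha=1.0).fit(X_self, y)
    residuals = y - self_model.predict(X_self)

    # Stage 2: can the influencing area's just-preceding state predict those deviations?
    X_infl = influencer[lags - 1:-1]
    score = cross_val_score(Ridge(alpha=1.0), X_infl, residuals, cv=5, scoring="r2").mean()
    print("cross-validated R^2 of residual prediction:", round(score, 3))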


YE, BING (contact); DIERSSEN, MARA New methods and theories to interrogate organizational principles from single cell to neuronal networks EB028159

Our project “New Methods and Theories to Interrogate Organizational Principles from Single Cell to Neuronal Networks” aims to develop a user-friendly modeling toolset to study how single-neuron morphology can determine the connectivity pattern of a network, and to shed light on the rules linking the two. The connectivity patterns of a particular brain region will be estimated by generating a morphological neural network model that uses both neural population data and single-neuron reconstruction data extracted from fluorescent whole-brain images. We have already developed a population analysis tool to compute the location and orientation of each neuron relative to a reference coordinate system. Our tool is able to overcome three limitations that are commonly found in cell detection algorithms: undetected neurons, false positives in axonal regions, and out-of-memory errors that arise while processing whole-brain images. Key features of the software:

  • Open source: the source code of our population analysis tool, mainly written in Python, will be available on GitHub.
  • User friendly: it can be managed through a Graphical User Interface (GUI) or a Command Line Interface (CLI).
  • Cross-platform: it can be executed on Linux and Windows.
  • Big-data oriented: our algorithm is able to compute, in reasonable time, neuronal location and orientation from whole-brain images with a resolution of tenths of a micron per voxel and a memory size on the order of tens of terabytes.
  • Computational tractability: it is able to split whole-brain images into small overlapping 3D images to avoid runtime out-of-memory errors.
  • Computational efficiency: the user can select parallel computing parameters to speed up computation.
  • Hardware scalability: performance scales with the available computing resources, and the tool can be executed on a regular laptop, a workstation, or a computer cluster.
  • Vaa3D visualization: the location of detected neurons can be stored in a file format that allows visualization using the rendering power of the Vaa3D software.

The beta version of the Population Analysis Tool will be released very soon, and 1) we are seeking labs interested in using our software, to evaluate its usability and to identify missing functionalities; 2) we need mesoscopic image datasets taken from different microscopes, to evaluate the robustness of the tool and to provide support for more image formats; and 3) we are interested in whole-brain images with high-density neural labeling.


ENGEL, TATIANA Discovering dynamic computations from large-scale neural activity recordings EB026949

Core brain functions—perception, attention, decision-making—emerge from complex patterns of neural activity coordinated within local microcircuits and across brain regions, with dynamics down to milliseconds. Recently, massively-parallel technologies enabled activity recordings from many neurons simultaneously, offering the opportunity to investigate how activity is orchestrated across neural populations to drive behavior. To reveal dynamic features in these large-scale datasets, computational methods are needed that can uncover neural population dynamics and identify how individual neurons contribute to the population activity. Existing methods rely on fitting ad hoc parametric models to data, which often leads to ambiguous model comparisons and estimation biases, limiting the potential of these methods for scientific discovery. To push these limits, our BRAIN project team develops a broadly applicable, non-parametric inference framework for discovering population dynamics directly from the data without a priori model assumptions. Our non-parametric methods explore the entire space of all possible dynamics in search of the model consistent with the data, leading to a conceptual shift from model fitting to model discovery. This is achieved by extending latent dynamical models to a general form, where the latent dynamics are governed by arbitrary dynamical-systems equations, in which driving forces are directly optimized. Our framework reconstructs population dynamics with millisecond precision on single trials and infers idiosyncratic relationships between single-neuron firing-rates and the population dynamics, revealing heterogeneous contributions of single neurons to circuit-level computations. With our methods, we examine large-scale physiological recordings during decision-making, to reveal how neural activity is coordinated to drive decisions and how functional heterogeneity of single-neuron responses aligns with anatomical organization of decision-making circuits.

A python package is available on GitHub: https://github.com/engellab/neuralflow

 

 


SOMMER, FRIEDRICH T Building analysis tools and a theory framework for inferring principles of neural computation from multi-scale organization in brain recordings EB026955

Abstract 1:

Title: Identifying correlates of behavior in multi-electrode LFP recordings

Oscillations in the local field potential (LFP) have historically been viewed as coarse-grained indicators of behavioral state. A challenge in understanding the LFP is that it is composed of responses of many thousands of cells and that it is dominated by spontaneous activity, not directly coupled to observable behavioral events or stimuli. Using multi-electrode LFP recordings from the hippocampus, we developed a method to extract precise behavioral information that is embedded within spatio-temporal oscillatory patterns. We have recently extended this approach to extract information from signals that are only weakly and intermittently oscillatory. Not only does our approach offer a robust alternative to spike-based brain-machine interfaces, it suggests how large-scale population codes are embedded within brain dynamics that could subserve inter-regional computation and communication.

Our LFP decoding tool would be most useful for groups interested in identifying the behavioral information embedded within a particular brain region of interest. The data would consist of simultaneous LFP recordings from at least a few dozen sites (the more sites the better), in addition to recordings of relevant behavioral variables (e.g. stimulus properties and behavior). The analysis pipeline offers both supervised and unsupervised modes for identifying dependencies between distributed LFP patterns and behavior. While we have applied this to data sampled in the hippocampus at 25 Hz over ~30 minutes, in principle our approach can be applied to recordings at any sampling rate, as long as there is at least some identifiable oscillatory activity within the signal.
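As a hedged sketch of a supervised analysis in this spirit (the frequency band, the ridge regularization, and the surrogate data are assumptions, not the tool's actual pipeline), the following Python code extracts per-channel band-limited amplitude from a multichannel LFP and regresses a behavioral variable onto the resulting pattern.

    # Minimal sketch of a supervised variant of this kind of analysis (band, regularization,
    # and variable names are illustrative, not the tool's actual pipeline): extract theta-band
    # amplitude on each LFP channel and regress behavior onto the resulting pattern.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    def band_amplitude(lfp, fs, low=6.0, high=10.0):
        """lfp: (n_channels, n_samples). Returns instantaneous band amplitude per channel."""
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, lfp, axis=1)
        return np.abs(hilbert(filtered, axis=1))

    rng = np.random.default_rng(0)
    fs, n_ch, n_samp = 25.0, 64, 25 * 60 * 30            # ~30 minutes at 25 Hz, 64 sites
    lfp = rng.standard_normal((n_ch, n_samp))             # placeholder multichannel LFP
    position = np.cumsum(rng.standard_normal(n_samp)) * 0.01   # surrogate behavioral variable

    X = band_amplitude(lfp, fs).T                          # (samples, channels) feature matrix
    r2 = cross_val_score(RidgeCV(alphas=np.logspace(-3, 3, 13)), X, position,
                         cv=5, scoring="r2").mean()
    print("cross-validated R^2 of position decoding:", round(r2, 3))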


Abstract 2:

Title: Multiplicative encoding of position and head orientation in multichannel hippocampal LFP

Previous work has shown that hippocampal theta-band local field potentials (LFPs) robustly encode position in rats navigating a linear track. This encoding scheme becomes visibly salient by applying ICA to the multi-channel LFP, producing position-tuned components reminiscent of place fields. However, the position tuning is absent in the ICA output of 64-channel LFP recordings of rats foraging in an open field. This is surprising because simulations of place cell-generated LFP predict place-tuning in both the linear track and open field. We hypothesized that this disparity arises from the fact that position is jointly encoded with the rat’s orientation. We explored this hypothesis by analyzing (1) simulated LFP of rats in the open field, containing multiplicatively encoded position/orientation; (2) 256-channel recordings of CA1 LFP from rats in the open field. The results show that jointly position/orientation-tuned components are gradually resolvable as more channels are added. Our simulations and experimental analyses are captured in a polished and well-documented set of Jupyter notebooks (our “tool”) that may be of broad interest to analysts of electrophysiological data.
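A minimal sketch of the ICA step described above (scikit-learn's FastICA on placeholder band-limited LFP; not the notebooks' actual pipeline) is shown below; each unmixed component has a spatial map across electrodes and a time course whose tuning to position or orientation can then be tested.

    # Minimal sketch of the ICA step described above (FastICA on band-limited LFP; not the
    # notebooks' actual pipeline): unmix multichannel theta-band LFP into components whose
    # amplitudes can then be related to position and head orientation.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_channels, n_samples = 64, 60000
    lfp = rng.standard_normal((n_channels, n_samples))     # placeholder for theta-filtered LFP

    ica = FastICA(n_components=16, random_state=0, max_iter=500)
    components = ica.fit_transform(lfp.T)                   # (samples, components) time courses
    mixing = ica.mixing_                                    # (channels, components) spatial maps

    # Each column of `mixing` is a spatial pattern across electrodes; the corresponding
    # component time course (or its envelope) can be tested for position/orientation tuning.
    print("component time courses:", components.shape, "spatial maps:", mixing.shape)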


CARLSON, DAVID E Uncovering Population-Level Cellular Relationships to Behavior via Mesoscale Networks EB026937

The overarching goal of this proposal is to learn how neurons’ action potentials, long considered to be a fundamental unit of information, relate to whole-brain spatiotemporal voltage patterns and behavior. To uncover this relationship, we will develop novel computational methods capable of learning networks that relate voltage signals from multiple brain regions, based upon our previously developed explainable machine learning approach. These networks will then be used to stratify neurons into subtypes consistent across a population of subjects, to facilitate the statistical aggregation of data and uncover relationships between multiple scales of neural activity and behavior.

What analytical (Theories, Models and Methods) tools have you developed?

We are developing models of mesoscale network activity from implanted, multi-site electrodes to integrate whole brain activity.

What questions can you answer?

Our immediate goal is to use these techniques to gain a greater understanding of how mesoscale networks relate to neuropsychiatric disorders and to more basic units of information (e.g., neural firing).

What input do you need? (e.g., cellular activity, sub-cellular, sensory input, complex behavior)

Validation of our methods requires multi-region Local Field Potential recordings (also amenable to EEG measurements), ideally with paired behaviors and neural activity.


CHING, SHINUNG Efficient resource allocation and information retention in working memory circuits EB028154

Short-term working memory is critical for all cognition. It is important to fluid intelligence by definition and is disordered in many psychiatric conditions. It is also an ideal model system for studying the link between the dynamics and functions of neural circuits. Short-term storage requires dynamics that are flexible enough to allow continuous incorporation of new information, yet stable enough to retain information for tens of seconds. Much is known about the neuronal substrate of short-term memory. There is a gap, however, in our knowledge of how neuronal resources are efficiently allocated to store multiple items. This gap is particularly striking given that a multi-item memory task (the memory span task) is often used to measure fluid intelligence. Neurons in frontal areas are active during a memory period, and individual neurons are tuned to respond to particular memoranda. It is known that individual cells ramp up or down during a memory period. However, we were surprised to discover in preliminary experiments that 80% of individual cells in memory circuits lose their tuning before the end of a 15 s memory period. This loss of tuning occurs at similar times across repeated trials; a neuron that loses tuning at 3 s in one trial seldom remains tuned for more than 7 s in a subsequent trial, and vice versa. This leads to the question of whether cells with common “drop-out” times are linked together in a subnetwork, similar to the “slot” organization often posited to support multi-item memory. We formulated a theory about how these subnetworks might be organized to enact a form of efficient resource allocation that balances demand for memory capacity against memory duration. The primary goal of our TMM project is to test the validity of this theory, and more generally to probe memory circuits for evidence of functional subnetworks, using a unique combination of long-delay multi-item memory tasks, computational modeling, and analysis. Our project integrates experimental and computational methods, including formalisms from information and control theories, so as to build tight links between (i) the observed phenomenology, (ii) the mathematical consistency of the theory, and (iii) how (i) and (ii) might be reconciled mechanistically in the dynamics of neural circuits.

What analytical (Theories, Models and Methods) tools have you developed?

We are developing bottom-up and top-down circuit-level models to derive new mechanistic understanding of how working memory is encoded. These models are constrained by neuronal biophysics but optimized to meet hypothesized high-level functional objectives associated with working memory.

What questions can you answer?

Our immediate goal is to gain a deeper understanding of how memory resources are encoded and allocated/managed within neural circuits. More broadly, our goal is to develop a general modeling paradigm that can associate dynamics with higher-level circuit function.

What input do you need? (e.g., cellular activity, sub-cellular, sensory input, complex behavior)

Validation of our theory requires recordings of cellular activity alongside behavioral characterizations (working memory performance). Our immediate goals are in the domain of spatial working memory, but it would be of interest to broaden the scope to other memory domains.

What are the data specifications needed for your TMM tool? (e.g. data type, sampling frequency, species type, brain area, modality, cell type, duration of recording)

We plan to validate our theory in NHPs, with recordings from dorsolateral prefrontal cortex and frontal eye fields while animals are engaged in a spatial working memory task.


DAVID, STEPHEN V Tools for modeling state-dependent sensory encoding by neural populations across spatial and temporal scales EB028155

Models of the functional relationship between dynamic sensory stimuli and neural activity form a foundation of research in sensory neuroscience. The advent of modern machine learning methods has introduced the possibility of new and more powerful models of sensory coding. Studies using convolutional neural networks (CNNs) and related models have shown that they can outperform traditional encoding models, in some cases by a substantial margin. In addition to standard applications describing feed-forward coding by single neurons, these methods can be adapted to multi-channel neural data and to characterization of behavioral state-dependent changes in coding. While potentially powerful, CNNs can be challenging to implement and interpret, especially without expertise in computational methods. We have developed the Neural Encoding Model System (NEMS) as a Python-based toolkit for fitting both traditional and machine learning models to sensory neurophysiology data.

NEMS was developed for use in the auditory system, but it can be applied to any system representing information about dynamic extrinsic signals. It employs a modular design that allows elements from traditional encoding models (linear filters, synaptic plasticity, gain control) to be incorporated into artificial neural network models, with broad flexibility in fitting algorithms. Models can be fit using either scipy- or Tensorflow-based backends. A scripting system allows scaling to large datasets and compute clusters. The system also streamlines direct, quantitative comparison of a large family of models on the same dataset and characterization of the functional equivalence of different model architectures.
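
To make the model class concrete, here is a generic linear-nonlinear (LN) encoding-model sketch in plain numpy; it illustrates the kind of model NEMS fits but deliberately does not use the NEMS API, and the simulated stimulus and response are placeholders.

    # Generic linear-nonlinear (LN) encoding-model sketch in plain numpy; it does
    # not use the NEMS API, and the stimulus/response data are simulated placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_t, n_lags = 5000, 20
    stim = rng.standard_normal(n_t)                # e.g., a stimulus envelope
    resp = rng.poisson(1.0, n_t).astype(float)     # placeholder neural response

    # Lagged design matrix (a 1-D stand-in for a spectro-temporal filter)
    X = np.stack([np.roll(stim, lag) for lag in range(n_lags)], axis=1)
    X[:n_lags] = 0.0                               # remove wrap-around samples

    lam = 10.0                                     # ridge penalty (assumed)
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

    pred = np.maximum(X @ w, 0.0)                  # static rectifying nonlinearity
    accuracy = np.corrcoef(pred, resp)[0, 1]       # prediction correlation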

Data types: calcium imaging, single unit, EEG
Sampling frequency: 0.1 Hz to 1 kHz
Time scale: 10s of seconds to hours
Modality/area: auditory system; can be adapted to other sensory and motor systems.


KRAMER, MARK ALAN (contact); EDEN, URI TZVI Measuring, Modeling, and Modulating Cross-Frequency Coupling EB026938

What is being modeled?: Interactions between different frequency brain rhythms.

Description & purpose of resource: We provide a statistical modeling framework to estimate high-frequency amplitude as a function of both the low-frequency amplitude and the low-frequency phase; the result is a measure of phase-amplitude coupling that accounts for changes in the low-frequency amplitude. The proposed method successfully detects cross-frequency coupling (CFC) between the low-frequency phase or amplitude and the high-frequency amplitude, and it outperforms an existing method in biologically motivated examples.
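
A simplified sketch of the modeling idea follows; it uses a cosine/sine phase basis rather than the published spline-based formulation, and the filter bands, trace, and sampling rate are placeholders.

    # Simplified sketch: regress high-frequency amplitude on low-frequency phase
    # (cosine/sine basis) and low-frequency amplitude. Bands, trace, and sampling
    # rate are placeholders; the published method uses a spline-based formulation.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000
    x = np.random.randn(30 * fs)                   # placeholder LFP trace

    def bandpass(sig, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    low = hilbert(bandpass(x, 4, 8))               # low-frequency analytic signal
    high = hilbert(bandpass(x, 80, 120))           # high-frequency analytic signal
    phi_low, a_low, a_high = np.angle(low), np.abs(low), np.abs(high)

    # Columns: constant, cos(phase), sin(phase), low-frequency amplitude
    X = np.column_stack([np.ones_like(a_high), np.cos(phi_low), np.sin(phi_low), a_low])
    beta, *_ = np.linalg.lstsq(X, a_high, rcond=None)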

Spatial scales: tissue

Temporal scales: 10^-3 s to 1 s and 1 s to 10^3 s

This resource is currently: mature and useful in ongoing research

Has this resource been validated?: No

How has the resource been validated?: Details, simulation results, and applications to in vivo data are published.

Key publications (e.g. describing or using resource): A statistical framework to assess cross-frequency coupling while accounting for confounding analysis effects, Nadalin et al eLife 2019;8:e44287
https://elifesciences.org/articles/44287

DOI link to publication describing this resource: https://doi.org/10.7554/eLife.44287

Links: All code to use and further develop this method is available on GitHub.

Keywords: BRAIN, TMM


MAKSE, HERNAN; HOLODNY, ANDREI I  Application of the principle of symmetry to neural circuitry: from building blocks to neural synchronization in the connectome  EB022720

Grant Title: Application of the principle of symmetry to neural circuitry: from building blocks to neural synchronization in the connectome

By: Hernan Makse and Manuel Zimmer.

1. Our analytical tool: We have developed a network-theoretical toolbox to extract the symmetries of the connectome. The symmetries are graph automorphisms, or symmetry permutations, i.e., specific similarities in the connectivity patterns of the connectome that predict synchronization of neural populations. The theory successfully predicted functional building blocks in the C. elegans connectome, such as the circuits governing locomotion (see: https://www.nature.com/articles/s41467-019-12675-8).

2. Questions to answer: Our central hypothesis is that symmetries in connectivity underlie the synchronization of neuronal population activity and can therefore be used to discover functional units within complex connectome data. Our graph-theoretical toolbox will classify the symmetries of all connectomes, thereby identifying neural circuits that potentially form functional building blocks. Using our symmetry finder, we aim to predict which neurons synchronize their activity and then to further test and investigate these structure-function relations by (I) measuring neuronal activity and (II) manipulating the underlying circuits experimentally.

3. What input do you need? We are calling all connectomes. The archetypal example is the complete reconstruction of the C. elegans connectome. Partial reconstructions with a similar level of neuron-level resolution and full connectivity are also needed.

4. Specifications:

4.1. Anatomical data: Connectomes could include larval zebrafish, the larval annelid Platynereis, partial reconstructions of the Drosophila adult and larval brain (e.g., visual system or mushroom body), or partial reconstructions of rodent brains.

4.2. Dynamical data: Alongside these anatomical data, we are looking for single-cell-resolution neuronal activity data that can be acquired from these models, e.g., population-wide calcium imaging data and multi-unit electrophysiological recordings.
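
As a toy sketch of the orbit idea described in item 1 (illustrative only; it assumes the networkx Python package and a tiny placeholder graph rather than a real connectome), graph automorphisms can be enumerated and neurons grouped into orbits, the symmetry-based candidates for synchronized populations:

    # Toy sketch (assumes networkx and a tiny placeholder graph, not a real
    # connectome): enumerate graph automorphisms and group nodes into orbits.
    import networkx as nx
    from networkx.algorithms.isomorphism import GraphMatcher

    G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D")])   # star: hub B, leaves A, C, D

    automorphisms = list(GraphMatcher(G, G).isomorphisms_iter())

    orbits = {}
    for n in G.nodes:
        orbit = frozenset(mapping[n] for mapping in automorphisms)
        orbits.setdefault(orbit, set()).add(n)

    for orbit in orbits:
        print(sorted(orbit))   # here the leaves {A, C, D} form one orbit, the hub {B} another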


MISHNE, GAL Data-driven analysis for neuronal dynamic modeling EB026936

Summary: We are developing methodology for analysis of large-scale neural data, primarily two-photon calcium imaging data. We aim to develop an end-to-end modeling and analysis framework for multi-trial neuronal activity across multiple modalities and spatiotemporal scales:

  1. low-level processing of raw calcium imaging data
  2. mid-level organization of extracted interconnected neuronal time-traces
  3. high-level analysis of evolving network of neurons and behavior over long term learning.

For imaging analysis, we have developed methods for ROI extraction and time-trace demixing of 2p data, and for parcellation of widefield calcium imaging data. Our approach does not directly model calcium dynamics and can be applied to motion-corrected, high-dimensional imaging data. To further validate and extend our approaches, we would be happy to receive cellular-level 1p data, voltage imaging, and spatial transcriptomics datasets.
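
As a minimal, generic illustration of the mid-level organization step (not the lab's actual pipeline), extracted traces can be grouped by their correlation structure; the data, cluster count, and clipping choice below are placeholders:

    # Generic sketch (not the lab's pipeline): organize extracted time traces by
    # clustering their correlation structure.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(2)
    traces = rng.standard_normal((200, 3000))      # neurons x time, e.g., dF/F

    corr = np.corrcoef(traces)
    affinity = np.clip(corr, 0, None)              # keep non-negative similarities

    labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    # 'labels' groups neurons with similar activity as a starting point for the
    # mid-level organization step described above.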

We are developing tools for visualization of dynamics and analysis of a network of neurons as it learns a task (longitudinal studies), with preliminary results in artificial networks. To apply this tool we need cellular activity from a large identified group of neurons during learning, alongside behavior and/or stimulus.

Finally, we are developing tools for unsupervised and semi-supervised behavior annotation. Complex recurring behavior from multiple animals (of the same species) would assist in further developing and validating our approach. For the semi-supervised approach, labels for part of the data and/or spatial trajectories (such as DeepLabCut output) are required.


WITTEN, DANIELA Models and Methods for Calcium Imaging Data with Application to the Allen Brain Observatory EB026908

Abstract for Data-Modeling Match: We have developed a statistical model, algorithm, and corresponding software to estimate the times at which a neuron spiked on the basis of calcium imaging data. We are also developing a model, algorithm, and software to perform inference on these estimated spike times, i.e., to obtain a p-value quantifying how likely it is to have observed such a large increase in fluorescence in the absence of a true spike. Our software is implemented in Python and R and is described here: https://jewellsean.github.io/fast-spike-deconvolution/.

What is the analytical tool you have developed: We have developed a model for spike estimation on the basis of calcium imaging data, and are currently developing an approach to conduct inference on the estimated spikes.  The model is implemented in R and python, and is available at https://jewellsean.github.io/fast-spike-deconvolution/. 

What input do you need?: These methods require calcium imaging data as input. For each neuron, the fluorescence trace should be DF/F transformed.

What are the questions you can answer?: On the basis of calcium imaging data, we can answer the question of "When did the neuron spike?" We are also working to answer the question of "What is the probability of observing such a large increase in fluorescence in the absence of a spike?" (The latter amounts to computing a p-value associated with each spike.)

What are the data specifications needed for your TMM tool?: The TMM tool requires DF/F derived from calcium imaging data, for a single neuron. It can be applied repeatedly to a large number of neurons.  So far we have applied it to data from the visual cortex of mice, from the Allen Brain Observatory. We are in the process of applying it to dopamine neurons. 
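
For orientation only, here is a toy sketch of the expected input and output; it simulates a DF/F-style trace with an assumed AR(1) decay constant and flags large positive residuals as candidate spikes, and it is not the authors' L0-penalized deconvolution algorithm.

    # Toy illustration of the expected input/output, not the authors' algorithm:
    # simulate a DF/F-style trace with an assumed AR(1) decay and flag large
    # positive residuals as candidate spikes.
    import numpy as np

    rng = np.random.default_rng(3)
    gamma, n_t = 0.95, 2000                        # assumed decay constant and length
    true_spikes = rng.random(n_t) < 0.01
    calcium = np.zeros(n_t)
    for t in range(1, n_t):
        calcium[t] = gamma * calcium[t - 1] + true_spikes[t]
    dff = calcium + 0.1 * rng.standard_normal(n_t)       # simulated DF/F input

    residual = dff[1:] - gamma * dff[:-1]                # increase not explained by decay
    est_spike_times = np.where(residual > 0.5)[0] + 1    # crude threshold, for illustration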


SHOUVAL, HAREL ZEEV Learning spatio-temporal statistics from the environment in recurrent networks EB022891

My lab is interested in how circuits of neurons can learn and represent the spatio-temporal dynamics of external stimuli. We have developed models with spiking neurons and local, biophysically plausible learning rules that can accomplish this. In our framework, the ability of circuits to accomplish this task depends on a pre-existing local structure of cortical microcircuits. What we can offer experimentalists is a theoretical framework that can make sense of specific microcircuits in the brain and of specific forms of synaptic plasticity observed experimentally. We make specific predictions about different classes of temporal profiles of cortical cells. We are interested in both electrophysiological recordings and calcium imaging, from in vivo experiments and from slice experiments. These should come from experiments in which animals or slices were exposed to patterns or paradigms with temporal regularity over extended time periods of at least hundreds of milliseconds. What we can offer experimental labs is to use unsupervised dimensionality reduction and clustering methods to classify single cells within the networks, and correlation methods to uncover effective connectivity. We can use these results to help verify or reject our current models, and to generate hypotheses as to the circuit-wide implications of the results. We are also interested in results related to synaptic plasticity in similar types of experiments, especially neuromodulator-dependent synaptic plasticity. Here too we can analyze the data and offer model-tested hypotheses as to the implications of the experimental results for learning in circuits.
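
As a hedged sketch of the offered cell-classification analysis (illustrative; the PSTH matrix, component count, and cluster count are placeholders), trial-averaged temporal profiles can be reduced and clustered as follows:

    # Hedged sketch of the offered cell-classification analysis; data and
    # parameters are placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    psth = rng.random((300, 120))                  # cells x time bins (placeholder)

    z = (psth - psth.mean(axis=1, keepdims=True)) / (psth.std(axis=1, keepdims=True) + 1e-9)
    features = PCA(n_components=10).fit_transform(z)
    classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
    # 'classes' gives candidate temporal-profile classes to compare against model predictions.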


RAJAN, KANAKA Multi-region Network of Networks Recurrent Neural Network Models of Adaptive and Maladaptive Learning EB028166

We in the Rajan lab at Mount Sinai design neural network models constrained by experimental data and reverse engineer them to figure out how brain circuits function in health and disease. Here are two scalable, flexible, and robust tools we have developed through our TMM grant (R01EB028166-01) that we expect to find wide adoption across the broader neuroscience community, particularly in a few U19 consortia. We are already collaborating fruitfully with the BRAIN_COGS consortium at Princeton University and would be delighted to share our collective findings at the upcoming meeting. That said, we are always looking for opportunities to collaborate with experimental labs and are particularly keen to get involved at the ground floor of such collaborations, i.e., having a key role in designing experiments, not only leveraging existing data.

1) The first tool we have developed is named Current-Based Decomposition, or CURBD for short. This powerful new theory-based framework is intended for “in vivo tract tracing” from multi-regional neural activity collected experimentally. CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by experimentally acquired time series measurements, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Different current sources can therefore be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source-de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered brain-wide directed interactions and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs, we ask whether there are conserved multi-region mechanisms across species and identify key divergences.
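
A minimal sketch of the decomposition step described above follows (illustrative only; see the published CURBD code for the actual implementation, and note that the weights, rates, and region indices here are random placeholders):

    # Illustrative sketch of the current-decomposition step; weights, rates, and
    # region indices are random placeholders.
    import numpy as np

    rng = np.random.default_rng(5)
    n_units, n_t = 60, 500
    J = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)   # trained interaction matrix
    rates = np.tanh(rng.standard_normal((n_units, n_t)))             # model unit activity
    regions = {"A": np.arange(0, 20), "B": np.arange(20, 40), "C": np.arange(40, 60)}

    # Input current to region A, split by source region, at every time point
    currents_into_A = {
        src: J[np.ix_(regions["A"], idx)] @ rates[idx]
        for src, idx in regions.items()
    }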

2) We have additionally developed a second data-inspired recurrent neural network (RNN)-based tool, termed TRAKR. This tool enables us to detect state transitions in time series data. Using TRAKR, we can identify neural state transitions that are predictive of animal behavior under different experimental and experiential conditions. For example, we can pick up behavioral, neural, and circuit-mechanistic changes in decision making, changes that occur during shifts in attention or task engagement in multi-task paradigms, changes accompanying learning of such tasks in the lab, and so on. The current implementation of TRAKR works with time series data such as behavioral monitoring from pose-detection methods, EEG, ECoG, LFP, spiking neural data, and calcium fluorescence signals. It is currently a human-in-the-loop method that detects state transitions after correlation with behavior, so behavioral recordings are helpful but not necessary. There is no minimum sampling frequency, but larger datasets are preferred, since RNNs are more robust when supplied with greater amounts of data. The tool is agnostic to species, brain area, and cell type. It is similarly agnostic to modality and works equally well with large amounts of EEG, LFP, and even imaging time series.
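
As a simplified stand-in for error-based transition detection (not the TRAKR implementation; the signal, lag count, training window, and detection threshold below are all assumptions), one can fit a one-step predictor on a reference segment and flag time points where its prediction error jumps:

    # Simplified stand-in for error-based transition detection, not the TRAKR
    # implementation; signal, lags, training window, and threshold are assumed.
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.concatenate([np.sin(0.1 * np.arange(1000)),      # state 1
                        np.sin(0.3 * np.arange(1000))])     # state 2 begins at t = 1000
    x += 0.05 * rng.standard_normal(x.size)

    lags = 10
    X = np.stack([x[i:i + lags] for i in range(x.size - lags)])
    y = x[lags:]
    w, *_ = np.linalg.lstsq(X[:800], y[:800], rcond=None)   # fit on a reference segment

    err = (X @ w - y) ** 2
    smooth_err = np.convolve(err, np.ones(50) / 50, mode="same")
    transition = np.argmax(smooth_err > 5 * smooth_err[:800].mean())   # first flagged index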
