Current Research

Currently I'm working on several different projects, focusing on novel methods for two-photon microscopy (including volumetric imaging and validation tools), statistical modeling of neural spiking behavior, theoretical analysis of recurrent network properties, and statistical models for correlated, structured signals (e.g., new dynamic filtering methods and models for fMRI). Previous work includes analysis of hyperspectral imagery, computational neural networks, and stochastic filtering.

Neural Anatomy and Optical Microscopy (NAOMi) Simulation

Functional fluorescence microscopy has become a staple for measuring neural activity during behavior in rodents, birds, and fish (and more recently primates!). Recent advances have produced both novel optical set-ups (e.g., vTwINS for volumetric imaging) and more sophisticated algorithms for demixing neural activity from the recorded fluorescence videos; however, methods to validate these improvements are lacking. NAOMi draws on the extensive literature on neural anatomy and knowledge of cellular calcium dynamics to provide simulations of fluorescence data with full ground truth of neural activity for validation purposes.
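The core idea, stripped of all optical and anatomical detail, can be sketched in a few lines: ground-truth spikes drive an exponential calcium-indicator response, producing a fluorescence trace whose underlying activity is known exactly. This is a vastly simplified toy (the spike probability, decay constant, and noise level below are illustrative choices, not NAOMi's parameters):

```python
# Toy sketch of simulation-with-ground-truth (illustrative parameters,
# not NAOMi's): spikes -> calcium kernel convolution -> noisy fluorescence.
import numpy as np

rng = np.random.default_rng(5)
n_frames, dt, tau = 1000, 0.033, 0.5     # frames, frame period (s), indicator decay (s)

spikes = (rng.random(n_frames) < 0.02).astype(float)   # ground-truth spike train
t_kernel = np.arange(0.0, 5 * tau, dt)
kernel = np.exp(-t_kernel / tau)                       # calcium impulse response
calcium = np.convolve(spikes, kernel)[:n_frames]
fluorescence = 1.0 + calcium + 0.05 * rng.standard_normal(n_frames)  # baseline + noise

print(f"{int(spikes.sum())} ground-truth spikes over {n_frames} frames")
```

Because the spike train is known, any demixing or deconvolution algorithm run on `fluorescence` can be scored against exact ground truth.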

Two-Photon Microscopy

Two-photon microscopy (TPM) is a vital tool for recording large numbers of neurons over long time-scales. TPM recordings have allowed researchers to analyze entire networks of neurons in a number of cortical areas during awake behavior. Traditional TPM raster-scans a single slice of neural tissue at relatively high frame rates. To record additional neurons, volumetric imaging has been explored as well; volumetric scanning, however, requires imaging a number of planes sequentially, lowering the overall scan rate. In this area I am collaborating with Dr. David Tank's lab on volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a method for imaging entire volumes with no reduction in frame rate. Specifically, vTwINS records stereoscopic projections of the volume in which each neuron is imaged twice, and the distance between the neuron's two images encodes that neuron's depth. Our novel greedy demixing method, SCISM, can then decode these images and return the neural locations and activity patterns. In addition to volumetric imaging, I am also working on alternative pre-processing techniques to denoise TPM data and to robustly filter out structured fluorescence contamination, yielding more accurate extraction of neural activity.
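The stereoscopic depth encoding can be illustrated with a simple geometric model (an assumption for illustration; the actual vTwINS optics differ in detail): if the two illumination beams are inclined at angles of plus/minus theta from vertical, a neuron at depth z appears twice, separated laterally by d = 2 z tan(theta), so depth can be read directly from the measured separation.

```python
# Minimal geometric sketch of stereoscopic depth encoding (assumed
# two-beam model with half-angle theta; illustrative only).
import math

def depth_from_separation(d_um, theta_deg=15.0):
    """Invert d = 2 * z * tan(theta) to recover depth from image separation."""
    return d_um / (2.0 * math.tan(math.radians(theta_deg)))

# Example: a 40 um separation at an assumed 15 degree half-angle.
print(round(depth_from_separation(40.0), 1))  # prints 74.6
```

Separation grows linearly with depth, which is what lets a single 2D projection carry volumetric information.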

Statistical Modeling of Neural Spiking

Interpreting neural spike trains is a critical step in decoding the relationship between neural activity and external stimuli. Basic probabilistic models of neural firing assume Poisson statistics. Recent work studying the over-dispersion of neural firing, however, has found that the firing statistics are often decidedly not Poisson and may vary between neurons. Subsequent work has sought more flexible models that can account for this variability. In this area I am working on simple yet flexible models of neural firing whose small number of parameters can be learned directly from neural recordings.
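The over-dispersion issue is easy to see numerically. A Poisson process has a Fano factor (variance over mean of spike counts) of exactly one, while a doubly stochastic model, where the firing rate itself fluctuates from trial to trial, produces counts with a Fano factor above one. This sketch uses a gamma-distributed rate (an illustrative choice, yielding negative-binomial counts), not any specific published model:

```python
# Comparing Poisson spike counts with an over-dispersed alternative via
# the Fano factor (variance / mean): 1 for Poisson, > 1 when the rate
# itself fluctuates across trials (gamma rate -> negative binomial counts).
import numpy as np

rng = np.random.default_rng(0)
rate, n_trials = 5.0, 100_000

# Poisson firing: Fano factor ~ 1.
poisson_counts = rng.poisson(rate, size=n_trials)

# Doubly stochastic firing: gamma-distributed rate with the same mean.
fluctuating_rate = rng.gamma(shape=2.0, scale=rate / 2.0, size=n_trials)
overdispersed_counts = rng.poisson(fluctuating_rate)

def fano(counts):
    return counts.var() / counts.mean()

print(f"Poisson Fano factor:        {fano(poisson_counts):.2f}")
print(f"Over-dispersed Fano factor: {fano(overdispersed_counts):.2f}")
```

Both count distributions have the same mean rate, so the difference shows up only in their dispersion, which is exactly what a fitted firing model needs to capture.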

Matrix-normal Models of fMRI

Functional magnetic resonance imaging (fMRI) is a widely used modality for whole-brain imaging in humans. To better understand the cognitive processing taking place in the brain, many methods have been independently developed to infer correlations between activity in different brain areas and across subjects and tasks. To further improve such inference, I, together with Michael Schvartsman and Mikio Aoi, have shown that the most prominent of these models can be placed in a single matrix-normal framework. This observation allows us to connect these models and to devise faster, more flexible, and more accurate inference techniques for fMRI.
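The key structural assumption of a matrix-normal model is that the full covariance of the (vectorized) data matrix factors as a Kronecker product of a row covariance R (e.g., spatial) and a column covariance C (e.g., temporal): vec(X) ~ N(vec(M), C kron R). A minimal numerical sketch, assuming toy dimensions and random covariances, checks this factorization by sampling:

```python
# Sampling sketch of a matrix-normal distribution MN(0, R, C): drawing
# X = L_R Z L_C^T (Z iid standard normal) gives cov(vec(X)) = C kron R,
# where vec() stacks columns. Dimensions and covariances are toy choices.
import numpy as np

rng = np.random.default_rng(1)
n_space, n_time = 4, 3

# Random positive-definite row (spatial) and column (temporal) covariances.
A = rng.standard_normal((n_space, n_space))
R = A @ A.T + n_space * np.eye(n_space)
B = rng.standard_normal((n_time, n_time))
C = B @ B.T + n_time * np.eye(n_time)

L_R, L_C = np.linalg.cholesky(R), np.linalg.cholesky(C)
n_samples = 100_000
Z = rng.standard_normal((n_samples, n_space, n_time))
X = L_R @ Z @ L_C.T                                   # batched matrix products
vecs = X.transpose(0, 2, 1).reshape(n_samples, -1)    # column-stacked vec(X)

empirical = np.cov(vecs, rowvar=False)
kron_cov = np.kron(C, R)
rel_err = np.linalg.norm(empirical - kron_cov) / np.linalg.norm(kron_cov)
print(f"relative error vs. C kron R: {rel_err:.3f}")
```

The practical payoff of the factorization is that one never needs to form or invert the full (n_space * n_time)-dimensional covariance, which is what makes inference in this framework fast.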

Analysis of Recurrent Neural Networks

Interconnected networks of simple nodes have been shown to have computational abilities far beyond the sum of their individual neurons. Understanding how network connectivity facilitates this increase in computational capacity has become increasingly important, in particular for relating better-understood theoretical network models to biological neural systems. In this area I've worked both on mathematical models of networks that compute solutions to various optimization problems and on deriving theoretical bounds on the short-term memory (STM) of linear neural networks. In particular, I've developed theory for single-input networks with sparse inputs and for multiple-input networks with either sparse or low-rank inputs.
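The STM setup for linear networks can be made concrete in a few lines (a toy instance, not the papers' exact analysis): with dynamics x[t] = W x[t-1] + z u[t], the state after T steps is a linear encoding of the input history, so past inputs can be read out by solving a linear system against the "memory matrix" whose columns are W^(T-1-t) z:

```python
# Toy short-term-memory demo for a linear recurrent network: the final
# state x[T] = sum_t W^(T-1-t) z u[t] encodes the input history, which a
# least-squares readout recovers exactly while T <= number of neurons.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, T = 20, 10

# Orthogonal connectivity scaled below 1 keeps the dynamics stable.
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
W = 0.95 * Q
z = rng.standard_normal(n_neurons)        # input feed-in vector

u = rng.standard_normal(T)                # input history to remember
x = np.zeros(n_neurons)
for t in range(T):
    x = W @ x + z * u[t]

# Memory matrix: column t maps u[t] into the final state.
M = np.column_stack([np.linalg.matrix_power(W, T - 1 - t) @ z for t in range(T)])
u_hat, *_ = np.linalg.lstsq(M, x, rcond=None)
print(np.allclose(u_hat, u))              # prints True
```

The interesting regime, and the subject of the theoretical bounds, is T much larger than the number of neurons, where exact recovery is impossible in general but sparse or low-rank input structure can be exploited.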

Sparsity-aware Stochastic Filtering

While the majority of work on inferring and learning sparse signals and their representations has focused on static signals (e.g., block processing of video), many applications with non-trivial temporal dynamics must be treated in a causal setting. I work to extend the ideas of sparse signal estimation to the realm of dynamic state tracking. The foundational formulation of the Kalman filter does not trivially extend to regimes with sparse signal and noise models, due to the loss of Gaussianity in the state statistics as well as the impracticality of assuming linear dynamics or retaining full covariance matrices; other methods are therefore needed. I am currently developing a series of algorithms based on probabilistic modeling that yield fast updates for consecutive sparse state estimation. As an extension, similar methods can be applied to spatially correlated signals, resulting in more general, efficient multi-dimensional stochastic filtering techniques for correlated sparse signals.

Hyperspectral Imagery (HSI)

The sparse coding framework is particularly well suited to remote imaging using HSI. HSI uses many more spectral measurements than other imaging modalities (e.g., multispectral imaging (MSI), which typically takes ~8-12 spectral measurements), capturing data at 200-300+ wavelengths spanning the infrared to ultraviolet ranges. This level of spectral detail allows HSI to capture much richer information about the materials and features present in a scene. To discover the materials in a dataset, we can perform sparsity-based dictionary learning: an unsupervised method that extracts the spectra corresponding to different materials using only the basic assumption that few pure materials are present in any voxel. In addition to spectral demixing, I also use sparsity-based inference procedures and the learned dictionaries for unmixing, classification, and other inverse problems that arise in the use of HSI data. In particular, I have focused on spectral super-resolution of multispectral measurements to hyperspectral-level resolutions.
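The super-resolution idea can be sketched as a toy inverse problem (an illustrative setup with random "material spectra," not the published method): an MSI measurement y = P h averages a hyperspectral spectrum h into a few broad bands; if h is a sparse combination of dictionary spectra D, it can be recovered by solving min over a of (1/2)||y - P D a||^2 + lam ||a||_1 and setting h = D a:

```python
# Toy sparsity-based spectral super-resolution: recover a 240-band
# spectrum from 12 broad-band (MSI) measurements via l1-regularized
# coding in a dictionary of material spectra (random here, for illustration).
import numpy as np

rng = np.random.default_rng(4)
n_hsi, n_msi, n_atoms, k = 240, 12, 50, 2

# P: each MSI band averages a contiguous block of HSI wavelengths.
block = n_hsi // n_msi
P = np.zeros((n_msi, n_hsi))
for b in range(n_msi):
    P[b, block * b:block * (b + 1)] = 1.0 / block

# D: unit-norm dictionary atoms standing in for material spectra.
D = rng.standard_normal((n_hsi, n_atoms))
D /= np.linalg.norm(D, axis=0)

a_true = np.zeros(n_atoms)                # k-sparse ground-truth mixture
a_true[rng.choice(n_atoms, k, replace=False)] = 1.0
h_true = D @ a_true
y = P @ h_true                            # broad-band MSI measurement

# ISTA for min_a 0.5*||y - P D a||^2 + lam*||a||_1, then h = D a.
G = P @ D
L = np.linalg.norm(G, 2) ** 2
lam = 1e-3 * np.max(np.abs(G.T @ y))
a = np.zeros(n_atoms)
for _ in range(2000):
    a = a - (G.T @ (G @ a - y)) / L
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
h_hat = D @ a

residual = np.linalg.norm(P @ h_hat - y) / np.linalg.norm(y)
print(f"measurement-domain relative residual: {residual:.4f}")
```

The recovery is only as good as the dictionary, which is why dictionary learning on real hyperspectral libraries, rather than the random atoms used here, is the other half of the pipeline.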