Benjamin D. Singer, PhD

[Photo of Ben]
Director of Scientific Computing
Princeton Neuroscience Institute
B14B Neuroscience
Princeton University
bdsinger@princeton.edu
I work on neuroimaging analysis methods such as cortical alignment, real-time fMRI correlation and classification, surface-based analysis and visualization, and model-based neural networks in multi-voxel pattern analysis (MVPA). In addition to implementing novel methods, I work to speed up and streamline neuroimaging analysis algorithms via low-level optimizations and parallelization. The latter is achieved with the help of computer clusters containing thousands of processors, which I help to acquire, write software for, and maintain.
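As a small illustration of the kind of parallelization involved, here is a minimal sketch of voxelwise correlation split across worker processes. It is not code from any of the projects above; the array shapes, worker count, and random data are stand-in assumptions.

    # Sketch: correlate every voxel's time course with a reference time
    # series, splitting the voxels across worker processes. Shapes and
    # worker count are illustrative assumptions, not real-data values.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def correlate_chunk(args):
        """Pearson correlation of each row of `chunk` with `reference`."""
        chunk, reference = args
        # z-score along time so a dot product yields the correlation
        c = (chunk - chunk.mean(axis=1, keepdims=True)) / chunk.std(axis=1, keepdims=True)
        r = (reference - reference.mean()) / reference.std()
        return c @ r / len(reference)

    if __name__ == "__main__":
        n_voxels, n_timepoints, n_workers = 100_000, 200, 8  # assumed sizes
        data = np.random.randn(n_voxels, n_timepoints)       # stand-in for fMRI data
        reference = np.random.randn(n_timepoints)            # stand-in for a task regressor
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            results = pool.map(correlate_chunk,
                               [(c, reference) for c in np.array_split(data, n_workers)])
        correlations = np.concatenate(list(results))
        print(correlations.shape)  # (100000,)

The same pattern scales from a multicore desktop to a cluster node; only the chunking and the executor change.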

Selected work:

Neuroimaging Analysis Methods


Vision Science

At the Center for Visual Science, University of Rochester, I worked as a research associate on psychophysics, electrophysiology, and real-time retinal imaging software, following doctoral work in color vision psychophysics at UCI Cognitive Science and undergraduate work on the development of perception in infants at Cornell Psychology.

Scientific Computing

Past. A theme throughout my life has been fooling with computers and programming. My main contribution to the work above was (and still is) implementing the algorithms underlying research questions in software: algorithm development, stimulus presentation, device control and communication, parallelization and optimization for multiple processors, and data analysis. I started as a BASIC and Pascal programmer in my teens, writing apps to graph data in my dad's lab; then wrote 2D and 3D graphics apps in Matlab and C in grad school for my thesis work; and later wrote C++ real-time adaptive optics software running on Macs as a research associate.

Present. HPC work is heavy on shell scripting, parallel programming, job scheduling, and delving into the internals of Linux, from drivers to shell scripts. Luckily most labs here have Macs as desktop machines, and as a Mac enthusiast I see that as an opportunity to build intuitive graphical interfaces for powerful backends. But science doesn't wait for pretty interfaces, so that work has to be done on my own time.
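For a flavor of the job-scheduling side, here is a hedged sketch that submits one analysis job per subject, assuming a Slurm scheduler. The subject IDs, the analyze.py script, and the resource requests are made-up placeholders, not a description of any particular pipeline or cluster.

    # Sketch: submit one batch job per subject to a Slurm scheduler.
    # Subject IDs, script name, and resource requests are placeholders.
    import subprocess

    subjects = ["sub01", "sub02", "sub03"]  # hypothetical subject IDs

    for subject in subjects:
        batch_script = f"""#!/bin/bash
    #SBATCH --job-name=analyze_{subject}
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16
    #SBATCH --time=02:00:00
    #SBATCH --output={subject}.%j.out

    python analyze.py --subject {subject}
    """
        # sbatch reads the job script from stdin when no file is given
        subprocess.run(["sbatch"], input=batch_script, text=True, check=True)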

Future. I'm working towards the day when we can run analyses that use thousands of cores on a cluster (i.e., "in the cloud"), or within our desktops, without needing to know how that power has been brought to bear at the level of detail we need to know today. Graduate students now know their smartphones better than a traditional computer, yet to get their work done they need to live part-time in the 1970s, i.e., on the command line. We want scientists doing science, not tool-making (unless, like me, they enjoy that part). There is a very compelling story here, one that has worked its way through physics, biology, and several other fields. Computational neuroscience is getting its big breakthroughs via massive, parallel, flexible, data-intensive computing power, but the pace is being slowed by what might be called the Science REPL: retrofitting research to HPC systems carries a huge up-front cost, and weeks can pass between inspiration and results in the form of text output spread across hundreds of log files. That latency stunts the spirit of exploration and discovery, and there's lots of room for improvement not only in compute time, but in reducing the impedance mismatch between text-based input and processing on the one hand, and graphical interfaces and visualization of results on the other.

I've managed to keep myself programming by, for better or worse, avoiding management and by not delegating the fun stuff!

Software:


My Neurotree node.
