Benjamin D. Singer, PhD

Director of Scientific Computing
Princeton Neuroscience Institute
B14B Neuroscience
Princeton University
bdsinger@princeton.edu
I work on neuroimaging analysis methods such as cortical alignment, real-time fMRI correlation and classification, surface-based analysis and visualization, and model-based neural networks in multi-voxel pattern analysis (MVPA). In addition to implementing novel methods, I speed up and streamline neuroimaging analysis algorithms via low-level optimizations and parallelization. The latter is achieved with the help of computer clusters containing hundreds of processors that I help to acquire, write software for, and maintain.
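To give a flavor of the correlation-based classification used in MVPA, here is a minimal sketch in Python with NumPy. The function names and the templates-as-class-means approach are my own simplification for illustration, not code from any of the tools described here:

```python
import numpy as np

def pattern_correlation(pattern_a, pattern_b):
    """Pearson correlation between two voxel activity patterns."""
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def correlation_classify(test_pattern, class_templates):
    """Assign a test pattern to the class whose template (e.g. mean
    training pattern) it correlates with most strongly."""
    scores = {label: pattern_correlation(test_pattern, template)
              for label, template in class_templates.items()}
    return max(scores, key=scores.get)
```

In practice the templates would be mean patterns from held-out training runs, but the core idea is just this: classify by similarity of spatial activity patterns.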

I am currently working on the Princeton FCMA (full correlation matrix analysis) toolbox for high-performance computing clusters.


Selected work:

Neuroimaging Analysis Methods


Vision Science

At the Center for Visual Science, University of Rochester, I worked as a research associate on psychophysics, electrophysiology, and real-time retinal imaging software. That followed doctoral work in color vision psychophysics at UCI Cognitive Science and undergraduate work on the development of perception in infants at Cornell Psychology.

Scientific Computing

Past. A theme throughout my life has been fooling with computers and programming. My main contribution to the work above was (and still is) implementing the algorithms underlying research questions in software: algorithm development, stimulus presentation, device control and communication, parallelization and optimization for multiple processors, and data analysis. I started as a BASIC and Pascal programmer in my teens, writing apps to graph data in my dad's lab; in grad school I wrote 2D and 3D graphics apps in Matlab and C for my thesis work; as a research associate I wrote C++ real-time adaptive optics software running on Macs.

Present. Currently I use a wide variety of languages and tools depending on the task: mostly Matlab and shell scripts, with the occasional bit of MPI and GPU programming. Over the last 8 years or so I've delved deep into clusters, job scheduling, and the internals of Linux, from drivers to shell scripts. Luckily most labs here have Macs as desktop machines, and since I'm a Mac enthusiast, that means opportunities to build more interfaces in Cocoa and Objective-C.

Future. I'm working toward the day when scientists here can run analyses that use hundreds of cores on our cluster, the variety of cores in their desktops, or even the cloud if they want, without needing to know how that power has been brought to bear, and without ever needing to open Terminal.app. We want scientists doing science, not tool-making (unless, like me, they enjoy that part!). There is a compelling story here, one that has already played out in physics, biology, and several other fields: neuroscience will get its big breakthroughs via massive, parallel, flexible, data-soaked computing power. When compute-intensive work is quick and easy, work such as correlating and classifying massive amounts of fMRI data, a whole new level of discovery can take place. If you get meaningful visual feedback from large-scale data analysis in seconds, rather than from hundreds of stdout log files produced by day-long batch jobs that you could launch only after learning a suite of esoteric, site-specific Terminal commands (no finger-pointing here; I probably wrote those commands), my hunch is that things will change for the exceedingly better. It takes lots of work, but it will be worth it, for everyone involved.
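To give a sense of the compute-intensive correlation work described above, here is a toy Python sketch that computes a full voxel-by-voxel correlation matrix in parallel with multiprocessing. This is my own simplified illustration, not the FCMA toolbox itself (which targets MPI clusters), and the function names are made up for the example:

```python
import numpy as np
from multiprocessing import Pool

def correlate_row(args):
    """Correlate one seed voxel's timecourse with every voxel's
    z-scored timecourse; returns one row of the correlation matrix."""
    seed_ts, data_z = args
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    return data_z @ seed_z / len(seed_ts)

def full_correlation_matrix(data, n_workers=4):
    """data: (n_voxels, n_timepoints) array. Returns the
    (n_voxels, n_voxels) Pearson correlation matrix, with rows
    computed in parallel, one seed voxel per task."""
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    with Pool(n_workers) as pool:
        rows = pool.map(correlate_row, [(data[i], z) for i in range(len(data))])
    return np.vstack(rows)
```

At real fMRI scales (tens of thousands of voxels), the same row-wise decomposition is what makes the problem embarrassingly parallel across cluster nodes; the sketch just shows the shape of the idea on one machine.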

I've managed to keep myself programming by, for better or worse, avoiding management and by not delegating the fun stuff!

Software:


My neurotree node.
