Research Projects and Emerging Audio Technologies
- BACCH™ 3D Sound
- BACCH™ Filters: Optimized Crosstalk Cancellation for 3D Audio Over Two Loudspeakers
- Loudspeaker Directivity: An Ongoing Experimental Survey
- The Princeton Headphone Open Archive (PHOnA)
- Binaural Rendering of Recorded 3D Soundfields
- Individualization of 3D Sound
- 3D Audio for 3D TV
- Advanced Hearing Aids for the Hearing Impaired
BACCH™ 3D Sound (previously called "Pure Stereo 3D Audio™") is a recent breakthrough in audio technology based on BACCH™ Filters and licensed by Princeton University. It yields unprecedented spatial realism in loudspeaker-based audio playback, allowing the listener to hear, through only two loudspeakers, a truly 3D reproduction of a recorded soundfield with uncanny accuracy and detail, and with a level of tonal and spatial fidelity unmatched by even the most advanced and expensive existing high-end audio systems. Learn more about BACCH™ 3D Sound through these 20 Questions and Answers.
BACCH™ Filters are optimized crosstalk cancellation (XTC) filters that allow 3D audio reproduction over a pair of loudspeakers. They yield the maximum crosstalk cancellation level without introducing any spectral coloration to the input signal. An introduction to BACCH™ Filters can be found here.
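To illustrate the underlying idea (and not the proprietary BACCH™ filter design itself), a generic XTC filter can be sketched as a regularized inversion of the 2x2 acoustic transfer matrix between the two loudspeakers and the two ears, performed independently at each frequency bin. The symmetric-geometry assumption and the example path values below are illustrative only:

```python
import numpy as np

def xtc_filter(G_ii, G_ic, beta=0.01):
    """Regularized inverse of the 2x2 speaker-to-ear transfer matrix
    at one frequency bin (generic XTC sketch, not the BACCH design).

    G_ii: complex ipsilateral response (speaker to same-side ear)
    G_ic: complex contralateral response (speaker to opposite ear)
    beta: Tikhonov regularization constant limiting filter gain
    Returns the 2x2 filter matrix H such that C @ H is close to I.
    """
    # Symmetric listening geometry: both ears see the same pair of paths.
    C = np.array([[G_ii, G_ic],
                  [G_ic, G_ii]], dtype=complex)
    # Tikhonov-regularized pseudoinverse: H = C^H (C C^H + beta I)^(-1)
    return C.conj().T @ np.linalg.inv(C @ C.conj().T + beta * np.eye(2))

# Example bin: contralateral path 6 dB down with some phase lag.
g = 0.5 * np.exp(-0.5j)
H = xtc_filter(1.0, g)
crosstalk = np.array([[1.0, g],
                      [g, 1.0]]) @ H
# Off-diagonal entries of `crosstalk` (residual leakage to the
# opposite ear) are driven close to zero; the diagonal stays near 1.
```

The regularization constant is what trades cancellation depth against filter gain (and hence spectral coloration); an optimized design chooses it per frequency rather than as a single constant.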
Loudspeaker directivity is the extent to which loudspeakers focus the sound in a particular direction (typically towards the listener) instead of broadcasting it in all directions around the room. Highly directive loudspeakers are ideal for 3D audio with crosstalk cancellation (XTC), since room reflections (which are weaker when using more directive loudspeakers) directly degrade the level of XTC. Consequently, the 3D3A Lab is conducting detailed measurements of the directivity of various loudspeakers using the lab's anechoic chamber. At present, the database contains the measured directivity data for seventeen loudspeakers.
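A standard single-number summary of such measurements is the directivity index: the ratio, in dB, of the on-axis intensity to the intensity averaged over all measured directions. The sketch below assumes magnitude measurements at equal angular steps in the horizontal plane (a full spherical average would additionally weight each sample by the solid angle it covers):

```python
import numpy as np

def directivity_index_db(pressures):
    """Directivity index in dB from equally spaced polar measurements.

    pressures: array of measured |p| at evenly spaced angles,
               with index 0 taken as the on-axis direction.
    Returns 10*log10(on-axis intensity / mean intensity).
    """
    intensities = np.abs(np.asarray(pressures)) ** 2
    return 10.0 * np.log10(intensities[0] / intensities.mean())

# An omnidirectional source measures identically at every angle,
# so its directivity index is 0 dB.
print(directivity_index_db(np.ones(360)))  # 0.0
```

A highly directive loudspeaker has a large directivity index, which is exactly what reduces the room-reflection energy that degrades XTC at the listening position.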
The Princeton Headphone Open Archive (PHOnA) is a dataset of measured headphone transfer functions (HpTFs) from many different research institutions around the world. Visit this webpage to access the dataset.
Traditional binaural recordings are inherently restricted to the vantage point at which they were made, and they carry the idiosyncratic 3D localization cues of only the individual (or dummy head) used for the recording. However, using an array of microphones, such as the Eigenmike by mh acoustics, the incident soundfield can be extracted, and the binaural signals that a given listener would hear from a chosen vantage point in that soundfield can then be computed numerically. The aim of this research project is to develop tools and techniques to capture and process real 3D soundfields and generate individualized, navigable binaural renderings of the recorded soundfields.
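The final rendering step can be sketched conceptually (this is not the Eigenmike's actual processing chain): once the captured soundfield has been decomposed into plane-wave signals arriving from a set of directions, each plane-wave signal is filtered with the listener's own head-related impulse response (HRIR) for that direction, and the results are summed per ear:

```python
import numpy as np

def render_binaural(plane_waves, hrirs_left, hrirs_right):
    """Sum of direction-wise HRIR convolutions (conceptual sketch).

    plane_waves: (D, N) array of plane-wave signals for D directions
    hrirs_left, hrirs_right: (D, L) arrays of the listener's HRIRs
                             for the same D directions
    Returns (left, right) ear signals of length N + L - 1.
    """
    n_dirs, n_samp = plane_waves.shape
    length = n_samp + hrirs_left.shape[1] - 1
    left = np.zeros(length)
    right = np.zeros(length)
    for d in range(n_dirs):
        left += np.convolve(plane_waves[d], hrirs_left[d])
        right += np.convolve(plane_waves[d], hrirs_right[d])
    return left, right
```

Individualization enters through the HRIR set, and navigability through re-deriving the plane-wave decomposition for a new vantage point before rendering.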
We perceive sound in three dimensions in everyday life. That is, without looking at a sound source, we can tell, with reasonable precision, its location in space relative to us. We can do this because our brains process the sound signals that reach our eardrums in a manner that is unique to each of us. This should not be surprising, as everyone's morphology is different (especially that of the outer ear), and this affects the sound reaching our eardrums in a highly idiosyncratic way. The processing that our brains do is tuned to our unique morphologies, so swapping ears with someone else, for instance, would lead to a disorienting listening experience. To enable certain types of 3D sound reproduction systems, one of the tasks is to devise mathematical models that describe the effects our individual morphologies have on the sound we hear. The current project focuses on this task.
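A classic, deliberately simple example of such a morphological model is the Woodworth spherical-head formula for the interaural time difference (ITD), in which the listener's head radius is the individualized parameter; real models of the outer ear's spectral cues are far richer, so this is illustration only:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head ITD model (illustrative sketch).

    azimuth_deg: source azimuth (0 = straight ahead, 90 = to one side)
    head_radius_m: individualized head radius (default ~ average adult)
    c: speed of sound in m/s
    Returns the interaural time difference in seconds.
    """
    theta = math.radians(azimuth_deg)
    # Path difference = direct chord (a*sin) plus wrap around the
    # sphere (a*theta), divided by the speed of sound.
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to the side (90 degrees) gives roughly 0.65 ms
# for an average-sized head; a larger head gives a larger ITD.
```

Fitting even this one parameter per listener already individualizes the lateral localization cue; full HRTF individualization extends the same idea to the frequency-dependent filtering of the outer ear.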
The 3D3A Lab is developing 3D audio for 3D TVs and other 3D video and cinema applications. The approach relies on advanced loudspeaker and DSP technologies. Visit this webpage to learn about the latest developments.
The 3D3A Lab is developing techniques for improving the sound localization capability of hearing aids for the hearing impaired. More on this will soon be published on this webpage.