Think about the last time you were driving down an unfamiliar street among unfamiliar cars searching for an unfamiliar house.
We perform this incredible feat effortlessly because we immediately recognize novel objects as members of highly familiar
semantic categories (streets, cars, houses). Semantic categories join together items that share common features, affordances,
and meaning, while glossing over their superficial differences, such as visual appearance. As such, semantic categories are a
fundamental building block of our perception and cognition.
I use psychophysics, functional neuroimaging (fMRI), real-time neurofeedback, and machine learning to investigate how
we associate novel stimuli with an appropriate semantic category, how processing of semantic information adapts to task demands,
and how establishing a causal link between neural semantic representations and perception could improve our interactions with the
world. In each of these domains, I leverage advanced, custom-built computational tools to probe, elucidate, and, most recently, modify the contents of the mind.