Assistant Professor
Brain and Cognitive Sciences
Neuroscience
Goergen Institute for Data Science
University of Rochester


Research Interests
Visual and Semantic Cognition
Learning and Neural Plasticity
Computational Neuroscience

mci (at) rochester (dot) edu
@MCatalinIordan
Google Scholar
she/they

Travel and Presentations

2024
Aug 6-9: CCN, Boston, MA

2023
Jul 31: BCS Summer Seminar, Rochester, NY
Aug 28: BCS Retreat, Canandaigua, NY
Nov 16-19: Psychonomics, San Francisco, CA

marius cătălin iordan

about me
I'm an Assistant Professor in the Brain and Cognitive Sciences & Neuroscience Departments at the University of Rochester. Previously, I was a Postdoctoral Fellow at the Princeton Neuroscience Institute, working with Jon Cohen, Ken Norman, and Nick Turk-Browne. I earned my Ph.D. in Computer Science from the Vision Lab at Stanford University, co-advised by Fei-Fei Li and Diane Beck. My academic journey started at Williams College with a B.A. in Computer Science, Mathematics, and Cognitive Science.

research brief
I'm a computational cognitive neuroscientist studying the elusive link between how the human brain learns & organizes conceptual information into categories, stories, and events, and how we use that information to understand & interact with our complex, noisy world.
lab culture
  • Our lab will always be a safe + inclusive + welcoming space for everyone, including trainees, collaborators, and participants.
  • We believe that diversity of backgrounds, identities, & perspectives greatly strengthens our ability to tackle research as a team.
  • We believe that respect, support, kindness, & work-life balance are prerequisites of a productive lab environment.
  • We believe in collaboration over competition.
  • We place a high value on individually tailored mentorship, and we always strive to learn from one another.


news
12/2023. New Preprint on bioRxiv: Inducing Representational Change in the Hippocampus through Real-Time Neurofeedback. We describe a new way to induce coactivation of competing memories in visual cortex using real-time neurofeedback and machine learning!
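
For intuition, here is a minimal, hypothetical sketch of the kind of closed-loop feedback signal such an approach could compute; the simulated data, decoder, and scoring rule are illustrative stand-ins, not the preprint's actual pipeline.

```python
# Hypothetical illustration only: train a pattern classifier on multivoxel
# data, then score incoming volumes so that feedback peaks when evidence
# for the two competing memories is balanced. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated training set: trials x voxels, labeled 0/1 for the two memories.
X_train = rng.normal(size=(200, 500))
y_train = rng.integers(0, 2, size=200)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def coactivation_feedback(volume):
    """Return a score in [0, 1] that is highest when the decoder sees
    equal evidence for both memories (p close to 0.5)."""
    p = decoder.predict_proba(volume.reshape(1, -1))[0, 1]
    return 1.0 - 2.0 * abs(p - 0.5)

# In the real-time loop, each new preprocessed fMRI volume would be scored
# and the result displayed back to the participant.
print(coactivation_feedback(rng.normal(size=500)))
```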

05/2023. I am grateful to have been awarded the University of Rochester College Course Development Fellowship for the course I will be teaching in Fall 2023: Advanced Topics in Cognitive Neuroscience.

08/2022. I am incredibly happy and grateful to announce that I'll be starting a new adventure in January 2023 as an Assistant Professor at the University of Rochester, with joint appointments in the Brain and Cognitive Sciences & Neuroscience Departments.

05/2022. Presenting a Talk at the Vision Sciences Society (VSS) 2022 Annual Meeting: Sculpting New Visual Concepts into the Human Brain.

02/2022. New Publication in Cognitive Science: Context Matters: Recovering Human Semantic Structure from Machine-Learning Analysis of Large-Scale Text Corpora.

We show that incorporating semantic context into the training procedure of word embedding models improves prediction of empirical similarity judgments and feature ratings.
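
As a toy illustration of the general idea (not the paper's actual pipeline), the sketch below trains word embeddings on a context-specific corpus versus a mixed corpus and compares how well each predicts hypothetical human similarity ratings; all corpora, word pairs, and ratings are made up.

```python
# Toy illustration, not the paper's pipeline: compare how well embeddings
# trained on a context-specific corpus vs. a mixed corpus predict
# (hypothetical) human similarity ratings via Spearman correlation.
from gensim.models import Word2Vec
from scipy.stats import spearmanr

# Made-up corpora: one restricted to a single semantic context ("nature"),
# one mixing in unrelated material.
nature_corpus = [
    ["the", "river", "flows", "past", "the", "forest"],
    ["birds", "nest", "in", "the", "tall", "trees"],
] * 50
mixed_corpus = nature_corpus + [
    ["the", "train", "crosses", "the", "river", "bridge"],
] * 50

pairs = [("river", "forest"), ("birds", "trees"), ("river", "trees")]
human_ratings = [0.7, 0.8, 0.5]  # hypothetical in-context judgments

def model_similarities(corpus):
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0)
    return [model.wv.similarity(a, b) for a, b in pairs]

for name, corpus in [("context-specific", nature_corpus), ("mixed", mixed_corpus)]:
    rho, _ = spearmanr(model_similarities(corpus), human_ratings)
    print(f"{name}: Spearman rho = {rho:.2f}")
```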

11/2021. Presenting a Poster at the Society for Neuroscience (SfN) 2021 Annual Meeting: Sculpting New Visual Concepts into the Human Brain.

05/2021. Presenting a Poster at the Vision Sciences Society (VSS) 2021 Annual Meeting: Context Matters: Recovering Human Visual and Semantic Structure from Machine-Learning Analysis of Large-Scale Text Corpora.

10/2020. New Preprint on bioRxiv: Sculpting New Visual Concepts into the Human Brain.

We describe a new way to provide humans with visual and conceptual knowledge by directly sculpting activity patterns in their brains using fMRI, real-time neurofeedback, and machine learning!
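
As a rough, hypothetical sketch of the closed-loop idea (not our actual experimental code), the example below rewards activity as it approaches an assumed target multivoxel pattern for the new concept; the target and volumes are simulated.

```python
# Hypothetical sketch of the closed-loop idea: reward fMRI activity as it
# approaches a target multivoxel pattern for the new concept. Simulated
# data; not the actual experimental code.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

# Assumed target pattern representing the to-be-sculpted visual concept.
target = rng.normal(size=n_voxels)
target /= np.linalg.norm(target)

def feedback(volume):
    """Cosine similarity between the current volume and the target pattern,
    rescaled from [-1, 1] to [0, 1] for display to the participant."""
    v = volume / np.linalg.norm(volume)
    return (float(v @ target) + 1.0) / 2.0

# Simulated closed loop: one feedback value per TR.
for tr in range(3):
    print(f"TR {tr}: feedback = {feedback(rng.normal(size=n_voxels)):.2f}")
```
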
06/2020. Our team was awarded a Research Grant from the GRAMMY Museum Foundation to investigate the neural hierarchy of audio-motor integration during natural music performance.

Co-PI, $19,758 (33% share). PI: Elise Piazza (Princeton University); Co-PI: Uri Hasson (Princeton University).