CITP Seminar: Diag Davenport - Human Bias and Social Algorithms

Princeton School of Public and International Affairs

While the failures of industrial-scale algorithms are often attributed to flaws in machine learning engineering, many of these failures actually stem from something else entirely: the human beings whose behavior generates the data used to build the algorithms. The solutions to these algorithmic problems are therefore as likely to require tools from behavioral economics as from computer science. For example, research shows that prejudice can arise not just from preferences and beliefs, but also from the way people choose. When people behave automatically, biases creep in: quick, snap decisions are typically more prejudiced than slow, deliberate ones, and can produce behaviors that users themselves do not want or intend. As a result, algorithms trained on automatic behaviors can misread users' prejudice: the more automatic the behavior, the greater the error.

We empirically test these ideas in a fully controlled, randomized lab experiment and find that more automatic behavior does indeed lead to more biased algorithms. We also explore the potential economic consequences of this idea by carrying out algorithmic audits of Facebook in its two biggest markets, the US and India, focusing on two algorithms that differ in how users engage with them: News Feed (people interact with friends’ posts fairly automatically) and People You May Know, or PYMK (people choose friends fairly deliberately). We find significant outgroup bias in the News Feed algorithm (e.g., white users are less likely to be shown Black friends’ posts, and Muslim users less likely to be shown Hindu friends’ posts), but no detectable bias in the PYMK algorithm. Together, these results suggest a need to rethink how large-scale algorithms use data on human behavior, especially in online contexts where so much of the measured behavior may be quite automatic.
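The core mechanism described above (the more automatic the behavior, the greater the algorithm's error about what users actually want) can be illustrated with a toy simulation. Everything here is an illustrative assumption, not a value or method from the study: we suppose users truly like outgroup content half the time, that snap decisions inject a fixed bias, and that a naive algorithm simply estimates preferences from observed choice logs.

```python
import random

random.seed(0)

def simulate_choices(n, automatic_share, snap_bias=0.3):
    """Simulate n binary choices about outgroup items.

    Assumed true preference: users like outgroup items 50% of the time.
    With probability `automatic_share`, a choice is made automatically
    (a snap decision), which subtracts `snap_bias` from the like rate;
    otherwise the choice is deliberate and reflects the true preference.
    Returns the like rate a log-trained algorithm would estimate.
    """
    likes = 0
    for _ in range(n):
        p_like = 0.5  # assumed true, unbiased preference
        if random.random() < automatic_share:  # snap decision
            p_like -= snap_bias
        likes += random.random() < p_like
    return likes / n

true_rate = 0.5
for share in (0.0, 0.5, 1.0):
    est = simulate_choices(100_000, share)
    print(f"automatic share {share:.0%}: estimated outgroup liking {est:.3f} "
          f"(error vs. true {true_rate}: {abs(est - true_rate):.3f})")
```

Running the sketch shows the estimated outgroup preference drifting further from the true 50% rate as the share of automatic decisions grows, which is the pattern the abstract describes: an algorithm trained on the resulting logs would learn a prejudice the users do not deliberately hold.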

Bio:

Diag Davenport is a Presidential Postdoctoral Research Fellow at the Princeton School of Public and International Affairs, where he studies various topics at the intersection of big data and behavioral economics. Much of his research has been informed by his industry experience as an economic consultant for corporate litigation and as a data scientist at a variety of organizations, ranging from a small DC startup to the Board of Governors of the Federal Reserve. His research blends a variety of methods to understand the societal impacts of imperfect humans interacting with imperfect algorithms and imperfect institutions.

Before Princeton, Diag earned a Ph.D. in behavioral science from the University of Chicago, an MS in mathematics & statistics from Georgetown University, and bachelor’s degrees in economics and management from Penn State.

To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will not be recorded.

Event Details

Date: January 31, 2023
Time: 12:30 p.m.
Location: Campus Location

Related Events

Catalina Muñoz | Historias para lo que viene: Podcasting for Social Justice in Colombia
Feb 8, 12:00 p.m., 216 Burr Hall (open to students, faculty, staff & guests)
The participation of history in transitional justice processes has tended to be one of setting the record straight by providing objective evidence about past…

James Loxton | The Puzzle of Panamanian Exceptionalism
Feb 15, 12:00 p.m., 216 Burr Hall
In the three decades since the U.S. invasion that overthrew the dictatorship of General Manuel Noriega, Panama has undergone a remarkable transformation. It…

CITP Distinguished Lecture Series: Jon Kleinberg, The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization
Feb 15, 4:30 p.m., Campus Location
Please register here to attend in person. In collaboration with the Department of Computer Science and the Department of Electrical and Computer Engineering…
