Last year, a widely cited research paper on racial bias in policing caught the attention of Jonathan Mummolo, an assistant professor of politics and public affairs, and his collaborator, computational social scientist Dean Knox. The study, which made national headlines and was cited in congressional testimony, claimed to find no evidence of anti-Black or anti-Hispanic disparities across fatal police shootings. The study concluded that “White officers are not more likely to shoot minority civilians than nonwhite officers.”
Although the study was peer-reviewed and published in a respected journal, Mummolo and Knox immediately noticed a flaw in the authors’ logic. The authors of the original study tallied only fatal shootings, not all encounters, and they implicitly assumed that white and nonwhite officers encounter minority civilians in equal numbers. When police officers involved in fatal shootings were nonwhite, the study found that the person fatally shot was more likely to be nonwhite.
Yet, without accounting for the races of the officers and civilians in all encounters, whether fatal or not, Mummolo and Knox said, it is impossible to determine the role that race played in officer shootings. If white officers have few encounters with minority civilians, even a small number of fatal shootings of minority civilians could represent a large proportion of those officers’ total encounters with minorities.
To understand why, consider this thought experiment: Imagine a white officer encounters 90 white civilians and 10 Black civilians, while a Black officer encounters 90 Black civilians and 10 white civilians, both under identical circumstances. If both officers shot five Black and nine white civilians, the results would — according to the reasoning of the original study — appear to show no racial bias.
However, once encounter rates are taken into account, one would see that the white officer shot 50% of the Black civilians he or she saw while the Black officer shot 5.6%. Failing to incorporate information on encounter rates could mask racial bias.
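The arithmetic behind the thought experiment can be sketched in a few lines of Python. The counts below are the hypothetical ones from the example above, not real data; the officer labels and the `shooting_rate` helper are illustrative names, not part of any published study:

```python
# Hypothetical counts from the thought experiment: each officer shoots
# 5 Black and 9 white civilians, but their encounter pools differ.
encounters = {
    "white officer": {"Black": 10, "white": 90},
    "Black officer": {"Black": 90, "white": 10},
}
shootings = {
    "white officer": {"Black": 5, "white": 9},
    "Black officer": {"Black": 5, "white": 9},
}

def shooting_rate(officer, civilian_race):
    """Share of encountered civilians of a given race who were shot."""
    return shootings[officer][civilian_race] / encounters[officer][civilian_race]

for officer in encounters:
    rate = shooting_rate(officer, "Black")
    print(f"{officer} shot {rate:.1%} of Black civilians encountered")
# white officer shot 50.0% of Black civilians encountered
# Black officer shot 5.6% of Black civilians encountered
```

Counting only the shootings, the two officers look identical; dividing by encounters reveals the nine-fold difference in per-encounter rates that the original study’s design could not detect.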
“Data on fatal shootings alone cannot tell us which officers are more likely to pull the trigger,” Mummolo said, “let alone account for all relevant differences between incidents to allow us to isolate the role of race.”
Studies on police racism are difficult to conduct and often controversial. One of the challenges is how to isolate race to see if, in cases where all else is held equal, officers treat civilians of one race differently from civilians of another.
Mummolo’s work aims to improve how we measure and understand the potential for racial bias in policing. The ability to amass credible evidence on the relationship between race and officer behavior, Mummolo said, has been hindered by two issues — methodological errors and a lack of reliable data.
Much of the existing literature on policing uses flawed statistical reasoning and inadequate or biased data, Mummolo said. To rectify this issue, he and his co-authors propose statistical methods that more faithfully evaluate police behavior and call on law enforcement agencies to keep detailed records of civilian encounters. His hope is that improved accuracy and transparency of police data will encourage policymakers, officers and civilians to work together to develop a fairer law enforcement system for the nation.
To respond to the flawed policing study, Mummolo and Knox, an assistant professor of operations, information and decisions at the Wharton School of the University of Pennsylvania, co-authored a letter to the journal, the Proceedings of the National Academy of Sciences, explaining why the study’s claims were mathematically unsubstantiated. At first, the journal editors rejected the letter, stating that the errors were merely a matter of preference over how to study the issue, and calling its tone “intemperate.” Frustrated, Mummolo and Knox turned to Twitter, sparking a fiery debate among academics.
“Sometimes errors are very nuanced or even debatable, and these disputes can exist in gray areas where it’s not clear who is right and who is wrong,” Mummolo said. “But sometimes, errors are very clear-cut, provable, and simple matters of logic, and as scientists we need to be able to tell the difference.”
After several months of academic arguments, the authors of the original study ultimately retracted their report, admitting that the language they used was not careful enough, which “directly led to the misunderstanding of [the] research.” Although at least one conservative news outlet claimed the retraction was politically motivated, the authors said their decision was strictly because of issues in the content of the study.
“Policing is receiving unprecedented attention from policymakers and the public, but the scholarship is lagging behind,” said Knox, a frequent research partner of Mummolo’s.
“There are a lot of important questions that we don’t have answers to, and a lot of work that’s less careful than it should be. Our aim is to put rigorous research front and center in ongoing policy debates, push back on flawed science, quantify the limits of existing approaches, and do the most statistically sound research possible.”
Along with flawed approaches, Mummolo, Knox and colleagues found that published studies often rely on incomplete data. Incomplete data make it difficult to compare situations that are equal in all relevant factors except for race.
Police administrative records often omit information essential for research, such as the total number of individuals that officers encounter. Without the total number of potential police-civilian interactions, the records may shed little light on the role of race in officers’ decisions to engage civilians, and estimates of subsequent events, like racial bias in the use of force, can be skewed.
With no federal standardized reporting requirements for law enforcement in the United States, data are inconsistent among different departments and agencies. The lack of federal regulation also allows for the persistence of practices that reduce transparency. For example, many police departments purge their data after five to 10 years, and some law enforcement agencies impose high fees to access their records.
These issues directly inhibit research. “Right now, policing remains a very difficult thing to study across many jurisdictions,” Mummolo said. “We often have to keep our scope of research limited to places where we can actually get the data we need.”
Mummolo hopes the government will play a more prominent role in improving the accuracy of police data. He suggests that the federal government should mandate police departments to adopt a standard set of practices and report basic data in a centralized way that is easily accessible to the general public.
He also hopes that others will use a discerning eye when it comes to research. Studies that rely on biased data can underestimate the role of race in police-civilian encounters, leading researchers to conclude an absence of racial bias or even an anti-white bias. Unless these errors are addressed in a meaningful way, a broader audience of people unfamiliar with the issues may continue to cite studies that do not reflect reality. “It’s hard to blame people for being misinformed when flawed studies continue to be published,” Mummolo said.
Improvements in police data collection, combined with better research methodologies, can help leaders and politicians gain a more complete understanding of how race affects policing and help to change long-standing disparities in how people of color are treated by law enforcement. Mummolo’s research is furthering the national conversation on accountability in law enforcement.
“We are at a time when there is both intense interest in these topics and increasing data availability that are going to allow us to test ideas in better ways than we have in the past,” Mummolo said. “But we still have a long way to go in terms of having the data we need — science needs to keep pushing for more.”
This article was originally published in the University’s annual research magazine Discovery: Research at Princeton.