
What We Know About Racial Bias In Police Use Of Force

Police brutality isn’t just a problem when someone winds up dead

Police killings were the cause upon which Black Lives Matter was founded, but they’re only one part of the story of racialized police violence. Nonlethal force — things like shoving, pepper spraying, using batons and Tasers — is another item in the police arsenal, and might even be the more significant way communities experience violence at the hands of the police.

Understanding police use of nonlethal force might help us understand the environment in which police killings happen; if police are racially biased when using lower levels of force, they’re creating an adversarial, confrontational relationship with black people — which can only lead to more killings. Much of what we know about police use of nonlethal force is based on lived experience and anecdotes, but what does the data we do have tell us?

Why is it so hard to do research about the use of force?

Rule No. 1 of any data-based study is that it can only be as good as the data it uses — garbage in, garbage out, in other words. The problem that anyone trying to find information about police use of force will immediately run into is that the data is often, well, garbage.

For one thing, there’s no real centralized national dataset on police use of force in this country. Congress directed the Justice Department to start keeping track of it more than 20 years ago (as one of the few good provisions of the 1994 crime bill), but it wasn’t until 2013 that questions about use of force were even included on the survey that the Justice Department asks local police departments to fill out every few years. This survey isn’t perfect, and the questions are broad — what types of force are authorized, what types are documented, and how many documented uses of force occurred over the past year — but it still would have been something. We could have found out, for instance, whether police are more likely to use force in poor or minority communities, or whether body cams have any impact on use of force.

But when the data from the 2013 surveys came out last year, it turned out to be so incomplete and inconsistent that the only thing it was really useful for was understanding how little we know. While almost every department reported keeping track of the number of times its officers used force, only about half said they actually knew how many times it had happened; the rest said they didn’t know, or declined to answer. And even among the departments that said they knew how many times force was used, 20 percent admitted that their number was just a guess.

To make matters worse, what each department counted as “force” varied wildly. Some, for instance, didn’t require officers to track when they pulled a Taser on a suspect if it wasn’t actually fired, while others did. Some required officers to record when they used chokeholds or beanbag guns; others didn’t. But the biggest reason the data is shoddy is that while the federal government recommends that departments record these actions, there’s no real incentive for them to actually do so. Even if they do keep their own records, there’s also nothing that compels them to turn these over to the Justice Department, or make sure that they’re accurate when they do. After all, transparency invites scrutiny, and from the police department’s perspective, what’s the upside to more scrutiny? If federal funding were contingent on keeping track of data, more departments might do it. But it isn’t.

So what do we know?

Without a comprehensive national database of police use of force, researchers typically fall back on two methods: They convince small groups of police departments that do collect such data to hand it over, or they rely on surveys of civilians. Harvard economist Roland Fryer’s much-publicized (and problematically reported) recent paper used both. Although the headlines focused on Fryer’s surprising finding of no racial bias in police killings (something I broke down last month), the paper also tries to determine whether there is racial bias when police officers use less-lethal forms of force. One set of data comes from New York City’s stop-and-frisk program; the other comes from a national survey of civilians called the Police-Public Contact Survey.

The nice thing about these datasets is that they record individual encounters, including the ones in which police don’t use force as well as the ones in which they do. Each has its limitations, however: the PPCS has only the civilian’s side of the story, with little detail about each encounter, while the stop-and-frisk data has only the police side of the story and covers only New York. Still, combining their results gives us a more complete view, and they say the same thing: Police are more likely to use nonlethal force on black people they encounter than on white people. This holds even after Fryer accounts for a large number of factors, such as the civilian’s age, whether the civilian followed police orders, and whether the encounter occurred in a high-crime area.
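The basic calculation that encounter-level data like this allows is a simple conditional rate: of all recorded encounters with each group, in what share was force used? A minimal sketch, using made-up numbers purely for illustration (these are not figures from either dataset):

```python
# Hypothetical encounter-level tallies -- illustrative only, not real data.
encounters = {"black": 1000, "white": 1000}   # recorded stops per group
force_used = {"black": 180,  "white": 120}    # stops in which any force was used

# Rate of force per encounter, by group.
for race in encounters:
    rate = force_used[race] / encounters[race]
    print(f"{race}: force used in {rate:.0%} of encounters")

# Relative risk: how much more likely force is for one group than the other.
rr = (force_used["black"] / encounters["black"]) / (
    force_used["white"] / encounters["white"]
)
print(f"relative risk: {rr:.2f}")
```

This is only the raw disparity; Fryer’s actual analysis goes further, using regression to adjust these comparisons for encounter characteristics like age, compliance and neighborhood crime levels.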

Another study, out last month from the Center for Policing Equity, found results consistent with Fryer’s using a different kind of dataset and different methods. The CPE collected data on use-of-force incidents from a dozen police departments across the country. But since that data covers only incidents in which force was used, not encounters in which it wasn’t, the researchers can’t simply calculate the odds of police using force on each race, as Fryer did. Instead, they divide the number of times police used force on each race by that race’s arrest rate. (I went in depth on possible problems with this method in my aforementioned article from July.) This approach directly confronts the idea that we see police using force on black people more often simply because black people are arrested more often, not because of racist police: if that were the whole story, dividing use-of-force rates by arrest rates should eliminate the racial differences. But that isn’t quite what happened. Even after taking into account that black and white people get arrested at different rates, police were still more likely to use force on black people than on white people. When the researchers instead divided by arrest rates for violent crime alone, however, much of the disparity disappeared, leaving a muddier picture. Still, the bulk of the evidence in both of these recent studies confirms what we already knew from previous research — that racism plays a role in the police’s decision to use force.
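The CPE-style benchmark described above amounts to one division per group: force incidents divided by arrests. A sketch with made-up numbers (not the CPE’s actual figures) shows how a disparity can survive the adjustment:

```python
# Hypothetical counts -- illustrative only, not the CPE's actual figures.
force_incidents = {"black": 300,  "white": 200}
arrests         = {"black": 2000, "white": 2000}

# Benchmark: force incidents per arrest, by race. If arrest volume alone
# explained the raw difference in force, these two rates would match.
per_arrest = {race: force_incidents[race] / arrests[race] for race in arrests}
print(per_arrest)

disparity = per_arrest["black"] / per_arrest["white"]
print(f"force per arrest, black vs. white: {disparity:.2f}x")
```

In this toy example the gap persists after benchmarking; the CPE found something similar when dividing by all arrests, but much less of a gap when dividing by violent-crime arrests alone, which is why the choice of benchmark matters so much.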

It’s encouraging that researchers have recently made headway in getting hard evidence on nonlethal force, but progress is slow because of the effort it takes individual researchers to collect data. That’s why pushing for a national use-of-force database is one of the policy recommendations of Campaign Zero, an anti-police-brutality organizing group that emerged from the Black Lives Matter movement. We need more studies, but more than that, we need better data: a national, comprehensive database of police use of both nonlethal and lethal force. That would knock down one big barrier to understanding the scope of the problem, and help us get started figuring out how to fix it.