Police Can Now Use Software To Scan Social Media And Calculate Your Threat 'Score'

MTV News checked in with the ACLU to confirm our suspicions that this is definitely not cool.

Some police departments are taking "proactive policing" to a whole new level -- using software to scan people's social media posts for "flagged" words, and, in some cases, to assign people a "threat score" intended to indicate how dangerous someone could become in the event of a police encounter.

The American Civil Liberties Union (ACLU) thinks this is something we should all be very, very worried about.

"We’re seeing police experimenting with social media monitoring software all around the country," Jay Stanley, a Senior Policy Analyst for the ACLU's Speech, Privacy & Technology Project, told MTV News, "which we definitely don’t think they should be doing."

Here's why:

Unreliable Data

"Beware" is the name of a new software system being used by police in Fresno, California to assign citizens a "threat score."

According to a report from the Washington Post, the program searches "billions of data points, including arrest reports, property records, commercial databases, deep Web searches," and social media posts to assign people and addresses color-coded scores (red, green or yellow).

"This is the first time I've seen software that actually assigns a threat score to individuals," Stanley told MTV News, "which creates a whole host of additional problems."

Among those problems are that the algorithms used to scan social media often work by keyword, which Stanley said can be dangerously misleading without context.

"Computers are good at a lot of things," Stanley said, "but interpreting human conversations and dissecting posts to determine whether someone is 'dangerous' is not something that can be automated. The use of keywords is highly unreliable because key words without context are meaningless -- they might be used ironically; violent terms might be referring to the title of a novel, song lyrics or a video game; or you might be quoting someone else."

Stanley also reported that beyond social media keywords, the other types of commercial data these programs use are often highly unreliable. He noted that criminal records can sometimes be attributed to the wrong person because of similarities between names, that the information pulled from commercial data can be very out of date, and that tests have revealed that it's "often just wrong."

He added that the ACLU also finds it problematic that most of these techniques are being used and tested in secret, and that the public has no understanding of how they're created, how threat scores are weighted or what's being scanned for on social media. While you can request details about your credit score, at present there's no way for people in cities like Fresno to request a report on their "threat score."

Stanley pointed out that the consequences of drawing conclusions from such inaccurate data can be extremely dangerous -- or even deadly -- since misinformation on a suspect or even a person calling the police for help could "cause the police to come into an encounter with a highly prejudicial mindset or frightened mindset."

Monitoring Activists

When asked whether these sorts of programs are used to gather data about political activism, Stanley responded that this is also a major concern for the ACLU. He reported that in Fresno, one of his colleagues was able to obtain a list of the keywords being flagged in social media scans, and the contents were alarming.

"We found that the key words being flagged by the software in social media [in Fresno] include things like the hashtag Black Lives Matter and other terms political activists might use, like 'Mike Brown,' 'We organize,' 'Don't shoot' and 'It's time for a change.'"

Although law enforcement agencies in the U.S. have a long history of tracking the activities of political activists, scanning social media for these terms is still pretty dicey territory. Police in Oregon are currently dealing with a federal probe for searching Black Lives Matter mentions on social media in their threat-detection process.

In October 2015, Stanley wrote a post for the ACLU titled "China’s Nightmarish Citizen Scores Are a Warning For Americans," in which he detailed China's alleged system of assigning citizens "political compliance" scores along with their credit scores, which one Twitter user described as "authoritarianism, gamified."

Stanley told MTV News that if we sit back and do nothing, we could be headed in a similar direction in the U.S.

"I don’t think we're anywhere near that kind of system now," he said, "but if people just sit back and ignore things, we could easily see a version of that type of thing. Law enforcement too often count political activities as seditious ... and there's a lot of this happening on federal level, too."

He reported that the ACLU is currently monitoring the situation closely, gathering information and raising the alarm on the risks associated with these kinds of monitoring techniques.

"I think we as a society need to confront the question -- how much intrusion are we going to allow by law enforcement and security agencies into our lives in the name of supposed security?" Stanley said. "Do we really want governments to go down the road of looking into our online lives and making judgments about us?"