
Sabrina M/Flickr (CC BY 2.0)

Your neighborhood watch app is racist

What happens when apps created for 'safety' perpetuate bigotry?


Derrick Clifton


Posted on Oct 16, 2015 | Updated on May 27, 2021, 7:19 pm CDT

What happens when apps created to help ensure personal safety perpetuate bigotry instead?

A number of tech innovations now let users report suspicious or criminal behavior, helping others stay out of harm’s way while collecting potentially useful data for law enforcement. On the surface, these apps sound like a great idea, since the culture of smartphones and social media often displaces real-life interactions focused on community safety. Until now, neighborhood watch groups and precinct meetings with police were the primary venues for discussions about safety near one’s home. Now that there are apps for that, more locals can take part in a virtual neighborhood watch.

But these apps are revealing an ugly truth about the biases of many concerned residents. The problem? They’re enabling and further normalizing racial profiling and the surveillance of Black bodies, buttressing it through the powers of crowdsourcing and technology.

As the Washington Post reports, GroupMe enjoys ever-growing popularity in Georgetown, a posh enclave in the nation’s capital with much to offer in the way of entertainment, shopping, historic landmarks, and upscale living. The app, which its website bills as the “best way to chat with everyone you know,” offers free group messaging, photo sharing, and geotagging, and works on virtually every device, even through SMS.

But in Georgetown, the app isn’t necessarily being used for mundane group chats with roommates about groceries and chores. For almost two years, the locally promoted “Operation GroupMe” has functioned through a partnership with local police and the Georgetown Business Improvement District, ostensibly created to open quicker, more direct lines of communication among all parties. As the Post highlights, roughly 380 members now use the app to report and photograph shoppers in an attempt to deter crime.

There’s just one thing: The community, more often than not, has a serious problem with Black people entering their spaces.

As the Post notes, the messages documented on the app between businesses and community members look something like this:

“AA male. He just left. Headed towards 29th St. About 6 foot. Tats on neck and hand. Very suspicious, looking everywhere.”

“Suspicious shoppers in store. 3 female. 1 male strong smell of weed. All African American. Help please.”

“Suspicious tranny in store at Wear. AA male as female. 6ft 2. Broad shoulders.”

No one can be exactly sure what any of these messages mean by “suspicious,” a word that’s become all too common in conversations about race, class, gender, and profiling. But it’s a meaningful word, one that evinces a knee-jerk negative perception of bodies that aren’t white or that don’t pass as cisgender. Their presence isn’t met with the friendly welcome and professional assistance often afforded most white customers by default, and those momentary, implicit biases trigger quick reports and clandestine photographs seen by community members and police officers alike.

These communications often lack details about any specific behavior, let alone an actual shoplifting attempt, that would merit a 911 call. On the whole, most Operation GroupMe messages use overt or coded language to describe Black people, including the terms “ratchet” and “sketchy,” denoting a type of low-class, no-good person who supposedly has no business moving through a predominantly white and affluent neighborhood.

If that’s not racism, what is?

https://www.youtube.com/watch?v=UG4wo7ValM4

It’d be one thing if the majority of reports led to an actual arrest or police confrontation that confirmed the so-called suspicions of businesses and residents alike. But that’s not the case with platforms such as Operation GroupMe, SketchFactor, GhettoTracker, and other emerging online communities like them. Instead, it’s more about crying wolf and prompting police officers to patrol and enforce laws based on the prevailing racial biases in the business community.


When apps or virtual spaces ostensibly created for “safety” or “open communication with police” become cesspools of racial prejudice and stereotypes, they no longer serve the common good. These spaces work no better than the collection of subreddits dedicated to sharing denigrating images and hate-filled messages about marginalized groups like African Americans. In those cases, technology isn’t used for community uplift but rather to keep those with racial power and privilege amused while reminding everyone else of their subjugated place in society.

If only there were an app for people of color to surveil all the suspicious white people who follow them, profile them, harass them about their movements and intentions, or any other number of racially biased behaviors. To be sure, it wouldn’t be seen as a necessary evil, the masses wouldn’t consider it a form of justice, and it wouldn’t have buy-in from the police.

And that’s a privilege white people in America can continue enjoying—vigilante justice in police meetings, through the use of lethal force, and even with a simple app.

Derrick Clifton is the deputy opinion editor for the Daily Dot and a New York-based journalist and speaker, primarily covering issues of identity, culture, and social justice.

