[Photo illustration: Gaza bombing with TikTok logo. Nick_ Raille_07/Shutterstock (Licensed)]

What the Israel-Palestine conflict taught us about TikTok’s algorithm

The platform's algorithm has an inequity built into it.


Viola Stefanello

Tech

Published Jun 16, 2021   Updated Jun 18, 2021, 11:28 am CDT

Content moderation is a tricky business even at the best of times. When a social media platform is caught in a virtual crossfire as violence plays out on the ground, it can become downright impossible. Every major social network experienced that for itself in May 2021, when tensions between Israel and Palestine escalated once again into 11 days of violence, military aggression, civil unrest, and chaos. 

Instagram and Facebook removed posts calling attention to the clashes at the al-Aqsa Mosque, erroneously considering them linked to “violence or dangerous organizations.” WhatsApp had to deal with viral messages circulating in Israel warning against “the Palestinian threat,” as well as other misinformation. Twitter temporarily and inexplicably suspended the profile of journalist Mariam Barghouti as she was covering the conflict. And all over the internet, Palestinian activists decried the disappearance of hashtags and posts, the suspension of accounts expressing solidarity with Palestine, and the removal of ads for events supporting the cause.

The youngest of the most popular social media platforms, TikTok, wasn’t spared its own controversies. The ByteDance-owned app played a central role in providing an unprecedented platform for Palestinian voices, which were almost completely absent from mainstream global discourse during previous rounds of the Israeli-Palestinian conflict. But the social network also lent itself extremely well to the kind of misuse and propagandizing that governments have always turned to in times of war.

The Israeli government has spent an enormous amount of time and energy on social media campaigns to shape foreign perceptions of its actions. The Israeli Defence Forces (IDF), one of the world’s most advanced armies, has been memeing on social media for years; it landed on TikTok in September and amped up its presence on the platform as the conflict escalated in May. Its efforts were supported by one of TikTok’s most distinctive features: its notorious preference for generically beautiful dancing girls. Even when they brandish a weapon.

Thin, white, blonde, young, and conventionally attractive, Natalia Fadeev is the kind of girl who would fit right in on any horny teenager’s “For You” page. And in May 2021, as rockets flew between Israel and the Gaza Strip for 11 days, she did. A reservist in the Israeli Defence Forces’ military police, Fadeev rose to popularity on TikTok as part of an unsettling phenomenon: Israeli soldiers posting obvious thirst traps to sway the narrative around a decades-old conflict. 

https://www.tiktok.com/@nataliafadeev/video/6967744066662550785

As the days passed and the two sides counted their dead—13 in Israel; 256 in Gaza, of whom 128 were civilians, according to the U.N.—videos of pretty girls like Fadeev (1.2 million followers) or Yael Deri (1.3 million followers) in full Israeli military uniform reached hundreds of thousands of people as they repeated key IDF talking points. They asked their viewers if they looked like they could really kill innocent civilians and hopped on the latest trends while showing off their weapons. 

https://www.tiktok.com/@nataliafadeev/video/6961394443815587073

“Content that feels more generically appealing spreads faster on the app,” says Os Keyes, a Ph.D. student at the University of Washington who has often looked at how power dynamics play out in the TikTok algorithm. “So, since whiteness is generic and young, pretty women are generic and TikTok dances are generic, the result is that even within a universe where moderators are completely neutral, we would still find this kind of propaganda. It spreads very, very easily.”

This, in turn, has a dramatic effect on viewers even thousands of miles away from the conflict zone: It helps normalize armed forces and their actions. 

“At best, it promotes a kind of depoliticization: People are isolated from having to care. The particular lack of context on TikTok means that people see a post of girls in uniforms dancing, they think it’s a cool dance and they react to it without having a wider context. It’s the reason the IDF engages in these spaces to begin with: To promote an extremely sanitized image of military force and silence voices that can dispute that image,” Keyes adds. “At best you see no evidence to the contrary, and at worst you internalize the messages that come with their propaganda.” 

Even without considering the geopolitical weight Israel pulls in the international arena, the Palestinian National Authority’s resources are obviously no match for the energy and money the Israeli government has been pouring into social media over the past decade. 

Yet, for the first time, regular Palestinians and activists managed to push their narrative online, obtaining considerable international support. Content under the #FreePalestine hashtag on TikTok has been viewed over 6 billion times.

Still, they faced an uphill climb loaded with obstacles. For one, proving atrocities online is simply much harder than whitewashing them. The kind of content that points to the reality of what’s happening on the ground doesn’t benefit from TikTok’s passion for seemingly positive videos. 

“Algorithms don’t like the sort of violent content that comes with documenting forms of mass destruction,” Keyes points out. “So if you’re an activist trying to draw attention to injuries that people are experiencing as a result of military action, it’s much harder to show what harm has been caused—because that constitutes graphic content. Whereas content with soldiers dancing is considered fine even when it’s positively valorizing militarism.”

Content creation, though, wasn’t the only issue Palestinians faced. Content moderation proved even thornier for the Palestinian side, as the Israeli government pressed the platforms to be less than neutral while violence escalated in May.

“Israelis have invested a lot in trying to control their image on social media and directly lobbying the platforms to make sure that their government requests were being met,” says Marc Faddoul, a researcher at UC-Berkeley focusing on algorithmic fairness and geopolitics. 

On May 19, the Israeli State Attorney’s office stated that it asked social media platforms to remove over one thousand posts whose content “constituted a danger to public safety or violated the terms of the platform.”

Although it’s not clear exactly what the posts contained, the results were stark: TikTok removed a staggering 89% of the 258 posts flagged to it. 

“If, as a company, you’re treating moderation as a problem to be solved, then your interest is to get through as many reports as possible, as quickly as possible,” says Keyes. “You are not encouraging moderators to take the time and think about the implications of their practices or the politics behind it—in fact, you’re going to actively avoid that, because it would require your organization to have a political opinion about what’s going on.”  

For its users—many of whom use the app to keep up with the news—this means getting a sanitized view of complex global issues in the shape of apparently innocuous, easy-to-digest videos. 

And since the platform is designed to prioritize content that most people react to affirmatively, and has, at least in the past, actively suppressed content it predicts people won’t want to see, it risks reinforcing offline power dynamics at every turn.

“The risk behind this trend is that automatic flagging models are usually trained on moderation data,” Faddoul explains. “Therefore, if you overflow the moderation pipelines with requests asking to remove pro-Palestinian content, perhaps flagging it as terrorist content, there’s a risk of biasing the algorithm, turning it more systematically against that type of content. Especially if you have enough pressure to make sure that these requests go through.”
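The feedback loop Faddoul describes can be sketched in a few lines of Python. The data, keywords, and scoring function below are entirely invented for illustration; real flagging models are far more complex. The point is only structural: if human moderation decisions, skewed by mass flagging of one side's content, become training labels, a model trained on them learns that side's vocabulary as a removal signal and starts penalizing innocuous new posts.

```python
from collections import Counter

# Hypothetical training set: (post text, was it removed by moderators?).
# Suppose mass flagging pressured moderators into removing mostly
# pro-Palestinian posts, while similar militarism from the other side
# stayed up. These decisions now become training labels.
training = [
    ("soldiers dancing in uniform", 0),     # kept
    ("idf recruitment trend video", 0),     # kept
    ("free palestine protest footage", 1),  # removed after mass flagging
    ("gaza airstrike aftermath clip", 1),   # removed after mass flagging
    ("palestine solidarity march", 1),      # removed after mass flagging
]

removed_words, kept_words = Counter(), Counter()
for text, removed in training:
    (removed_words if removed else kept_words).update(text.split())

def removal_score(text):
    """Crude word-count score: positive means 'resembles removed content'."""
    return sum(removed_words[w] - kept_words[w] for w in text.split())

# A brand-new, perfectly innocuous post inherits the bias in the labels:
print(removal_score("palestine culture festival"))  # positive: flag-leaning
print(removal_score("uniform dance trend"))         # negative: safe-leaning
```

The toy model never saw either new post, yet it scores the word "palestine" as a removal signal purely because biased moderation decisions were fed back in as ground truth, which is exactly the systematic skew Faddoul warns about.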

TikTok did not respond to a request for comment on its algorithm.
