Why dangerous content thrives on Facebook and TikTok in Kenya


NAIROBI — The shooter approaches from behind, raising a pistol to his victim’s head. He pulls the trigger and “pop,” a lifeless body slumps forward. The shot cuts to another execution, and another.

The video was posted on Facebook, in a large group of al-Shabab and Islamic State supporters, where different versions were viewed thousands of times before being taken down.

As Facebook and its competitor TikTok grow at breakneck speed in Kenya, and across Africa, researchers say the tech companies are failing to keep pace with a proliferation of terrorist content, hate speech and false information, while taking advantage of weak regulatory frameworks to avoid stricter oversight.

“It’s a deliberate choice to maximize labor and profit extraction, because they view the societies in the Global South primarily as markets, not as societies,” said Nanjala Nyabola, a Kenyan technology and social researcher.

About 1 in 5 Kenyans use Facebook, whose parent company renamed itself Meta last year, and TikTok has become one of the country’s most downloaded apps. The prevalence of violent and inflammatory content on the platforms poses real risks in this East African nation, as it prepares for a bitterly contested presidential election next month and contends with a resurgent al-Shabab.

“Our approach to content moderation in Africa is no different than anywhere else in the world,” Kojo Boakye, Meta’s director of public policy for Africa, the Middle East and Turkey, wrote in an email to The Washington Post. “We prioritize safety on our platforms and have taken aggressive steps to fight misinformation and harmful content.”

TikTok’s head of government relations and public policy in sub-Saharan Africa, Fortune Mgwili-Sibanda, also responded to The Post by email, writing: “We have thousands of people working on safety all around the world — and we’re continuing to expand this function in our African markets in line with the continued growth of our TikTok community on the continent.”

The companies’ content moderation strategy is two-pronged: artificial intelligence (AI) algorithms provide a first line of defense, backed up by human moderators. But Meta has admitted that it’s challenging to teach AI to recognize hate speech in multiple languages and contexts, and reports show that posts in languages other than English often slip through the cracks.

In June, researchers at the London-based Institute for Strategic Dialogue (ISD) released a report outlining how al-Shabab and the Islamic State use Facebook to spread extremist content, like the execution video.

The ISD’s two-year investigation revealed at least 30 public al-Shabab and Islamic State propaganda pages with nearly 40,000 combined followers. The groups posted videos depicting gruesome assassinations, suicide bombings, attacks on Kenyan military forces and Islamist militant training exercises. Some content had lived on the platform for more than six years.

Reliance on AI was a core problem, said the report’s co-author, Moustafa Ayad, because bad actors have learned how to game the system.

If the terrorists know the AI is looking for the word jihad, Ayad explained, they can “split up J.I.H.A.D with periods in between the letters, so now it’s not being read properly by [the] AI system.”
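As a rough illustration of the evasion Ayad describes, here is a minimal, hypothetical Python sketch (not the platforms’ actual systems; the watchlist and function names are invented for this example). It shows how a verbatim keyword filter misses “J.I.H.A.D” while a simple normalization pass that strips non-letter characters still catches it. In practice, determined actors adapt to normalization tricks too, which is part of why the platforms also rely on human review.

```python
# Hypothetical sketch, not Meta's or TikTok's real moderation pipeline.
# Shows why a verbatim keyword filter misses obfuscated terms like "J.I.H.A.D".
import re

BLOCKED_KEYWORDS = {"jihad"}  # illustrative watchlist, invented for this example


def naive_flag(text: str) -> bool:
    """Flag text only if a blocked keyword appears verbatim (case-insensitive)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)


def normalized_flag(text: str) -> bool:
    """Strip everything but letters before matching, so punctuation can't hide a keyword."""
    collapsed = re.sub(r"[^a-z]", "", text.lower())
    return any(keyword in collapsed for keyword in BLOCKED_KEYWORDS)


post = "Join the J.I.H.A.D today"
print(naive_flag(post))       # False: the periods break the verbatim match
print(normalized_flag(post))  # True: normalization recovers the underlying keyword
```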

Ayad said most of the accounts flagged in the report have now been removed, but similar content has since popped up, such as a video posted in July featuring Fuad Mohamed Khalaf, an al-Shabab leader wanted by the U.S. government. It garnered 141,000 views and 1,800 shares before being removed after 10 days.

Terrorist groups can also bypass human moderation, the second line of defense for social media companies, by exploiting gaps in language and cultural expertise, the report said. Kenya’s official languages are English and Swahili, but Kenyans speak dozens of other tribal languages and dialects, as well as the local slang, Sheng.

Meta said it has a 350-person multidisciplinary team, including native Arabic, Somali and Swahili speakers, that monitors and handles terrorist content. Between January and March, the company claims to have removed 15 million pieces of content that violated its terrorism policies, but it did not say how much terrorist content it believes remains on the platform.

In January 2019, al-Shabab attacked the DusitD2 complex in Nairobi, killing 21 people. A government investigation later revealed that the attackers had planned the assault using a Facebook account that went undetected for six months, according to local media.

During Kenya’s last election in 2017, journalists documented how Facebook struggled to rein in the spread of ethnically charged hate speech, an issue researchers say the company is still failing to address. Adding to their worries now is the growing popularity of TikTok, which is also being used to inflame tensions ahead of the presidential vote on August 9.

In June, the Mozilla Foundation released a report outlining how election-related disinformation has taken root on TikTok. The report examined more than 130 videos from 33 accounts that had been viewed more than 4 million times, finding ethnic-based hate speech, as well as manipulated and false content that violated TikTok’s own policies.

One video clip mimicked a detergent commercial in which the narrator told viewers that the “detergent” could eliminate “madoadoa,” including members of the Kikuyu, Luhya, Luo and Kamba tribes. Interpreted literally, “madoadoa” is an innocuous word meaning blemish or spot, but it can also be a coded ethnic slur and a call to violence. The video contained graphic images of post-election clashes from previous years.

After the report, TikTok removed the video and flagged the term “madoadoa,” but the episode showed how the nuances of language can elude human moderators. A TikTok whistleblower told report author Odanga Madung that she was asked to watch videos in languages she didn’t speak and determine, from images alone, whether they violated its guidelines.

TikTok did not directly respond to that allegation when asked by The Washington Post, but the company issued a statement recently about efforts to address problematic election-related content.

TikTok said it moderates content in more than 60 languages, including Swahili, but declined to give additional details about its moderators in Kenya or the number of languages it monitors. It has also launched a Kenya-specific operations center with experts who detect and remove posts that violate its policies. And on July 14, it rolled out an in-app user guide containing election and media literacy information.

“[We] have a dedicated team working to safeguard TikTok during the Kenyan elections,” Mgwili-Sibanda wrote. “We prohibit and remove election misinformation, promotions of violence and other violations of our policies.”

But researchers still worry that violent rhetoric online could lead to real violence.

“One will see these lies really turn into very tragic consequences for people attending rallies,” said Irungu Houghton, director of Amnesty International Kenya.

Researchers say TikTok and Meta can get away with lower content moderation standards in Kenya, in part because Kenyan law does not directly hold social media companies responsible for harmful content on their platforms. By contrast, Germany’s “Facebook Act” fines companies up to $50 million if they do not remove “clearly illegal” content within 24 hours after a user files a complaint.

“This is quite a gray area,” said Mugambi Laibuta, a Kenyan lawyer. “[W]hen you’re talking about hate speech, there’s no law in Kenya that states that these sites should enforce content moderation.”

If Meta and TikTok do not police themselves, experts warn, African governments will do it for them, possibly in anti-democratic and dangerous ways.

“If the platforms don’t get their act together, they become convenient excuses for authoritarians to clamp down on them across the continent … a convenient excuse for them to disappear,” Madung said. “And we all need these platforms to survive. We need them to thrive.”

