The coronavirus is exacerbating a crisis on social media. Human rights activists could pay the price.

Public health officials warned that rumors and conspiracy theories, including false claims that 5G technology was causing the virus or that bleach could cure it, would cost lives. Social media companies came under mounting pressure to fight the problem, which has long plagued their platforms. A Facebook boycott organized in recent weeks by hundreds of major advertisers over misinformation and hate speech on the site, amid widespread demonstrations against racism and police brutality, compounded that pressure. Next week, Facebook CEO Mark Zuckerberg, along with the heads of other tech giants, is set to testify before the House antitrust subcommittee, as part of an “ongoing investigation of competition in the digital marketplace.”

As the pandemic took hold, platforms began to implement new measures: working with governments and the World Health Organization to push accurate information; introducing misinformation warning systems; and removing more content than ever, increasingly through algorithms, as companies sent human moderators home.

While major social media companies have diverged somewhat in their approach to content moderation during the pandemic, amid civil unrest and in the face of rampant hate speech, they are all ramping up efforts to police misinformation online. But those efforts could have unintended consequences, activists say, for researchers and advocates who study the spread of information online and who use the Internet to document and monitor conflicts and human rights abuses.

“Platforms are under a lot of pressure to remove a lot more content than before with a lot fewer resources,” said Jeff Deutch, a researcher at the Syrian Archive, a nongovernmental organization that collects and preserves documentation of human rights violations in Syria and elsewhere.

But the automated tools cannot compete with humans when it comes to nuanced judgment and are ill-equipped to analyze context, according to human rights researchers. Such systems are largely unable to differentiate footage documenting war crimes from extremist propaganda.

For example, Syrian citizen journalist Mohammed Asakra’s Facebook posts documenting conflict and injustice in the country were tracked by human rights advocates thousands of miles away. But in early May, Asakra’s Facebook profile disappeared, caught up in the moderation web.

“After investigating this, we’re restoring these accounts and will continue looking into our processes and how we enforced our policies in this case,” Facebook representative Drew Pusateri said in an email.

On YouTube, almost twice as many videos relevant to human rights in Syria were unavailable in May as in the same month last year, according to data from the Syrian Archive.

Meanwhile, Zuckerberg in June said the company would ban ads that include hate speech, among other proposed changes. AI could play a part in these measures, although human eyes “will continue to play an important role in content review at Facebook,” Pusateri said.

Facebook said in a statement that it had “put warning labels on about 50 million pieces of content related to COVID-19 on Facebook” in April. The tools involved were “far from perfect,” the company acknowledged.

In April, the Syrian Archive was one of more than 40 organizations that penned an open letter to social media and content-sharing platforms, such as YouTube, Facebook and Twitter, arguing that data on content removed during the pandemic “will be invaluable to those working in public health, human rights, science and academia.”

Content relevant to human rights “shouldn’t be permanently deleted,” said Dia Kayyali of Witness, a nongovernmental organization that promotes the use of technology to protect human rights.

While automated online content moderation has become an especially pressing matter in the United States in recent weeks, it has been a contentious issue, particularly in Europe, for years.

New concerns began to arise in 2017, when human rights researchers noticed “hundreds of thousands of videos being removed,” Deutch said.

E.U. lawmakers last year passed a directive that, once implemented by member states, will put more pressure on platforms to automatically scan uploaded content.

E.U. lawmakers have also pushed ahead with anti-terrorism content rules, with potential global repercussions. Some proposals have suggested the use of automated moderation tools, drawing a rebuke from three U.N. special rapporteurs, who argued in a joint statement in 2018 that such tools would aggravate “the risk of pre-publication censorship.”

The rapporteurs’ message, according to Stanford researcher Daphne Keller, was that “using filters in this context is going to take down news reporting. It’s going to take down counter-speech. It’s going to take down things like the Syrian Archive.”

Several major platforms argue the shift toward automated moderation is only temporary and is necessary to meet growing demand for the speedy removal of hate speech.

In a statement earlier this year, YouTube acknowledged that this may lead to the unjustified removal of some content.

But the platform, which said it removed 6.1 million videos in the first quarter of 2020 for violating its policies, argues that users — as on other platforms such as Twitter — have the right to appeal, and that complaints are reviewed by human moderators. “When it’s brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it,” said Alex Joseph, a YouTube representative.

Following around 165,000 appeals in the first quarter of the year, more than 40,000 videos were reinstated, the company said.

Even though YouTube has recently begun to release some details on the kind of content it takes down and what percentage of it is eventually restored, researchers say they can only guess at the overall extent of automated moderation’s effect on legitimate content.

Human rights groups and advocates disagree, however, over how platforms should provide more transparency. While some argue that independent groups should receive access to removed content, others acknowledge that such practices could violate privacy laws and are instead calling for access to data already screened and aggregated by the platforms.

The release of such data could back up concerns voiced by groups such as the Syrian Archive, which says it has so far identified and preserved more than 3.5 million records from more than 5,000 sources.

The group downloads videos and other posts from a large network of sources in the country, aiming to store a backup copy before platforms take down the evidence. The researchers hope the items they preserve and catalogue will be used “in advocacy, justice and accountability,” for instance to prosecute human rights abusers.

Such efforts are already underway in Europe. In April, a regional court in the German town of Koblenz began to hear a landmark case against two alleged former members of the Syrian regime accused of committing or aiding and abetting crimes against humanity. The evidence includes tens of thousands of photos smuggled out of Syria.
