Tech companies brace for the long haul with Trump’s unrelenting attacks on election outcome

“There has never been a plan to make these temporary measures permanent, and they will be rolled back just as they were rolled out — with careful execution,” Facebook spokesman Andy Stone said.

Twitter has softened the aggressive approach it took in the days after the election, when the company covered up hundreds of tweets by Trump, his campaign and allies including his son Eric Trump and White House press secretary Kayleigh McEnany. It labeled thousands of others. Spokesman Nick Pacilio said Twitter will continue indefinitely to restrict the spread of labeled tweets and to disable the retweet button on them, an effort to slow the flow of potentially harmful content until the election results are settled.

A Google spokeswoman, Charlotte Smith, declined to share information about the company’s timeline for lifting its political ad ban, which applies to all Google properties, including Google-owned YouTube.

Technology companies face a Catch-22, experts say. In extensively preparing to ward off election turmoil, they raised the bar on their willingness to alter their services to fight misinformation. Now the public expects them to maintain that high bar, even if doing so is nearly impossible.

The companies are being pushed to the brink of their capacity, researchers and insiders say. Employees from senior executives to mid-level engineers have been working round-the-clock since before the election to field and spot problems, say people who work at the companies, an exhausting effort even for people known to work long hours.

In another sign of strained resources, Facebook redirected its content moderation teams last week to focus on “high-severity content,” with election-related content taking top priority, according to a Nov. 4 memo obtained by The Washington Post. The memo noted lower-severity takedown appeals and other tasks would be immediately closed out due to a lack of bandwidth and the need to reduce a significant “backlog” resulting from a “spike” in the amount of content awaiting review.

“This is an unsustainable pace” for the social media companies “to have so many resources aimed at one event,” said Alex Stamos, Facebook’s former chief security officer and the director of the Stanford Internet Observatory, in a Zoom call last week with the Election Integrity Partnership, a coalition of disinformation researchers.

Stamos said in an interview Wednesday that Facebook is charged with protecting the integrity of democratic debate in 80 to 100 elections around the world each year, including a national election in Myanmar last weekend and a coming municipal election in Brazil. He questioned whether some of the new policies employed by social media companies — reducing the spread of tweets, pausing group recommendations and ads — will be applied globally and whether certain countries will be moved to lower-priority status as a result of the resource crunch.

“The big platforms are running very large teams dedicated to election disinformation to get the level of responsiveness we’ve seen. It’s still very labor-intensive,” he added.

Executives say a protracted period in which Trump or others may contest the results was always part of their scenario-planning, and they acknowledged there are trade-offs during unprecedented times, according to two people familiar with the thinking at Facebook and Twitter.

The trade-off is particularly evident in Georgia, where Democrats are pressuring Facebook to allow political ads during the Senate runoffs there. But the company has no technical ability to turn on political ads for a specific state or advertiser, Rob Leathern, a Facebook ads executive, tweeted Wednesday. Executives are aware that turning on political ads is likely to unintentionally allow the proliferation of even more false claims about the vote, according to a person familiar with the company’s thinking who spoke on the condition of anonymity to discuss sensitive matters.

Facebook also said Wednesday it would continue to label problematic content with a notice showing Joe Biden as the projected winner and that it has no immediate plans to restore its groups-recommendation feature, in which software algorithms recommend online groups for users to join. The feature was paused in the week before the election as a preemptive measure to slow the growth of problematic groups, though such groups proliferated anyway.

Twitter has stopped covering up claims disputing the election results, as well as misleading stories about voter fraud. Since then, Trump has tweeted false claims numerous times, and Twitter has responded by placing a smaller label beneath those tweets noting that the content is disputed and linking to authoritative information. Twitter also permanently banned Trump’s former chief strategist, Stephen K. Bannon, for violating its policies prohibiting incitement to violence.

“With the election now called by multiple sources, we will no longer apply warnings on tweets commenting on the election outcome. However, we will continue to apply labels to provide additional context on tweets regarding the integrity of the process and next steps where necessary,” Twitter’s Pacilio said.

Experts said the changes companies have made are a tacit admission that their products cause harm, which raises the question of why they should not be made permanent.

“The fact that these companies had to label so much content since the election shows how intractable the problem is,” said Joan Donovan, director of the Technology and Social Change Research Project at the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School. “Social media companies now see how their inaction has harmed many and that, while they may try to return to the features they have limited in this election period, I hope they see how these interventions are necessary if they truly care about the health of our democracy.”

Facebook, Twitter and YouTube have said that in less charged times, the features and products that have been suspended do more good than harm. In Facebook’s case, chief executive Mark Zuckerberg has emphasized — in a recent call with investors and in a companywide meeting — that his personal commitment to giving wide latitude to freedom of speech has not changed.

Source: WP