Facebook says it labeled 180 million debunked posts ahead of the election
The company estimated that it helped register 4.5 million voters in the United States this year across Facebook, Instagram and Messenger, and helped 100,000 people sign up to be poll workers. Since its launch, 140 million people have visited the company’s voting information center, and on Election Day, 33 million people visited its election center, which included results as they came in.
The report comes days after chief executive Mark Zuckerberg was grilled about Facebook’s handling of content during the election on Capitol Hill. He said at the time that Facebook was working on a post-mortem of its election actions but did not say when it might be completed.
The prevalence of hate speech continues to be a problem on Facebook. About one of every 1,000 pieces of content users see on the flagship site contains hate speech, Facebook said in its third-quarter Community Standards Enforcement Report. It did not release a similar metric for its photo-sharing app Instagram.
Facebook also said in the update that its artificial intelligence systems are getting significantly better at rooting out hate speech, even as such content continues to proliferate across its social media sites.
The technology now proactively identifies 95 percent of the hate speech posts the company ultimately removes, catching them before a user reports them. Nearly three years ago, the AI proactively found only about 24 percent of violating posts.
Facebook has been more aggressive in recent years about expanding the policies that define hate speech and moving quickly to take down violating posts. In October, the company reversed course on a long-held and controversial policy, banning Holocaust denial posts after years in which Zuckerberg had defended the hands-off approach.
In its quarterly standards report, Facebook said it took enforcement action on nearly 29 million posts on Facebook and Instagram that contained hate speech between July and September. It also took action on 23 million pieces of violent and graphic content.
Facebook’s ability to police content has been hampered by the pandemic. The company sent much of its moderation workforce home, and it says the majority of those workers are still working remotely. While they can handle many of their tasks from home, Facebook cannot send them its most sensitive material, such as sexual exploitation content.