YouTube says it’s getting better at taking down videos that break its rules. Those videos still rack up millions of views.

YouTube now estimates that rule-breaking videos account for only a small fraction of what people watch — a figure the company calls the “violative view rate” and puts at roughly 0.16 percent of all views. But because of the immense scale of YouTube — more than 1 billion hours of video are watched on the site every day — that still amounts to potentially millions of views. The metric relies on a sample of videos the company says is broadly representative but doesn’t account for all the content posted to the platform.
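The company describes the metric as an estimate built from a sample of reviewed videos rather than a full census of the platform. A minimal sketch of how such a sampled estimate could work — every function name and figure below is hypothetical, not YouTube’s — might look like this:

```python
import random

def estimate_violative_view_rate(view_log, sample_size, is_violative):
    """Estimate the share of views that land on rule-breaking videos.

    view_log     -- list of video IDs, one entry per recorded view
    sample_size  -- number of sampled views sent for human review
    is_violative -- callable that returns True if a video breaks the rules
    """
    sample = random.sample(view_log, sample_size)
    flagged = sum(1 for video_id in sample if is_violative(video_id))
    return flagged / sample_size

# Hypothetical arithmetic: even a rate as low as 0.16 percent of views
# adds up to millions of views at YouTube's scale.
assumed_daily_views = 5_000_000_000   # illustrative figure, not YouTube's
violative_view_rate = 0.0016          # the 0.16 percent cited in the article
print(f"{assumed_daily_views * violative_view_rate:,.0f} potentially violative views per day")
```

The point of the sketch is the arithmetic at the end: a tiny rate multiplied by an enormous denominator still yields millions of views of rule-breaking content.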

The numbers underline a core issue facing YouTube and other social networks: how to keep their platforms open and growing while minimizing harmful content that might trigger harsher scrutiny from governments already keen to regulate them.

“My top priority, YouTube’s top priority, is living up to our responsibility as a global platform. And this is one of the most salient metrics in that bucket,” said Neal Mohan, YouTube’s chief product officer and a longtime Google executive known for building up the company’s advertising business.

It wasn’t long ago that social networks such as Facebook and YouTube denied that they were even part of the problem. After Trump’s election in 2016, Facebook chief executive Mark Zuckerberg rejected the idea that his site had a notable impact on the result. For years, YouTube prioritized getting people to watch more videos above all else, and ignored warnings from employees that it was spreading dangerous misinformation by recommending it to new users, Bloomberg News reported in 2019.

In the years since, as scrutiny from lawmakers intensified and employees of YouTube, Facebook and other major social networks began questioning their own executives, the companies have taken a more active role in policing their platforms. Facebook and YouTube have both hired thousands of new moderators to review and take down posts. The companies have also invested more in artificial intelligence that scans each post and video, automatically blocking content that has already been categorized as breaking the rules.

At YouTube, AI takes down 94 percent of rule-breaking videos before anyone sees them, the company says.

Democratic lawmakers say the company still isn’t doing enough. They have floated numerous proposals to change a decades-old law known as Section 230 to make Internet companies more liable for hate speech posted on their platforms. Republicans want to change the law too, but with the stated goal of making it harder for social media companies to ban certain accounts. The unproven idea that Big Tech is biased against conservatives is popular with Republican voters.

Researchers who study extremism and online disinformation say there are still concrete steps YouTube could take to reduce its spread further. Companies could work together more closely to identify and take down rule-breaking content that pops up on multiple platforms, said Katie Paul, director of the Tech Transparency Project, a research group that has produced reports on how extremists use social media.

“That is an issue we haven’t seen the platforms work together to deal with yet,” Paul said.

Platforms could also be more aggressive in banning repeat offenders, even if they have huge audiences.

When YouTube and other social networks took down Trump’s accounts, false claims of election fraud fell overall, according to San Francisco-based analytics firm Zignal Labs. Just a handful of “repeat spreaders” — accounts that posted disinformation often and to large audiences — were responsible for much of the election-related disinformation posted to social media, according to a report from a group that included researchers from the University of Washington and Stanford University.

In the days after the Capitol riot, YouTube did ban one such repeat spreader — former Trump adviser Stephen K. Bannon. The YouTube page for Bannon’s “War Room” podcast was taken down after another Trump ally, Rudolph W. Giuliani, made false claims about election fraud on a video posted to the channel. Bannon had multiple strikes under YouTube’s moderation system.

“One of the things that I can say for sure is the removal of Steve Bannon’s ‘War Room’ has made a difference around the coronavirus talk, especially the talk around covid as a bioweapon,” said Joan Donovan, a disinformation and extremism researcher at Harvard University.

YouTube is invaluable to figures such as Bannon who are trying to reach the biggest audience they can, Donovan said. “They can still make a website and make those claims, but the cost of reaching people is exorbitant; it’s almost prohibitive to do it without YouTube,” she said.

YouTube’s Mohan said the company doesn’t target specific accounts, but rather evaluates each video separately. If an account repeatedly uploads videos that break the rules, it faces an escalating set of restrictions, including temporary bans and removal from the program that gives video makers a cut of advertising money. Three strikes within a 90-day period result in a permanent ban.
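The article gives only the outline of this system — per-video strikes, escalating penalties and a permanent ban at three strikes within 90 days. A minimal bookkeeping sketch of that rule, with the 90-day window and three-strike threshold taken from the article and everything else (names, structure) assumed, could look like this:

```python
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)   # rolling window cited in the article
STRIKE_LIMIT = 3                     # three strikes -> permanent ban

class Channel:
    """Tracks moderation strikes for a single channel (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.strike_times = []       # timestamps of strikes for removed videos
        self.banned = False

    def record_strike(self, when):
        """Record a strike for a removed video and re-check the ban rule."""
        self.strike_times.append(when)
        # Only strikes inside the rolling 90-day window count toward a ban.
        recent = [t for t in self.strike_times if when - t <= STRIKE_WINDOW]
        if len(recent) >= STRIKE_LIMIT:
            self.banned = True
        return self.banned

# Hypothetical usage: three removals within 90 days bans the channel.
channel = Channel("example-channel")
start = datetime(2021, 1, 1)
for offset in (0, 30, 60):
    banned = channel.record_strike(start + timedelta(days=offset))
print(banned)  # True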

“We don’t discriminate based on who the speaker is; we really do focus on the content itself,” Mohan said. Unlike Facebook and Twitter, the rules don’t make an exception for major world leaders, he said.

Mohan also emphasized the work that the company has done in reducing the spread of what it calls “borderline” content — videos that don’t break specific rules but are close to doing so. Previous versions of YouTube’s algorithms may have boosted those videos because of how popular they were, but that has changed, the company says. It also promotes content from “authoritative” sources — such as mainstream news organizations and government agencies — when people search for hot-button topics such as covid-19.
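Neither Mohan nor the article explains how this demotion is implemented; the following is a purely illustrative sketch of the general idea — down-rank borderline videos, up-rank authoritative sources on sensitive searches — with every weight and label an assumption rather than YouTube’s actual system:

```python
def recommendation_score(engagement, is_borderline, is_authoritative, sensitive_query):
    """Toy ranking score: engagement drives the base score, borderline videos
    are demoted instead of removed, and authoritative sources are boosted
    when the search touches a sensitive topic. Weights are illustrative."""
    score = float(engagement)
    if is_borderline:
        score *= 0.1      # demote: popular but rule-adjacent content ranks lower
    if sensitive_query and is_authoritative:
        score *= 2.0      # boost mainstream news and government sources
    return score

# Example: a popular borderline video scores below a more modest authoritative
# one on a covid-19 search (hypothetical numbers).
print(recommendation_score(10_000, True, False, True))   # 1000.0
print(recommendation_score(2_000, False, True, True))    # 4000.0
```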

“We don’t want YouTube to be a platform that can lead to real-world harm in an egregious way,” Mohan said. The company is constantly seeking input from researchers and civil rights leaders to decide how it should design and enforce its policies, he said. That process is global, too. In India, for example, the interpretation of anti-hate policies may be more focused on caste discrimination, whereas moderators in the United States and Europe will be more attuned to white supremacy, Mohan said.

Most of the content on YouTube isn’t borderline and doesn’t break the rules, Mohan said. “We’re having this conversation around something like the violative view rate, which is 0.16 percent of the views on the platform. Well, what about the remaining 99.8 percent of the views that are there?”

Those billions of views represent people freely sharing and viewing content without traditional gatekeepers such as TV networks or news organizations, Mohan said. “Now they can share their ideas or creativity with the world and get to an audience that they probably wouldn’t have even imagined they could have gotten to.”

Still, even if the metric is accurate, that same openness and immense scale mean that content capable of causing real-world harm remains a reality on YouTube.

“You see the same kind of problems with moderating at scale on YouTube like you do on Facebook,” said Paul, the disinformation researcher. “The issue is there’s such a vast amount of content.”

Source: WP