
Hateful, violent videos are a sliver of the content YouTube removes

By Craig Timberg Washington Post

YouTube removed 7.8 million videos and 1.6 million channels in the third quarter of this year, mostly for spreading spam or posting inappropriate adult content, the company said in a report Thursday.

The Community Guidelines Enforcement Report comes amid growing questions – including in a congressional hearing Tuesday – about how YouTube monitors and deletes problematic content from the platform, including videos depicting violent extremism and hateful, graphic content. Such videos remain a small percentage of the overall number that YouTube deletes, but their prevalence has been the subject of news reports and congressional scrutiny.

The enforcement report, the fourth of its kind for the Google subsidiary, covers July through September and is the first to break out the reasons for removing videos. It is also the first to report the number of channels removed in their entirety for violating YouTube’s “community guidelines.” Channels are removed when they get three strikes within 90 days, or for a single particularly egregious offense, such as predatory behavior.

The report does not say how many videos get flagged by users as inappropriate but are not removed after company moderators review them.

“Finding all violative content on YouTube is an immense challenge, but we see this as one of our core responsibilities and are focused on continuously working towards removing this content before it is widely viewed,” the company said in a blog post accompanying the release of the report.

The report offers little new insight into how YouTube is managing the large number of hateful, conspiratorial videos posted to the platform, or into its role as a video library for users of Gab.ai and 4chan, social media sites that are popular with racists, anti-Semites and others pushing extremist ideologies. Users of Gab and 4chan’s “Politically Incorrect” board link to YouTube thousands of times a day, more than to any other outside site, researchers have found.

The report said that 81 percent of videos that end up being removed are first detected by automated systems, and that of this group, the detection happened before a single view by users in 3 out of 4 cases. However, YouTube and its parent Google still rely heavily on humans to help in the effort. Google had previously set a goal of having 10,000 people working on content moderation by the end of the year.

More than 90 percent of videos uploaded in September and removed for violating guidelines against violent extremism or child safety had fewer than 10 views. (Child safety is a broad category, including videos that portray dangerous behaviors, with child pornography amounting to a small percentage of the overall content.)

YouTube also removed 224 million comments during the three-month period covered by the report, mostly for violating rules against spam.

Conservatives have repeatedly accused YouTube and other leading technology companies of seeking to suppress their views, but others have pushed for the platform to act more aggressively against content that spreads clearly false and hateful messages.

During Tuesday’s congressional hearing, Rep. Jamie B. Raskin (D-Md.) questioned Google chief executive Sundar Pichai about a report in The Washington Post on the spread of videos falsely claiming that Democrat Hillary Clinton had attacked, killed and drunk the blood of a girl. Pichai promised more action was coming from the company in addressing such issues.

YouTube said that 6,195 videos it removed in September were found to have violated guidelines against “hateful or abusive” content, about 0.2 percent of the total deleted that month. And 94,400, or 3.4 percent of the total deleted in September, were found to have violated guidelines against “violent or graphic” content. (YouTube didn’t provide specific numbers for some of these metrics across the entire three-month period covered by the report.)

“We know there is more work to do and we are continuing to invest in people and technology to remove violative content quickly,” the company said in its blog post.

YouTube’s report comes the same week that another tech giant, Twitter, released new data about its efforts to combat hate speech and other abusive content online. More than 6.2 million unique accounts were flagged to the company in the first six months of 2018 for violating its rules, including prohibitions on violent threats and hateful conduct. Twitter said it took action against more than 605,000 unique accounts.

Explaining its work in a blog post, Twitter said that the figure does not reflect when it uses its own tools to “limit the reach and spread of potentially abusive content,” including hate speech, and that in some cases the reports it receives actually are from “bad actors.”