YouTube, the architect of its own downfall

Richard Herlihy

Credit: Alison Clair

Scandals are once again threatening to swamp the world’s most popular video platform as critics circle and a backlash from advertisers and content creators grows. Will a second so-called ‘adpocalypse’ lead to a clear-out of amateur and semi-pro video uploaders from the platform?

Last month, Matt Watson stumbled across “the wormhole”.

“Over the past 48 hours I have discovered a wormhole into a soft-core pedophilia ring on Youtube,” wrote Watson in the description of his now-infamous YouTube video.

In his 20-minute analysis, titled ‘Youtube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized’, Watson lays bare a network of insidious videos fuelled by YouTube’s recommendation algorithm.

He demonstrates how just five clicks from the site’s homepage lead him into a highway of autoplaying, suggestive videos of children, accompanied by thousands of predatory comments. Watson believes YouTube is inadvertently helping paedophiles “to connect with each other, trade contact info, and link to actual child pornography in the comments.”

Adpocalypse 1.0

This isn’t YouTube’s first rodeo.

In 2017, YouTube faced some of its biggest tests. The platform was struck by a wave of boycotts, with advertisers pulling ads en masse following revelations that YouTube had allowed them to run alongside videos containing hate speech and extremist content.

A raft of major companies joined the exodus, including Coca-Cola, Pepsi, and even the UK government, leading to what has become known as the ‘adpocalypse’ among YouTube’s creator community.

Highly popular content creators such as Philip DeFranco, h3h3productions, and PewDiePie – who receive an outsize share of the platform’s creator ad money – reported plummeting revenues following the ad boycott. DeFranco said in a video that his ad earnings initially dropped by 80 per cent before the fall averaged out at 30 per cent – pushing him to seek out other sources of income, such as crowdfunding.

But the worst would come later in the year.

“Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale,” wrote James Bridle in a 2017 Medium post.

From June 2017, users and critics like Bridle began to sound the alarm over what became known as ‘ElsaGate’. They identified thousands of videos marked “child-friendly” – many appearing in the YouTube Kids app – which contained inappropriate and sometimes violent and sexual themes.

In one example highlighted by the BBC, a video depicted an imitation of Peppa Pig having her teeth painfully extracted by a dentist.

More noticeably, alongside these niche, malevolent videos was a wave of hugely popular yet bizarre videos. They featured child-friendly characters like Elsa and Spider-Man wordlessly carrying out mundane tasks, or taking part in jarring, adult scenarios such as being kidnapped or buried alive.

Initially, the videos had the trappings of a vast conspiracy. But other critics, like Ben Popper writing for The Verge, suggested the bulk of videos featuring children’s characters – one particularly strange video depicted a pregnant Princess Elsa struggling to climb stairs – had emerged as a trend of “adults dressing up in costume and acting out weird, wordless skits [that had] become a booming industry on the world’s biggest video platform.”

As the YouTube algorithm picked up on these videos (possibly signal-boosted by bot networks being used to drive ad revenue), the trend exploded. In one case, the New York Times reported on a counterfeit cartoon channel in Vietnam, Super Zeus TV, which had “a team of about 100 people” pumping out imitation videos of popular children’s characters.

The purge

YouTube’s reaction was swift and merciless.

In November 2017, the company told BuzzFeed News that it had deleted 150,000 offending videos.

The platform also introduced a controversial policy of automatically demonetising videos that fell foul of new AI-powered filters. Ads were soon removed from two million such videos and from 50,000 channels operating under the guise of family-friendly content.

YouTube had previously imposed a minimum of 10,000 lifetime views per channel before uploaders could monetise their content. It upped the ante in early 2018, stipulating that uploaders also needed 1,000 subscribers and 4,000 hours of watch time within the previous year to qualify for ad money.

Counting the pennies

Many of YouTube’s most popular creators with dedicated fan bases have since turned to alternative ways of earning money from their content – such as brand partnerships, or voluntary and exclusive subscriptions via Patreon.

But the adpocalypse and other controversies have only added to the squeeze on earnings for small and medium-sized creators, with wide-ranging automatic demonetisation leaving some smaller creators with no ad income at all.

Writing in Bloomberg, Chris Stokel-Walker penned a piece titled ‘“Success” on YouTube Still Means a Life of Poverty’. Citing 2016 research by Mathias Bärtl of Offenburg University, Stokel-Walker argued that YouTube’s partner model has already become almost exclusively closed to all but the largest stars.

According to estimates in the study, 96.5 per cent of those uploading five or more videos to the platform earn less than the US poverty line. Even the top 3 per cent of YouTubers, Stokel-Walker claimed, could still fall below it: more than 1.4 million views per month might still only amount to about $16,800 a year in ad revenue – a figure from before the adpocalypse.
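That estimate pencils out at an ad rate in the region of $1 per 1,000 views – a commonly cited ballpark rather than an official YouTube figure, and one that varies widely by audience and niche. A back-of-the-envelope sketch, assuming that rate:

```python
# Back-of-the-envelope check of the Bloomberg/Bärtl figure.
# The $1-per-1,000-views ad rate is an assumption (a commonly cited
# ballpark), not an official YouTube number; real rates vary widely.
monthly_views = 1_400_000     # ~1.4 million views a month (top 3 per cent)
usd_per_1000_views = 1.0      # assumed average ad rate

annual_views = monthly_views * 12                 # 16.8 million views a year
annual_revenue = annual_views / 1_000 * usd_per_1000_views
print(f"Estimated annual ad revenue: ${annual_revenue:,.0f}")  # ~$16,800
```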

Adpocalypse 2.0

Fast forward to February 2019 and the debate sparked by Matt Watson’s “wormhole” video. In his takedown, Watson called out several companies whose ads were running against children’s videos with comment sections plagued by paedophiles.

In a rerun of the first adpocalypse, major brands have begun to pull ads once again – including Disney, Fortnite-producer Epic Games, Nestlé, and AT&T (which had only just started to run ads again following a two-year hiatus prompted by the original controversies).

YouTube, too, has reacted hastily – using its automated detection systems to disable comments on videos where predatory comments have appeared.

“Over the past week, we disabled comments from tens of millions of videos that could be subject to predatory behavior,” announced the platform in a blog post detailing the new measures.

But critics say the move has now led to a shutdown of comments on virtually every video that includes young children.

“I just got an email from Youtube saying they will be disabling comments on my channel. Is this really happening?” writes YouTube creator Chadtronic in a tweet shared 16,000 times.

“We’re letting the less than 1 per cent steer the entire direction of the platform? Should all of Twitter be shut down next because a few predators simply EXIST? Incredible.”

While YouTube scrambles to safeguard its largest content creators and protect its image, the platform’s small and medium creators may be left counting the pennies.
