Toxic Recommendations and the 3am YouTube Binge

Jasmin Narisetty

 

The internet is overwhelmed by algorithms trying to tell you what to consume: video services like YouTube and Netflix push content they calculate you’ll watch, while social media platforms filter and reorganise posts in your interests, and their own. The influence of corporate interest in what you see online can surface toxic, divisive content and misinformation.

 

Recommendation algorithms collect every iota of data about us: search keywords, watch history, engagement, and a gamut of other undisclosed data points. That data is then used to push an array of loosely related content, including the conspiracy theories and pseudoscience that algorithmic recommendations have been found to amplify. As a result, the divide across our political sphere grows, giving rise to extremist ideologies, the most harmful being the alt-right.

 

In 2018, Jonathan Albright, Director of Research at Columbia University’s Tow Center for Digital Journalism, discovered that a search for “crisis actors” after the Parkland shooting led to a network of over 9,000 conspiracy videos. This is simply not tolerable, considering the platform itself creates the rabbit-holes through which such misinformation spreads. Because algorithms are designed to give users more of what they’ve been viewing, watching a few flat-earth conspiracy videos would lead you down a path of more conspiracies, including the aforementioned Parkland crisis actors. They’d stay in your recommendations for weeks on end, probably until you threw your laptop into the ocean.

 

This is because the business model of social media platforms seeks to maximise the time we spend online. And it works, too: YouTube has reported that more than 70% of its viewing time comes from AI-driven recommendations. The catch is that “AI is optimised to find clickbait”, according to Guillaume Chaslot, a former YouTube engineer. To combat this, Chaslot created algotransparency.org, a website that documents the flaws in YouTube’s recommendation system. A study by The Guardian using Chaslot’s service found that of the top 500 ‘Up Next’ videos returned for a search of the word “Clinton”, 81% were partisan videos favouring the Republican Party, most featuring slanderous accusations about Hillary Clinton.
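To see why an engagement-first objective favours that kind of content, here is a deliberately toy sketch in Python. Nothing in it is YouTube’s actual code: the video titles, the predicted_watch_minutes and predicted_satisfaction fields, and the numbers are all invented for illustration. It only shows how ranking purely by predicted watch time floats clickbait to the top, while ranking by predicted satisfaction would not.

```python
# Toy illustration only, NOT YouTube's real ranking system: every video,
# field name and score below is invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float   # what the model thinks you'll watch
    predicted_satisfaction: float    # 0-1, how glad you'd be afterwards

candidates = [
    Video("Flat Earth: what THEY don't want you to know", 18.0, 0.2),
    Video("How cameras on weather balloons work", 6.0, 0.9),
    Video("Crisis actors EXPOSED (part 7)", 22.0, 0.1),
]

def rank_by_watch_time(videos):
    """The engagement-first objective: maximise minutes watched."""
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_by_satisfaction(videos):
    """The alternative objective critics argue for: maximise satisfaction."""
    return sorted(videos, key=lambda v: v.predicted_satisfaction, reverse=True)

print([v.title for v in rank_by_watch_time(candidates)])    # clickbait wins
print([v.title for v in rank_by_satisfaction(candidates)])  # the sober explainer wins
```

The point of the sketch is simply that the same catalogue produces a very different front page depending on which objective the platform chooses to optimise.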

 

Bias and corporate interest are prevalent online, and we should be aware of this when we interact with web content. Often, this bias leans towards right-wing populist videos, because they use clickbait tactics that satisfy the AI recommendation system. The danger is that these videos make sweeping assumptions based on fear and hysteria, without any evidence, aiming to radicalise. This is epitomised in Alex Jones-esque videos. The endless hours of such anti-SJW content create spaces where white supremacy grows into legitimate-seeming talking points, which are then recommended to more users on YouTube. This is especially worrying considering more than half of YouTube’s audience use the platform for news and information.

 

If social media platforms continue to promote toxic recommendations riddled with misinformation and hysteria-fuelled assumptions, we’ll continue to see a rise in alt-right and fascist recruitment all around the world.

 

The only way we can combat the internet’s petri dish of conspiracy, fabrication, and hateful rhetoric is for platforms to focus on viewer satisfaction rather than viewing time. Social media profits off our extended usage time, which means more clicks and ads, and less time spent on competitors’ sites. However, startups are trying something different. Canopy (canopy.cr), a private, controllable discovery architecture, uses machine learning to deliver a small handful of quality items to read or listen to every day. Its recommendations are based on data stored locally on users’ devices, meaning the data is not shared with the company and cannot be fed into a central recommendation algorithm. Podcast app Himalaya tested a version of its app in which users were asked, point blank, what they wanted from recommendations, which were then tuned accordingly. The result: over 100 volunteers were more satisfied steering their own recommendations, and consumed 30% more of the content they wanted.
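Neither of those products publishes its code, so the following Python sketch is purely hypothetical: the PREFS_FILE path, the function names, and the catalogue format are my own inventions. It only illustrates the two ideas described above, keeping preference data on the user’s own device and asking the user directly what they want rather than inferring it from tracking.

```python
# Hypothetical sketch of the "user-steered, on-device" approach described above.
# This is NOT Canopy's or Himalaya's actual code; names and paths are invented.

import json
from pathlib import Path

PREFS_FILE = Path.home() / ".my_reader_prefs.json"   # stays on the device, never uploaded

def ask_user_for_preferences():
    """Ask point blank what the user wants, instead of silently tracking them."""
    topics = input("What do you actually want to see more of? (comma-separated) ")
    prefs = {"topics": [t.strip().lower() for t in topics.split(",") if t.strip()]}
    PREFS_FILE.write_text(json.dumps(prefs))
    return prefs

def recommend(catalogue, prefs, limit=5):
    """Return a small handful of items matching the user's stated interests."""
    wanted = set(prefs["topics"])
    matches = [item for item in catalogue if wanted & set(item["tags"])]
    return matches[:limit]

catalogue = [
    {"title": "Long read on urban planning", "tags": ["cities", "policy"]},
    {"title": "Crisis actor conspiracy compilation", "tags": ["conspiracy"]},
]

if __name__ == "__main__":
    prefs = ask_user_for_preferences()   # e.g. type: cities, policy
    for item in recommend(catalogue, prefs):
        print(item["title"])
```

The design choice worth noticing is that the preference file never leaves the device, so there is nothing for a company to aggregate into an engagement-maximising model.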

 

Some are hopeful that Silicon Valley tech companies will follow these startups and smaller platforms. Others, like me, are a little more pessimistic. I’m not sure that big companies are willing to radically change their highly successful business models. I mean, YouTube has a market value of more than $75 billion. Mark Zuckerberg alone is worth $67 billion. Twitter’s market cap is $24.7 billion.

 

These platforms aren’t market leaders because users are satisfied with the content they host; it’s because we’re coerced into hours of jumping down rabbit-hole after rabbit-hole by data-hungry algorithms that push corporate interests, even when those rabbit-holes promote harmful alt-right ideologies that have played a role in terror attacks like Charlottesville, Pittsburgh and, now, Christchurch.

 

In the Christchurch shooter’s ‘manifesto’, he cited conservative YouTube personality Candace Owens as someone who helped him choose “violence over meekness”. The Pittsburgh attacker openly showed support for Gavin McInnes and the Proud Boys, a violent alt-right group with a heavy online presence whose members were present at the Charlottesville massacre.

 

It’s easy to think recommendation algorithms are partisan monoliths, designed by the government or corporations to keep us in line, like some kind of Matrix-esque reality. The truth is, they’re engagement monoliths, and their only governing ideology is keeping users’ eyes glued to their screens for just a few more minutes.

 

After all, clickbait rules everything around us.