Google has lauded the abilities of artificial intelligence (AI) to remove extremist content on its platforms and has pledged to develop advanced machine-learning programmes to continue to combat the growing mass of videos featuring hate speech on YouTube.
The tech giant has traditionally used human reviewers to flag and remove content, but a recent switch to a multi-pronged approach using AI has been particularly successful at tackling the spread of controversial content. Google said that three-quarters of the clips that it has removed during the last month were taken down before they were flagged by humans.
The industry’s major players are under increasing pressure to clamp down on illicit content, and Google was criticised earlier this year for its failure to ensure brand safety on YouTube, which prompted a number of advertisers to boycott the platform. It has since introduced tougher standards to make the Internet safer for users and entice brands back.
Google revealed on Tuesday that the rollout of four measures last month, which included a faster review system, has been particularly successful and that machine learning has proven to be both more accurate and faster than human intervention. It is now aiming to develop its tech and algorithms further.
“While these tools aren’t perfect and aren’t right for every setting, in many cases, our systems have proven more accurate than humans at flagging videos that need to be removed,” a YouTube spokesperson said. “Our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism as well as the rate at which we’ve taken this kind of content down.”
The sheer scale of creative content uploaded to YouTube makes it difficult to police the site: 400 hours of video are uploaded every minute. The company revealed that an algorithmic approach will be central to its continued fight against extremist content, while work with NGOs and institutions such as the No Hate Speech Movement will improve its understanding of issues associated with terrorism and radicalisation.
Google is also planning to introduce, in the near future, a “limited state” for videos that don’t explicitly breach current policies but do feature “controversial religious or supremacist content”.