Facebook has developed new ways to delete bad content before anyone sees or reports it. The company defines ‘bad content’ as anything supporting terrorist activities, hate speech, pornography, violence or crime.
At the moment, there are two ways to remove such content from Facebook: take it down once someone reports it, or actively search for it using technology and then delete it. Advances in technology, specifically artificial intelligence (AI) and machine learning, are making this process quicker, more accurate and more efficient.
Practical uses of Facebook's AI
Bad content is found and deleted by AI programs before anyone can see it or report it. As soon as the technology identifies bad content, it can act instantaneously in the appropriate way, either removing the content or contacting the relevant authorities, such as suicide prevention agencies. In over 1,000 cases, Facebook's AI has notified suicide prevention organisations of potential victims identified through their Facebook statuses.
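To picture that routing step, here is a minimal sketch in Python. The category names, handler labels and mapping are purely illustrative assumptions; Facebook has not published how its real system maps detections to actions.

```python
# Hypothetical sketch of routing detected content to an action.
# Categories and handler names are invented for illustration,
# not Facebook's actual moderation system.

ACTIONS = {
    "terrorism": "remove_post",
    "hate_speech": "remove_post",
    "pornography": "remove_post",
    "self_harm": "notify_prevention_agency",  # e.g. suicide prevention partners
}

def route(category: str) -> str:
    """Map a detected content category to a moderation action."""
    return ACTIONS.get(category, "no_action")

print(route("self_harm"))   # -> notify_prevention_agency
print(route("terrorism"))   # -> remove_post
print(route("holiday_pic")) # -> no_action
```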
The AI is able to scan through more content than any human team could. In the first three months of 2018, Facebook's AI removed almost two million posts related to terrorist groups such as ISIS and al-Qaeda, 99% of which were removed before anyone reported them.
This technology acts as an extension of the review team. Human expertise and emotional judgement remain essential for understanding the context of a post, something Facebook's AI cannot yet do reliably. For example, is someone posting about their drug addiction and recovery, or are they promoting the use of drugs? Humans are still necessary to pick apart the context and nuances of such flagged material.
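One common industry pattern, assumed here rather than confirmed by Facebook, is to act automatically only on high-confidence predictions and to queue ambiguous posts, like the drug-recovery example above, for human reviewers. A toy sketch:

```python
# Illustrative human-in-the-loop triage, assuming a classifier returns a
# probability that a post is "bad". The thresholds are invented for the sketch.

def triage(post: str, p_bad: float) -> str:
    if p_bad >= 0.95:
        return "auto_remove"    # clear-cut violations are removed automatically
    if p_bad >= 0.40:
        return "human_review"   # ambiguous: context and nuance needed
    return "leave_up"

# A recovery story and a promotion post can score similarly on keywords alone,
# so both land in the human review queue.
print(triage("One year clean today. Drugs nearly destroyed me.", 0.55))
print(triage("DM me for the best drugs in town.", 0.62))
```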
Improving the technology
This AI software has taken years to develop into its current form, but Facebook engineers are continuously improving it. Patterns of behaviour are analysed and ‘taught’ to the AI so that it can better understand and find similar content.
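As a rough illustration of what ‘teaching’ a system from examples looks like, here is a minimal supervised text classifier built with scikit-learn on made-up posts. Facebook's production models are vastly larger and not public; this shows the general idea only.

```python
# Minimal sketch of training a classifier from labelled examples.
# The posts and labels are toy data invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "join our cause and fight the infidels",   # bad
    "buy followers cheap click this link",     # bad
    "lovely hike with the family today",       # ok
    "new recipe for banana bread",             # ok
]
labels = ["bad", "bad", "ok", "ok"]

# Turn text into word-frequency features, then fit a simple linear model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["click here to buy cheap followers"]))  # likely ['bad']
```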
The technology has begun looking for instances of hate speech in English and Portuguese. As mentioned before, context plays a significant role in determining what is hate speech and what is a discussion of hate speech, but the AI is becoming increasingly good at correctly identifying posts that contain actual discriminatory language.
Not only does this technology look for bad content, but it also searches for fake accounts that spread false information, spam or misleading advertising. Accounts that post repeatedly in quick succession are flagged by the AI for spam and other problematic behaviour.
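A simple way to picture that kind of rate-based flagging is a sliding time window: count an account's posts over the last minute and flag it once the count passes a limit. The window size and limit below are invented for the sketch, not Facebook's actual thresholds.

```python
# Illustrative rate-based spam flagging with a sliding time window.
from collections import deque

WINDOW_SECONDS = 60   # look at the last minute of activity (assumed value)
MAX_POSTS = 5         # allowed posts per window (assumed value)

def make_flagger():
    timestamps = deque()
    def on_post(t: float) -> bool:
        """Record a post at time t (seconds); return True if the account is flagged."""
        timestamps.append(t)
        # Drop posts that have slid out of the window.
        while timestamps and t - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()
        return len(timestamps) > MAX_POSTS
    return on_post

flag = make_flagger()
for t in range(0, 12, 2):   # six posts in ten seconds
    spam = flag(float(t))
print(spam)                 # -> True: flagged for rapid repeated posting
```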
The AI's main shortcoming remains context identification: knowing the nuances of a post and recognising when apparently ‘bad’ content is actually ‘good’ content. The software needs large amounts of data to learn to recognise context and patterns of behaviour properly.
Facebook AI Research (FAIR) is developing technology for multilingual embeddings that will allow the AI to work across languages and regions of the world. The feedback that Facebook users provide when they report a post is used to teach the AI, so user participation is an important aspect of keeping Facebook clean for all.
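The idea behind multilingual embeddings can be sketched with tiny made-up vectors: words from different languages are mapped into one shared space, so signals learned in one language carry over to another. The 2-D numbers below are toy values, not FAIR's actual embeddings.

```python
# Toy illustration of multilingual embeddings: English and Portuguese words
# share one vector space, so similarity lookups work across languages.
# These 2-D vectors are invented; real embeddings are high-dimensional
# and learned from data.
import math

shared_space = {
    ("en", "hate"): (0.9, 0.1),
    ("en", "love"): (0.1, 0.9),
    ("pt", "ódio"): (0.88, 0.12),  # Portuguese for "hate"
    ("pt", "amor"): (0.12, 0.88),  # Portuguese for "love"
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "ódio" lands next to "hate", so a hate-speech signal learned on English
# text transfers to Portuguese without retraining.
print(cosine(shared_space[("pt", "ódio")], shared_space[("en", "hate")]))  # ~1.0
print(cosine(shared_space[("pt", "ódio")], shared_space[("en", "love")]))  # much lower
```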