Is AI capable of being a moral compass? Facebook’s anti-terror AI dreams are fantasy

Facebook has a responsibility to protect users from dangerous content, but do ideas for the use of ‘superhuman’ AI offer a realistic solution?

Mark Zuckerberg recently published a manifesto for Facebook, Building Global Community, outlining the company’s ambitions for the future. The piece makes clear Facebook’s ambitions as a social media platform, and brings to light some of the challenges it must confront to develop secure, informed and open communities.

Zuckerberg touches upon two main concerns: the role of social media platforms in societal discourse, and the need for preventative measures against influences that harm the vulnerable. In his manifesto, Zuckerberg recognises the platform’s influence, in that it controls what content users are exposed to, and acknowledges the site’s responsibility to better help users when they come across dangerous content.

Whether the solution is to provide a broader range of content to members or to help them identify warning signs, Zuckerberg’s essay clearly demonstrates that Facebook feels a growing need to better understand the content shared on the site. In both cases – diversifying content and keeping it safe – Zuckerberg signals AI as the best means to flag and block unreliable or dangerous material.

But what type of AI could Facebook use to monitor and tackle unreliable or dangerous content? The options span from established machine learning techniques for catching ‘fake news’ articles – much the same technology behind spam filters – to a more advanced (and yet to be achieved) leap in artificial intelligence development. Such an advanced AI programme would be capable of analysing the content of a post or article, judging its perspective and its varying levels of interpretation with superhuman ability. What we must bear in mind is that despite the seemingly unrelenting stream of AI advancements, the gap between current machine learning capabilities and the ‘superhuman levels’ just described is vast. So we must be clear and more exact when discussing the sort of AI that Facebook is seeking to implement.
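To make the first end of that spectrum concrete, here is a minimal sketch of a spam-filter-style text classifier, assuming scikit-learn and a handful of invented headlines. Nothing here reflects Facebook’s actual systems; it simply illustrates the family of technique the article refers to.

```python
# A minimal sketch of the spam-filter-style approach: a naive Bayes
# classifier over bag-of-words features. Training data is invented
# purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical headlines labelled as reliable (0) or fake (1).
headlines = [
    "Central bank raises interest rates by a quarter point",
    "Local council approves new cycling infrastructure",
    "SHOCKING: the secret the government doesn't want you to see",
    "You won't believe this one weird trick doctors hate",
]
labels = [0, 0, 1, 1]

# Bag-of-words features feeding a naive Bayes model - the same
# family of classifier used in classic email spam filters.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["MIRACLE cure the elites are hiding from you"]))  # likely [1]
```

The point of the sketch is how little the model ‘understands’: it counts word frequencies and learns statistical associations, a long way from the superhuman comprehension described above.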

Advancement in artificial intelligence

Recent advancements in artificial intelligence cover a range of fields, from image recognition to online poker, but each solution is domain-specific. The AI system that beat the chess Grandmaster would be useless faced with running the temperature control of a home. This is in stark contrast with the AI of science fiction TV, which is shown to be capable of performing a broad spectrum of tasks across a wide range of fields: so-called artificial general intelligence. Although Zuckerberg does not explicitly discuss this distinction in his manifesto, we should consider whether the depth and range required of a programme to evaluate all the site’s content touches on an artificial general intelligence problem.

Completely comprehending any piece of content – its point of view, whether it conveys hard truth or satire, and where the line of ‘offensive’ should be drawn – requires at least human-like levels of cognitive processing. It would mean navigating the nuances of natural language, ethics and social norms. A complete ‘superhuman’ artificial general intelligence is unlikely to be what Facebook is seeking to create, nor is it necessary for understanding its content. When considering the means by which Facebook might build a solution, we should concentrate on what seems realistic in the near and medium term.

To monitor and act against dangerous or false content, a system must be effective at interpreting nuance. Broken down, the challenges include noise, classification and accuracy; in what follows, we shall look at how Facebook might tackle each.

Which signals could Facebook use to characterise content? When it comes to the noise of online platforms, there is a wealth of data to draw upon: user signals, or aspects of the content itself. User signals might be the click-through rate, the time spent per video, or something as simple as the number of likes. Signals could also be drawn from those creating or sharing content on the platform, and from their past usage. Facebook has such signals in abundance.
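As a toy illustration of turning raw user signals into per-post features, consider the sketch below. The field names (clicks, impressions, watch_seconds, likes) are hypothetical stand-ins for the kinds of data Facebook holds, not its real schema.

```python
# Hypothetical raw signals for two posts.
posts = [
    {"id": "a", "clicks": 120, "impressions": 4000, "watch_seconds": 35.0, "likes": 18},
    {"id": "b", "clicks": 900, "impressions": 5000, "watch_seconds": 4.0, "likes": 2},
]

def engagement_features(post):
    """Derive simple engagement ratios from raw counters."""
    impressions = max(post["impressions"], 1)  # guard against division by zero
    return {
        "id": post["id"],
        "ctr": round(post["clicks"] / impressions, 4),       # click-through rate
        "watch_seconds": post["watch_seconds"],               # time spent per video
        "likes_per_impression": round(post["likes"] / impressions, 5),
    }

for post in posts:
    print(engagement_features(post))
```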

Choosing which features of the content are most useful in characterising it – whether words or a particular set of pixels – poses different challenges depending on the medium, but the task of picking out useful features remains the same. Selecting the features that best encapsulate a genre of post is an essential step before even considering building a machine learning algorithm. However, for a company such as Facebook, which has access to an incredible amount of such data, this is unlikely to be an issue.
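For text, one common way of picking out informative words is a TF-IDF representation, sketched below with invented posts; this is an assumed, illustrative choice rather than anything Facebook has confirmed using.

```python
# A minimal sketch of feature extraction from text using TF-IDF,
# which weights words by how distinctive they are across posts.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Report: aid convoy reaches city after week-long siege",
    "Join us, brothers, and strike fear into the enemies of our cause",
    "Photo essay: the human cost of the conflict, one year on",
]

# Drop common stop words and cap the vocabulary at the most useful terms.
vectorizer = TfidfVectorizer(stop_words="english", max_features=1000)
features = vectorizer.fit_transform(posts)

print(features.shape)                          # (3, number_of_terms)
print(vectorizer.get_feature_names_out()[:10]) # sample of the selected features
```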

Once these parameters have been defined, Facebook could use an algorithm to cluster content into different topics and then classify what is dangerous. The rules for such classification, however, would not be explicitly programmed. Facebook would have to rely heavily on the programme’s learning, giving it examples of dangerous content and letting the AI make the connections itself – no mean feat. The programme would need to be sophisticated enough to distinguish news stories about terrorism from terrorist propaganda, and war photography from gratuitous graphic content. So while Facebook’s size gives it this wealth of data, that sheer scale is also a hindrance.
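The clustering half of that idea can be sketched with k-means over the TF-IDF features from the previous example. The posts are invented, and a real system would learn from vastly more data and from labelled examples; this shows only that the grouping emerges from the data rather than from explicitly programmed rules.

```python
# A sketch of clustering posts into topics without hand-written rules.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Army recaptures town from militants, officials report",
    "Analysis: how the insurgency is financed",
    "Join the fight, pledge yourself to the cause and take up arms",
    "Martyrdom awaits those who strike at the unbelievers",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)

# Group posts into two clusters; the model infers the boundary itself.
# On a toy corpus this split is crude - the nuance the article describes
# (news about terrorism versus propaganda) is exactly what makes the
# real task so hard.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for post, cluster in zip(posts, kmeans.labels_):
    print(cluster, post)
```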

The importance of accuracy in AI

Any AI solution created by Facebook requires extreme accuracy because of its scale – the irony being that this scale is what makes such a programme necessary in the first place. A programme with even a 0.1 per cent error rate would leave around two million of Facebook’s nearly two billion users wrongly affected. The scale, the accuracy required and the consequences of wrongly flagging content make Zuckerberg’s goal more ambitious than that of other brands using AI.
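The arithmetic behind that claim is simple enough to check; a quick sketch, assuming a round figure of two billion users:

```python
# Back-of-the-envelope arithmetic for the scale problem.
users = 2_000_000_000  # roughly Facebook's user base at the time

for error_rate in (0.01, 0.001, 0.0001):
    misclassified = users * error_rate
    print(f"{error_rate:.2%} error rate -> {misclassified:,.0f} users wrongly affected")
```

Even a tenfold improvement on the 0.1 per cent figure still leaves hundreds of thousands of users wrongly affected, which is why accuracy dominates the problem at this scale.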

It’s vital to include machine learning in search marketing since it classifies search keywords into product categories to get a better understanding of clients’ opportunity online. As brands continue to innovate and develop AI, we will see increased reliance on machine learning to operate at greater scale.

While it may seem that Facebook needs an AI system with ‘superhuman’ comprehension and judgement of all content, it does not need so sophisticated a programme to effectively watch over its platform. If Facebook can find a way to marry its mass of data with accuracy, Zuckerberg’s goal of tackling terrorism through AI is not as science fiction as it may initially seem.

Josh Carty is media executive at iProspect.
