Artificial intelligence has made impressive strides in capability: playing games, driving cars, even predicting cancer, heart attacks, and suicides. Impressive as these feats are, they are all products of narrow intelligence and, mostly, of pattern recognition. This sort of pattern-spotting is highly valuable both inside the AI lab and in the world beyond it. In fact, the ability to accomplish such tasks with greater accuracy and reliability than their human counterparts will land AIs jobs that have traditionally been held only by people.

However, to achieve the artificial intelligence that we all know and love (or are terrified of) from science fiction, AI still has substantial room to grow in “general intelligence.” Relational reasoning, in particular, has been difficult to achieve for even the most perceptive AIs of years past.

Relational reasoning is what allows humans to understand the connections and comparisons between objects, words, and concepts. For example, while many visual AIs can recognize that a photo contains an alien, a spaceship, and a human, they generally cannot understand that the human is being abducted by the alien into the spaceship, and that someone needs to save a copy before no one can repost it and government conspiracies around information control begin taking over the internet.

It is relational reasoning that lets humans understand both the individual pieces of information (alien, spaceship, human) and the narrative connecting those components (alien abducts human into spaceship, conspiracy ensues).

Relational reasoning also covers questions as simple as “Which object is farther away?” or “Which object is bigger?”, since answering them requires understanding and relating multiple things. Or, in the case of CLEVR, a dataset of basic 3D objects in multiple colors and textures: “What’s to the left of the blue thing?”

AIs need to understand these sorts of relations in order to achieve general intelligence that can surpass that of humans.

DeepMind, creators of the Go-winning AlphaGo, have been hard at work on a framework that lets AIs use relational reasoning to answer questions. In a recent paper, they describe their method of using Relation Networks (RNs) to identify relationships between simple objects (“Which object is closer?”, etc.) in the CLEVR dataset, to answer text-based questions in the bAbI dataset made famous by Facebook, and to demonstrate “complex reasoning about dynamic physical systems.”

They also pair Convolutional Neural Networks (CNNs) with RNs to demonstrate how this augmentation improves a model’s reasoning abilities: the CNN extracts object-like features from an image, and the RN reasons over every pair of them.
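The core idea of a Relation Network is a single equation: aggregate a small network g over every pair of objects, then pass the sum to a second network f, i.e. RN(O) = f(Σᵢⱼ g(oᵢ, oⱼ)). Below is a minimal NumPy sketch of that structure. The paper uses multi-layer MLPs for g and f; here each is reduced to one linear layer plus ReLU, and all dimensions are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relation_network(objects, g_weights, f_weights):
    """Sketch of a Relation Network: RN(O) = f(sum over pairs of g(o_i, o_j)).

    objects: (n, d) array of object feature vectors (e.g. cells of a CNN
    feature map). g and f stand in for the paper's MLPs, reduced here to
    a single linear layer + ReLU each for brevity.
    """
    n, d = objects.shape
    pair_sum = np.zeros(g_weights.shape[1])
    for i in range(n):
        for j in range(n):
            pair = np.concatenate([objects[i], objects[j]])  # (2d,)
            pair_sum += np.maximum(pair @ g_weights, 0.0)    # g(o_i, o_j)
    return np.maximum(pair_sum @ f_weights, 0.0)             # f(aggregate)

# Toy example: 4 "objects" with 8 features each (hypothetical sizes).
objs = rng.normal(size=(4, 8))
g_w = rng.normal(size=(16, 32))  # maps a concatenated pair (2*8) to 32 dims
f_w = rng.normal(size=(32, 10))  # maps the aggregated relations to 10 outputs
out = relation_network(objs, g_w, f_w)
print(out.shape)  # (10,)
```

Because the pairwise outputs are summed, the result is the same no matter how the objects are ordered, which is what lets the network treat its input as an unordered set of things to relate rather than a fixed layout.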

As Adam Santoro, one of the authors of DeepMind’s paper, told New Scientist: “You can imagine an application that automatically describes what is happening in a particular image, or even video, for a visually impaired person.”

This is a major step toward multi-faceted AIs that can genuinely reason, and one that will accelerate our progress toward general AI.

While relational reasoning is not the be-all and end-all of human intelligence, it is a vital ingredient. As Harvard psychology professor Sam Gershman told MIT Technology Review: “Relational reasoning is a necessary but not sufficient condition for human-like intelligence.”