When will machines understand us?

Last week’s blog explored the vision for language understanding and our ability to vastly improve on it over the next five years. Today, we continue the journey by exploring what will empower machines to converse. This is a true revolution combining computer science with linguistics in a new way.

Of course, confidently predicting a revolution is easier when a system already produces results. While today’s statistical and machine-learning models for language have not scaled into accurate systems, I have seen a lab producing the future. The Thinking Solutions lab in Australia has been developing a new language algorithm since 2006.

Its approach is very different from the computer model designed by Alan Turing in the 1930s, which governments, IBM and others have been scaling up since the 1950s. Rather than relying on that model, the Thinking Solutions language algorithm uses hierarchical patterns and incorporates previously unexploited linguistics to meet the needs of human language. That gives me great confidence that language understanding machines will be rolling off production lines sooner than expected.

Today, I explore the features of a revolutionary linguistics model to see where machine understanding will come from.

Revolutionary RRG Linguistics

Let’s travel back to understand the future. In 1957, a year after AI’s name was coined, another major revolution took place as Professor Noam Chomsky published Syntactic Structures. Among other things, its formal grammars underpin today’s computer programming languages, and its theoretical descendants still dominate linguistics. Roughly 30 years after that publication, while looking for a model that explained ALL human languages more effectively, Professor Robert Van Valin, along with Professor Bill Foley, invented Role and Reference Grammar (RRG).

Now fast forward around 30 years. RRG has been extensively validated and has the power to explain how people use language. Its bidirectional linking system maps from meaning to form, as speakers do when producing language, and from form to meaning, as hearers do when understanding language. That makes RRG fundamentally different from “direction-neutral” approaches, which simply establish a mapping between form and meaning without modelling either direction of use.
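To make the bidirectional idea concrete, here is a toy sketch in which one shared role structure drives both directions: production (meaning to form) and comprehension (form to meaning). Every name and structure here is invented for illustration; RRG’s actual linking system, and the Thinking Solutions implementation, are far richer.

```python
def meaning_to_form(meaning):
    # Speaker direction: realise the roles in active-voice English order.
    return "{actor} {verb} {undergoer}".format(**meaning)

def form_to_meaning(sentence, verb):
    # Hearer direction: recover the roles by splitting around the known verb.
    actor, undergoer = [part.strip() for part in sentence.split(verb)]
    return {"actor": actor, "verb": verb, "undergoer": undergoer}

meaning = {"actor": "the dog", "verb": "bit", "undergoer": "my mother"}
sentence = meaning_to_form(meaning)           # "the dog bit my mother"
recovered = form_to_meaning(sentence, "bit")  # round-trips to the same meaning
assert recovered == meaning
```

The point of the sketch is that nothing is direction-specific: the same role structure serves the speaker and the hearer, which is what the linking system exploits.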

The working prototype system I mentioned uses the RRG linking system to enable language understanding, which, in turn, enables translation and conversation in multiple languages. RRG’s algorithm finds the meanings of a sentence in any language.

RRG also predicts the constituents of meaning in a language. Unlike today’s statistical or neural-network-based designs, RRG is human-like because it was created by observing the features common to human languages.

Moving From AI

A problem is AI-Hard if “…it would not be solved by a simple specific algorithm.” We can use this definition to separate natural language understanding (NLU) from the other unsolved AI problems, such as computer vision. The working prototypes at Thinking Solutions use the RRG linking algorithm, a specific algorithm, to convert the phrases of a language to their meaning in conversation and vice versa. Therefore, NLU is not AI-Hard.

The Thinking Solutions Language Algorithm combines Patom theory with RRG. Patom theory is the brainchild of the Thinking Solutions founder, John Ball. Its implementation moves the work of understanding language from programmers to the computer itself, using RRG.

Role & Reference Grammar (RRG)

RRG answers the question: “what would linguistics look like if it were based on the multitude of human languages?”

RRG started out from the analysis of Native American, Australian Aboriginal and Philippine languages, rather than from English as Chomsky’s did, resulting in a theory of a very different nature. Professor Daniel Everett, the Dean of Arts & Sciences at Bentley University, has described RRG as “one of the most interesting theories of human language anywhere. Its combination of pragmatics, semantics, syntax and morphology into a single, non-English based model of how sentences work communicatively is innovative and successful.”

The scientific detail is complex and lengthy, as is expected from something that must explain how we communicate the myriad details we can experience. With language, however, examples are always easy to understand, because our human understanding kicks in to fill in the blanks.

At the risk of listing a whole lot of examples to which you will say “Of course,” remember that this is being developed for emulation on devices, not in humans, so the computer needs to fill in the blanks or it won’t understand.

RRG Links Words to Meaning

The sentence “the dog bit my mother” conveys a past event. It means something about biting, the act of using teeth to grab or cut. The actor, the thing that does the action, is ‘the dog.’ The undergoer, the thing being acted upon, is ‘my mother.’

Now compare that to this sentence: “my mother was bitten by the dog”. How does it differ, beyond the words appearing in a totally different order? Welcome to RRG! RRG links the elements the same way while noting that the latter is passive. Fundamentally, the myriad ways of getting to the same meaning are preserved in English and other languages. This vast number of ways to say the same thing is one of the reasons traditional computational linguistics is found wanting: it demands more and more processing to address what is termed the combinatorial explosion.
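The two-sentences, one-meaning idea can be sketched in a few lines. Here a pair of hand-written patterns stands in for RRG’s linking rules; this is purely an illustration under invented names, not the actual algorithm, which generalises across constructions and languages rather than enumerating them.

```python
import re

PATTERNS = [
    # Active voice: "<actor> bit <undergoer>"
    (re.compile(r"^(?P<actor>.+?) bit (?P<undergoer>.+)$"), "active"),
    # Passive voice: "<undergoer> was bitten by <actor>"
    (re.compile(r"^(?P<undergoer>.+?) was bitten by (?P<actor>.+)$"), "passive"),
]

def link(sentence):
    # Map a surface form to its roles, noting which construction was used.
    for pattern, voice in PATTERNS:
        match = pattern.match(sentence)
        if match:
            return {"predicate": "bite", "voice": voice, **match.groupdict()}
    return None

active = link("the dog bit my mother")
passive = link("my mother was bitten by the dog")
# Both forms link to the same actor and undergoer; only the voice differs.
assert {k: active[k] for k in ("actor", "undergoer")} == \
       {k: passive[k] for k in ("actor", "undergoer")}
```

The contrast with the combinatorial-explosion problem is visible even here: a pattern-per-construction approach multiplies rules, whereas a linking system factors the shared meaning out once.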

By capturing the link between form and meaning, RRG makes possible a new level of programming with language. Better still, this can be independent of the source language.

RRG Fills in the Gaps

“My mother arrived at the beach just now, but your son did not.” is an example of an elliptical sentence, in which “your son did not” is not expressed explicitly as “your son didn’t arrive at the beach just now”, but simply understood. Those extra words would make languages more cumbersome.

The explanation is not to be found in statistical proximity to other words, but in the underlying meaning carried over from a previous sentence, regardless of whether that sentence is uttered by the same speaker, as in the example given, or by another speaker in a conversation.
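A minimal sketch of that carry-over: the elided clause copies the previous clause’s meaning, swaps in the new actor, and records the negation carried by “did not”. The structures and field names below are invented for illustration only.

```python
def resolve_ellipsis(previous_meaning, elided_subject):
    # Inherit the prior clause's predicate and arguments, then
    # substitute the new actor and mark the clause as negated.
    filled = dict(previous_meaning)
    filled["actor"] = elided_subject
    filled["negated"] = True
    return filled

prior = {"predicate": "arrive", "actor": "my mother",
         "location": "the beach", "time": "just now", "negated": False}

filled = resolve_ellipsis(prior, "your son")
# "your son did not" is understood as
# "your son didn't arrive at the beach just now"
assert filled["predicate"] == "arrive" and filled["actor"] == "your son"
assert filled["negated"] is True
```

Note that the resolution operates on meanings, not on word strings, which is why it works across speakers in a conversation.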


Today’s blog examined the linguistic requirements for improved machine understanding. Since the early 1990s, research has moved in a definite direction as IBM in particular pushed forward into statistically-based systems. That shift improved performance in the short term in areas where progress had been limited, but in hindsight it appears to have peaked some time ago.

I see the revolution now starting at Thinking Solutions as holding the key to language understanding. While the world has embraced statistical and machine learning techniques, their general lack of success means it is time to consider alternatives. My 2020 predictions for the decade of robotics need more than just faster versions of today’s systems. New levels of accuracy are needed, along with reconsideration of earlier ideas whose potential has yet to be fully explored.

The renewed interest in AI systems is, in many cases, seeking to solve the problems of language understanding. IBM has set up the Watson division and committed $1B for a planned ten-fold annual revenue return. By using the technology that beat the world’s best “Jeopardy!” players in a televised match, IBM proposes cognitive-like solutions: its system, while not understanding the words it is given, still gives fast, correct answers, albeit with a confidence percentage to approximate its accuracy. Machines that work well but don’t understand seem insufficient for 2020 robotics.

Today, Google continues a relentless pursuit of “understanding technologies” with its ongoing investment in research: a world-class team including the highly respected futurist Ray Kurzweil, expansion of the Google Translate language pairs, improvements in its statistically-based voice recognition, and deep-learning research with Professor Geoff Hinton. Those are just some of the visible areas of effort. Google-X, no doubt, has hidden projects as well. But are these innovations just more of the same, squeezing the processing paradigm harder and expecting new results? Doing the same thing and expecting different results is a measure of insanity.

The Google mission is evident: to give people what they want, they must first be understood in any language. Then, answers must be found, from whatever language, and a response made in the source language. This requires language understanding. Understanding plus context tracking provides accurate translation, and improved search through the understanding of content is therefore an application of language understanding, too.

Microsoft has a history of interest in natural language. Its 2008 acquisition of Powerset, for an estimated $100M+, showed its interest in the next generation of search engines with semantic search, a technology with the potential to disrupt Google’s business model by improving on keyword matching. While Powerset’s underlying PARC technology did not win that battle, Microsoft has continued, most recently adding voice translation for Skype users. There seems no reason not to revisit the 1960s proposals in which accurate voice systems first require language understanding.

Silicon Valley continues to seek a solution. In 1996, the goal of “Ask Jeeves” was to provide internet search by asking questions of an ‘English butler.’ A modern example is Vicarious, a company focused on neural-network emulation with processor-hungry implementations: “We are building a unified algorithmic architecture to achieve human-level intelligence in vision, language and motor control.” Vicarious has already raised $70M to pursue its goals. From its website: “Vicarious is bringing us all closer to a future where computers perceive, imagine and reason just like humans.” That may well happen, but neural networks have been researched since the 1940s and have yet to produce anything like human-level accuracy in language understanding.

I wrote last week that scientific predictions are easiest when innovation has taken place: it is easier to improve on a jet engine than to come up with its initial design. Today’s article is based on a working system. We are awesome at improving working technology by scaling, miniaturizing and optimizing once we are on the right path. Again, think of television miniaturization and Moore’s Law.

AI has held language understanding captive since the 1950s, and now I can see it being released to the mainstream with a dramatic shift in technology platform. The language understanding revolution has a long way to go, as language will enable machines to do what we ask and want. I am also working on additional blogs discussing the computer design that works with RRG to complete the underlying algorithm for language understanding: Patom theory, which will be needed in the IoT world rather quickly.

This article is published in collaboration with LinkedIn. Publication does not imply endorsement of views by the World Economic Forum.


Author: Dr Hossein Eslambolchi is the Chairman & CEO of Cyberflow Analytics. 

Image: Robotic arms spot welds on the chassis of a Ford Transit Van under assembly at the Ford Claycomo Assembly Plant in Claycomo. REUTERS/Dave Kaup.  
