Natural language processing, or NLP, is the crucial problem that needs to be solved in order to build Turing-Test-passing versions of Apple’s Siri, Microsoft’s Cortana and whatever voice-activated virtual assistant Google plans to unleash on the world.
At the moment, the best they can do is run an internet search or two on verbal command, and even then not always very accurately.
Amazon also has a virtual assistant which it calls Alexa. So, too, do hundreds and hundreds of small-scale development companies, many of which are being bought up by the tech giants mentioned above.
And IBM, of course, has Watson, the artificially intelligent computing system which you may have seen in advertisements holding perfectly plausible conversations with famous people such as singer Bob Dylan, and tennis player Serena Williams.
But just how plausible are those conversations as examples of NLP?
Not very, probably.
Dylan, Williams and Watson almost certainly had to rehearse the dialogue beforehand, and rehearsed conversation is not natural conversation.
In real life, people speak spontaneously; they often speak in grammatically incorrect ways; their accents vary, and so do their tones of voice, pitch and volume; and they may have quirky ways of saying particular words. Crucially, nobody pauses between every word to let the computer, or virtual assistant, know that one word has ended and another has begun. So the problem for a computer is that a sentence spoken by a human arrives as one unbroken string of letters.
Let’s take this sentence: The rain in Spain falls mainly on the plain.
Let’s leave aside for a moment that this is a famous sentence and that Siri, Cortana and Google Assistant probably know of it. Let’s just see what it would look like if they didn’t:

theraininspainfallsmainlyontheplain
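To make the word-boundary problem concrete, here is a minimal sketch in Python. It collapses the sentence into the kind of continuous letter stream a machine actually receives, then tries to recover the words with a naive greedy longest-match segmenter. The vocabulary here is a hypothetical toy dictionary, not a real lexicon, and real speech recognisers work on audio features rather than letters; this only illustrates the segmentation step.

```python
def segment(stream, vocab, max_len=6):
    """Greedy longest-match word segmentation over a letter stream."""
    words = []
    i = 0
    while i < len(stream):
        # Try the longest candidate first, shrinking until one matches.
        for length in range(min(max_len, len(stream) - i), 0, -1):
            candidate = stream[i:i + length]
            if candidate in vocab:
                words.append(candidate)
                i += length
                break
        else:
            # No dictionary word matches: emit a single letter and move on.
            words.append(stream[i])
            i += 1
    return words

# Hypothetical toy vocabulary for this one sentence.
vocab = {"the", "rain", "in", "spain", "falls", "mainly", "on", "plain"}

# What the computer "hears": no spaces, no capitalisation.
stream = "The rain in Spain falls mainly on the plain".lower().replace(" ", "")
print(stream)                  # theraininspainfallsmainlyontheplain
print(segment(stream, vocab))
```

The greedy approach happens to work here, but it fails on ambiguous streams (a classic example is "therapistfinder"), which is one small glimpse of why NLP is hard.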
The point is that NLP is difficult. But it seems natural to think it can be done, which is why there is such an intense race going on between the big tech companies to launch the first natural-language system that seems truly natural, and not a struggle.
The clever money might be on Google, given the enormous advantage of having decades of search-query data to work with, and the successful algorithms the company has developed over the years to answer those queries.
But while typewritten sentences do indeed contain spaces and have all or at least most of the letters in the right places, making them easier for a computer to process, spoken language varies wildly from person to person, making it a whole new ball game.
In science fiction, natural language processing is often flawless. Think of HAL in 2001: A Space Odyssey, made in the 1960s, or even the original 1960s Star Trek television show, wherein Captain Kirk and his crew would often speak to the computer, which would process their queries and respond accurately, if a little robotically.
More recently, films like Her have shown the imagined capabilities of an artificially intelligent computer operating system that can hold fluent spoken conversations with humans.
But in science fact, the only way this has so far seemed possible is if a computer listens to someone around the clock, year after year, for most or all of their life, learning all the particular nuances and subtleties of the way they speak and what they might mean.
And even then, there’s no guarantee they’ll get it right.
But the tech companies are pouring hundreds of millions into the technology, so sooner or later we may well have our own HAL-type AI computer running our lives for us.