AI algorithms can do some remarkable things – they have played chess better than any human for a long time, advanced to beat us at Jeopardy!, and have now even managed to beat the very best humans conclusively at Go, the Asian bigger brother of chess. Google’s AI, AlphaGo, recently beat Korean grandmaster Lee Sedol four games to one. However, in the fifth game Lee took an early lead after AlphaGo made what looked, to human spectators, like an amateurish mistake.
Even though AlphaGo managed to recover from this mistake and win the game, this is something of a trademark of today’s best AI – even when the algorithms can beat us conclusively (and if it can beat Mr. Lee, I don’t even want to think about what it would do to me...), they sometimes stumble over very simple tasks.
When computers routinely beat us at specific types of games – to say nothing of how much arithmetic an advanced spreadsheet can do – how ”intelligent” are they, really? This is a difficult question to answer, since intelligence isn’t as easy to measure as upper-arm strength. However, one of the most widely accepted measures of intelligence across cultures is the IQ test.
So, researchers at the University of Illinois at Chicago tested an AI by giving it an IQ test calibrated for children.
The result? The AI proved to be about as intelligent as your average four-year-old child.
The intelligent machine, called ConceptNet 4, was given a verbal reasoning examination calibrated for four-year-old children. The test, known as the Wechsler Preschool and Primary Scale of Intelligence, estimates a child’s IQ by asking a selection of questions from five categories.
The vocabulary category contains questions such as “What is a cat?” The information category asks questions such as “Where can you find a tiger?”, and the word reasoning section asks the child to identify an object after being given three clues to its identity. The comprehension category tests the child’s ability to understand the motivation behind actions, such as asking why people say hello or shake hands. Finally, the similarities category asks the child to identify the link between two objects, such as “Rain and snow are both made of _ ?”
The researchers gave ConceptNet 4 the IQ test, and its answers were strongly linked to how it dealt with the language of each question: the more straightforward the wording, the better it managed. Consequently, it did very well in the vocabulary and similarities segments, while doing only averagely in the information category.
When concepts with inherent meaning or intent had to be handled, however, it dropped the ball. For example, when asked “Why do people shake hands?” it interpreted the question as asking “What is the reason people’s hands shake?”. As a result, it decided that people shake hands because they are having an epileptic fit.
It also fared disastrously in the word reasoning category, giving truly bizarre answers unlike any a child would ever give. When given the clues “This animal has a mane if it is male, it lives in Africa, and it is a yellowish-brown cat,” its top five answers were “dog,” “cat,” “home,” “creature” and “farm.”
So, what does this tell us? First, that AIs are very good at some things but quite bad at others. It also shows us that measuring intelligence is a very tricky thing indeed. I have two pet macaw parrots at home, and their intelligence is usually compared to that of a two- to five-year-old human, depending on which scientist you trust. They can be trained to do some pretty impressive stuff, like ”get two rough green balls from this pile of stuff”. This, for example, shows us that they can count and that they understand the concepts of shapes, textures and colours. You can find videos on YouTube of parrots having quite advanced conversations with their trainers and manipulating complex puzzles to get to that tasty nut.
But, just like AIs, they just don't get some concepts that are very simple to us humans. So, a comparable intelligence?
Still, no one talks about parrots taking over the world, so what's the deal?
Unlike parrots, AI seems bound to improve fast. Many of you might have heard of Moore's law.
”Moore’s law is the observation that the number of transistors (and therefore the calculating capacity) in a dense integrated circuit doubles approximately every two years. The observation is named after Gordon E. Moore, the co-founder of Intel and Fairchild Semiconductor, whose 1965 paper described a doubling every year in the number of components per integrated circuit, and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years.” Source: Wikipedia
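To get a feel for what ”doubling every two years” means in practice, here is a minimal sketch in Python. The starting point – roughly the Intel 4004’s 2,300 transistors in 1971 – is only an illustrative anchor, not part of Moore’s own figures:

```python
# A minimal sketch of Moore's law as a growth model:
# the transistor count doubles once every two years.

def transistors(start_count: float, start_year: int, year: int) -> float:
    """Projected transistor count, assuming one doubling per two years."""
    return start_count * 2 ** ((year - start_year) / 2)

# Starting from roughly the Intel 4004's 2,300 transistors in 1971:
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(2300, 1971, year):,.0f}")
```

Fifty years of doubling turns a few thousand transistors into tens of billions – which is why the shape of the curve matters so much for what follows.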
This has held more or less true since around 1960 and has enabled the incredible development in AI and other computer disciplines we have seen since then. Several times, scientists have told us that ”we have reached the end of the exponential growth”, but so far they have all been wrong.
The observant reader will notice that Moore himself never used the word ”exponential” – that was added later by others. Instead, he predicted the growth a decade at a time.
This is an incredibly important distinction. Look at the picture of the two curves: the green one is exponential, while the red one is logistic.
At the ”1” and ”2” marks, the two curves are very similar. Shortly after ”3”, the difference is enormous. Most things, both in nature and in technology, follow a logistic curve. In fact, offhand, I can’t think of a single thing that follows an exponential path under real-life conditions. To my engineering mind it seems impossible: sooner or later the laws of physics will surely trump technology, right? Already, the line width in integrated circuits is approaching the size of atoms. Only time will tell, but this difference will decide the future of AI, together with our current position on the curve.
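To see why the two curves are so hard to tell apart early on, here is a small Python comparison. The growth rate and the ceiling are arbitrary assumptions chosen only to make the shapes visible, not a model of real chip development:

```python
import math

# Exponential growth: f(t) = e^(r*t), grows without bound.
# Logistic growth:    g(t) = K / (1 + e^(-r*(t - t0))), levels off at K.
# r, K and t0 are illustrative assumptions only.

r, K = 1.0, 100.0
t0 = math.log(99)  # chosen so that g(0) = 1 = f(0)

def exponential(t: float) -> float:
    return math.exp(r * t)

def logistic(t: float) -> float:
    return K / (1 + math.exp(-r * (t - t0)))

# Early on the curves track each other; later they diverge completely.
for t in (1, 2, 3, 4, 6, 8):
    print(f"t={t}: exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

For small t the logistic curve is almost indistinguishable from the exponential one; only later does the levelling-off reveal which curve we were on all along. That is exactly the dilemma of the ”1”, ”2” and ”3” marks.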
If we are on a logistic curve, the points ”1”, ”2” and ”3” will all feel much the same while we are at them – but the near future after ”3” is very different. We clearly left point ”1” behind in the 1960s, but how far have we come since? And what type of curve are we following?
If we are following the green curve, I think we will have artificial super intelligence (an artificial intelligence so advanced that we might not even comprehend what it’s ”talking” about) within a few decades at most. If we are following the red curve and are currently at the ”2” mark, we might still get artificial super intelligence, and if we do, it will also come within those same few decades. If we are following the red curve and are at, or near, the ”3” mark, there is still a small chance that we will get artificial super intelligence, but in that case it will probably take much longer to reach it.
As I said, only time will tell, but cutting-edge research suggests that the next few years probably won’t see the levelling-off of a logistic curve, since real progress in technology comes from last decade’s cutting-edge research finding uses in today’s everyday (or at least top-of-the-line) products.
Apart from their quick improvement, AI algorithms have another thing going for them – they don’t demand minimum salaries (or, in the case of parrots, ever get full). Once an AI is developed, it can work tirelessly 24/7 at the thing it was developed for. And it does it very well.
An AI trained to recognise a photo of a chair (something most four-year-olds do easily) can sort through hundreds of thousands of pictures an hour, weeding out all the non-chairs, with precision comparable to, or better than, a human’s.
Most four-year-olds would grow bored after ten pictures, and most adults would only manage maybe a couple of thousand pictures a day.
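As a sketch of what that tireless sorting loop might look like – note that is_chair below is a hypothetical placeholder for whichever trained classifier one might plug in, not a real library call:

```python
from pathlib import Path

def is_chair(image_path: Path) -> bool:
    """Hypothetical placeholder: a real system would run a trained
    image classifier here and return its prediction."""
    return "chair" in image_path.name.lower()  # dummy stand-in

def filter_chairs(image_dir: str) -> list[Path]:
    """Return only the images the model believes contain a chair."""
    return [p for p in sorted(Path(image_dir).glob("*.jpg")) if is_chair(p)]

# Unlike a four-year-old, this loop is exactly as accurate on picture
# number 300,000 as it is on picture number 10.
```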
Our human race has spent the better part of five thousand years avoiding physical labour. First we made slaves and horses do our work, then we built machines, and later robots. This allowed us to specialise in narrow fields, to become doctors, engineers and other specialists. Right now we are doing the same thing all over again, this time with robots that are capable of making decisions, thanks to their AI ”brains”. So, what does this mean for our future? Will we make mental labour obsolete? Will doctors and engineers be replaced by robots, and if so, when? What will we do then? Only time will tell.
Imagine two horses a hundred years ago, having a discussion about the future.
Horse 1: ”What do you think about these ’car’ things? What if they take our jobs and make us redundant?”
Horse 2: ”No problem. Remember the past? Carrying mail at top speed across the continent? Riding into war? Those are not bad things to live without. The new city jobs aren’t so bad, and with more and more people there will still be work for us – just different, better jobs that we can’t even imagine today.”
The horse population of the Earth peaked around 1915. Today there are only about one fortieth as many horses as there were in 1925.
There is no rule saying that ”better technology creates more and better jobs for horses” – in fact, it sounds pretty stupid. Swap ”horses” for ”humans”, however, and suddenly people start agreeing. Computers will probably replace lots of jobs. Not all jobs, and not immediately, but enough of them, soon enough, to be a problem if we are not prepared – and we are not prepared.
So let’s ask ourselves: what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or of artificial intelligence that can shoulder the majority of our workload? Is the purpose to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay or hours we deem insufficient – because we are already earning the incomes that the machines aren’t?