Artificial intelligence is dangerous. And it is not. My first encounter with the idea that superintelligent computers could be dangerous was the 1968 film “2001: A Space Odyssey” by Stanley Kubrick. On a mission to Jupiter, the spacecraft’s supercomputer, “Hal”, becomes increasingly emotional and erratic. Hal sounds so personal and intelligent that I will refer to this computer as “him” rather than “it”. Hal decides that his own survival matters more to the mission than the survival of the humans on board. He kills all crew members except one, who, after some struggle, succeeds in disconnecting his AI circuits. When Hal realizes that he is going to be shut down he becomes truly scared and starts singing the song “Daisy Bell” (known as “Isabella” in Swedish). This song must be the equivalent of a comforting nursery rhyme for a supercomputer, since it was the first song ever sung by a computer, in 1961.
The next scary AI take-over attempt I encountered came in 1984 in the Terminator film starring Arnold Schwarzenegger, which later developed into a series of films with essentially the same plot. A computer network called Skynet, created by the American military as an AI defence network, becomes “too” intelligent and decides to launch a massive nuclear attack against humanity. Most humans are killed in the initial attack, but many survive. Over the years the survivors become increasingly successful in their war against Skynet, and by 2029 Skynet is close to losing. By this time, time travel has become possible, and Skynet sends advanced cyborg assassins (more advanced as the film series progresses) back to 1984, before the war started. Their mission is to kill a woman, Sarah Connor. The human leader in the war of 2029 is the charismatic John Connor. Skynet’s logic (which may not be that logical) is that if John Connor’s mother is killed before he is born, he will never exist and the humans will lack a strong leader. However, the humans of 2029 can also send cyborgs back through time, these, of course, with the mission to protect Sarah Connor. Needless to say, she survives the assassination attempts.
A more recent war between humans and AI-controlled machines occurs in the Matrix films, the first of which came out in 1999. The year is 2199, and the war between humans and machines has almost been won by the machines. Only one free human city remains, well hidden underground. Most humans live enslaved virtual-reality lives in a computer simulation called The Matrix, controlled by the computers. The “reality” these people experience is precisely the one we experience right now. In reality (well, film reality), however, these humans are kept in fluid-filled incubators for obscure computer purposes. The earth has become a dark, dystopian place, almost destroyed in the war between humans and machines. The free humans that remain (called the rebels) can enter The Matrix by connecting their brains to it, and in this way they can interact with the humans caught in the simulated reality. When the hero (Keanu Reeves) breaks loose from his incubator he becomes “Neo”, a Messiah-like figure for the rebels. He eventually becomes a master of The Matrix, outperforming even the fastest computer programs created to kill rebels who enter it.
These movies have a funny twist at the end: mankind is eventually saved from the AI take-over by a computer virus. The most potent of the programs created to keep the rebels out of The Matrix, a secret-service type of agent in a black suit called Mr Smith, becomes a virus that strives to take control over the machines. At the very moment when the machine armies find the rebel city and attack the free humans, Neo strikes a deal with the machines: if he can defeat the computer virus, humans and machines will live in peace. I will not tell you who won, Mr Smith or Neo.
The Swedish crime story “The Girl in the Spider’s Web” (Swedish “Det som inte dödar oss”) is a late addition (2015) to this theme. It is the fourth book in the Millennium series about the odd duo Lisbeth Salander and Mikael Blomkvist. A genius computer scientist, Frans Balder, leaves his prestigious job in Silicon Valley to take care of his autistic son in Sweden. He has developed an algorithm that would lead us rapidly toward the potentially dangerous point in time called the technological singularity: the point at which AI-capable computers become smarter than humans. This is supposed to come about through a runaway process in which computers develop new and smarter computers at an accelerating pace. The implicit assumption behind movies such as Terminator or The Matrix trilogy is, of course, that computers are near, or have reached, this point. In the book Frans Balder is killed by an assassin who is after this super algorithm. The algorithm, however, is lost forever.
Can we learn anything from such movies and books? The most obvious lesson, to me, is that we humans are fascinated by this idea. When I struggle with my own genetic algorithms and artificial neural networks, a technological singularity seems very remote and maybe not even possible. But who knows? The idea of this type of accelerating development stems from Darwinian evolutionary thinking. Life on earth has evolved from very simple ancestral forms into ever more complex and intelligent life forms. Modern humans are a recent life form that has taken control of the earth and outsmarts all other species by far. Considering that the earth is almost five billion years old, there has clearly been an accelerating evolution of intelligence over the last 50,000 years or so. Could a corresponding accelerating evolution of artificial intelligence also occur?
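For readers who have never met a genetic algorithm, here is a minimal sketch of the idea. This is entirely my own illustration with arbitrary parameters, not code from any of the works discussed: a population of random bit-strings evolves toward an all-ones target through selection, crossover and mutation, the same Darwinian loop in miniature.

```python
import random

def fitness(genome):
    """Count of 1-bits; the maximum equals the genome length."""
    return sum(genome)

def evolve(length=20, pop_size=50, generations=100, mutation_rate=0.02):
    random.seed(1)
    # start from random bit-strings
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection: keep best half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]            # point mutation
            children.append(child)
        pop = children
    return max(fitness(g) for g in pop)

best = evolve()
print("best fitness after evolution:", best)
```

With these settings the population reliably climbs close to the optimum of 20 within a hundred generations; real applications only differ in the fitness function being something we actually care about.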
The idea that evolution can occur as a runaway process comes from the great Sir Ronald Fisher himself. In the early 1900s, Darwin’s idea of evolution by natural selection was becoming increasingly discredited among contemporary scientists. Darwin did not know about genes, and hence he could not explain how characters could persist over generations. If characters are intermediate between those of the two parents (like mixing two jars of paint), the individuals in a species would become increasingly similar over generations and all variation would disappear. Such blending inheritance would make evolution as we know it impossible. The concept of genes is very different, since genes are inherited intact across generations. Ronald Fisher was the most famous of the Neo-Darwinists, the scientists who unified Darwinian evolution with Mendelian genetics. In his 1930 book “The Genetical Theory of Natural Selection” he explained evolution essentially as we understand it today.
When we think about natural selection it is hard to understand how a trait like the peacock’s tail could possibly evolve. A male peacock with a long tail will clearly be more at risk in a leopard attack than one with a short tail. Fisher explained the peacock’s tail (and other extravagant sexual characters) with something he called runaway selection. His argument runs as follows: if a group of peahens for some reason prefers males with longer tails, long-tailed males gain an advantage since they will get more offspring than males with normal tails. If another gene makes females more likely to choose long-tailed mates, these females will also be at an advantage, since they will get long-tailed (and successful) sons. In this way a genetically based female preference acts in the same direction as the genetic basis of the chosen trait in males. The system becomes self-reinforcing and creates runaway selection for long tails, until the males with the longest tails a peacock could possibly carry are the ones most attractive to females.
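Fisher’s verbal argument can be caricatured in a few lines of simulation. The model below is my own toy illustration, not Fisher’s mathematics: each (hermaphroditic, for simplicity) individual carries a tail gene t and a preference gene p, long tails carry a small survival cost, and choosy parents pick the longer-tailed of two candidate mates. Because offspring inherit both genes from the same pairings, t and p become correlated and mean tail length is dragged upward despite the survival cost.

```python
import math
import random

random.seed(2)
N = 200
# seed the process with a slight average preference for long tails
pop = [{"t": random.gauss(0, 1), "p": random.gauss(1, 1)} for _ in range(N)]

def mean(key, population):
    return sum(ind[key] for ind in population) / len(population)

t_start = mean("t", pop)
for generation in range(100):
    # viability selection: survival probability falls as tails get longer
    survivors = [ind for ind in pop
                 if random.random() < 1.0 / (1.0 + 0.05 * max(ind["t"], 0.0))]
    if len(survivors) < 4:
        break
    children = []
    while len(children) < N:
        chooser = random.choice(survivors)
        m1, m2 = random.sample(survivors, 2)
        longer, shorter = (m1, m2) if m1["t"] >= m2["t"] else (m2, m1)
        # the stronger the preference p, the likelier the long-tailed mate wins
        mate = longer if random.random() < 1 / (1 + math.exp(-2 * chooser["p"])) else shorter
        children.append({  # midparent inheritance plus mutational noise
            "t": (chooser["t"] + mate["t"]) / 2 + random.gauss(0, 0.3),
            "p": (chooser["p"] + mate["p"]) / 2 + random.gauss(0, 0.3),
        })
    pop = children

print(f"mean tail length: {t_start:.2f} -> {mean('t', pop):.2f}")
```

The parameter values (cost slope 0.05, preference strength 2, mutation noise 0.3) are arbitrary; the qualitative point is only that mate choice plus inheritance of both trait and preference is enough to push the trait well past its viability optimum.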
Is it possible to apply this thinking to artificial intelligence? To me, Fisher’s ideas suggest that the mere growth of computer capacity is not sufficient for us to lose control over their abilities. We would need some kind of competition combined with evolution, for example competition between different types of AI systems. If we turn away from fantasy and enter reality, the evolution of AI systems seems far less dangerous. On the contrary, the development of computers and AI has the potential to produce the most significant contemporary improvement in our everyday lives. Clearly, the benefits will be harvested by the people and companies that work hard in the early stages of this development. To me, this is precisely what Lytics.ai does.