Whom shall I kill today? A dramatic, yet valid, question for a computer scientist to ask. With computers and AI solutions integrated into most devices, and therefore into our daily lives, the reach of computer and data scientists' power is curiously great.
So, is this another robots-will-take-over-the-world blog post? Not really – keep reading.
Electronic systems have been used in many devices and products for a long time. Computerizing something is so cheap that even toys in the $10–$40 range are often computerized. Since cars are somewhat more expensive and complicated than toys, they have more, and more complex, electronics. In fact, the average new car has about 100 processors on board, communicating, sharing information and making decisions. The door of a 2014 Volkswagen Golf alone contains three processors. It's not so strange, then, that last year electronic components became the most expensive part of the average car.
So, picture this if you will: it's a beautiful Sunday morning and you are driving your car. But wait, you are not actually driving, more riding, because you now own an autonomous car.
That's right, autonomous cars are no longer sci-fi; they already exist. The best known is probably Google's car, but several car manufacturers are developing their own technology.
If you don't want to wait, buy a Tesla or a Mercedes-Benz. They are already capable of driving autonomously, under your supervision, in some circumstances, such as on controlled-access highways. And the autopilot is already a better driver than you – at least it's a better driver than the average Joe.
Better driver than Joe? How is that possible?
According to Tesla, there is one fatality for every 150 million km driven across all cars in the US. So far there has been only one fatality in an autonomous Tesla, and it came after 208 million km of driving. Arguably, a sample size of one isn't very reassuring, but the car is probably at least not significantly worse than the average Joe. In this one incident, the crash occurred when a tractor-trailer made a left turn in front of a Tesla at an intersection on a non-controlled-access highway. The Tesla failed to notice the trailer and apply the brakes, so the car continued to travel after passing under the trailer.
Please note the words "non-controlled" above – the driver had engaged the system under circumstances where Tesla doesn't support autonomous driving. Neither the car nor the driver noticed the light-colored trailer against the light-colored sky. In my opinion the driver is partially to blame, since he or she used the system where it was not supposed to be used. Still, the driver died and the first autonomous-driving fatality was recorded.
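For what it's worth, the arithmetic behind "probably not worse than average" is trivial to check – with the enormous caveat of that sample size of one:

```python
# Back-of-the-envelope fatality rates from the figures quoted above:
# 1 fatality per 150 million km for all US driving, and 1 fatality
# in 208 million km of autonomous Tesla driving (a sample of one!).
us_rate = 1 / 150e6         # fatalities per km, all US cars
autopilot_rate = 1 / 208e6  # fatalities per km, autonomous Tesla

print(f"US average: {us_rate:.2e} per km")            # ~6.7e-09
print(f"Autopilot : {autopilot_rate:.2e} per km")     # ~4.8e-09
print(f"ratio     : {us_rate / autopilot_rate:.2f}")  # ~1.39
```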
Now let's forget about all the Joes of this world and get back to your beautiful Sunday morning in your autonomous car. As it happens, it's not just a beautiful Sunday morning, it's also icy. Very icy, but only on one tiny spot ahead. As you approach the icy spot, there is a bus stand where the road bends. Ahead of you is a concrete building. On the other side of the road, a truck is closing in. You reach that lone spot of ice just before the bus stand.
Despite it being a Sunday, there are lots of people standing at the bus stand waiting for the bus. Young and old, disabled, athletes, carpenters, politicians, men, women, children, you name it. Just as you stop and think about the unusually diversified crowd at the bus stand, your car loses traction on the icy spot. You are travelling at 60 km/h, meaning the stopping distance is too long to avoid the bus stand.
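(A quick back-of-the-envelope check, assuming textbook friction coefficients of roughly 0.7 for dry asphalt and 0.1 for ice, shows just how hopeless 60 km/h on ice is:)

```python
# Stopping distance d = v^2 / (2 * mu * g), ignoring reaction time.
# The friction coefficients are textbook ballpark values, not measured data.
G = 9.81      # gravitational acceleration, m/s^2
v = 60 / 3.6  # 60 km/h expressed in m/s (~16.7 m/s)

for surface, mu in [("dry asphalt", 0.7), ("ice", 0.1)]:
    d = v**2 / (2 * mu * G)
    print(f"{surface}: {d:.0f} m")
# dry asphalt: 20 m -- ice: 142 m, far beyond the bus stand
```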
What will happen next?
If it were you driving, the good old-fashioned way, you would curse and try to save the situation to the best of your ability. You might end up underneath that truck, one more dot in the road-fatality statistics. Or you might end up in the concrete building ahead, possibly making a hole in the wall, probably killing yourself and possibly, though less likely, killing someone having breakfast in their kitchen.
Or, you might be able to avoid the truck and the house and save yourself by running straight through the very diversified crowd waiting for the bus. Some of them might die, but you didn't have time to think. That's what you will be telling your shrink and quite likely it will be true. You REALLY didn't have time to think.
As a matter of fact, accident statistics suggest that you would likely run through the crowd, because the truck and the concrete building are much scarier to your brain in the split second when the decision is made.
But, luckily you are not driving yourself.
"S**T," says the AI, "how can I fix this?" The answer is simple: "I can't."
Some accidents just can't be avoided.
But, the AI has an advantage; it DOES have time to think. It's a really fast thinker.
So it goes into damage-control mode, trying to minimize the damage (a rough sketch of what such logic might look like follows the list below):
* In a split second it notes the truck on your left-hand side. Trucks are tall and heavy, so the driver of the truck will likely feel only a slight bump when running over your Tesla. Not very compelling odds for you, though.
* The next split second it notices the concrete building ahead. You might survive the crash, but your car can't know what is behind that concrete wall. Uncertain odds for both you and your surroundings.
* The final option is the bus stand. But there are people there, remember? Is it okay to expose the diversified crowd to a high risk of death or crippling injuries? People, however, are soft. You, the passenger, will likely survive. Good odds for you, but unfortunate for the diversified bus-stand crowd.
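To make the dilemma concrete, here is a minimal sketch of what such expected-harm minimization could look like. Every option name, probability and weight below is invented for illustration – no manufacturer publishes logic like this:

```python
# A minimal sketch of expected-harm minimization. All probabilities and
# weights are made up; real systems would estimate them from sensor data.

# Hypothetical outcomes per maneuver:
options = {
    "steer_into_truck":     {"p_passenger_death": 0.90, "expected_bystander_deaths": 0.0},
    "steer_into_building":  {"p_passenger_death": 0.50, "expected_bystander_deaths": 0.3},
    "steer_into_bus_stand": {"p_passenger_death": 0.05, "expected_bystander_deaths": 2.0},
}

# The ethically loaded knob: how much is the passenger's life worth
# relative to a bystander's? Someone, somewhere, picks this number.
PASSENGER_WEIGHT = 1.0

def expected_harm(outcome: dict) -> float:
    """Expected number of (weighted) deaths for one maneuver."""
    return (PASSENGER_WEIGHT * outcome["p_passenger_death"]
            + outcome["expected_bystander_deaths"])

best = min(options, key=lambda name: expected_harm(options[name]))
print(best)  # with these made-up numbers: steer_into_building
```

Note how the entire moral dilemma hides in one constant: with these made-up numbers the car picks the building; raise PASSENGER_WEIGHT to 10 and it plows through the bus stand; lower it to 0.1 and it drives you under the truck. Someone had to choose that number.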
However, the decision is not yours to make, it's your autonomous car's decision. Or actually it's not. In reality it was made by someone who wrote the software for your car a while ago, in an air-conditioned office, while listening to Spotify. Whoever wrote that piece of software had ample time to work this scenario through. The programmer can think through the ethics the AI can't understand. Most likely the different outcomes and probabilities were first discussed by a group of experts, who then had the software tested by a test team before some manager approved it.
So someone in that organization is responsible, at least morally, for the decision your car makes – for who is killed and who survives. Because a group can't be jointly responsible, right? Would you want to be that person?
Similar to the Three Laws of Robotics, the question someone had to answer during development was: is it better to kill only you, the passenger of the car? The rationale being that, after all, if you hadn't gone on the trip there would have been no crash in the first place. Or maybe to crash into the concrete building (even though the AI doesn't know whether a kindergarten class is currently leaning against the other side of that wall, and knows nothing about the robustness of the wall)? Or maybe it is better to save the passenger (the customer) at all costs? Remember, you, the passenger, paid the salaries of the developers, the factory workers and so on when you bought the car, and many fatalities in that type of car won't be good for business. So then your car would plow through the bus stand?
Maybe the AI has time to discriminate and just run down some old people, saving all the others? Old people are not as important as kids, are they? But what if your car recognizes that old person as a prominent figure – is it then legitimate to flatten two kids and a dog instead?
What if that old, prominent person happens to be a politician the car owner, the experts or the programmer voted for or against? Is he or she worth more then? What if the dog happens to be an endangered panda? What if someone belongs to a minority, like Mexicans, or is disabled? How many pandas equal the worth of one kid?
Who will be authorized to program the AI to make these decisions? Take me, for instance: I happen to be a computer engineer and have worked in the automotive industry, so I would be able to hack into my car and change my fatality-risk preferences. Maybe I want to instruct my car to save my family and me at all costs. Will that be legal?
If you think cars are so well controlled that software can't be tampered with, please consider the Volkswagen exhaust scandal, or listen to my reason for quitting the automotive industry. We (the world-leading manufacturer of that kind of automotive part) were developing the next generation of our product, to be used by a very high-profile European car manufacturer. The software in our product had a unique feature, a failsafe: if our product started to malfunction, a helper processor would take over control and reset the main processor, restoring normal operation.
Good thinking, you say. Unfortunately, at the moment when the product was actually needed, such a reset would make it very probable that the car would lose control and crash. Let's compare it to a gun: if you detect a malfunction in the gun, loaded or not, you fire it, hoping no one is standing in front of the barrel. Sounds smart? It made it to production...
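For the curious, the pattern looked roughly like the toy sketch below. Real ECU firmware runs in C on dedicated hardware, and every name and timeout here is invented – this only illustrates the watchdog logic and its flaw:

```python
import time

HEARTBEAT_TIMEOUT = 0.05  # seconds; a made-up value for illustration

last_heartbeat = time.monotonic()

def do_control_work():
    """Placeholder for the real actuator control (the safety-critical part)."""
    pass

def reset_main_processor():
    """Placeholder for a full reboot; the actuator output is undefined
    for its whole duration."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def main_processor_cycle():
    """The main processor refreshes its heartbeat on every healthy cycle."""
    global last_heartbeat
    do_control_work()
    last_heartbeat = time.monotonic()

def helper_processor_cycle():
    """The watchdog: if the heartbeat stops, reset the main processor.
    The flaw: the reset fires exactly when the product is needed most,
    leaving the car without control while the main processor reboots."""
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
        reset_main_processor()
```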
But back to our dilemma of whom to kill: the car will have to steer somewhere, or at a minimum make the passive decision not to steer. Should the AI discriminate between passengers and bystanders? If so, should President Trump get priority? Pope Francis? People developing automotive software, or endangered species like pandas?
According to polls, many passengers could accept being that only fatality in an accident – but only if ALL CARS ARE PROGRAMMED THE SAME. However, that seems rather unlikely; "let's all just agree to be nice and abide by the rules" won't happen anytime soon.
As you might have guessed by now, I don't have a good answer to the question "Whom shall I kill today?" Nonetheless it is a very valid question, one that has to be asked and answered as more and more autonomous cars, semi-autonomous cars, or even cars with advanced assist systems enter our roads. Even though it's probably not currently possible for an autonomous car to recognize a well-known person, it might very well be soon. And at a minimum the other part of the question must already be taken into account: how important is my passive driver compared to the public? And who is responsible?