In his 1942 short story "Runaround", American author and biochemist Isaac Asimov proposed his famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Robots no longer belong to a far-off science-fiction realm. Autonomous vehicles will soon be driving on our roads, and this poses some interesting ethical questions. In a modern version of the classic trolley problem (https://en.wikipedia.org/wiki/Trolley_problem), a morally problematic situation arises when the AI controlling the car must decide whether to make a high-risk maneuver in order to avoid a collision with one or several pedestrians. The Three Laws cannot help us here, as both action and inaction lead to the injury of humans.
So what factors and guiding principles should the AI take into account? Should it protect the driver at all costs (while still following traffic rules)? Should it take a naively utilitarian approach and save as many lives as possible, giving no special weight to the driver? What if the pedestrians' actions are clearly inappropriate (for example, jaywalking on a highway)? Even assuming perfect knowledge about the world, different moral axioms could lead to different outcomes, as the toy example below illustrates. And in reality, the consequences of actions will be impossible to predict fully and will at best be reasonable estimates of probabilities.
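To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the harm estimates, the driver-first weighting factor, and the two candidate maneuvers are assumptions, not data. It only shows how two different moral axioms can pick different actions in the very same scenario:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    driver_harm: float      # estimated probability of serious harm to the driver
    pedestrian_harm: float  # estimated probability of serious harm per pedestrian
    pedestrians: int        # number of pedestrians at risk

# Two candidate maneuvers with invented harm estimates.
swerve   = Outcome(driver_harm=0.6,  pedestrian_harm=0.05, pedestrians=3)
straight = Outcome(driver_harm=0.05, pedestrian_harm=0.7,  pedestrians=3)

def utilitarian(o: Outcome) -> float:
    """Minimize total expected harm; the driver counts like anyone else."""
    return o.driver_harm + o.pedestrian_harm * o.pedestrians

def driver_first(o: Outcome) -> float:
    """Weight the driver's safety far above the pedestrians' (the factor is arbitrary)."""
    return 10 * o.driver_harm + o.pedestrian_harm * o.pedestrians

for policy in (utilitarian, driver_first):
    choice = min((swerve, straight), key=policy)
    print(policy.__name__, "->", "swerve" if choice is swerve else "straight")
```

With these numbers the utilitarian policy swerves while the driver-first policy drives straight; the disagreement comes entirely from the choice of moral axiom, not from the facts of the situation.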
Another example can be taken from the healthcare sector. Nursing robots are already a reality, although in their current state they have a low degree of autonomy; one example is Robear, a Japanese-developed nursing robot able to lift patients into beds and wheelchairs (https://www.theguardian.com/technology/2015/feb/27/robear-bear-shaped-nursing-care-robot). As the autonomy of robots increases, so will the need for moral problem solving. Even a simple case, such as a medication-administration robot encountering a patient who does not want to take their medication, becomes complex. A human nurse can efficiently balance the patient's right to autonomy, right to privacy, and the prevention of harm. For a robot, this would require both sophisticated sensory processing and complex explicit rules for how to interpret the available information.
To make an AI behave morally, one could take either a top-down, rule-based approach or a bottom-up, machine-learning approach. For simple situations, where all possible actions and their consequences are known to the designers, a set of explicit rules for each situation can be constructed, making the AI's behavior deterministic. When instead dealing with limited knowledge and probabilistic outcomes, the rules have to be more general, similar to traditional normative ethical theories.
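As a minimal sketch of the top-down idea (the situation flags and action names below are hypothetical), explicit rules checked in a fixed priority order, echoing Asimov's hierarchy, make the decision deterministic and easy to audit, but only for situations the designers anticipated:

```python
def choose_action(situation: dict) -> str:
    """Pick an action by walking an explicit rule hierarchy, highest priority first."""
    if situation.get("human_at_risk"):
        return "prevent_harm"    # First Law analogue: harm prevention trumps all
    if situation.get("order_given") and not situation.get("order_conflicts_with_safety"):
        return "obey_order"      # Second Law analogue: obey unless unsafe
    if situation.get("self_at_risk"):
        return "protect_self"    # Third Law analogue: self-preservation comes last
    return "idle"

print(choose_action({"order_given": True}))                        # -> obey_order
print(choose_action({"human_at_risk": True, "order_given": True})) # -> prevent_harm
```

The appeal of such rules is transparency: for any decision, one can point at exactly which rule fired. The weakness is equally clear: any situation the rule set does not cover falls through to a default.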
The bottom-up approach would instead place an AI without preconceived notions about morality in an environment and reinforce adequate behavior case by case using machine-learning techniques. This more closely resembles the way humans learn moral behavior, and a human could, for example, guide the AI in its moral development process. By letting the AI learn its ethical principles from well-simulated or real scenarios, its response to new, unforeseen events can be improved. One downside, however, is that its behavior may become less transparent, and the risk of erratic behavior in novel situations cannot be ruled out either.
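A correspondingly minimal bottom-up sketch (the scenario, action set, and feedback function are entirely hypothetical) could use simple tabular value learning, with a human tutor supplying the moral feedback signal case by case:

```python
import random
from collections import defaultdict

ACTIONS = ["administer", "ask_again", "call_nurse", "respect_refusal"]
q = defaultdict(float)        # (state, action) -> learned moral value
alpha, epsilon = 0.1, 0.2     # learning rate, exploration rate

def human_feedback(state: str, action: str) -> float:
    # Stand-in for a human tutor judging the moral adequacy of an action.
    approved = {("patient_refuses", "call_nurse"),
                ("patient_consents", "administer")}
    return 1.0 if (state, action) in approved else -0.1

for _ in range(2000):
    state = random.choice(["patient_refuses", "patient_consents"])
    if random.random() < epsilon:                         # explore a random action
        action = random.choice(ACTIONS)
    else:                                                 # exploit the current policy
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = human_feedback(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])

for state in ("patient_refuses", "patient_consents"):
    print(state, "->", max(ACTIONS, key=lambda a: q[(state, a)]))
```

The transparency problem is already visible in this toy: the learned policy is just a table of numbers, and explaining why the agent prefers one action over another is harder than pointing to an explicit rule.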
A combination of the two approaches will probably be the way forward. Humans are not error-free in predicting the consequences of their actions, and psychologically, a belief in a moral agent's good intentions is an important factor for acceptance and trust in a sometimes unpredictable reality. Even if a robot makes fewer errors on average (which seems to be the case for autonomous vehicles) and would therefore be preferable in a strictly utilitarian sense, our need to understand why a certain action was chosen could lead to resistance against machine decision making. Incorporating understandable top-down rules with clear intentions may be necessary for acceptance.
These are indeed interesting times for normative ethics. I hope for fruitful discussions between professional ethicists, the machine-learning community, and the general public. The foundational principles on which leading normative ethical theories rest will be scrutinized with a new sense of practical urgency, and psychologists will undoubtedly deepen their understanding of what guiding principles really govern our interactions with the world. Maybe the global nature of technology will even lead humanity as a whole closer to common ethical principles.