Back around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics: Is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her computer scientist spouse, Michael Anderson, figuring his algorithmic expertise might help. At the time, he was reading about the making of the film 2001: A Space Odyssey, in which the spaceship computer HAL 9000 tries to murder its human crewmates. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence was to be pursued responsibly, he reckoned, it would also need to be able to solve moral dilemmas.
In the 16 years since, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and could soon make life-and-death decisions for self-driving cars. “Intelligent machines are absorbing the responsibilities we used to have, which is a terrible burden,” explains ethicist Patrick Lin of California Polytechnic State University. “For us to trust them to act on their own, it’s important that these machines are designed with ethical decision-making in mind.” The Andersons have devoted their careers to that challenge, deploying the first ethically programmed robot in 2010.
The Andersons are no longer alone in this work, nor is theirs the only philosophical approach. Recently, Georgia Institute of Technology computer scientist Mark Riedl has taken a radically different tack: teaching AIs to learn human morals by reading stories. From his perspective, the global corpus of literature has far more to say about ethics than the philosophical canon alone, and advanced AIs can tap into that wisdom. For the past couple of years, he has been developing such a system, which he calls Quixote, after the novel by Cervantes. Riedl sees a deep precedent for his approach: children learn from stories, which serve as “proxy experiences” that help teach them how to behave appropriately.
Given that AIs don’t have the luxury of childhood, he believes stories could be used to “quickly bootstrap a robot to a point where we feel comfortable about it understanding our social conventions.”
Kemo D. 7
Source: Discover Magazine