Fool hu-mans, there is no escape!
The Wall Street Journal did a piece last week on drones that decide whether to fire on a target, provocatively titled “Could We Trust an Army of Killer Robots?”
Although the title goes for the sci-fi jugular, the article balances questions about robot decision making with concerns like those of Georgia Tech’s Mobile Robot Lab director Ronald Arkin:
His work has been motivated in large part by his concerns about the failures of human decision-makers in the heat of battle, especially in attacking targets that aren’t a threat. The robots “will not have the full moral reasoning capabilities of humans,” he explains, “but I believe they can—and this is a hypothesis—perform better than humans.” [1]
In other words: Do we trust an army of people?
Drones might make better decisions in some contexts. Whether drones can be trusted is a whole ‘nother question.
Maybe trust can only occur between people (as in this post on “A sociologist’s guide to trust and design”). If a trust relationship is understood as including risk on the part of the trustor, as well as “responsibility for behavior and willingness to make good for failures” [2] on the part of the entrusted, then placing a website, robot, algorithm or object into the trustee side of that relationship raises sticky questions about what kinds of uncertainty entail risk, and what it means to be responsible for one’s behavior.
But then again… if I believe that a person is responsible for their behavior, and trust them, and then later discover that they’re not responsible, does that mean that I never really trusted them? From a descriptive point of view, maybe it doesn’t matter so much whether a machine can be responsible as whether a person believes a machine can be responsible, or trustworthy.
In a 2004 CHI panel on human-robot interaction, panelists discussed the potential risks of people both trusting and not trusting robots (though maybe what is called trust here could be better described, from some perspectives, as having a sense of assurance). One point that panelists returned to was the need for people to understand the behavior of non-human agents — something that becomes increasingly difficult as algorithms become less predictable from a human point of view. People find creative ways to use a poorly performing robot when they understand its limitations, but either refuse to use a robot they don’t understand, or allow a robot they don’t understand to perform poorly.
Arguing that “understanding may be more important than trust”, the panel suggested:
The paradigm of “human supervision” may well give way to a promising, but simultaneously frightening future of “peer-to-peer interaction” where authority is given to whichever team member is most appropriate, be they human or machine [3].
How best to understand that kind of interaction?
—
References:
[1] McKelvey, T. (2012, May 19). Could We Trust an Army of Killer Robots? Wall Street Journal.
[2] Shneiderman, B. (2000). Designing trust into online experiences. Communications of the ACM, 43(12), 57–59.
[3] Bruemmer, D., Few, D., Goodrich, M., Norman, D., Sarkar, N., Scholtz, J., Smart, B., et al. (2004). How to trust robots further than we can throw them. CHI ’04 extended abstracts on Human factors in computing systems, CHI EA ’04 (pp. 1576–1577). New York, NY, USA: ACM. doi:10.1145/985921.986152
Robben, A. (2012, May 18). Military Technologies Are Transforming the Anthropology of Violence. Science Voices. Retrieved May 28, 2012, from http://blogs.sciencemag.org/sciencevoices/2012/05/military-technologies-are-tran.html
Haimes, E. (2002). What can the social sciences contribute to the study of ethics? Theoretical, empirical and substantive considerations. Bioethics, 16(2), 89–113.
Interesting thoughts, Rachelle – are the questions around human/robot interaction being asked more widely or frequently now in 2012 than in 2004, beyond the level of the specialist panel, that is?
Given the potential impact of such technological power, and the dubious levels of support it might engender amongst the population, surely this issue needs to be considered from a much wider public vantage point. Paul Virilio writes powerfully that major ethical decisions regarding AI, biotech, GM food and the like are currently being considered only on small, inaccessible ethics panels run by scientists deep within research institutes, where technologists often outnumber philosophers, activists or other voices.
Your point about ‘understanding’ feels much more ideological in that context. By replacing the concrete awareness of limits with a much more nebulous ‘understanding’ of intent, purpose or merit, aren’t we segregating the population exposed to such technology into those who understand it (the scientists/technologists) and those who don’t?
We already talk of a digital divide, a technological alienation where older generations are restricted in the mode of their social interaction by virtue of not being able to understand how to use Twitter or Facebook. It seems like something similar is happening regarding the killer robot debate.
The debate on this issue needs to be of a higher quality, and to be conducted far more widely.