Fool hu-mans, there is no escape!
The Wall Street Journal did a piece last week on drones that decide whether to fire on a target, provocatively titled “Could We Trust an Army of Killer Robots?”
Although the title goes for the sci-fi jugular, the article balances questions about robot decision making with concerns like those of Georgia Tech’s Mobile Robot Lab director Ronald Arkin:
His work has been motivated in large part by his concerns about the failures of human decision-makers in the heat of battle, especially in attacking targets that aren’t a threat. The robots “will not have the full moral reasoning capabilities of humans,” he explains, “but I believe they can—and this is a hypothesis—perform better than humans.”
In other words: Do we trust an army of people?
Drones might make better decisions in some contexts. Whether drones can be trusted is a whole ’nother question.
Maybe trust can only occur between people (as in this post on “A sociologist’s guide to trust and design”). If a trust relationship is understood as including risk on the part of the trustor, as well as “responsibility for behavior and willingness to make good for failures” on the part of the entrusted, then placing a website, robot, algorithm or object into the trustee side of that relationship raises sticky questions about what kinds of uncertainty entail risk, and what it means to be responsible for one’s behavior.
But then again… if I believe that a person is responsible for their behavior, and trust them, and then later discover that they’re not responsible, does that mean that I never really trusted them? From a descriptive point of view, maybe it doesn’t matter so much whether a machine can be responsible as whether a person believes a machine can be responsible, or trustworthy.
In a 2004 CHI panel on human-robot interaction, panelists discussed the potential risks of people both trusting and not trusting robots (though maybe what is called trust here could be better described as having a sense of assurance, from some perspectives). One point that panelists returned to was the need for people to understand the behavior of non-human agents — something that becomes increasingly difficult as algorithms become less predictable from a human point of view. People find creative ways to use a poorly performing robot when they understand its limitations, but either refuse to use a robot they don’t understand, or allow a robot they don’t understand to perform poorly.
Arguing that “understanding may be more important than trust”, the panel suggested:
The paradigm of “human supervision” may well give way to a promising, but simultaneously frightening future of “peer-to-peer interaction” where authority is given to whichever team member is most appropriate, be they human or machine.
How best to understand that kind of interaction?
McKelvey, T. (2012, May 19). Could We Trust an Army of Killer Robots? The Wall Street Journal.
Shneiderman, B. (2000). Designing trust into online experiences. Communications of the ACM, 43(12), 57–59.
Bruemmer, D., Few, D., Goodrich, M., Norman, D., Sarkar, N., Scholtz, J., Smart, B., et al. (2004). How to trust robots further than we can throw them. CHI ’04 Extended Abstracts on Human Factors in Computing Systems (pp. 1576–1577). New York, NY, USA: ACM. doi:10.1145/985921.986152
Robben, A. (2012, May 18). Military Technologies Are Transforming the Anthropology of Violence. Science Voices. Retrieved May 28, 2012, from http://blogs.sciencemag.org/sciencevoices/2012/05/military-technologies-are-tran.html
Haimes, E. (2002). What can the social sciences contribute to the study of ethics? Theoretical, empirical and substantive considerations. Bioethics, 16(2), 89–113.