Tag Archives: robots

What robots in space teach us about teamwork: A deep dive into NASA


Note from the Editor, Tricia Wang: The final contributor in the Co-designing with machines edition is Janet Vertesi (@cyberlyra), assistant professor of sociology at Princeton University, urging us to think about organizations when we talk about robots. To overcome the human-machine binary, she takes us into her years of fieldwork with NASA’s robotic teams to show us that robotic work is always teamwork, never a one-to-one interaction with robots. It’s not easy to get inside organizations, much less a complicated set of institutions such as NASA, but that is why Janet’s writings are a rare and powerful examination of how robots are actually used. She is a frequent op-ed contributor to outlets like CNN. Following her first book, on NASA’s Mars Rover expedition, she is already working on her second book, about robots and organizations.

One robot, many humans

I study NASA’s robotic spacecraft teams: people for whom work with robots is not some sci-fi fantasy but a daily task. Their robotic teammates roll on planetary surfaces or whip past the atmospheres of gas giants and icy moons at tremendous speeds.

It is often easy to forget about these earth-bound groups behind the scenes when we are transfixed by new images of distant worlds or the achievements of these intrepid machines. We might only catch a quick glimpse of a few people in a room, an American flag on the wall behind them, cheering when a probe aces a landing or swings into orbit: like this week, when Juno arrived at Jupiter. But this is only a small fraction of the team. Not only do the complex probes require a group of engineers to operate and maintain them safely, but the scientific requirements of each mission also bring together many diverse experts to explore new worlds.

Robotic work is teamwork

To that end, working with a spacecraft is always teamwork, a creative task that brings together hundreds of people. Like any team, they use local norms of communication and interaction, and organizational routines and culture, in order to solve problems and achieve their goals. The spacecraft exploring our solar system have enough artificial intelligence to know better than to drive off a cliff, or to reset their operating systems in case of a fault. There the autonomy ends. For the rest, every minute of their day, down to the second, is part of a plan, commanded and set into code by specialists on earth.

How to decide what the robot should do? First the team must take into account some basic constraints. When I studied the Mars Exploration Rover mission team, everyone knew that Opportunity could not drive very quickly; lately it has suffered from memory lapses and stiff joints in its old age. On another mission I have studied as an ethnographer, the path the spacecraft takes is decided years in advance to take into account the planetary system’s delicate orbital dynamics and to enable the team to see as much of the planet, its moons, and its rings as possible. It is not easy to change course. On all missions, limits on power, time, and memory on board provide hard constraints for planning.
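To make those constraints concrete, here is a purely illustrative sketch in Python. The activity names and budget numbers are invented, not actual mission software; the point is only the shape of the bookkeeping: a day’s plan must fit hard limits on time, power, and onboard memory before it can be commanded.

```python
# Purely illustrative: hypothetical activities and budgets, not NASA software.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    duration_min: int   # wall-clock minutes the activity occupies
    power_wh: float     # energy drawn from the batteries
    data_mb: float      # telemetry written to onboard memory

# Hard limits for one planning cycle (invented numbers).
TIME_BUDGET_MIN = 240
POWER_BUDGET_WH = 300.0
MEMORY_BUDGET_MB = 150.0

plan = [
    Activity("drive_to_outcrop", 90, 140.0, 20.0),
    Activity("pancam_mosaic", 45, 60.0, 80.0),
    Activity("spectrometer_integration", 60, 50.0, 10.0),
]

def check_plan(activities):
    """Raise if the plan exceeds any hard resource constraint."""
    totals = {
        "time (min)": (sum(a.duration_min for a in activities), TIME_BUDGET_MIN),
        "power (Wh)": (sum(a.power_wh for a in activities), POWER_BUDGET_WH),
        "memory (MB)": (sum(a.data_mb for a in activities), MEMORY_BUDGET_MB),
    }
    for label, (used, budget) in totals.items():
        if used > budget:
            raise ValueError(f"plan exceeds {label} budget: {used} > {budget}")

check_plan(plan)  # passes silently; an over-budget plan would raise
```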

Read More… What robots in space teach us about teamwork: A deep dive into NASA

The future of designing autonomous systems will involve ethnographers


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Madeleine Clare Elish (@mcette), an anthropologist and researcher at Data & Society, who presents a case for why current cultural perceptions of the role of humans in automated systems need to be updated in order to protect against new forms of bias and worker harms. Read more about her research on military drones and machine intelligence at Slate. Madeleine also works as a researcher with the Intelligence & Autonomy Initiative at Data & Society, which develops empirical and historical research in order to ground policy debates around the rise of machine intelligence.

“Why would an anthropologist study unmanned systems?” This is a question I am often asked by engineers and product managers at conferences. The presumption is that unmanned systems (a reigning term in the field, albeit unreflexively gendered) are just that, free of humans; why would someone who studies humans take this as their object of study? Of course, we, as ethnographers, know there are always humans to be found.  Moreover, few if any current systems are truly “unmanned” or “autonomous.” [1] All require human planning, design and maintenance. Most involve the collaboration between human and machine, although the role of the human is often obscured. When we examine autonomous systems (or any of the other terms invoked in the related word cloud: unmanned, artificially intelligent, smart, robotic, etc) we must look not to the erasures of the human, but to the ways in which we, as humans, are newly implicated.

My dissertation research, as well as research conducted with the Intelligence and Autonomy Initiative at Data & Society, has examined precisely what gets obscured when we call something “unmanned” or “autonomous.” I’ve been increasingly interested in the conditions and consequences of how human work and skill become differently valued in these kinds of highly automated and autonomous systems. In this post, Tricia has asked me to share some of the research I’ve been working on around the role of humans in autonomous systems and to work through some of the consequences for how we think about cooperation, responsibility, and accountability.


Modern Times, 1936 [giphy]

The Driver or the System?

Let me start with a story: I was returning to New York from a robot law conference in Miami. I ordered a Lyft to take me to the Miami airport, selecting the address that first populated the destination field when I typed the phrase “airport Miami” into the Lyft app. The car arrived. I put my suitcase in the trunk. I think the driver and I exchanged hellos–or at the very least, a nod and a smile. We drove off, and I promptly fell asleep. (It had been a long week of conferencing!) I woke up as we were circling an exit off the highway, in a location that looked distinctly not like the entrance to a major airport. I asked if this was the right way to the airport. He shrugged, and I soon put together that he did not speak any English. I speak passable Spanish, and again asked if we were going to the right place. He responded that he thought so. Maybe it was a back way? We were indeed at the airport, but not on the commercial side. As he drove on, I looked nervously at the map on my phone.

Read More… The future of designing autonomous systems will involve ethnographers

The human-side of artificial intelligence and machine learning


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Steven Gustafson (@stevengustafson), founder of the Knowledge Discovery Lab at the General Electric Global Research Center in Niskayuna, New York. In this post, he asks what the role of humans will be in the future of intelligent machines. He makes the case that in the foreseeable future, artificially intelligent machines are the result of creative and passionate humans, and as such, we embed our biases, empathy, and desires into the machines, making them more “human” than we often think. I first came across Steven’s work while he was giving a talk hosted by Madeleine Clare Elish (edition contributor) at Data & Society, where he spoke passionately about the need for humans to move up the design process and to bring ethical thinking into AI innovation. Steven is a former member of the Machine Learning Lab and Computational Intelligence Lab, where he developed and applied advanced AI and machine learning algorithms for complex problem solving. In 2006, he received the IEEE Intelligent Systems “AI’s 10 to Watch” award. He currently serves on the Steering Committee of the National Consortium for Data Science, based out of the University of North Carolina. Recently, he gave the keynote at SPi Global’s Client Advisory Board Summit in April 2016, titled “Advancing Data & Analytics into the Age of Artificial Intelligence and Cognitive Computing”.

Recently we have seen how Artificial Intelligence and Machine Learning can amaze us with seemingly impossible results like AlphaGo. We also see how machines can generate fear with perceived “machine-like” reasoning, logic, and coldness, generating potentially destructive outcomes with a lack of humanity in decision making. A popular example of the latter is how self-driving cars decide to choose between two bad outcomes. In these scenarios, the AI and ML are embodied as a machine of some sort, either physical like a robot or car, or a “brain” like a predictive crime algorithm made popular in the book and film “Minority Report” and, more recently, the TV show “Person of Interest.”

I am a computer scientist with expertise in and a passion for AI and machine learning, and I’ve been working across broad technologies and applications for the past decade. When I see these applications of AI, and the fear or hype of their future potential, I like to remember what first inspired me. First, I am drawn to computers because they are a great platform for creation and instant feedback. I can write code and immediately run it. If it doesn’t work, I can change the code and try it again. Sure, I can write proofs and develop theory, which have their own beauty and necessity at times, but I remember one of the first database applications I created and how fun it was to enter sample data and queries and see it work properly. I remember the first time I developed a neural network and made it play itself to learn, without any background knowledge, how to play tic-tac-toe. This may be a very trivial example, but it is inspiring nonetheless.
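Gustafson doesn’t include his original program, but the idea is easy to sketch. Below is a minimal, hypothetical Python illustration of that kind of self-play learning, assuming a simple Monte Carlo-style value update (one of several ways such a learner can be built):

```python
# Hypothetical illustration, not Gustafson's original program: tic-tac-toe
# learned purely through self-play. Each finished game nudges the value of
# every (state, move) pair played toward the final outcome.
import random
from collections import defaultdict

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

Q = defaultdict(float)        # (board, move) -> estimated value
ALPHA, EPSILON = 0.3, 0.1     # learning rate, exploration rate

def choose(board):
    """Epsilon-greedy: mostly pick the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])

def train(episodes=20000):
    for _ in range(episodes):
        board, player, history = ' ' * 9, 'X', []
        while True:
            m = choose(board)
            history.append((board, m, player))
            board = board[:m] + player + board[m+1:]
            w = winner(board)
            if w or not moves(board):
                for state, move, p in history:
                    reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                    Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
                break
            player = 'O' if player == 'X' else 'X'

train()  # after training, greedy play from Q makes a competent opponent
```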

Can a machine write its own code? Can a machine design a new, improved version of itself? Can a machine “evolve” like humans into a more intelligent species? Can a machine talk to another machine using a human language like English? These were all questions that excited me as an undergraduate computer scientist, that led me to study AI and ML in grad school, and that can all be answered with a yes! Machines, or computers and algorithms, have been shown in different circumstances to achieve these capabilities, yet both the idea that machines have these capabilities and the idea that machines can learn are scary to many people. But when we step into each one of these achievements, we find something that I believe is creative, inspiring, and human.

But let me step back for a minute. Machines cannot do those things above in a general sense. For example, if I put my laptop in a gym with a basketball, it can’t evolve a body and learn to play basketball. That is, it can’t currently do that without the help of many bright engineers and scientists. If I downloaded all my health data into my phone, my phone is not going to learn how to treat my health issues and notify my doctor. Again, it can’t currently do that without the help of many smart engineers and scientists. So while my machine can’t become human today on its own, with the help of many engineers and scientists solving some very interesting technology, user experience, and domain-specific problems, machines can do some very remarkable things, like drive a car or engage in conversation.

The gap that creative, intelligent, and trained engineers and scientists fill today is a gap that must be closed before machines can both learn and apply that learning on their own. It is also a deeply human gap: it highlights the drive of our species, our accumulation of knowledge, our ability to overcome challenging problems, and our desire to collaborate and work together on meaningful problems. And yes, it can also highlight our failures to do the right thing. But it is a human thing, still.

Read More… The human-side of artificial intelligence and machine learning

The hidden story of how metrics are being used in courtrooms and newsrooms to make more decisions



Note from the Editor, Tricia Wang: The next author for the Co-designing with machines edition is Angèle Christin (@angelechristin), sociologist and Postdoctoral Fellow at the Data & Society Institute. In a riveting post that takes us inside the courtrooms of France and the newsrooms of the US, Angèle compares how people deal with technologies of quantification in data-rich and data-poor environments. She shows how people in both contexts use similar strategies of resistance and manipulation of digital metrics in courtrooms and newsrooms. Her post is incredibly valuable, as both courtrooms and newsrooms are new areas where algorithmic practices are being introduced, sometimes with appalling results, as this ProPublica article reveals. Angèle studies sectors and organizations where the rise of algorithms and ‘big data’ analytics transforms professional values, expertise, and work practices. She received her PhD in Sociology from Princeton University and the EHESS (Paris) in 2014.


I came to the question of machines from the study of numbers, more precisely the role that numbers play in organizations. Ten years ago, I wasn’t very interested in technology: I was a student in Paris, I barely had an email address, and what I wanted to study was criminal justice.

The fall of 2005 in France was marked by the events that came to be known as the “urban riots” (émeutes urbaines), a period of unrest among the young men and women living in city outskirts (banlieues). Their protests were triggered by the death by electrocution of two teenagers who had sought refuge in an electric substation while being chased by the police.

Over the next couple of months, cars were burning, the police were everywhere, and many young men of African and North-African descent were arrested, arraigned, and sentenced, usually to prison. Parisian courts relied heavily on an old penal procedure for misdemeanors, the comparutions immédiates (emergency hearings), which makes it possible to sentence defendants immediately after their arrest. The procedure was originally designed to control “dangerous” urban crowds in the second half of the 19th century.

During and after the urban riots, journalists and intellectuals denounced the revival of a bifurcated justice system, in which lower class and minority defendants were tried in a hurry, with meager resources for public defenders, insufficient procedural safeguards, and high rates of prison sentences. Crowds of friends and supporters congregated in the courts and attended the hearings, cheering the defendants and booing the judges. The police heavily guarded the courtrooms in order to prevent direct attacks on the magistrates.

In all of this, judges and prosecutors remained silent. No one knew what was really going on before or after the hearings. I decided to go behind the scenes to examine how prosecutors, judges, and lawyers worked on the cases and decided on the charges and sentences of the defendants. I was able to conduct a yearlong ethnographic study of several criminal courts, including one in Paris and one in a North-East banlieue. Read More… The hidden story of how metrics are being used in courtrooms and newsrooms to make more decisions

Lou and Cee Cee prepare for fieldwork in the future: a world where robots conduct ethnography


Note from the Editor, Tricia Wang: Kicking off our Co-designing with machines edition is Alicia Dudek (@aliciadudek), Innovation Insight Lead & Design Ethnographer at Deloitte Digital Australia. Using design thinking, ethnography, and other deep contextual customer research methods, she designs, conducts, and trains others in the world of customer empathy. Her contribution to this edition is the first science fiction to explore robots conducting ethnographic work. She uses a fictional story with Cee Cee, the robo-ethnographer, to examine what aspects of fieldwork can be conducted by a robot. I first met Alicia Dudek at an EPIC conference in London, where I became a fan of her work and promptly interviewed her for our edition that featured the best from EPIC. Read the interview, Play nice – design ethnographer meets management consultant, and find more of her writings on ethnography at her site.

Increasingly we are seeing more conversations about what it looks like when the robots take your job. Once upon a time we believed this was some remote future in which we’d finally invented the technology that could replace our bio-body’s ingenious functions. Now we are coming into a time when our technology has grown so advanced that the replacement of ourselves with robots is not only imagined, but plausible and even possible. An example of this shift is the imagining of white-collar jobs ‘going robo’ that was recently covered by Quartz.

Writing this piece I wanted to have a little fun imagining a wonderful world where we can work hand in hand with robot peers. It is exciting to imagine the day when artificial intelligence is on par with that of our human research team members. Ethnographic technology is sometimes slow to progress due to the art and science nature of our work, but if we had the magic wand to unite all the drones, phones, data smarts, and humanly arts, we might have robo-colleagues as a part of our team one day soon. Friendly humans and friendly robots conducting ethnography together are a powerful combination. Also thank you to Elizabeth Dubois for writing this piece about trace interviews, which has some cool ideas on how we might conduct interviews.

– Alicia Dudek


Lou muses over her tea as she prepares for fieldwork with Cee Cee.

Lou mused over the steam rising from her cup of tea. She gathered her thoughts around what she’d be looking for in the field next week. She and her team were going to shadow young families and understand how they managed their finances. Field work was always one of the most exciting and exhausting parts of the data collection in her ethnography projects. What would be the right focus area for a trip into this family’s everyday life? She knew she’d have to cover the basics of bank accounts, credit cards, laptop / tablet / phone usage, calendar keeping, overall scheduling, family diaries, but what else might be valuable? What else could help to point the team in the direction of the golden nuggets of insight? All these years of traipsing in and out of the field and analysing scores of transcripts, videos, audios had left her always questioning, what’s next? What were the mental parameters that led her to the deep and meaningful insights from field observations? What was that ineffable thing that clients kept hiring her for again and again? How does an ethnographer see differently to find the golden nuggets?

Lou was jostled out of this reverie as Cee Cee energetically buzzed into the office and landed on Louise’s desk with a plop. “Louise I’m here for my briefing for the field work to be conducted.” Lou looked up from her imagined fieldwork and focused on Cee Cee’s entry into her office. In the past Lou had had dozens of assistants, grad students, and junior ethnographers to help with her work. None of them was quite like Cee Cee, who was rather innovative and definitely pushed Lou’s ways of working to new places. “Alright Cee Cee let’s get going on the briefing and I’ll tell you what we’re looking for and how to behave when you get out there.” Lou readjusted her posture and swung around to meet Cee Cee head on and get into the briefing. Read More… Lou and Cee Cee prepare for fieldwork in the future: a world where robots conduct ethnography

Co-designing with machines: moving beyond the human/machine binary



Letter from the Editor: I am happy to announce the Co-Designing with Machines edition. As someone with one foot in industry, redesigning organizations to flourish in a data-rich world, and another foot in research, I’m constantly trying to take an aerial view of technical achievements. Lately, I’ve been obsessed with the future of design in a data-rich world increasingly powered by artificial intelligence and its algorithms. What started out as a kitchen conversation with my colleague, Che-Wei Wang (contributor to this edition), about generative design and genetic algorithms turned into a big chunk of my talk at Interaction Design 2016 in Helsinki, Finland. That chunk then took up more of my brain space and expanded into this edition of Ethnography Matters, Co-designing with machines. In this edition’s introductory post, I share a more productive way to frame human and machine collaboration: as a networked system. Then I chased down nine people who are at the forefront of this transformation to share their perspectives with us. Alicia Dudek from Deloitte will kick off the next post with a speculative fiction on whether AI robots can perform any parts of qualitative fieldwork. Janet Vertesi will close this edition, giving us a sneak peek from her upcoming book with an article on human and machine collaboration in NASA Mars Rover expeditions. And in between Alicia and Janet are seven contributors, coming from marketing to machine learning, with super thoughtful articles. Thanks for joining the ride! And if you find this to be engaging, we have a Slack where we can continue the conversations and meet other human-centric folks. Follow us on Twitter @ethnomatters for updates. Thanks. @triciawang


Who is winning the battle between humans and computers? If you read the headlines about Google’s Artificial Intelligence (AI), DeepMind, beating the world-champion Go player, you might think the machines are winning. CNN’s piece on DeepMind proclaims, “In the ultimate battle of man versus machine, humans are running a close second.” If, on the other hand, you read the headlines about Facebook’s Trending News Section and Personal Assistant, M, you might be convinced that the machines are less pure and perfect than we’ve been led to believe. As the Verge headline puts it, “Facebook admits its trending news algorithm needs a lot of human help.”

The headlines on both sides are based in a false, outdated trope: The binary of humans versus computers. We’re surrounded by similar arguments in popular movies, science fiction, and news. Sometimes computers are intellectually superior to humans, sometimes they are morally superior and free from human bias. Google’s DeepMind is winning a zero-sum game. Facebook’s algorithms are somehow failing by relying on human help, as if collaboration between humans and computers in this epic battle is somehow shameful.

The fact is that humans and computers have always been collaborators. The binary human/computer view is harmful. It’s restricting us from approaching AI innovations more thoughtfully. It’s masking how much we are biased to believe that machines don’t produce biased results. It’s allowing companies to avoid taking responsibility for their discriminatory practices by saying, “it was surfaced by an algorithm.” Furthermore, it’s preventing us from inventing new and meaningful ways to integrate human intelligence and machine intelligence to produce better systems.

As computers become more human, we need to work even harder to resist the binary of computers versus humans. We have to recognize that humans and machines have always interacted as a symbiotic system. Since the dawn of our species, we’ve changed tools as much as tools have changed us. Up until recently, the ways our brains and our tools changed were limited by the amount of data input, storage, and processing both could handle. But now data growth has outpaced Moore’s Law, and we’re sitting on more data than we’re able to process. To make the next leap in getting the full social value out of the data we’ve collected, we need to make a leap in how we conceive of our relationships to machines. We need to see ourselves as one network, not as two separate camps. We can no longer afford to view ourselves in an adversarial position with computers.

To leverage the massive amount of data we’ve collected in a way that’s meaningful for humans, we need to embrace human and machine intelligence as a holistic system. Despite the snazzy zero-sum game headlines, this is the truth behind how DeepMind mastered Go. While the press portrayed DeepMind’s success as a feat independent of human judgement, that wasn’t the case at all. Read More… Co-designing with machines: moving beyond the human/machine binary

Trusting machines



Fool hu-mans, there is no escape!

The Wall Street Journal did a piece last week on drones that decide whether to fire on a target, provocatively titled “Could We Trust an Army of Killer Robots?”

Although the title goes for the sci-fi jugular, the article balances questions about robot decision making with concerns like those of Georgia Tech’s Mobile Robot Lab director Ronald Arkin:

His work has been motivated in large part by his concerns about the failures of human decision-makers in the heat of battle, especially in attacking targets that aren’t a threat. The robots “will not have the full moral reasoning capabilities of humans,” he explains, “but I believe they can—and this is a hypothesis—perform better than humans” [1]

In other words: Do we trust an army of people?

Drones might make better decisions in some contexts. Whether drones can be trusted is a whole ’nother question. Read More… Trusting machines

Cheering up the chatbot


The speech-to-text tool on my phone is convinced that “ethnography” = “not greasy.” (At least “not greasy” tends to be a positive thing?) Generally STT and voice commands work great on it, though. You have to talk to it the right way: enunciate; leave dramatic pauses between each word; don’t feed it too many words at once. The popular speech recognition application Dragon NaturallySpeaking emphasizes that users train the system to recognize their voices, but there’s always an element of the system training its users how to talk.

For entertainment purposes, it’s best to avoid the careful pauses and smush things together, producing text message gems like “Send me the faxable baby.”  It’s the mismatches between human intention and machine representation that can make using natural language interaction tools like STT, chatbots and speech prediction both frustrating and hilarious. When it’s bad, it’s really really good.
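For readers who want to poke at STT programmatically, here is a minimal sketch using the open-source Python SpeechRecognition library. This is not the phone tool described above, and the audio filename is hypothetical:

```python
# A minimal speech-to-text sketch with the open-source SpeechRecognition
# library (pip install SpeechRecognition). Not the phone tool described
# in the post; "field_note.wav" is a hypothetical recording.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("field_note.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # The recognizer returns its single best phrase hypothesis; gems like
    # "ethnography" -> "not greasy" happen when unfamiliar words get mapped
    # onto more probable phrases in the language model.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer")
```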

I’ve been playing with the game Cheer up the Chatbot the last couple days (from RRRR, “Where the games play you”).

Chatbot has an unusual way of interacting with people, as so many chatbots do.


Screen explaining Chatbot’s mental disorders

Understandably, Chatbot is sad.


Poor chatbot


The goal is to get Chatbot to smile.


Open-ended questions make robots happy


The game is a mix of bot and human-to-human chat, where you switch between talking to the game’s bot and to different players who are presented as the “Chatbot” speaker to each other.  When you hit a moment where there are enough players with different agendas online — including some who don’t know how the game works, some presenting as Chatbot, and some presenting as people — it can get weird.

Read More… Cheering up the chatbot

The ethnography of robots


Heather Ford spoke with Stuart Geiger, PhD student at the UC Berkeley School of Information, about his emerging ideas on the ethnography of robots. “Not the ethnography of robotics (e.g. examining the humans who design, build, program, and otherwise interact with robots, which I and others have been doing),” wrote Geiger, “but the ways in which bots themselves relate to the world.” Geiger believes that constructing and relating an emic account of the non-human should be the ultimate challenge for ethnography, but that he is “getting an absurd amount of pushback from it.” He explains why in this fascinating account of what it means to study the culture of robots.

Stuart Geiger speaking about bots on Wikipedia at the CPoV conference by Institute of Network Cultures on Flickr

HF: So, what’s new, almost-Professor Geiger?

SG: I just got back from the 4S conference — the annual meeting of the Society for Social Studies of Science — which is pretty much the longstanding home for not just science studies but also Science and Technology Studies. I was in a really interesting session featuring some really cool qualitative studies of robots, including two ethnographies of robotics. One of the presenters, Zara Mirmalek, was looking at the interactions between humans and robots within a modified framework from intercultural communication and workplace studies.

I really enjoyed how she was examining robots as co-workers from different cultures, but it seems like most people in the room didn’t fully get it, thinking it was some kind of stretched metaphor. People kept giving her the same feedback that I’ve been given — isn’t there an easier way you can study the phenomena that interest you without attributing culture to robots themselves? But I saw where she was going and asked her about doing ethnographic studies of robot culture itself, instead of the culture of people who interact with robots — and it seemed like half the room gave a polite chuckle. Zara, however, told me that she loved the idea and we had a great chat afterwards about this. Read More… The ethnography of robots