One robot, many humans
I study NASA’s robotic spacecraft teams: people for whom working with robots is not some sci-fi fantasy but a daily task. Their robotic teammates roll on planetary surfaces or whip past the atmospheres of gas giants and icy moons at tremendous speeds.
It is often easy to forget about these earth-bound groups behind the scenes when we are transfixed by new images of distant worlds or the achievements of these intrepid machines. We might only catch a quick glimpse of a few people in a room, an American flag on the wall behind them, cheering when a probe aces a landing or swings into orbit: like this week, when Juno arrived at Jupiter. But this is only a small fraction of the team. Not only are the probes complex, requiring a group of engineers to operate and maintain them safely, but the scientific requirements of each mission bring together many diverse experts to explore new worlds.
Robotic work is team work
Working with a spacecraft is always teamwork, a creative task that brings together hundreds of people. Like any team, they use local norms of communication and interaction, and organizational routines and culture, in order to solve problems and achieve their goals. The spacecraft exploring our solar system have just enough artificial intelligence to know better than to drive off a cliff, or to reset their operating systems in case of a fault. There the autonomy ends. For the rest, every minute of their day, down to the second, is part of a plan, commanded and set into code by specialists on earth.
How to decide what the robot should do? First the team must take into account some basic constraints. When I studied the Mars Exploration Rover mission team, everyone knew that Opportunity could not drive very quickly; lately it has suffered from memory lapses and stiff joints in its old age. On another mission I have studied as an ethnographer, the path the spacecraft takes is decided years in advance to take into account the planetary system’s delicate orbital dynamics and enable the team to see as much of the planet, its moons, and its rings as possible. It is not easy to change course. On all missions, limits of power, time, and on-board memory provide hard constraints for planning.
Human factors are in the mix too. I often compare working on a spacecraft team to being on a bus with hundreds of people, each with their own idea about where to go and what to do – but with only one steering wheel. To make any decisions at all about robotic activities, the group first has to decide how to decide. They come up with a social organization for their team, codes of conduct and rules to govern their interactions. And they must constantly work together to prioritize which observations to send to the spacecraft.
I have spent ten years researching robotic team decision making and I can say for certain that there is no one best way to command a spacecraft. One team I studied uses a matrix organization structure, sorting scientists associated with different instruments into cross-cutting working groups and charging those groups to decide what science should be done during different periods of the spacecraft’s path. Another group I studied has a flat command structure and requires unilateral consensus across the whole team before the robot can act. I am analyzing these different groups in forthcoming work, comparing their work cultures and their organizational practices. Despite local differences, they have a lot in common. Each team is highly successful and conducts important scientific work. And even if they have different ways of working together, everyone on board is committed to reaching agreement.
Social organization affects the robot’s actions
The robot’s every move in space is determined by the outcome of these organizational interactions: what one scientist I interviewed called a “tightly scripted little dance that we do.” When a scientist wants to make an observation, that request must go through the group decision-making process. If they are successful, they will get the data and raw material necessary to conduct their work. But if the observation is cut entirely as part of the process, the work cannot be done at all.
This means that every photograph you see on the internet or on the news is the result of these groups working in concert to decide: this picture, this reading, this angle, this moment. Because the decision-making process results in the robot conducting some scientific investigations but not others, it requires careful negotiation, difficult prioritization, and recourse to local cultures of decision-making to make sure everyone is on board with the plan. Ultimately, deciding how to decide plays an important role in how people relate to their robots.
Organization and the robotic body
Organizations play another important role in the way that people relate to their robots. I saw this most clearly on the Mars Rover mission, where it was commonplace for team members to use their bodies to imitate the rover. They held their hands out to approximate the robot’s camera “eyes,” tilted their torsos back and forth to “feel” the pitch and roll of the craft, and swung their forearms awkwardly from their elbows like the robotic arm.
As I argued in my first book, Seeing Like a Rover, this embodied activity helped rover team members to understand their robot’s activities at a distance, much like anthropologist Natasha Myers found that protein biologists use their bodies to understand simulated proteins. Yet as everyone embodied the same robot together in the same way and felt its actions throughout their whole frame, this also reified their organization’s commitment to unilateral consensus. This team preferred to allocate spacecraft time not one instrument at a time, but by bringing all the instruments and scientists together to solve directed, shared problems. So getting as many people as possible to participate in a shared plan and shared set of robotic experiences helped to generate unanimous support for a plan. The body work of being the rover also helped to turn the robot itself into the team’s totem, which in turn cemented group solidarity and strong social ties.
Robots, organizations, and design
One way to move beyond the human-machine binary is to stop thinking only about one-human-one-machine. While it’s tempting to imagine a future where we all have our own personal R2D2, humans are social beings. When robots join us at work, they will enter group settings with existing hierarchies, cultures, and interactional norms. Whether working on the shop floor, dispensing medication in a hospital, or cleaning our homes, these robots will need to navigate organizational norms alongside cultural expectations.
To that end, we must incorporate organizational thinking into our design thinking. This means investigating how different organizational forms demand different kinds of interactions from robotic agents. The intimacy of the home environment might influence how the Roomba is perceived as a pet or “one of the family,” as Georgia Tech researchers found; meanwhile, a robot in a hospital might have the cheeriest of bedside manners, but it must also know how to behave in a hierarchical workplace when nurses, doctors, and surgeons give contradictory orders.
We must also know enough about group work and organizational culture that our own design ambitions do not upset the delicate sociotechnical work that people do with their robots. Certainly a more perfect 3-D visualization of Mars might assist in robotic planning. But abandoning the shared body of the robot as a subject position may disrupt the important organizational work that the consensus team is doing on earth when they imagine their robots in space.
Participatory design may reveal ways forward as we enroll many stakeholders in design practice; ethnography can help to develop a vocabulary for expressing organizational routines and workarounds. But if there is one thing we can apply from NASA’s robotic explorers to robotic workers of the future, it is to think organizationally about how they will join our human teams.
[All images courtesy NASA, JPL-Caltech; Mars images also courtesy Cornell, ASU]