Tag Archives: Janet Vertesi

What robots in space teach us about teamwork: A deep dive into NASA


Note from the Editor, Tricia Wang: The final contributor in the Co-designing with machines edition is Janet Vertesi (@cyberlyra), assistant professor of sociology at Princeton University, urging us to think about organizations when we talk about robots. To overcome the human-machine binary, she takes us into her years of fieldwork with NASA’s robotic teams to show us that robotic work is always teamwork, never a one-to-one interaction with robots. It’s not easy to get inside organizations, much less a complicated set of institutions such as NASA, but that is why Janet’s writings are a rare but powerful examination of how robots are actually used. She is a frequent op-ed contributor to outlets like CNN. Following her first book on NASA’s Mars Rover expedition, she is already working on her second book about robots and organizations.

One robot, many humans

I study NASA’s robotic spacecraft teams: people for whom work with robots is not some sci-fi fantasy but a daily task. Their robotic teammates roll on planetary surfaces or whip past the atmospheres of gas giants and icy moons at tremendous speeds.

It is often easy to forget about these earth-bound groups behind the scenes when we are transfixed by new images of distant worlds or the achievements of these intrepid machines. We might only catch a quick glimpse of a few people in a room, an American flag on the wall behind them, cheering when a probe aces a landing or swings into orbit: like this week, when Juno arrived at Jupiter. But this is only a small fraction of the team. Not only are the probes complex, requiring a group of engineers to operate and maintain them safely, but the scientific requirements of each mission also bring together many diverse experts to explore new worlds.

Robotic work is teamwork

To that end, working with a spacecraft is always teamwork, a creative task that brings together hundreds of people. Like any team, they draw on local norms of communication and interaction, and on organizational routines and culture, in order to solve problems and achieve their goals. The spacecraft exploring our solar system have enough artificial intelligence to know better than to drive off a cliff, or to reset their operating systems in case of a fault. There the autonomy ends. For the rest, every minute of their day, down to the second, is part of a plan commanded and set into code by specialists on earth.
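
To make that division of labor concrete, here is a minimal, purely illustrative sketch (not actual flight software; the command names and timings are invented): the spacecraft simply walks through a time-tagged sequence written on Earth, and its only autonomous decision is to stop and safe itself if a fault is detected.

```python
from dataclasses import dataclass

# Purely illustrative (not actual flight software): the spacecraft executes a
# time-tagged command sequence written on Earth; its only on-board "autonomy"
# is a fault-protection check that halts the sequence and safes the vehicle.

@dataclass
class Command:
    time_s: int   # seconds from the start of the planned day
    name: str     # invented activity names, e.g. "drive_segment_1"

def execute_day(sequence, fault_detected):
    for cmd in sorted(sequence, key=lambda c: c.time_s):
        if fault_detected():
            print("Fault detected: entering safe mode and waiting for Earth")
            return  # the autonomy ends here; humans take over
        print(f"t+{cmd.time_s:>5}s  executing {cmd.name}")

# A plan written entirely by specialists on the ground, down to the second.
plan = [
    Command(0, "wake_up"),
    Command(600, "warm_up_camera"),
    Command(1200, "drive_segment_1"),
    Command(5400, "relay_data_to_orbiter"),
]
execute_day(plan, fault_detected=lambda: False)
```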

How to decide what the robot should do? First the team must take into account some basic constraints. When I studied the Mars Exploration Rover mission team, everyone knew that Opportunity could not drive very quickly; lately it has suffered from memory lapses and stiff joints in its old age. On another mission I have studied as an ethnographer, the path the spacecraft takes is decided years in advance to take into account the planetary system’s delicate orbital dynamics and to enable the team to see as much of the planet, its moons, and rings as possible. It is not easy to change course. On all missions, limits of power, time, and on-board memory impose hard constraints on planning.
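
Those hard constraints can be pictured as a simple budget check. The sketch below is a hypothetical illustration, with made-up activities and resource numbers, of how a day's plan must fit within the power, time, and on-board memory limits all at once before it can be uplinked.

```python
# Hypothetical activity names and numbers, for illustration only: a day's plan
# must fit within every on-board resource budget (power, time, memory) at once.

ACTIVITIES = {
    "drive_10m":    {"power_wh": 150, "time_min": 45,  "memory_mb": 20},
    "panorama":     {"power_wh": 60,  "time_min": 30,  "memory_mb": 250},
    "spectrometer": {"power_wh": 40,  "time_min": 120, "memory_mb": 15},
}

BUDGET = {"power_wh": 300, "time_min": 180, "memory_mb": 300}

def fits_budget(plan):
    """Return True only if the plan respects every hard resource limit."""
    totals = {resource: 0 for resource in BUDGET}
    for activity in plan:
        for resource, cost in ACTIVITIES[activity].items():
            totals[resource] += cost
    return all(totals[r] <= BUDGET[r] for r in BUDGET)

print(fits_budget(["drive_10m", "panorama"]))                  # True
print(fits_budget(["drive_10m", "panorama", "spectrometer"]))  # False: over the time budget
```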

Read More… What robots in space teach us about teamwork: A deep dive into NASA

Co-designing with machines: moving beyond the human/machine binary



Letter from the Editor: I am happy to announce the Co-Designing with Machines edition. As someone with one foot in industry, redesigning organizations to flourish in a data-rich world, and another foot in research, I’m constantly trying to take an aerial view on technical achievements. Lately, I’ve been obsessed with the future of design in a data-rich world increasingly powered by artificial intelligence and its algorithms. What started out as a kitchen conversation with my colleague Che-Wei Wang (a contributor to this edition) about generative design and genetic algorithms turned into a big chunk of my talk at Interaction Design 2016 in Helsinki, Finland. That chunk then took up more of my brain space and expanded into this edition of Ethnography Matters, Co-designing with machines. In this edition’s introductory post, I share a more productive way to frame human and machine collaboration: as a networked system. Then I chased down nine people who are at the forefront of this transformation to share their perspectives with us. Alicia Dudek from Deloitte will kick off the next post with a piece of speculative fiction on whether AI robots can perform any parts of qualitative fieldwork. Janet Vertesi will close this edition with a sneak peek from her upcoming book, in an article on human and machine collaboration in NASA’s Mars Rover expeditions. And in between Alicia and Janet are seven contributors, from fields ranging from marketing to machine learning, with super thoughtful articles. Thanks for joining the ride! And if you find this to be engaging, we have a Slack where we can continue the conversations and meet other human-centric folks. Follow our Twitter @ethnomatters for updates. Thanks. @triciawang


Who is winning the battle between humans and computers? If you read the headlines about Google’s Artificial Intelligence (AI), DeepMind, beating the world-champion Go player, you might think the machines are winning. CNN’s piece on DeepMind proclaims, “In the ultimate battle of man versus machine, humans are running a close second.” If, on the other hand, you read the headlines about Facebook’s Trending News Section and Personal Assistant, M, you might be convinced that the machines are less pure and perfect than we’ve been led to believe. As the Verge headline puts it, “Facebook admits its trending news algorithm needs a lot of human help.”

The headlines on both sides are based on a false, outdated trope: the binary of humans versus computers. We’re surrounded by similar arguments in popular movies, science fiction, and news. Sometimes computers are intellectually superior to humans; sometimes they are morally superior and free from human bias. Google’s DeepMind is winning a zero-sum game. Facebook’s algorithms are somehow failing by relying on human help, as if collaboration between humans and computers in this epic battle were shameful.

The fact is that humans and computers have always been collaborators. The binary human/computer view is harmful. It’s restricting us from approaching AI innovations more thoughtfully. It’s masking how biased we are toward believing that machines don’t produce biased results. It’s allowing companies to avoid taking responsibility for their discriminatory practices by saying, “it was surfaced by an algorithm.” Furthermore, it’s preventing us from inventing new and meaningful ways to integrate human intelligence and machine intelligence to produce better systems.

As computers become more human, we need to work even harder to resist the binary of computers versus humans. We have to recognize that humans and machines have always interacted as a symbiotic system. Since the dawn of our species, we’ve changed our tools as much as our tools have changed us. Up until recently, the ways our brains and our tools changed were limited by the amount of data input, storage, and processing each could handle. But now data growth has outpaced Moore’s Law, and we’re sitting on more data than we’re able to process. To make the next leap in getting the full social value out of the data we’ve collected, we need to make a leap in how we conceive of our relationships to machines. We need to see ourselves as one network, not as two separate camps. We can no longer afford to view ourselves in an adversarial position with computers.

To leverage the massive amount of data we’ve collected in a way that’s meaningful for humans, we need to embrace human and machine intelligence as a holistic system. Despite the snazzy zero-sum game headlines, this is the truth behind how DeepMind mastered Go. While the press portrayed DeepMind’s success as a feat independent of human judgement, that wasn’t the case at all. Read More… Co-designing with machines: moving beyond the human/machine binary