The future of designing autonomous systems will involve ethnographers


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Madeleine Clare Elish (@mcette), an anthropologist and researcher at Data & Society, who presents a case for why current cultural perceptions of the role of humans in automated systems need to be updated in order to protect against new forms of bias and worker harms. Read more about her research on military drones and machine intelligence at Slate. Madeleine also works as a researcher with the Intelligence & Autonomy Initiative at Data & Society, which develops empirical and historical research in order to ground policy debates around the rise of machine intelligence.

“Why would an anthropologist study unmanned systems?” This is a question I am often asked by engineers and product managers at conferences. The presumption is that unmanned systems (a reigning term in the field, albeit unreflexively gendered) are just that, free of humans; why would someone who studies humans take this as their object of study? Of course, we, as ethnographers, know there are always humans to be found. Moreover, few if any current systems are truly “unmanned” or “autonomous.” [1] All require human planning, design and maintenance. Most involve collaboration between human and machine, although the role of the human is often obscured. When we examine autonomous systems (or any of the other terms invoked in the related word cloud: unmanned, artificially intelligent, smart, robotic, etc.) we must look not to the erasures of the human, but to the ways in which we, as humans, are newly implicated.

My dissertation research, as well as research conducted with the Intelligence and Autonomy Initiative at Data & Society, has examined precisely what gets obscured when we call something “unmanned” or “autonomous.” I’ve been increasingly interested in the conditions and consequences of how human work and skill become differently valued in these kinds of highly automated and autonomous systems. In this post, Tricia has asked me to share some of the research I’ve been working on around the role of humans in autonomous systems and to work through some of the consequences for how we think about cooperation, responsibility and accountability.

Modern Times, 1936

The Driver or the System?

Let me start with a story: I was returning to New York from a robot law conference in Miami. I ordered a Lyft to take me to the Miami airport, selecting the address that first populated the destination field when I typed the phrase “airport Miami” into the Lyft app. The car arrived. I put my suitcase in the trunk. I think the driver and I exchanged hellos–or at the very least, a nod and a smile. We drove off, and I promptly fell asleep. (It had been a long week of conferencing!) I woke up as we were circling an exit off the highway, in a location that looked distinctly not like the entrance to a major airport. I asked if this was the right way to the airport. He shrugged, and I soon put together that he did not speak any English. I speak passable Spanish, and again asked if we were going to the right place. He responded that he thought so. Maybe it was a back way? We were indeed at the airport, but not on the commercial side. As he drove on, I looked nervously at the map on my phone.

Read More… The future of designing autonomous systems will involve ethnographers

An Engineering Anthropologist: Why tech companies need to hire software developers with ethnographic skills


Note from the Editor, Tricia Wang: I’m very pleased to announce that the next contributor in the Co-designing with machines edition is Astrid Countee (@ianthro), an anthropologist, software developer, data analyst and writer all-in-one. In this article, Astrid illustrates how being an anthropologist makes her a better developer, and argues that the gap between the social sciences and tech must be bridged to reach new innovations. Her article echoes themes brought up in edition contributors Stephen Gustafson’s and Che-Wei Wang’s posts, where both authors discuss the importance of the human side of AI innovations. As a long time fan of Astrid’s work, I’m also excited that we get to hear her recount her journey from aspiring surgeon to anthropologist to software programmer. She is an organizer for Rails Girls, a workshop that teaches girls and women how to code. Her newly available book, Family Talk and Chronic Disease, is a practical guide for black families to manage diabetes and hypertension. She is currently pursuing a master’s in computer science and math. Read more of Astrid’s writing at Ianthro.

Photo by Martin Quiroz

I did not always have dreams of being a software engineer. For a very long time I dreamed only of being a surgeon. I was fascinated with medicine, and longed to be able to help people from the inside out. It was with this singular focus that I entered college as a forensic science pre-med major and started down a path that I thought for sure would end with me in the operating room.

But my fate was changed, at first very slowly and then with a quickness. The first couple of years in school had been rough on me. They made me question if I was doing the right thing, since I wasn’t enjoying my major as much as I thought I would. I didn’t like the way that the natural sciences taught through memorization. I was interested in discovery, and wanted the challenge of making something new, rather than learning how things already worked. All of these things were small nuisances at first, but they led to me deciding to drop my pre-med designation. I was now free to take classes that I found interesting. I found a better fit studying psychology, neuroscience and linguistics. Then I took my first anthropology class. This ushered in the quick change. I found the discipline that I would continue to study in graduate school, and a worldview that gave me the chance to discover. I loved the integration of natural science and philosophy with art and history. It allowed my mind to see the world from a new angle.

While working on my graduate degree, I also worked full time at a data company. It was at this company that I learned about technology and my love and affinity for it. I learned how to run queries, how to build databases, and how to manipulate data in ways I had never thought to before. It was a great complement to my graduate studies as a medical anthropologist. It was also at this company that the seeds were planted that led me to become a software engineer. It was that same sense of discovery combined with tools to build what I wanted into existence.

I found ways to apply anthropology to everything that I did, including software. Anthropology and software are not exactly peanut butter and jelly, but together they maintain a delicate balance that leads to innovation.

Digging into Anthropology

Anthropology is a broad discipline that relies on techniques like ethnography, often using grounded theory, where you go out into the field and allow a culture to tell you who they are and how they do things. It is a science unlike any other in that what you can study knows almost no bounds. The vastness of the discipline trains you to see universal patterns. Everything is understood as belonging to a system. It is through understanding the system that you can find your footing in something unfamiliar, and find your way through it. It is no wonder that when I started working as a software engineer, I was drawn to DevOps and systems engineering. My anthropological training led me straight to the framework for how technology works.

I know the value of holism, of seeing how one piece affects another. It is an obvious thing that often gets ignored when building technical systems. People often think of technology as machines talking to machines. And while that is true at some level in the technology stack, building software is more about people than anything else. There are people who are using the systems, there are people who are architecting the systems. There are people who are writing the software. The human footprint can be found everywhere you turn. So, it makes sense that humanistic thinking in software is revolutionary. It is the reason why Apple can change the world by taking their iPod and attaching it to a cell phone. Ten years ago smartphones were an extremely small part of the market. Now, in the western world, it is likely that there are more smartphones and tablets in a home than there are personal computers. It isn’t by accident, or only by great marketing. It is by using technology to tap into a holistic system. These systems exist around us all the time, and an anthropologist is trained to root them out, understand them, and predict how they will change.

Gearing up with Engineering

But like any balanced equation, being a software developer has changed my view as an anthropologist as well. My training, even as an applied practitioner, was not nearly as project-driven as my work as a software engineer. To break down the problems I am looking at, it is helpful to start doing something in order to understand it. Even if that means sketching out the chain of events that I am trying to fix, action is a virtue. You are a software engineer because you write working programs. That’s it. No peer-reviewed work, no list of accolades to prove your value. That intentional execution has influenced the way that I think about problem solving. It forces me to get deep into the dirty work much sooner. It also means becoming expert at shrinking big problems down to size. The only way to eat the elephant is one bite at a time. No one knows that better than a software engineer. It is a huge part of the job to dissect what you are doing down to small chunks of solvable problems. Being in the thick of it is what I loved about being an anthropologist. Being a software engineer takes that to a whole new level.

Read More… An Engineering Anthropologist: Why tech companies need to hire software developers with ethnographic skills

The human-side of artificial intelligence and machine learning


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Steven Gustafson (@stevengustafson), founder of the Knowledge Discovery Lab at the General Electric Global Research Center in Niskayuna, New York. In this post, he asks what the role of humans will be in the future of intelligent machines. He makes the case that in the foreseeable future, artificially intelligent machines are the result of creative and passionate humans, and as such, we embed our biases, empathy, and desires into the machines, making them more “human” than we often think. I first came across Steven’s work while he was giving a talk hosted by Madeleine Clare Elish (edition contributor) at Data & Society, where he spoke passionately about the need for humans to move up the design process and to bring ethical thinking into AI innovation. Steven is a former member of the Machine Learning Lab and Computational Intelligence Lab, where he developed and applied advanced AI and machine learning algorithms for complex problem solving. In 2006, he received the IEEE Intelligent Systems “AI’s 10 to Watch” award. He currently serves on the Steering Committee of the National Consortium for Data Science, based out of the University of North Carolina. Recently, he gave the keynote at SPi Global’s Client Advisory Board Summit in April 2016, titled “Advancing Data & Analytics into the Age of Artificial Intelligence and Cognitive Computing”.

Recently we have seen how Artificial Intelligence and Machine Learning can amaze us with seemingly impossible results like AlphaGo. We also see how machines can generate fear with perceived “machine-like” reasoning, logic and coldness, generating potentially destructive outcomes with a lack of humanity in decision making. An example of the latter that has become popular is how self-driving cars choose between two bad outcomes. In these scenarios, the AI and ML are embodied as a machine of some sort, either physical like a robot or car, or a “brain” like a predictive crime algorithm made popular in the book and film “Minority Report” and more recently the TV show “Person of Interest.”

I am a computer scientist with expertise in and a passion for AI and machine learning, and I’ve been working across broad technologies and applications for the past decade. When I see these applications of AI, and the fear or hype of their future potential, I like to remember what first inspired me. First, I am drawn to computers because they are a great platform for creation and instant feedback. I can write code and immediately run it. If it doesn’t work, I can change the code and try it again. Sure, I can make proofs and develop theory, which has its own beauty and necessity at times, but I remember one of the first database applications I created and how fun it was to enter sample data and queries and see it work properly. I remember the first time I developed a neural network and made it play itself to learn, without any background knowledge, how to play tic-tac-toe. This may be a very trivial example, but it is inspiring nonetheless.
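For readers who want to see the shape of that idea, here is a minimal self-play sketch in Python. It uses tabular Q-learning rather than Gustafson’s neural network, and every name and constant in it is illustrative; the point is only that the program starts with no knowledge of tic-tac-toe and improves purely from games against itself.

```python
import random
from collections import defaultdict

# A toy self-play learner for tic-tac-toe. It uses tabular
# Q-learning instead of a neural network, but the spirit is the
# same: no background knowledge, learning only from self-play.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)                  # (board, move) -> learned value
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def choose(board):
    if random.random() < EPSILON:       # explore occasionally
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])

def play_one_game():
    board, player = " " * 9, "X"
    history = []                        # (state, move), alternating players
    while True:
        m = choose(board)
        history.append((board, m))
        board = board[:m] + player + board[m + 1:]
        if winner(board) or not moves(board):
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw,
            # fading as we move back toward the start of the game.
            reward = 1.0 if winner(board) else 0.0
            for state, move in reversed(history):
                Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
                reward = -GAMMA * reward   # flip sign: alternate players
            return
        player = "O" if player == "X" else "X"

for _ in range(50000):
    play_one_game()
print("state-action pairs learned:", len(Q))
```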

Can a machine write its own code? Can a machine design a new, improved version of itself? Can a machine “evolve” like humans into a more intelligent species? Can a machine talk to another machine using a human language like English? These were all questions that excited me as an undergraduate computer scientist, that led me to study AI and ML during grad school, and that can all be answered with a yes! Machines, or computers and algorithms, have been shown in different circumstances to achieve these capabilities, yet both the idea that machines have these capabilities and the idea that machines can learn are scary concepts to humans in the general sense. But when we step into each one of these achievements, we find something that I believe is creative, inspiring and human.

But let me step back for a minute. Machines cannot do those things above in a general sense. For example, if I put my laptop in a gym with a basketball, it can’t evolve a body and learn to play basketball. That is, it can’t currently do that without the help of many bright engineers and scientists. If I downloaded all my health data into my phone, my phone is not going to learn how to treat my health issues and notify my doctor. Again, it can’t currently do that without the help of many smart engineers and scientists. So while my machine can’t become human today on its own, with the help of many engineers and scientists solving some very interesting technology, user experience, and domain-specific problems, machines can do some very remarkable things, like drive a car or engage in conversation.

The gap that creative, intelligent and trained engineers and scientists fill today is a gap that must be closed for intelligent machines to both learn and apply that learning. That gap is also a highly human one: it highlights the desires of our species, our accumulation of knowledge, our ability to overcome challenging problems, and our desire to collaborate and work together to solve meaningful problems. And yes, it can also highlight our failures to do the right thing. But it is a human thing, still.

Read More… The human-side of artificial intelligence and machine learning

The hidden story of how metrics are being used in courtrooms and newsrooms to make more decisions



Note from the Editor, Tricia Wang: The next author for the Co-designing with machines edition is Angèle Christin (@angelechristin), a sociologist and Postdoctoral Fellow at the Data & Society Institute. In a riveting post that takes us inside the courtrooms of France and the newsrooms of the US, Angèle compares how people deal with technologies of quantification in data-rich and data-poor environments. She shows how people in both contexts use similar strategies of resistance and manipulation of digital metrics in courtrooms and newsrooms. Her post is incredibly valuable, as both courtrooms and newsrooms are new areas where algorithmic practices are being introduced, sometimes with appalling results, as this ProPublica article reveals. Angèle studies sectors and organizations where the rise of algorithms and ‘big data’ analytics transforms professional values, expertise, and work practices. She received her PhD in Sociology from Princeton University and the EHESS (Paris) in 2014.

I came to the question of machines from the study of numbers, more precisely the role that numbers play in organizations. Ten years ago, I wasn’t very interested in technology: I was a student in Paris, I barely had an email address, and what I wanted to study was criminal justice.

The fall of 2005 in France was marked by the events that came to be known as the “urban riots” (émeutes urbaines), a period of unrest among the young men and women living in city outskirts (banlieues). Their protests were triggered by the death by electrocution of two teenagers who had sought refuge in an electric substation while being chased by the police.

Over the next couple of months, cars were burning, the police were everywhere, and many young men of African and North-African descent were arrested, arraigned, and sentenced, usually to prison. Parisian courts relied heavily on an old penal procedure for misdemeanors, the comparutions immédiates (emergency hearings), which makes it possible to sentence defendants immediately after their arrest. The procedure was originally designed to control “dangerous” urban crowds in the second half of the 19th century.

During and after the urban riots, journalists and intellectuals denounced the revival of a bifurcated justice system, in which lower class and minority defendants were tried in a hurry, with meager resources for public defenders, insufficient procedural safeguards, and high rates of prison sentences. Crowds of friends and supporters congregated in the courts and attended the hearings, cheering the defendants and booing the judges. The police heavily guarded the courtrooms in order to prevent direct attacks on the magistrates.

In all of this, judges and prosecutors remained silent. No one knew what was really going on before or after the hearings. I decided to go behind the scenes to examine how prosecutors, judges, and lawyers worked on the cases and decided on the charges and sentences of the defendants. I was able to conduct a yearlong ethnographic study of several criminal courts, including one in Paris and one in a North-East banlieue.

Read More… The hidden story of how metrics are being used in courtrooms and newsrooms to make more decisions

Why do brands lose their chill? How bots, algorithms, and humans can work together on social media



Note from the Editor, Tricia Wang: The fourth contributor to the Co-designing with machines edition is Molly Templeton (@mollymeme), digital and social media expert, Director of Social Media at Everybody at Once, and one of the internet’s first breakaway YouTube stars. Her piece urges brands’ social media strategy to look beyond the numbers when working in the digital entertainment and marketing industry. Molly gives specific examples where algorithms don’t know how to parse tweets by humans that are coded with multiple layers of emotional and cultural meaning. She offers the industry a new way to balance the emotional labor in audience management with data analysis. Her article draws on her work at Everybody at Once, a consultancy that specializes in audience development and social strategy for media, entertainment, and sports.

@Tacobell spent an hour sending this same gif out to dozens of people. The account is probably run by humans (most social media presences today are). And they were following best practice by “replicating community behavior,” that is, talking the way normal people talk to each other (a human taco bell fan would definitely send a gif). But when @tacobell only sends the same gifs out over and over again, it’s uncanny. It’s pulling the right answers from the playbook, but at the wrong frequency.  

Why do brands lose their chill?

I think that brands lose their chill when they don’t let their social media managers exercise empathy. The best brands on social media balance the benefit of interaction with the risk of human error – managers are constantly concerned with pissing off the organization, or the audience, and are ultimately trying to please both sets of real people. Hitting campaign goals and maximizing efficiency are important, but social media managers need to bring humanity to their work. They have to understand the audience’s moods and where they’re coming from, and they have to exercise empathy at every level: customer service, information and content sharing, community management, calls to action, participation campaigns, crisis and abuse management. That is a lot of emotional labor.

With the recent chatter about chat bots on Facebook’s messenger platform, a lot of people are thinking about how bots can take over communications roles from humans. I’ve been thinking a lot about the opposite: how can machines help people manage the emotional labor of working with audiences? Can bots ever help with the difficult, and very human task of managing with empathy?  
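One small, concrete example of a machine assisting rather than replacing: a helper that flags when the same reply asset (say, a gif) is going out at an uncanny frequency, leaving the judgment call to the human manager. The Python sketch below is hypothetical; the class, thresholds, and asset names are invented for illustration, not any platform’s real tooling.

```python
import time
from collections import deque

class RepeatGuard:
    """Flag when the same reply asset (e.g. a gif) is being sent
    too often in a rolling window. Thresholds and names here are
    illustrative, not any platform's real limits."""

    def __init__(self, max_uses=3, window_seconds=3600):
        self.max_uses = max_uses
        self.window = window_seconds
        self.recent = {}                      # asset -> deque of timestamps

    def check(self, asset, now=None):
        """Record a use; return False when the human should vary the reply."""
        now = time.time() if now is None else now
        uses = self.recent.setdefault(asset, deque())
        while uses and now - uses[0] > self.window:
            uses.popleft()                    # forget uses outside the window
        uses.append(now)
        return len(uses) <= self.max_uses

guard = RepeatGuard(max_uses=3, window_seconds=3600)
for i in range(5):
    ok = guard.check("dancing-taco.gif")
    print(f"reply {i + 1}:", "fine" if ok else "too repetitive, mix it up")
```

A tool like this does the counting; the empathy, and the decision about what to send instead, stays with the person.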

Social media is a business of empathy  

Emotional connections drive social media. When people gather around the things they feel passionate about, they create energy. It’s because of limbic resonance — the deep, neurological response humans have to other people’s emotions. As my colleague Kenyatta Cheese says, it’s that energy that makes participating as a fan on social media feel as electric as it does when you’re part of a physical crowd.

Read More… Why do brands lose their chill? How bots, algorithms, and humans can work together on social media

Mindful Algorithms: the new role of the designer in generative design


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Che-Wei Wang (@sayway), a designer and architect who co-runs CW&T, a design studio, with Taylor Levy. In this post, he contemplates why engineers and architects will need to become more like ethnographers as generative design takes hold. He asks if it is possible to convert ethnographic data into quantitative data as algorithmic input. I’ve long admired Che-Wei’s ability to bring a poetic quality to the deeply mathematical nature of his work, whether it’s in architecture or designing beloved products such as Pen Type-A and Pen Type-B. He’s currently an artist in residence at Autodesk. Most recently, the Collective Design Fair featured CW&T‘s designs, followed by an article written about their work in Coolhunting.

The traditional design workflow is getting a turbo boost from algorithms (don’t worry, the robots aren’t taking over… yet). With new types of generative design processes like genetic algorithms, the designer’s role is changing from the traditional, top-down approach of drawing ideas on paper into a systems approach. Designers traditionally sketch and develop ideas intuitively. With a genetic algorithm, instead of imagining a design solution, the designer develops fitness criteria and coaxes the algorithm towards a final design.
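To make that role reversal concrete, here is a deliberately tiny genetic algorithm in Python. The “design” is just a list of eight numbers and the fitness criterion is invented for illustration; what matters is that the designer authors the fitness function and the selection pressure, while the algorithm produces the form.

```python
import random

# A toy genetic algorithm. A "design" here is just eight numbers,
# and the fitness criterion is invented: reward designs whose values
# sum to 10 while keeping neighboring values similar (smoothness).
# The designer's real work is in authoring fitness(), not the form.

GENES, POP, GENERATIONS = 8, 60, 200

def fitness(design):
    sum_error = abs(sum(design) - 10)
    roughness = sum(abs(a - b) for a, b in zip(design, design[1:]))
    return -(sum_error + 0.5 * roughness)      # higher is better

def crossover(a, b):
    cut = random.randrange(1, GENES)           # single-point crossover
    return a[:cut] + b[cut:]

def mutate(design, rate=0.1):
    return [g + random.gauss(0, 0.3) if random.random() < rate else g
            for g in design]

population = [[random.uniform(0, 3) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 4]            # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best design:", [round(g, 2) for g in best])
```

Swap the toy criterion for daylight, structural load, or cost models and the loop stays the same; the open question in this post is whether ethnographic observations can be encoded into that fitness function at all.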

As algorithms and data become crucial tools in a design workflow, ethnography will need to become part of the process. Engineers, designers and scientists will all need to become ethnographers. Consider how the best cab drivers work with GPS navigation: you sometimes have to ‘trick’ the algorithm to get the best result.

I’m an artist and designer with a background in architecture.  I’m currently teaching a studio at Pratt Institute School of Architecture that’s attempting to integrate genetic algorithms into the design process in a meaningful way.  I started teaching this class primarily as a reaction to all the highly ornate generative design work that I’ve seen over the last decade. These algorithms are fetishized and have been used to generate highly articulated forms like swoopy skyscrapers with windows that vary in shape and size throughout the facade.  The question that always comes to my mind is…to what end?

Recent developments in software have started to shift generative design processes to incorporate environmental factors like sun radiation and structural forces to create more functional geometry. But, the question remains…What other forces and factors should be tied into the generative design process to create designs that respond to a site or a condition in a meaningful way?

Designers have traditionally been trained to conduct research, sketch ideas, and refine them by moving between sketches, computer models, and prototypes. The designer in this traditional workflow does all the data processing in their head. As algorithms increasingly become part of our workflow, the data will have a more direct effect on the outcome of designs.

How will algorithms change the design process? How do designers need to change their mindset to take advantage of algorithms? How does design need to change? First, I want to tell you what generative design is, then give you an example, and share some thoughts I’m having about it and ethnographic data.

What is generative design? And what are genetic algorithms?

Generative design is a wide term encompassing any design process that involves algorithms. It’s often used to design complex shapes and optimized forms in relation to structural forces, sun radiation, and other data that may influence the design.

Read More… Mindful Algorithms: the new role of the designer in generative design

Lou and Cee Cee prepare for fieldwork in the future: a world where robots conduct ethnography


Note from the Editor, Tricia Wang: Kicking off our Co-designing with machines edition is Alicia Dudek (@aliciadudek), Innovation Insight Lead & Design Ethnographer at Deloitte Digital Australia. Using design thinking, ethnography, and other deep contextual customer research methods, she designs, conducts, and trains others in the world of customer empathy. Her contribution to this edition is the first piece of science fiction to explore robots conducting ethnographic work. She uses a fictional story with Cee Cee, the robo-ethnographer, to examine what aspects of fieldwork can be conducted by a robot. I first met Alicia Dudek at an EPIC conference in London, where I became a fan of her work and promptly interviewed her for our edition that featured the best from EPIC. Read the interview, Play nice – design ethnographer meets management consultant, and find more of her writings on ethnography at her site.

Increasingly, we are seeing more conversations about what it looks like when the robots take your job. Once upon a time we believed this was some remote future in which we’d finally invented the technology that could replace our bio-bodies’ ingenious functions. Now we are coming into a time when our technology has grown so advanced that the replacement of ourselves with robots is not only imagined, but plausible and even possible. An example of this shift is the imagining of white-collar jobs ‘going robo’ that was recently covered by Quartz.

Writing this piece I wanted to have a little fun imagining a wonderful world where we can work hand in hand with robot peers. It is exciting to imagine the day when artificial intelligence is on par with that of our human research team members. Ethnographic technology is sometimes slow to progress due to the art and science nature of our work, but if we had the magic wand to unite all the drones, phones, data smarts, and humanly arts, we might have robo-colleagues as a part of our team one day soon. Friendly humans and friendly robots conducting ethnography together are a powerful combination. Also thank you to Elizabeth Dubois for writing this piece about trace interviews, which has some cool ideas on how we might conduct interviews.

– Alicia Dudek

Lou muses over her tea as she prepares for fieldwork with Cee Cee.

Lou mused over the steam rising from her cup of tea. She gathered her thoughts around what she’d be looking for in the field next week. She and her team were going to shadow young families and understand how they managed their finances. Field work was always one of the most exciting and exhausting parts of the data collection in her ethnography projects. What would be the right focus area for a trip into this family’s everyday life? She knew she’d have to cover the basics of bank accounts, credit cards, laptop/tablet/phone usage, calendar keeping, overall scheduling, family diaries, but what else might be valuable? What else could help to point the team in the direction of the golden nuggets of insight? All these years of traipsing in and out of the field and analysing scores of transcripts, videos, and audio recordings had left her always questioning, what’s next? What were the mental parameters that led her to the deep and meaningful insights from field observations? What was that ineffable thing that clients kept hiring her for again and again? How does an ethnographer see differently to find the golden nuggets?

Lou was jostled out of this reverie as Cee Cee energetically buzzed into the office and landed on Louise’s desk with a plop. “Louise I’m here for my briefing for the field work to be conducted.” Lou looked up from her imagined fieldwork and focused on Cee Cee’s entry into her office. In the past Lou had had dozens of assistants, grad students, and junior ethnographers to help with her work. None of them was quite like Cee Cee, who was rather innovative and definitely pushed Lou’s ways of working to new places. “Alright Cee Cee let’s get going on the briefing and I’ll tell you what we’re looking for and how to behave when you get out there.” Lou readjusted her posture and swung around to meet Cee Cee head on and get into the briefing.

Read More… Lou and Cee Cee prepare for fieldwork in the future: a world where robots conduct ethnography

Five Mixed Methods for Research in the (Big) Data Age


In this final post for The Person in the (Big) Data edition of Ethnography Matters, we provide a collection of five mixed methods used by researchers to shine a light on the people behind the massive streams of data that are being produced as a result of online behavior. These research methods use a variety of digital and traditional approaches, but they share one thing in common: they are aimed at discovering stories. As Tricia Wang wrote on EM back in 2013, ‘Big Data needs Thick Data’: ‘Big Data delivers numbers; thick data delivers stories. Big data relies on machine learning; thick data relies on human learning.’ In the methods outlined below, researchers describe how they have made the most of digital data using innovative methods that uncover the meaning, the context, the stories behind the data. In the end, this is still the critical piece for researchers trying to understand the moment in which we are living. Or, put differently, the ways in which we may want to live but are often prevented from living by a system that sometimes reduces the human experience rather than enabling its flourishing. – HF, Ed.

1. Real-time Audience Feedback: The Democratic Reflection Method

Democratic Reflection is a new methodological tool for researching the real-time responses of citizens to televised media content, which was developed by a team of researchers from the University of Leeds and the Open University in the UK as part of a larger research project on TV election debates. The research for the project began by developing an inductive understanding of what people need from TV election debates in order to perform their role as democratic citizens. Drawing on focus groups with a diverse range of participants, the research identified five key demands — or ‘democratic entitlements’ — that participants felt debates and the political actors involved in them should meet. Participants felt entitled to be: (1) addressed as rational and independent decision makers, (2) given the information needed to make considered political judgements, (3) included in and engaged by the debates, (4) recognised and represented by the political leaders, and (5) provided with meaningful choices that allow them to make a difference politically.

In the next phase of the research, the research team developed a new web-based app (accessible via mobile phones, tablets, and laptops), which allows viewers to respond to televised debates in real time and evaluate them using a range of twenty statements based on the five democratic entitlements. An experiment using the Democratic Reflection app was conducted with a panel of 242 participants during the first debate of the 2015 UK General Election, generating a dataset of over 50,000 responses. Analysis of the data provides a valuable new way to understand how viewers respond to election debates: we can explore general patterns of responses, compare different individuals and groups, track changes over time, and examine how specific moments and performances during the debates may relate to particular responses. – Giles Moss
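As a rough illustration of how a response stream like this might be explored, the Python sketch below aggregates invented responses by entitlement across phases of a debate. The column names, labels, and values are assumptions for illustration only, not the project’s actual schema.

```python
import pandas as pd

# Hypothetical exploration of a real-time response dataset like the
# one described above. The schema and labels are invented; the real
# project's data and entitlement wording will differ.

responses = pd.DataFrame({
    "participant": [1, 1, 2, 3, 2, 3],
    "minute":      [2, 15, 15, 40, 41, 87],
    "entitlement": ["informed", "engaged", "recognised",
                    "informed", "meaningful_choice", "engaged"],
})

# Count responses per entitlement in each phase of the debate.
by_phase = (responses
            .assign(phase=pd.cut(responses["minute"],
                                 bins=[0, 30, 60, 90],
                                 labels=["opening", "middle", "closing"]))
            .groupby(["phase", "entitlement"], observed=True)
            .size()
            .unstack(fill_value=0))
print(by_phase)
```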

Read More… Five Mixed Methods for Research in the (Big) Data Age

Trace Interviews Step-By-Step



In this penultimate post for The Person in the (Big) Data edition of EM, Elizabeth Dubois (@lizdubois) provides a step-by-step account of how the trace interviewing process works. Trace interviewing is a method used to elicit a person’s stories about why they made particular traces on social media platforms and is a wonderful way of getting at the stories underlying the data. Elizabeth’s step-by-step will hopefully help others wanting to use a similar method in the interview process!

Social media data and other digital traces we leave as we navigate the web offer incredible opportunity for discovery within the social sciences. I am going to take you step by step through the process of trace interviewing – an approach that helps researchers gain richly detailed insights about the social context of that digital data. For those interested in the why as well as the how, Heather Ford and I talk a lot about why we think the method is important and what types of things it can offer researchers (such as validity checks, information about social context, opportunities to join data sets from various platforms) in our paper.

The Study
I want to figure out how people who influence other people on political issues choose their channels of communication (and the impact of those choices). The only way to understand these decisions and their impacts, I think, is a mixed-methods approach. So, I draw on Twitter data for content and network analysis, an online survey and in-depth trace interviews. You can read more about the full work here.

Trace Interviews
Hey, so great to finally meet you in person. Welcome!
By the time I got to the interview stage my interviewee and I already knew quite a lot about each other. They had filled out a survey, they knew I found them because they use the #CDNpoli hashtag, and they had read a project description and signed a consent form in advance.

It was important to form a relationship with my participants well in advance because I needed permission to collect their data. Sometimes trace data is publicly available (for example, tweets made or the list of accounts a Twitter user follows). But, even when it is publicly available, I tend to think that giving your participant a heads up that you’ve got or will collect data specifically about them is a good call. The fact is, people don’t always understand what making a public tweet means.
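For researchers who want to try something similar, the sketch below shows one way to gather a participant’s public tweets with the tweepy library (wrapping Twitter’s v1.1 API). The credentials, helper function, and participant handle are all placeholders; as above, collect traces only with the participant’s informed consent.

```python
import tweepy

# Sketch: gather a consenting participant's recent public tweets
# ahead of a trace interview. Keys and the handle are placeholders,
# and this assumes Twitter's v1.1 API as wrapped by tweepy.

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def collect_traces(screen_name, n=200):
    """Return (timestamp, text) pairs to walk through with the participant."""
    timeline = tweepy.Cursor(api.user_timeline, screen_name=screen_name,
                             tweet_mode="extended").items(n)
    return [(status.created_at, status.full_text) for status in timeline]

traces = collect_traces("example_participant")
cdnpoli = [t for t in traces if "#cdnpoli" in t[1].lower()]
print(f"{len(cdnpoli)} of {len(traces)} collected tweets use #CDNpoli")
```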

Read More… Trace Interviews Step-By-Step

Taking Stock



In this post for The Person in the (Big) Data edition of EM, we hear from Giorgia Aiello (@giorgishka), who demonstrates the ways in which she used both digital and traditional methods to explore the people and practices that characterise the stock photography industry. Giorgia’s stories of photographers attempting to game the algorithms that determine which photographs will come out on top in a search for cheese are compelling and memorable, and they show just how important it is to develop an expanded idea of what ‘data’ is constituted by, even if the dominant discourse is more limited.

Image banks like Getty Images and Shutterstock that sell ready-to-use ‘stock’ photographs online have become the visual backbone of advertising, branding, publishing, and journalism. Also, daily exposure to stock images has increased exponentially with the rise of social networking and the generic visuals used in lifestyle articles and ‘clickbait’ posts. The stock imagery business has become a global industry through recent developments in e-commerce, copyright and social media (Glückler & Panitz, 2013).

However, stock images are most often overlooked rather than looked at—both by ‘ordinary’ people in the contexts of their everyday lives and by scholars, who have rarely taken an interest in this industry and genre in its own right. There are some notable exceptions, dating back to the ‘pre-Internet’ era of stock photography, like Paul Frosh’s work on the ‘visual content industry’ in the early 2000s or David Machin’s critical analysis of stock imagery as the ‘world’s visual language’ (Frosh, 2003; Machin, 2004). As a whole, and compared to other media and communication industries, research on online image banks and digital stock imagery is virtually uncharted territory.

Why, then, should stock images be ascribed any significance or power if people do not particularly pay attention to them? Stock images are not only the ‘wallpaper’ of consumer culture (Frosh, 2003 and 2013); they are also central to the ambient image environment that defines our visual world, which is now increasingly digital and global while also remaining very much analogue and local (just think of your own encounters with such imagery at your bank branch, at your dentist or beauty salon, or on billboards in city streets). Pre-produced images are the raw material for the world’s visual media.

Read More… Taking Stock