Tag Archives: big data

Persuasion and the other thing: A critique of big data methodologies in politics


Earlier this year, a company called Cambridge Analytica shot to the forefront of the debate over big data and elections when it claimed responsibility for the upset victories of both Donald Trump and the Brexit campaign. Reports have cast the firm as a puppet-master "propaganda machine" able to mint voters through a proprietary blend of psychometric data (primarily Facebook "likes") and targeted nudges. In this story, repeated by Mother Jones and The Guardian among others, Cambridge Analytica (working in conjunction with an "election management" firm called SCL Group) is both a kingmaker and a Pied Piper: voters are unable to resist its attempts at political manipulation, which are seamlessly integrated into their online environment and anchored by strings pulled too deeply in their psyches to be ignored.

Parody drawing of Star brand snake oil

Snake oil elixir, CC-BY Mike Licht

I'm uninterested in the actual snake content of Cambridge Analytica's snake oil. As noted by the MIT Technology Review and BuzzFeed, the company has made some big claims and has been happy to take credit for several of 2016's startling electoral results. But Cambridge Analytica relies heavily on the techno-magic of under-described big data psychographics and algorithmic nudging. Both the Tech Review and BuzzFeed point out that the amount and types of data the company appears to use are not much different from the data acquisition and analysis techniques already in common use.

Instead I'm interested in the ways that Cambridge Analytica's sales pitch reflects how the subjects of these big data analytics projects are viewed by those conducting the research, and the entitlements held by advertisers, tech firms, and researchers who deploy big data analytics in support of political campaigns or other political projects. This sense of entitlement matters. I'd like to posit that the use of "big data" in politics strips its targets of subjectivity, turning individuals into ready-to-read "data objects," and making it easier for those in positions of power to justify aggressive manipulation and invasive inference. I would like to further suggest that when big data methodology is used in the public sphere, it is reasonable for these "data objects" to, in turn, use tactics like obfuscation, up to the point of actively sabotaging the efficacy of the methodology in general, to resist attempts to be read, known, and manipulated.

The hidden story of how metrics are being used in courtrooms and newsrooms to make more decisions



December 10, 2015 — Note from the Editor, Tricia Wang: The next author for the Co-designing with machines edition is Angèle Christin (@angelechristin), sociologist and Postdoctoral Fellow at the Data & Society Institute. In a riveting post that takes us inside the courtrooms of France and the newsrooms of the US, Angèle compares how people deal with technologies of quantification in data-rich and data-poor environments. She shows how people in both contexts use similar strategies of resistance and manipulation of digital metrics in courtrooms and newsrooms. Her post is incredibly valuable, as both courtrooms and newsrooms are new areas where algorithmic practices are being introduced, sometimes with appalling results, as this ProPublica article reveals. Angèle studies sectors and organizations where the rise of algorithms and 'big data' analytics transforms professional values, expertise, and work practices. She received her PhD in Sociology from Princeton University and the EHESS (Paris) in 2014.


I came to the question of machines from the study of numbers, more precisely the role that numbers play in organizations. Ten years ago, I wasn’t very interested in technology: I was a student in Paris, I barely had an email address, and what I wanted to study was criminal justice.

The fall of 2005 in France was marked by the events that came to be known as the “urban riots” (émeutes urbaines), a period of unrest among the young men and women living in city outskirts (banlieues). Their protests were triggered by the death by electrocution of two teenagers who had sought refuge in an electric substation while being chased by the police.

Over the next couple of months, cars were burning, the police were everywhere, and many young men of African and North-African descent were arrested, arraigned, and sentenced, usually to prison. Parisian courts relied heavily on an old penal procedure for misdemeanors, the comparutions immédiates (emergency hearings), which makes it possible to sentence defendants immediately after their arrest. The procedure was originally designed to control “dangerous” urban crowds in the second half of the 19th century.

During and after the urban riots, journalists and intellectuals denounced the revival of a bifurcated justice system, in which lower class and minority defendants were tried in a hurry, with meager resources for public defenders, insufficient procedural safeguards, and high rates of prison sentences. Crowds of friends and supporters congregated in the courts and attended the hearings, cheering the defendants and booing the judges. The police heavily guarded the courtrooms in order to prevent direct attacks on the magistrates.

In all of this, judges and prosecutors remained silent. No one knew what was really going on before or after the hearings. I decided to go behind the scenes to examine how prosecutors, judges, and lawyers worked on the cases and decided on the charges and sentences of the defendants. I was able to conduct a yearlong ethnographic study of several criminal courts, including one in Paris and one in a North-East banlieue.

Co-designing with machines: moving beyond the human/machine binary



Letter from the Editor: I am happy to announce The Co-Designing with Machines edition. As someone with one foot in industry, redesigning organizations to flourish in a data-rich world, and another foot in research, I'm constantly trying to take an aerial view on technical achievements. Lately, I've been obsessed with the future of design in a data-rich world increasingly powered by artificial intelligence and its algorithms. What started out over a kitchen conversation with my colleague Che-Wei Wang (contributor to this edition) about generative design and genetic algorithms turned into a big chunk of my talk at Interaction Design 2016 in Helsinki, Finland. That chunk then took up more of my brain space and expanded into this edition of Ethnography Matters, Co-designing with machines. In this edition's introductory post, I share a more productive way to frame human and machine collaboration: as a networked system. Then I chased down nine people who are at the forefront of this transformation to share their perspectives with us. Alicia Dudek from Deloitte will kick off the next post with a speculative fiction on whether AI robots can perform any parts of qualitative fieldwork. Janet Vertesi will close this edition, giving us a sneak peek from her upcoming book with an article on human and machine collaboration in NASA Mars Rover expeditions. And in between Alicia and Janet are seven contributors, coming from marketing to machine learning, with super thoughtful articles. Thanks for joining the ride! And if you find this to be engaging, we have a Slack where we can continue the conversations and meet other human-centric folks. Follow us on Twitter @ethnomatters for updates. Thanks. @triciawang


Who is winning the battle between humans and computers? If you read the headlines about Google’s Artificial Intelligence (AI), DeepMind, beating the world-champion Go player, you might think the machines are winning. CNN’s piece on DeepMind proclaims, “In the ultimate battle of man versus machine, humans are running a close second.” If, on the other hand, you read the headlines about Facebook’s Trending News Section and Personal Assistant, M, you might be convinced that the machines are less pure and perfect than we’ve been led to believe. As the Verge headline puts it, “Facebook admits its trending news algorithm needs a lot of human help.”

The headlines on both sides are based on a false, outdated trope: the binary of humans versus computers. We're surrounded by similar arguments in popular movies, science fiction, and news. Sometimes computers are intellectually superior to humans, sometimes they are morally superior and free from human bias. Google's DeepMind is winning a zero-sum game. Facebook's algorithms are somehow failing by relying on human help, as if collaboration between humans and computers in this epic battle were somehow shameful.

The fact is that humans and computers have always been collaborators. The binary human/computer view is harmful. It’s restricting us from approaching AI innovations more thoughtfully. It’s masking how much we are biased to believe that machines don’t produce biased results. It’s allowing companies to avoid taking responsibility for their discriminatory practices by saying, “it was surfaced by an algorithm.” Furthermore, it’s preventing us from inventing new and meaningful ways to integrate human intelligence and machine intelligence to produce better systems.

As computers become more human, we need to work even harder to resist the binary of computers versus humans. We have to recognize that humans and machines have always interacted as a symbiotic system. Since the dawn of our species, we've changed tools as much as tools have changed us. Up until recently, the ways our brains and our tools changed were limited to the amount of data input, storage, and processing both could handle. But now, we have broken Moore's Law and we're sitting on more data than we're able to process. To make the next leap in getting the full social value out of the data we've collected, we need to make a leap in how we conceive of our relationships to machines. We need to see ourselves as one network, not as two separate camps. We can no longer afford to view ourselves in an adversarial position with computers.

To leverage the massive amount of data we've collected in a way that's meaningful for humans, we need to embrace human and machine intelligence as a holistic system. Despite the snazzy zero-sum game headlines, this is the truth behind how DeepMind mastered Go. While the press portrayed DeepMind's success as a feat independent of human judgement, that wasn't the case at all.

Algorithmic Intelligence? Reconstructing Citizenship through Digital Methods



In the next post for 'The Person in the (Big) Data' edition, Chris Birchall (@birchallchris) talks us through a variety of methods – big, small and mixed – that he used to study citizenship in the UK. Using some of the dominant tools for studying large data sources in one part of the study, Chris realised that the tools used had a significant impact on what can be (and is being) discovered, and that this is quite different from the findings reached by deeper, mixed-methods analysis. In this post, Chris asks important questions about whether big data research tools are creating some of the conditions of citizenship today and what, exactly, deeper, more nuanced analysis can tell us.

People talk about politics online in many different ways and for many different purposes. The way that researchers analyse and understand such conversation can influence the way that we depict public political opinion and citizenship. In two recent projects I investigated the nature of this conversation and the forces that influence it, as well as the networks, spaces and resources that link that talk to political action. In doing so, I encountered a methodological rift in which careful, manual, time-consuming approaches produce different types of conclusions from the big-data-driven approaches that are widespread in the commercial social media analytics industry. Both of these approaches could be framed as an illustration of human behaviour on the internet, but their differences show that the way we embrace big data or digital methods influences the understanding of digital publics and citizenship that we gain from the translation of mass online data.

My recently submitted PhD study investigated online public political conversation in the UK. Drawing on the work of previous scholars who have focussed on the deliberative online public sphere (such as Coleman and Gotze, 2001; Coleman and Moss, 2012; Mutz, 2006; Wright and Street, 2007; Graham, 2012), the study acknowledged the importance of interpersonal exchange between participants and exposure to diverse and opposing viewpoints in the formation of preferences and informed opinion. My initial motivation was to ask how interface design might influence people as they talk about politics in online spaces, but this required an examination of the more human, less technologically determinate factors that are also, and often more significantly, involved in political expression.

Over the course of the study it became obvious that the methodology used to investigate these concepts influences the insight obtained; something that many researchers have discussed in the context of digital methods within social science (Baym, 2013; Boyd and Crawford, 2012; Clough et al., 2015; Gitelman and Jackson, 2013; Kitchin and Lauriault, 2014; Kitchin, 2014; Manovich, 2011; Van Dijck, 2014). Technologically mediated questions can be answered through technology-centric methods to give technologically focussed answers, while questions involving human nature, motivation and interaction can be answered by qualitative, human-centred methods in order to provide human-centred answers. These approaches represent the divide between the large scale, quantitative analysis of big data methods and small scale qualitative approaches. In order to address this issue, I employed a methodology which was designed to combine these approaches through directed iterations of analysis that was initially large scale and quantitative, but increasingly small scale and qualitative.
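The "directed iterations" described above — starting with a large quantitative corpus and progressively narrowing to a small set for close qualitative reading — can be sketched as a simple funnel. The corpus, filter criteria, and sample sizes below are invented for illustration; this is not the study's actual pipeline:

```python
import random

def iterative_funnel(posts, passes):
    """Narrow a large corpus through successive filter/sample passes.

    Each pass applies a quantitative filter, then draws a smaller
    random sample to carry forward for closer analysis.
    """
    sample = posts
    for keep_if, sample_size in passes:
        sample = [p for p in sample if keep_if(p)]  # quantitative filter
        sample = random.sample(sample, min(sample_size, len(sample)))
    return sample

# Invented example: 10,000 posts filtered first by length, then by reply
# count, ending with a handful of posts for manual qualitative coding.
posts = [{"id": i, "words": i % 300, "replies": i % 7} for i in range(10_000)]
passes = [
    (lambda p: p["words"] > 20, 2_000),  # drop trivial posts, keep 2,000
    (lambda p: p["replies"] > 2, 50),    # keep interactive threads, keep 50
]
coded = iterative_funnel(posts, passes)
print(len(coded))  # at most 50 posts remain for close reading
```

In the actual study the quantitative passes would have been analytic criteria rather than random sampling, but the shape is the same: each iteration trades scale for depth, which is exactly why the early, large-scale passes constrain what the later qualitative reading can see.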

Datalogical Systems and Us


In this post for 'The Person in the (Big) Data' edition of EM, Helen Thornham (@Thornhambot) talks about how her research into data and the everyday has made her think critically about the power relations that surround "datalogical systems", particularly in how we as researchers are implicated in the systems we aim to critique.

Data, big data, open data and datalogical systems (Clough et al. 2015) are already, as David Beer has noted, ‘an established presence in our everyday cultural lives’ (2015:2) and this means that the material and embodied configurations of data are already normative and quotidian and novel and innovative. Much of my research over the last 4 years, supported by a range of ESRC [i], EPSRC [ii] and British Academy grants, has engaged with normative and everyday configurations of data – whether that is in terms of routine and mundane mediations, lived subjective experiences framed by datalogical systems and their obscure decision making processes, the relationship between the promises of data for infrastructural change and the realisation of this, or human interrogations of machines. While the scope and breadth of my research into data and datalogical systems is broad and diverse, what connects all of my research is a continued concern with how data and datalogical systems are not just reconceptualising epistemology and ontology more widely (see also Burrows and Savage 2014), but how they implicate us as researchers and  reveal to us that our long-term methods of research are equally and always already subject to, and framed by, the very issues we purport, in the digital era, to be critiquing.

To rehash a familiar argument: if we conceive of technology in relation to social media, big data and data flow, the subsequent methods that epistemologically frame this are defined by that initial conception: web analytics, scraping and mining tools, mapping – tools that seek to make visible the power relations of the digital infrastructures but that actually generate those power relations in the act of making them visible (boyd and Crawford 2012). For the ESRC project where we have investigated risks and opportunities of social media for the UK Ministry of Defence (MoD), web analytic methods show us very clearly mundane and dull, repetitive mass media management of content. News headlines are retweeted effectively and broadly, with limited discussion that is capturable by the scraping tools we use.

The Person in the (Big) Data


This edition of EM is jam-packed with methods for doing people-centred digital research and is edited by EM co-founder Heather Ford (@hfordsa), newly-appointed Fellow in Digital Methods at the University of Leeds and thus super excited to understand her role as an ethnographer who (also) does digital methods.

Today we launch the next edition of Ethnography Matters entitled: ‘Methods for uncovering the Person in the (Big) Data’. The aim of the edition is to document some of the innovative methods that are being used to explore online communities, cultures and politics in ways that connect people to the data created about/by them. By ‘method’, we mean both the things that researchers do (interviews, memo-ing, member checking, participant observation) as well as the principles that underpin what many of us do (serving communities, enabling people-centred research, advocating for change). In this introductory post, I outline the current debate around the risks of data-centric research methods and introduce two principles of people-centric research methods that are common to the methods that we’ll be showcasing in the coming weeks.

As researchers involved in studying life in an environment suffused by data, we are all (to at least some extent) asking and answering questions about how we employ digital methods in our research practice. The increasing reliance on natively digital methods is part of what David Berry calls the “computational turn” in the social sciences, and what industry researchers recognize as moves towards Big Data and the rise of Data Science.

'Digital Methods' by Richard Rogers (2013)

First, a word on digital methods. In his groundbreaking work on digital methods, Richard Rogers argued for a move towards natively digital methods. In doing so, Rogers distinguishes between methods that have been digitized (e.g. online surveys) vs. those that are “born digital” (e.g. recommender systems), arguing that the Internet should not only be seen as an object for studying online communities but as a source for studying modern life that is now suffused by data. “Digital methods,” writes Rogers, “strives to follow the evolving methods of the medium” by the researcher becoming a “native” speaker of online vocabulary and practices.

The risks of going natively digital

There are, however, risks associated with going native. As ethnographers, we recognize the important critical role we play in bridging different communities and maintaining reflexivity about our research practice at all times; this is what makes ethnographers great partners in data studies. Going native, in other words, is an apt metaphor for both the benefits and risks of digital methods: the risk lies not in using digital methods themselves but in focusing too much on data traces.

Having surveyed some of the debates about data-centric methodology, I've categorized the risks according to three core themes: 1. accuracy and completeness, 2. access and control, 3. ethical issues.

The Addiction Algorithm: An interview with Natasha Dow Schüll


Addiction by Design, book cover

EM: Can you tell me a little bit about your book?

NDS: I was in the first cohort of the Robert Wood Johnson Health & Society Scholar postdoctoral program. I was definitely an outlier as a cultural anthropologist, but the pitch I made to them at the time was that research angles on addiction should include more qualitative work, and should also attend to the addictive effects of consumer interfaces and technology, not just drugs, as a public health issue.

I think any good addiction researcher would recognize that addiction is in a large part a question of the timing of rewards or reinforcements, or the so-called event frequency. So it makes sense that if digitally-mediated forms of gambling like slot machines are able to intensify the event frequency to a point where you’re playing 1,200 hands an hour, then they’re more addictive. Waiting for your turn at a poker game, by contrast, isn’t as fast – there are lots of pauses and lots of socializing in-between hands. Slot machines are solitary, continuous, and rapid. Uncertainty is opened up and then it’s closed — so quickly that it creates a sense of merging with the machine.

If you accept that gambling can be an addiction, you can then broaden the conversation to include other less obviously addictive contemporary experiences, whether it’s an eBay auction or Facebook photo clicking or even just checking email, and certainly texting. It’s so compelling to take your fingertip and just keep clicking, clicking to get that response.

EM: That’s fascinating. Or this word game on my phone — it’s become really, really addictive for me. I’m curious if you’ve had interactions also with people in game design? There’s a certain point of view that seems really prevalent right now about game design and play.

NDS: People in the general world of game and app design don’t see themselves as in the business of producing addiction but they have reached out to me. Often they want to hear about how to avoid creating addiction.

I was recently invited out to the Habit Summit, an event in Silicon Valley held at Stanford, with lots of local tech people who are all there to figure out how to design habit, how to retain attention. In my presentation to them, I talked about the increasing prevalence of little ludic loops in design, as ways to retain attention. With Candy Crush and so many phone apps, if you ride a subway in the morning there are people sitting there zoning out on these little devices. I think the reason they're so able to retain attention and form habits is that they are affect modulators. They're helping people to modulate and manage their moods. It's addictive because it's right there at your fingertips, and you're able to reach out and just start clicking this thing to create a stimulus-response loop.

There are more and more moments of zoning out – to use a phrase from the slot machine gamblers – moments that are configured very much like a slot machine in terms of the continuous, rapid little loop where something is opened up and then it's closed… open it up and then it's kill the monster; kill the monster again; kill the monster again.

It’s so compelling to take your fingertip and just keep clicking, clicking to get that response.


The Facebook Experiment: Cow-Sociology, Redux


Once having arrived at a set (or sets) of defensible moral positions, social psychologists should better be able to educate those outside the field concerning appropriate ethical criteria by which to judge the field's work...
Alan C. Elms, 1975
Warnings of public backlashes against psychologists, diminished subject pools, and a tarnished professional interest had little, if any, visible effect. The psychologist's ethical stance remains tied to his or her chosen methodology. Where the behavioristic model applies, deception is usually part of it.
C.D. Herrera, 1997
The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.
Adam Kramer, 2014

Now that the initial heat has faded, it is a good time to place the Facebook experiment in historical perspective. In the first two quotes above, social psychologist Alan C. Elms and philosopher-ethicist C.D. Herrera represent two sides of a debate over the ethics and efficacy of relying on deception in experimental research. I highlight these two quotes because they demonstrate moments within social psychology, even if they are a generation apart, when deception surfaces as a topic for reconsideration. Elms, one of the original research assistants involved in Stanley Milgram's obedience research, writes as deception is being called into question. Herrera, writing with the benefit of hindsight, suggests that paradigms other than the behaviorist one are the way forward. The crux of this disagreement lies in the conceptualization of the research subject. Is the research subject a reflexive being with an intelligence on par with the researcher's, or is the research subject a raw material to be deceived and manipulated by the superior intelligence of the researcher?

Danger, cows ahead
CC BY-SA akenator

Unseen, but looming in the background of this disagreement, is the Industrial Psychology/Human Relations approach, which developed in the 1920s and 1930s through the work of researchers like Elton Mayo and his consociates, and in experiments such as those at the Hawthorne plant.

This debate is worth revisiting in light of the Facebook experiment and its fallout. Any understanding of the Facebook experiment — and the kind of experimentation allowed by Big Data more generally — must include the long, intertwined history of behaviorism and experimental deception as it has been refracted through both Adam Kramer's home discipline of social psychology and somewhat through his adopted discipline of "data scientist" [1].

Falling in: how ethnography happened to me and what I’ve learned from it


Guest author Austin Toombs

Editor’s Note: Austin Toombs (@altoombs) brings a background in computer science and a critical sensibility to his ethnographic research on maker cultures.  He explores the formation of maker identities in his research, focusing on how specific sites such as hackerspaces, makerspaces, Fab Labs, and other co-working spaces intersect with the politics of making, gendered practices, urban vs. rural geographies, and creative hardware and software developments. Austin is a PhD student in Human Computer Interaction Design in the School of Informatics and Computing at Indiana University. He is a member of the Cultural Research In Technology (CRIT) Group, and is advised by Shaowen Bardzell and Jeffrey Bardzell. He is also a member of ISTC-Social.


My research as a PhD student began by looking at cultures of participation surrounding hobbyist programming. I was—and still am—interested in the fuzzy-gray area between work and play, and as someone who misses the puzzle, thrill, and flow of programming, these communities were great starting points for me. Working on this research led me, almost inevitably, toward my ethnographic work with my local hackerspace and the broader maker community. In this context, I have seen how this local community embraces the work/play ambiguity, how it can function primarily as a social environment, and how it works to actively cultivate an attitude of lifelong, playful, and ad hoc learning. In this post I explore the role ethnography played in my work and how the ethnographic approach helped me get to these insights. I also discuss some of the complications and issues I have run into because of this approach, and how I am working toward solving them. For more information, feel free to contact me!


the role of ethnography in my work

My first encounter with the concept of a hackerspace came from my initial research on hobbyist programmers. I remember nearly dancing with excitement when I realized that the city I lived in happened to have a hackerspace, because I knew immediately that I would be joining them in some capacity, if not for research, then for my own personal enjoyment. The first few visits to the space were exploratory; I wanted to see what was going on, how the members and regular attendees interacted with each other, and whether or not this seemed like a good fit for my research.

My initial goal was to use the site as a potentially endless supply of case studies to explore my questions about work and play. Thankfully, I realized fairly early on that this case-study-first approach would not work for me. Instead, I found myself drawn to the overall narrative of the hackerspace and its members. How did this particular maker community form? What did the members do for their day jobs? How did they become ‘makers’? What do they think about themselves, and how has becoming a member of this community influenced that?


Studying Up: The Ethnography of Technologists


Nick Seaver

Editor’s Note: Nick Seaver (@npseaver) kicks off the March-April special edition of Ethnography Matters, which will feature a number of researchers at the Intel Science and Technology Center for Social Computing on the forefront of exploring the cultures of hackers, makers, and engineers.

Nick’s post makes the case for the importance of “studying up“: doing ethnographies not only of disempowered groups, but of groups who wield power in society, perhaps even more than the ethnographers themselves.

Nick’s own research explores how people imagine and negotiate the relationship between cultural and technical domains, particularly in the organization, reproduction, and dissemination of sonic materials. His current project focuses on the development of algorithmic music recommendation systems. Nick is a PhD candidate in sociocultural anthropology at UC Irvine. Before coming to UCI, Nick researched the history of the player piano at MIT. 


When people in the tech industry hear “ethnography,” they tend to think “user research.” Whether we’re talking about broad, multinational explorations or narrowly targeted interviews, ethnography has proven to be a fantastic way to bring outside voices in to the making of technology. As a growing collection of writing on Ethnography Matters attests, ethnography can help us better understand how technology fits into people’s everyday lives, how “users” turn technologies to unexpected ends, and how across the world, technologies get taken up or rejected in a diverse range of cultural contexts. Ethnography takes “users” and shows how they are people — creative, cultural, and contextual, rarely fitting into the small boxes that the term “user” provides for them.

But ethnography doesn’t have to be limited to “users.”

Engineers in context. cc by-nc-nd 2.0 | http://www.flickr.com/somewhatfrank

My ethnographic research is focused on the developers of technologies — specifically, people who design and build systems for music recommendation. These systems, like Pandora, Spotify, Songza, or Beats Music, suggest listening material to users, drawing on a mix of data sources, algorithms, and human curation. The people who build them are the typical audience for ethnographic user studies: they're producing technology that works in an explicitly cultural domain, trying to model and profile a diverse range of users. But for the engineers, product managers, and researchers I work with, ethnography takes a backseat to other ways of knowing people: data mining, machine learning, and personal experience as a music listener are far more common sources of information.

Ethnographers with an interest in big data have worked hard to define what they do in relation to these other methods. Ethnography, they argue, provides thick, specific, contextualized understanding, which can complement and sometimes correct the findings of the more quantitative, formalized methods that dominate in tech companies. However, our understandings of what big data researchers actually do tend to lack the specificity and thickness we bring to our descriptions of users.

Just as ethnography is an excellent tool for showing how “users” are more complicated than one might have thought, it is also useful for understanding the processes through which technologies get built. By turning an ethnographic eye to the designers of technology — to their social and cultural lives, and even to their understandings of users — we can get a more nuanced picture of what goes on under the labels “big data” or “algorithms.” For outsiders interested in the cultural ramifications of technologies like recommender systems, this perspective is crucial for making informed critiques. For developers themselves, being the subject of ethnographic research provides a unique opportunity for reflection and self-evaluation.

Starbucks Listeners and Savants

Among music tech companies, it is very common to think about users in terms of how avidly they consume music. Here is one popular typology, as printed in David Jennings’ book Net, Blogs, and Rock ‘n’ Roll:
