NDS: I was in the first cohort of the Robert Wood Johnson Health & Society Scholar postdoctoral program. I was definitely an outlier as a cultural anthropologist, but the pitch I made to them at the time was that research angles on addiction should include more qualitative work, and should also attend to the addictive effects of consumer interfaces and technology, not just drugs, as a public health issue.
I think any good addiction researcher would recognize that addiction is in large part a question of the timing of rewards or reinforcements, or the so-called event frequency. So it makes sense that if digitally-mediated forms of gambling like slot machines are able to intensify the event frequency to a point where you’re playing 1,200 hands an hour, then they’re more addictive. Waiting for your turn at a poker game, by contrast, isn’t as fast – there are lots of pauses and lots of socializing in-between hands. Slot machines are solitary, continuous, and rapid. Uncertainty is opened up and then it’s closed — so quickly that it creates a sense of merging with the machine.
If you accept that gambling can be an addiction, you can then broaden the conversation to include other less obviously addictive contemporary experiences, whether it’s an eBay auction or Facebook photo clicking or even just checking email, and certainly texting. It’s so compelling to take your fingertip and just keep clicking, clicking to get that response.
EM: That’s fascinating. Or this word game on my phone — it’s become really, really addictive for me. I’m curious if you’ve had interactions also with people in game design? There’s a certain point of view that seems really prevalent right now about game design and play.
NDS: People in the general world of game and app design don’t see themselves as being in the business of producing addiction, but they have reached out to me. Often they want to hear about how to avoid creating addiction.
I was recently invited out to the Habit Summit, an event in Silicon Valley held at Stanford, with lots of local tech people who are all there to figure out how to design habit, how to retain attention. In my presentation to them, I talked about the increasing prevalence of little ludic loops in design as ways to retain attention. With Candy Crush and so many phone apps, if you ride a subway in the morning there are people sitting there zoning out on these little devices. I think the reason they’re so able to retain attention and form habits is that they are affect modulators. They’re helping people to modulate and manage their moods. It’s addictive because it’s right there at your fingertips, and you’re able to reach out and just start clicking this thing to create a stimulus-response loop. There are more and more moments of zoning out – to use a phrase from the slot machine gamblers – moments that are configured very much like a slot machine in terms of the continuous, rapid little loop where something is opened up and then it’s closed… open it up and then it’s kill the monster; kill the monster again; kill the monster again.
I have been thinking and writing about mobile apps recently and how they are used for medical and health purposes. Millions of apps designed for smartphones, tablet computers and other mobile devices have been developed since their first appearance in 2008. Many of these are health and medical apps. In mid-2014 there were over 100,000 health and medical apps listed in the two major app stores, Apple App Store and Google Play, and new ones are being issued every day.
Several health and medical apps feature on Apple’s lists of popular apps, and download figures provided by Google Play show that some health and medical apps on their store have been downloaded hundreds of thousands or even millions of times. In late 2012 a Pew Research Center survey found that 85 per cent of American adults owned a mobile phone. Fifty-three per cent of these were smartphones, and one fifth of smartphone users had used their phone to download a health-related app. The most popular of these apps were related to monitoring exercise, diet and weight. A more recent market research study found that almost one-third of American smartphone users (equivalent to 46 million people) had used apps from the health and fitness category in January 2014. Public health researchers have sought to evaluate their use in health promotion campaigns and in gathering data on health-related practices. But few researchers have investigated the broader social, cultural, political and ethical dimensions of medical and health apps.
Healthcare practitioners and administrators are also increasingly using apps as part of their professional practice. Hundreds of apps have been developed by hospitals and other healthcare providers. A growing number of medical schools are now offering at least part of their education via apps and require their students to own a tablet computer. In one study that surveyed American doctors, more than two thirds said that they used apps as part of their work. Another survey of medical students and junior doctors in a UK healthcare region found that over half of both students and junior doctors had medical-related apps on smartphones, with apps for medical education purposes the most popular. The medical literature now often refers to ‘prescribing’ apps to patients.
Despite the ever-increasing popularity of apps, very little academic research focused on these devices has been carried out in the social sciences and humanities. Numerous market research reports and medical journal articles have been published that provide some quantitative data on their content, accuracy and use, but these are largely instrumental and descriptive rather than critical.
In recent years I have been interested in developing a research agenda in critical digital health studies, including research into medical and health-related apps. I adopt a sociomaterial perspective drawn from science and technology studies to investigate the digital health phenomenon. From this perspective, mobile apps, like all technologies, assume certain kinds of capacities, desires and embodiments; they also construct and configure them. Apps are new digital technology tools but they are also active participants that shape human bodies and selves as part of heterogeneous networks, creating new practices. Indeed apps may be viewed as sociocultural artefacts, the products of human decision-making, underpinned by tacit assumptions, norms and discourses already circulating in the social and cultural contexts in which they are generated, marketed and used. As they not only present information on health and medicine but also often invite users to generate and share digital data about themselves, apps participate as actors in the digital knowledge economy.

Read More… App-ography: A critical perspective on medical and health apps
Alan C. Elms, 1975
C.D. Herrera, 1997
Adam Kramer, 2014
Now that the initial heat has faded, it is a good time to place the Facebook experiment in historical perspective. In the first two quotes above, social psychologist Alan C. Elms and philosopher-ethicist C.D. Herrera represent two sides of a debate over the ethics and efficacy of relying on deception in experimental research. I highlight these two quotes because they demonstrate moments within social psychology, even if they are a generation apart, when deception surfaces as a topic for reconsideration. Elms, one of the original research assistants involved in Stanley Milgram’s obedience research, writes as deception is being called into question. Herrera, writing with the benefit of hindsight, suggests that paradigms other than the behaviorist one are the way forward. The crux of this disagreement lies in the conceptualization of the research subject. Is the research subject a reflexive being with an intelligence on par with the researcher’s intelligence, or is the research subject a raw material to be deceived and manipulated by the superior intelligence of the researcher?
Unseen, but looming in the background of this disagreement, is the Industrial Psychology/Human Relations approach, which developed in the 1920s and 1930s through the work of researchers like Elton Mayo and his consociates, and in experiments such as those at the Hawthorne plant.
This debate is worth revisiting in light of the Facebook experiment and its fallout. Any understanding of the Facebook experiment — and the kind of experimentation allowed by Big Data more generally — must include the long, intertwined history of behaviorism and experimental deception as it has been refracted through both Adam Kramer’s home discipline of social psychology and somewhat through his adopted discipline of “data scientist.”

Read More… The Facebook Experiment: Cow-Sociology, Redux
I’m frustrated that the state of public intellectualism allows us, individually, to jump into the conversation about the recently published Facebook “Emotions” Study. What we—from technology builders and interface designers to data scientists and ethnographers working in industry and at universities alike—really (really) need right now is to sit down together and talk. Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have, from data sharing and users’ rights to the drop in public funding for basic research itself. We need a dialogue—a thoughtful, compassionate conversation among those who are or will be training the next generation of researchers studying social media. And, like all matters of ethics, this discussion will become a personal one as we reflect on our doubts, disagreements, missteps, and misgivings. But the stakes are high. Why should the Public trust social media researchers and the platforms that make social media a thing? It is our collective job to earn and maintain the Public’s trust so that future research and social media builders have a fighting chance to learn and create more down the line. Science, in particular, is an investment in questions that precede and will live beyond the horizon of individual careers.
As more and more of us crisscross disciplines and work together to study or build better social media, we are pressed to rethink our basic methods and the ethical obligations pinned to them. Indeed “ethical dilemmas” are often signs that our methodological techniques are stretched too thin and failing us. When is something a “naturalistic experiment” if the data are always undergoing A/B tweaks? How do we determine consent if we are studying an environment that is at once controllable, like a lab, but deeply social, like a backyard BBQ? When do we need to consider someone’s information “private” if we have no way to know, for sure, what they want us to do with what we can see them doing? When, if ever, is it ok to play with someone’s data if there’s no evident harm but we have no way to clearly test the long-term impact on a nebulous number of end users?

Read More… When Science, Customer Service, and Human Subjects Research Collide. Now What?
I’m not one to speak about theory and method in the abstract. But when I am asked about my method, I typically respond that I use historically informed ethnography. However, whenever I say this I think of Mike Myers’ SNL character Linda Richman. On Richman’s public access show, she and her friends talked “about coffee, New York, dawters, dawgs, you know – no big whoop – just coffee talk.” During their discussions Richman would often become “verklempt,” such as in recalling meeting Barbra Streisand; overcome with emotion, she’d turn to her guests with a prompt: “The Prince of Tides is neither about a Prince nor tides – discuss.”
Hence, while I might say “historically informed ethnography,” I think to myself that “my work is neither historical nor ethnographic – discuss.”
As a computer science undergraduate I loved (and minored in) history. I still do love history and find that while I am typically focusing on contemporary communities and how they work together, historical context is important to my developing understanding of the practices of today.
When I went off to graduate school for a PhD, I was very much inspired by a little known work about Quakers: Michael Sheeran’s Beyond Majority Rule: Voteless Decisions in the Religious Society of Friends. This was an ethnography of their consensus decision-making, but it began with an introduction to their history, one that greatly informs the present day. For instance, Quakers’ decision-making is a reflection of the origins of Protestantism. In short, under Protestantism it was thought that divine will could be discerned via the individual rather than through the church. However, the idea of individual discernment allowed for some unusual (and ill-favored) beliefs, such as those of the Ranters and the messianic Quaker James Naylor. This, in turn, brought increased persecution by the state. Hence, early Quakers faced the problem of how to represent themselves as moderate and nonthreatening. Their solution, in part, was to adopt a position of pacifism and community consensus. This historical context imparted a much richer understanding than if I had only read of their current day decision making. Accordingly, I tried to do the same thing with respect to Wikipedia collaboration by placing it in the historical context of what I called the pursuit of the universal encyclopedia.
Hence, even when I am focused upon the seemingly faddish phenomena of the digital realm, I challenge myself to ask whether this is truly something never seen before. It rarely is, which then permits me to ask the more interesting and productive question of how it is different from (or a continuation of) what has gone before.
A digital interlude
Much of my quandary about history and ethnography relates to my domain of study. I love being able to immerse myself in the conversations and cultural artifacts of a community. Much of this is likely a reflection of my personality. I can be shy and I enjoy hunting through archives for something that is novel and leads to an insight. I am often happy to work alone as I read through blogs, wiki pages and email archives. Yet, is this history or ethnography? And at what point, in trawling through online archives, does ethnography become history? (When the sources are dead?)
I’m fortunate that I tend to study open communities and geeks. This means that many of my sources are prolific self-documenters, publishing their thoughts and contributions in public. Consequently, I have many primary sources, and I want to share them with my readers. In fact, after a decade of work, I have over four thousand sources and as I’ve done this work, I’ve continued to develop a system by which I can easily document, find, and manage this information. I recently did a screencast of the two tools I’ve developed for this.
Of course, this is not to say that conversations and interviews with community members are not useful. I’ve attended many a conference, Meetup, and un-conference. Many times people have shared with me context and background that has been invaluable to my understanding and portrayals. Sometimes, I delight in a key insight or wonderful quotation I can use from an interview. However, I take less pleasure in an insight communicated to me privately than in one I can find publicly. I don’t attempt to rationalize or advocate for this position; it is simply my preference. (I suspect many of the lofty words spent on academic distinctions are there to justify similar differences in personal sensibilities and social habitus.)

Read More… Verklempt: Historically Informed Digital Ethnography
Rachelle Annechino invited me to write something about the concept of “public health” as I experienced it in my decades-long and checkered past in the drug field. That past is described in unbearable detail in a book called Dope Double Agent: The Naked Emperor on Drugs. The bottom line of my memory (if memories can have a bottom line) is that the phrase “public health” was a severe case of metaphor abuse. I only got clear on this slowly over the decades. This is the first time that I’ve tried to box it up in a summary, courtesy of ten years of hindsight after leaving the field.
The history of policy and practice around psychoactive substances in the 20th century U.S. has been a long slow-dance between docs and cops. Consider opioids as an example – opium and morphine and laudanum, and later heroin, and later methadone, and later buprenorphine, and now OxyContin — all opioid drugs that range from the organic to the synthetic. The docs first celebrated them for their medical use, then got upset when users broke the compliance rules and used them on their own, at which point the cops stepped in. In their different historical contexts they went through the same cycle, from legit (more or less) medication to popular use to crime. To those of us working in what the bureaucrats called the “demand” side of the drug field, attention to public health made a lot more sense than what the better funded “supply” side lusted after, namely, toss the addicted into jail.
As in most U.S. presidential elections, “public health” was only the better of two bad choices. “Public health” has its uses. Boas studied with Virchow, a founder of social epidemiology, after all. Still, it isn’t the right framework to describe and understand people in their social worlds and how the chemicals they ingest do and don’t fit into the flow. But, if you want to join policy conversations about “substance abuse” in most countries I’ve worked in, you have to translate your arguments into a doc/cop creole to make sense to the other participants. It’s the old problem of naïve realism, as the social cognition types say, or doxa, if you’re a Bourdieu fan. Do you push from the outside or talk on the inside? I chose the latter. So the question was, how could anthropologists, among others, use and subvert the public health discourse in useful ways?
Here’s a pretty easy example of one way we did that. Historically, public health arose out of successes at finding and then controlling the biological mechanisms that caused a disease. Public health found those mechanisms using epidemiology and then attempted to control them with biology. Epidemiologists built a database of “case records.” A good case record consists of clinical criteria for diagnosis, severity, time and place of onset, and demographics. (See, for example, this introduction to epidemiology [pdf].)
In the drug field, “diagnosis” and “severity” were corrupted by war-on-drugs ideology. The insanity reached a peak in the 1980s with the official definition of “drug abuse” as “any illicit use of a substance” — any at all — including “illicit use” of a legal substance as well. This madness occurred at about the same time as the famous “library purge” of 1984, in which the National Institute on Drug Abuse (NIDA) expunged a set of its own titles from its archives and encouraged librarians to remove them from card catalogs. With time, as the DSM molted during its travels along its Roman-numeral-marked trail, diagnostic criteria have become more subtle and more reasonable, but that official definition of “abuse” remains on NIDA’s web page today. By this definition, it’s hard to imagine anyone who hasn’t been, at least at one point in their life, a drug abuser. The “diagnostic” part of a case record lost any useful meaning for research or intervention.

Read More… Public Health on Drugs
The vision of an ethnographer physically going to a place, establishing themselves in the activities of that place, talking to people and developing deeper understandings seems so much simpler than the same activities in multifaceted spaces like Wikipedia. Researching how Wikipedians manage and verify information in rapidly evolving news articles in my latest ethnographic assignment, I sometimes wish I could simply go to the article as I would to a place, sit down and have a chat to the people around me.
Imagine that there is a community or culture of people that use social media–let’s focus on Twitter–in a particularly interesting or funny or outlandish way. Would you give it a name? Would you try to understand its size or its structure? Its history? Its purpose? How would you go about doing that?
Could it be studied by an anthropologist? A data scientist? An economist? A philosopher? A critic? A journalist? Could it ever understand itself?
I’m getting ahead of myself. Let’s start with a name: Weird Twitter.