
Five Mixed Methods for Research in the (Big) Data Age


In this final post for The Person in the (Big) Data edition of Ethnography Matters, we provide a collection of five mixed methods used by researchers to shine a light on the people behind the massive streams of data that are being produced as a result of online behavior. These research methods combine a variety of digital and traditional approaches, but they share one thing in common: they are aimed at discovering stories. As Tricia Wang wrote on EM back in 2013, ‘Big Data needs Thick Data’: ‘Big Data delivers numbers; thick data delivers stories. Big data relies on machine learning; thick data relies on human learning.’ In the methods outlined below, researchers describe how they have made the most of digital data using innovative methods that uncover the meaning, the context, the stories behind the data. In the end, this is still the critical piece for researchers trying to understand the moment in which we are living. Or, put differently, the ways in which we may want to live but are often prevented from living by a system that sometimes reduces the human experience rather than enabling its flourishing. – HF, Ed.

1. Real-time Audience Feedback: The Democratic Reflection Method

Democratic Reflection is a new methodological tool for researching the real-time responses of citizens to televised media content, which was developed by a team of researchers from the University of Leeds and the Open University in the UK as part of a larger research project on TV election debates. The research for the project began by developing an inductive understanding of what people need from TV election debates in order to perform their role as democratic citizens. Drawing on focus groups with a diverse range of participants, the research identified five key demands — or ‘democratic entitlements’ — that participants felt debates and the political actors involved in them should meet. Participants felt entitled to be: (1) addressed as rational and independent decision makers, (2) given the information needed to make considered political judgements, (3) included in and engaged by the debates, (4) recognised and represented by the political leaders, and (5) provided with meaningful choices that allow them to make a difference politically. In the next phase of the research, the research team developed a new web-based app (accessible via mobile phones, tablets, and laptops), which allows viewers to respond to televised debates in real time and evaluate them using a range of twenty statements based on the five democratic entitlements. An experiment using the Democratic Reflection app was conducted with a panel of 242 participants during the first debate of the 2015 UK General Election, generating a dataset of over 50,000 responses. Analysis of the data provides a valuable new way to understand how viewers respond to election debates: we can explore general patterns of responses, compare different individuals and groups, track changes over time, and examine how specific moments and performances during the debates may relate to particular responses. – Giles Moss

Trace Interviews Step-By-Step



In this penultimate post for The Person in the (Big) Data edition of EM, Elizabeth Dubois @lizdubois provides a step-by-step account of how the trace interviewing process works. Trace interviewing is a method used to elicit a person’s stories about why they made particular traces on social media platforms and is a wonderful way of getting at the stories underlying the data. Elizabeth’s step-by-step will hopefully help others wanting to use a similar method in the interview process!

Social media data and other digital traces we leave as we navigate the web offer incredible opportunity for discovery within the social sciences. I am going to take you step by step through the process of trace interviewing – an approach that helps researchers gain richly detailed insights about the social context of that digital data. For those interested in the why as well as the how, Heather Ford and I talk a lot about why we think the method is important and what types of things it can offer researchers (such as validity checks, information about social context, opportunities to join data sets from various platforms) in our paper.

The Study
I want to figure out how people who influence other people on political issues choose their channels of communication (and the impact of those choices). The only way to understand these decisions and their impacts, I think, is a mixed-methods approach. So, I draw on Twitter data for content and network analysis, an online survey and in-depth trace interviews. You can read more about the full work here.

Trace Interviews
Hey, so great to finally meet you in person. Welcome!
By the time I got to the interview stage my interviewee and I already knew quite a lot about each other. They had filled out a survey, they knew I found them because they use the #CDNpoli hashtag, and they had read a project description and signed a consent form in advance.

It was important to form a relationship with my participants well in advance because I needed permission to collect their data. Sometimes trace data is publicly available (for example, tweets made or the list of accounts a Twitter user follows). But, even when it is publicly available, I tend to think that giving your participant a heads up that you’ve got or will collect data specifically about them is a good call. The fact is, people don’t always understand what making a public tweet means.

Taking Stock



In this post for The Person in the (Big) Data edition of EM, we hear from Giorgia Aiello @giorgishka, who demonstrates the ways in which she used both digital and traditional methods to explore the people and practices that characterise the stock photography industry. Giorgia’s stories of photographers attempting to game the algorithms that determine which photographs will come out on top in a search for cheese are compelling and memorable, and they show just how important it is to develop an expanded idea of just what ‘data’ is constituted by, even if the dominant discourse is more limited.

Image banks like Getty Images and Shutterstock that sell ready-to-use ‘stock’ photographs online have become the visual backbone of advertising, branding, publishing, and journalism. Also, daily exposure to stock images has increased exponentially with the rise of social networking and the generic visuals used in lifestyle articles and ‘clickbait’ posts. The stock imagery business has become a global industry through recent developments in e-commerce, copyright and social media (Glückler & Panitz, 2013).

However, stock images are most often overlooked rather than looked at—both by ‘ordinary’ people in the contexts of their everyday lives and by scholars, who have rarely taken an interest in this industry and genre in its own right. There are some notable exceptions, dating back to the ‘pre-Internet’ era of stock photography, like Paul Frosh’s work on the ‘visual content industry’ in the early 2000s or David Machin’s critical analysis of stock imagery as the ‘world’s visual language’ (Frosh, 2003; Machin, 2004). As a whole, and compared to other media and communication industries, research on online image banks and digital stock imagery is virtually uncharted territory.

Why, then, should stock images be ascribed any significance or power since people do not particularly pay attention to them? Stock images are not only the ‘wallpaper’ of consumer culture (Frosh, 2003 and 2013); they are also central to the ambient image environment that defines our visual world, which is now increasingly digital and global while also remaining very much analogue and local (just think of your own encounters with such imagery at your bank branch, at your dentist or beauty salon, or on billboards in city streets). Pre-produced images are the raw material for the world’s visual media.

Democratic Reflection: Evaluating Real-Time Citizen Responses to Media Content



What has always impressed me about this next method for ‘The Person in the (Big) Data‘ series is the way in which research participants were able to develop their own ideals for democratic citizenship that are then used to evaluate politicians. Giles Moss discusses the evolution of the app through its various iterations and highlights the value of the data developed out of its application for further research. This grounded, value-driven application is an inspiration for people-centred research and we look forward to more of the same from Giles and the team he worked with on this!

Democratic Reflection is a web app that measures the real-time responses of audiences to media content. The app was developed by a team of researchers from the Open University and the University of Leeds in the UK, as part of a research project funded by the EPSRC, to explore how citizens respond to and evaluate televised election debates (Coleman, Buckingham Shum, De Liddo, Moss, Plüss & Wilson 2014).[1] Accessing the web app via a second screen, research participants are asked to watch live television programming and use the app to evaluate the programme by selecting from a range of twenty predefined statements. The statements are designed to capture key capabilities of democratic citizenship, allowing us to analyse how viewers evaluate media content in relation to their needs as democratic citizens rather than just media consumers. In this post, I describe how we developed Democratic Reflection and what we hope to learn from the data the app generates.

Of course, we’re not the first researchers to develop a technology to measure real-time audience responses to media content. As far back as the 1930s, Paul Lazarsfeld and Frank Stanton developed an instrument called the Lazarsfeld-Stanton Program Analyzer, with which research participants could indicate whether they liked or disliked media content and their inputs would be recorded in real time (Levy 1982). More sophisticated variants of the Program Analyzer followed. The Ontario Educational Communication Authority and Children’s Television Workshop created a ‘Program Evaluation Analysis Computer’, which had sixteen buttons with labels that could be altered to include new measures, and the RD Percy Company of Seattle developed VOXBOX, which allowed viewers to respond to content by indicating whether they thought it was ‘Funny’, ‘Unbelievable’, and so on (Levy 1982: 36-37). More recently, Boydstun, Glazier, Pietryka, & Resnik (2014) developed a mobile app to capture real-time responses of citizens to the first US presidential debate in 2012, offering viewers four responses: ‘Agree’, ‘Disagree’, ‘Spin’, and ‘Dodge’.

Democratic Reflection fits into this tradition of real-time response to media content, but it focuses on analysing how viewers evaluate televised election debates in terms of their communicative needs as democratic citizens. In other words, we designed the app not just to explore whether people liked or disliked what they were watching or agreed or disagreed with it, but how media content related to their more fundamental capabilities as democratic citizens. Our first task, therefore, was to identify the democratic capabilities that media content and more specifically televised election debates could affect.

Thinking with selfies


Kath Albury @KathAlbury continues our edition of ‘The Person in the (Big) Data‘ by talking about her research into young people and sexting. Instead of simply educating those who work with young people about social media and the digital, Kath and her colleagues developed an innovative Selfie Workshop in which participants produce and reflect on their own selfies through the lens of introductory media theory. Rather than telling educators about sexting and social media representation, Kath facilitated an experience in which they would be directly involved. This kind of embodied learning is a wonderful way of generating new data about the social implications of mediation and offers the opportunity to engage directly to empower the community under study.

Having undertaken a range of research investigations into ‘hot button’ issues such as Australian pornography producers and consumers, young people’s use of social media for sexual health information, young people’s responses to sexting, and selfie cultures, I am regularly invited to address sexual health promotion professionals (including clinical staff and teachers) seeking to better understand ‘what media does to young people’.

In the process, I have become increasingly concerned that while online and mobile media practices are now ubiquitous (if not universal) elements of young Australians’ everyday sexual cultures, many sexuality education and health promotion professionals seem to have had little (or no) access to foundational training in media and communications technologies and practices.

Consequently, the Rethinking Media and Sexuality Education project sought to investigate the desirability and utility of providing sexuality educators and health promotion professionals with an introduction to the theoretical and methodological frameworks underpinning my research on media and sexuality.

Rather than discussing young people’s media practices directly, I shared some frameworks for thinking critically about media, gender and sexuality without seeking to quantify ‘impact’ or ‘effects’, and invited participation in a series of exercises adapted from the Selfie Course, with the aim of offering a prototype toolkit that might be applied across different professional settings and contexts.

How do selfies communicate a desire for intimacy? Participants in the Selfie Workshop are tasked with creating selfies for different audiences and contexts. (Pic used with permission from creator.)

The workshop introduced participants to a range of media theories (including Stuart Hall’s ‘encoding/decoding’ model), followed by hands-on exercises drawn from the Selfie Course, particularly the Sexuality, dating and gender module, which I co-authored with colleagues Fatima Aziz and Magdalena Olszanowski. In the context of the Rethinking Media workshop, I briefly acknowledged the stereotypical ‘duckface selfie’, then moved on to introduce other selfie genres that were clearly read as an expression of ‘identity’ without revealing the photographer’s face. These included the pelfie (a pet selfie), a range of body part selfies (such as the foot selfie, aka felfie), and the shelfie – a self-portrait featuring the contents of the photographer’s bookshelf.

The first activity was adapted from ‘The Faceless Selfie’ which my Selfie Researcher Network colleagues and I described as an exercise exploring the ways that “people navigate the ubiquity of online surveillance while simultaneously wishing to connect with others on social media sites”. This activity invites participants to use their own mobile phones to create a selfie that their friends or family would definitely recognise as them, without showing their faces.

Trace ethnography: a retrospective


Stuart Geiger @staeiou continues our edition of ‘The Person in the (Big) Data‘ with a reflection on his practice of ‘trace ethnography’ that focuses on the trace-making techniques that render users’ activities and intentions legible to each other. Importantly, Stuart argues, we as researchers need to see these traces in the context of our active socialization within the community in question, rather than passively reading traces through lurking.

When I was an M.A. student back in 2009, I was trying to explain various things about how Wikipedia worked to my then-advisor David Ribes. I had been ethnographically studying the cultures of collaboration in the encyclopedia project, and I had gotten to the point where I could look through the metadata documenting changes to Wikipedia and know quite a bit about the context of whatever activity was taking place. I was able to do this because Wikipedians do this: they leave publicly accessible trace data in particular ways, in order to make their actions and intentions visible to other Wikipedians. However, this was practically illegible to David, who had not done this kind of participant-observation in Wikipedia and had therefore not gained this kind of socio-technical competency. 

For example, if I added “{{db-a7}}” to the top of an article, a big red notice would be automatically added to the page, saying that the page has been nominated for “speedy deletion.” Tagging the article in this way would also put it into various information flows where Wikipedia administrators would review it. If any of Wikipedia’s administrators agreed that the article met speedy deletion criterion A7, then they would be empowered to unilaterally delete it without further discussion. If I was not the article’s creator, I could remove the {{db-a7}} trace from the article to take it out of the speedy deletion process, which means the person who nominated it for deletion would have to go through the standard deletion process. However, if I was the article’s creator, it would not be proper for me to remove that tag — and if I did, others would find out and put it back. If someone added the “{{db-a7}}” trace to an article I created, I could add “{{hangon}}” below it in order to inhibit this process a bit — although a hangon is just a request; it does not prevent an administrator from deleting the article.
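Because these tags are plain text in the page's wikitext, they are machine-readable as well as human-readable. As a purely illustrative sketch (this is not Wikipedia's own tooling, and the helper name is hypothetical), a researcher could detect the two traces described above like this:

```python
import re

def speedy_deletion_status(wikitext: str) -> dict:
    """Check a page's wikitext for an A7 speedy-deletion tag and a hangon request.

    Matches the {{db-a7}} and {{hangon}} templates, allowing optional
    whitespace inside the braces and ignoring case, as template names
    in wikitext are case-insensitive in their first letter.
    """
    nominated = re.search(r"\{\{\s*db-a7\s*\}\}", wikitext, re.IGNORECASE) is not None
    hangon = re.search(r"\{\{\s*hangon\s*\}\}", wikitext, re.IGNORECASE) is not None
    return {"nominated_a7": nominated, "hangon_requested": hangon}

# A page that has been nominated for speedy deletion, with the creator
# requesting a hold on the process:
page = "{{db-a7}}\n{{hangon}}\n'''Some article''' about a little-known subject."
status = speedy_deletion_status(page)
```

A script like this only sees the traces themselves; as the post argues, knowing what removing or adding such a tag *means* (and who may legitimately do it) requires the socio-technical competency gained through participation.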


Wikipedians at an in-person edit-a-thon (the Women’s History Month edit-a-thon in 2012). However, most of the time, Wikipedians don’t get to do their work sitting right next to each other, which is why they rely extensively on trace data to coordinate and render their activities accountable to each other. Photo by Matthew Roth, CC-BY-SA 3.0

I knew all of this both because Wikipedians told me and because it was something I experienced again and again as a participant observer. Wikipedians had documented this documentary practice in many different places on Wikipedia’s meta pages. I had first-hand experience with these trace data, first on the receiving end with one of my own articles; later, I became someone who nominated others’ articles for deletion. When I was learning how to participate in the project as a Wikipedian (which I now consider myself to be), I started to use these kinds of trace data practices and conventions to signify my own actions and intentions to others. This made things far easier for me as a Wikipedian, in the same way that learning my university’s arcane budgeting and human resource codes helps me navigate that bureaucracy far more easily.

Datalogical Systems and Us


In this post for ‘The Person in the (Big) Data‘ edition of EM, Helen Thornham @Thornhambot talks about how her research into data and the everyday has made her think critically about the power relations that surround “datalogical systems”, particularly in how we as researchers are implicated in the systems we aim to critique.

Data, big data, open data and datalogical systems (Clough et al. 2015) are already, as David Beer has noted, ‘an established presence in our everyday cultural lives’ (2015:2), which means that the material and embodied configurations of data are at once normative and quotidian, novel and innovative. Much of my research over the last four years, supported by a range of ESRC [i], EPSRC [ii] and British Academy grants, has engaged with normative and everyday configurations of data – whether in terms of routine and mundane mediations, lived subjective experiences framed by datalogical systems and their obscure decision-making processes, the relationship between the promises of data for infrastructural change and the realisation of those promises, or human interrogations of machines. While the scope and breadth of my research into data and datalogical systems is broad and diverse, what connects all of it is a continued concern with how data and datalogical systems are not just reconceptualising epistemology and ontology more widely (see also Burrows and Savage 2014), but how they implicate us as researchers and reveal that our long-term methods of research are equally and always already subject to, and framed by, the very issues we purport, in the digital era, to be critiquing.

To rehash a familiar argument: if we conceive of technology in relation to social media, big data and data flow, the subsequent methods that epistemologically frame this are defined by that initial conception: web analytics, scraping and mining tools, mapping – tools that seek to make visible the power relations of the digital infrastructures but that actually generate those power relations in the act of making them visible (boyd and Crawford 2012). For the ESRC project where we have investigated risks and opportunities of social media for the UK Ministry of Defence (MoD), web analytic methods show us very clearly mundane and dull, repetitive mass media management of content. News headlines are retweeted effectively and broadly with limited discussion that is capturable by the scraping tools we use.

Lemon Difficult: Building a Strategic Speculation Consultancy


Joseph Lindley works with design fiction in order to facilitate meaningful speculation about the future. In between he likes to make music, take photographs and combine the other two with things that fly. Quoting from his 2012 song Tingle in the Finger: it’s a designed world, balanced and slippy. Artificial. I see beauty, not a little superficial. Colder wind.

Editors Note: When I agreed to collaborate with my friend Dr. James Duggan in order to explore a future where corporate taxation was transparent, I had no idea that it would ultimately result in me writing an introduction to my own blog piece on Ethnography Matters. To explain: at an event to share the results of our design fiction tax project (that I did with James) I ended up talking to Heather, and was pleasantly surprised to discover that she was one of the people behind this website. I was aware of Ethnography Matters because of citing Laura Forlano’s posts while writing about ‘anticipatory ethnography‘ for EPIC. Through the wonder of serendipity, that citation, the collaboration with James, and the conversation with Heather have led to this introductory paragraph being tapped out on my keyboard. Amazing! This seems like the best place to say a massive thank you to the Ethnography Matters team for their extensive and friendly support throughout the process. Also a massive thank you to Rob, Ding and Dhruv, who contributed posts. Hopefully what we’ve collectively written will be of use or interest, or act as some kind of stimulus to provoke new insights about ethnography. So long, and thanks for all the fish.

This post is part of the Post Disciplinary Ethnography Edition based on work done at the HighWire Centre for Doctoral Training and curated by Joseph Lindley. The other articles in the series are “What on Earth is Post Disciplinary Ethnography?“, “What’s the matter with Ethnography?“, “Everybody’s an Ethnographer!” and “Don’t Panic: the smart city is here”.

Design fiction is what I do. I’ve immersed myself in it for the last 3 years, and it is the subject of my doctoral thesis. I’ve explored it by adopting a ‘research through design‘ approach, which in essence means I’ve been ‘researching design fiction by doing design fiction’. It also means I get to be playful, which suits me fine. The ‘doing’ part of design fiction can be great fun (arguably it’s an integral part of getting design fictions right) and this has made my PhD experience an absolute blast. Of course there has been a fair amount of reading and desk-based research too, but for the most part I have been doing practical experiments with this extremely flexible approach to speculating about the future. One of the many insights coming out of my research is that design fiction achieves many of the same things that design ethnography does. Furthermore, it achieves those by leveraging some of the same properties of the world that design ethnography does. Design fiction can easily be adapted to play an important role in virtually any kind of research project.

But what is design fiction? The generally accepted definition of design fiction is the ‘intentional use of diegetic prototypes to suspend disbelief about change‘. That’s a bit of a jargony mouthful. With the most jargony part being the word ‘diegetic‘. Diegetic is the adjective from the noun ‘diegesis’, and diegesis is derived from ancient Greek philosophy. The concept is fiendishly deep and complex, so properly ‘getting’ it is pretty damn hard (and, if I’m honest, probably beyond my modest cognitive capacity). For the purposes of design fiction, however, it can be taken to simply mean ‘story world’. So if we put it like that, design fiction is really quite simple: it’s about incorporating design concepts into story worlds. But why would you join together a design concept and a story world, why put a prototype inside a fictional world, what’s wrong with this world? Well, it’s about the power of situativity, the depth of insight that emerges when action and context are considered together and with equal importance. And this is where the similarity between design fiction and ethnography can be drawn. The combination of design provocation and context is design fiction’s unique selling point (even if it is all just ‘made up’). It differs from traditional notions of fiction in that it tells situations rather than stories. And it differs from normal views of design, in that the designs are only of consequence when considered in terms of the (made up) situations they’re placed within.