Tag Archives: mixed-methods

Five Mixed Methods for Research in the (Big) Data Age


In this final post for The Person in the (Big) Data edition of Ethnography Matters, we provide a collection of five mixed methods used by researchers to shine a light on the people behind the massive streams of data produced by online behavior. These research methods draw on a variety of digital and traditional techniques, but they share one thing in common: they are aimed at discovering stories. As Tricia Wang wrote on EM back in 2013, ‘Big Data needs Thick Data’: ‘Big Data delivers numbers; thick data delivers stories. Big data relies on machine learning; thick data relies on human learning.’ In the methods outlined below, researchers describe how they have made the most of digital data using innovative methods that uncover the meaning, the context and the stories behind the data. In the end, this is still the critical piece for researchers trying to understand the moment in which we are living. Or, put differently, the ways in which we may want to live but are often prevented from living by a system that sometimes reduces the human experience rather than enabling its flourishing. – HF, Ed.

1. Real-time Audience Feedback: The Democratic Reflection Method

Democratic Reflection is a new methodological tool for researching the real-time responses of citizens to televised media content, developed by a team of researchers from the University of Leeds and the Open University in the UK as part of a larger research project on TV election debates. The research for the project began by developing an inductive understanding of what people need from TV election debates in order to perform their role as democratic citizens. Drawing on focus groups with a diverse range of participants, the research identified five key demands — or ‘democratic entitlements’ — that participants felt debates and the political actors involved in them should meet. Participants felt entitled to be: (1) addressed as rational and independent decision makers, (2) given the information needed to make considered political judgements, (3) included in and engaged by the debates, (4) recognised and represented by the political leaders, and (5) provided with meaningful choices that allow them to make a difference politically. In the next phase of the research, the team developed a new web-based app (accessible via mobile phones, tablets, and laptops) that allows viewers to respond to televised debates in real time and evaluate them using a range of twenty statements based on the five democratic entitlements. An experiment using the Democratic Reflection app was conducted with a panel of 242 participants during the first debate of the 2015 UK General Election, generating a dataset of over 50,000 responses. Analysis of the data provides a valuable new way to understand how viewers respond to election debates: we can explore general patterns of responses, compare different individuals and groups, track changes over time, and examine how specific moments and performances during the debates may relate to particular responses. – Giles Moss

Read More… Five Mixed Methods for Research in the (Big) Data Age
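
For readers who want to work with this kind of real-time response data, here is a minimal sketch of one way to aggregate it, assuming a hypothetical long-format export with participant, timestamp, statement, and entitlement columns. It illustrates the idea, not the project’s actual tooling.

```python
# A sketch of aggregating real-time debate responses into per-minute
# counts per democratic entitlement. The CSV schema is hypothetical.
import pandas as pd

responses = pd.read_csv(
    "responses.csv",            # hypothetical export: one row per button press
    parse_dates=["timestamp"],  # assumed columns: participant_id, timestamp,
)                               # statement, entitlement

per_minute = (
    responses
    .set_index("timestamp")
    .groupby("entitlement")
    .resample("1min")
    .size()
    .unstack(level="entitlement", fill_value=0)
)

# The minute of peak activity for each entitlement points to debate
# moments worth examining qualitatively.
print(per_minute.idxmax())
```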

Trace Interviews Step-By-Step



In this penultimate post for The Person in the (Big) Data edition of EM, Elizabeth Dubois @lizdubois provides a step-by-step account of how the trace interviewing process works. Trace interviewing is a method used to elicit a person’s stories about why they made particular traces on social media platforms, and is a wonderful way of getting at the stories underlying the data. Elizabeth’s step-by-step will hopefully help others wanting to use a similar method in their own interviews!

Social media data and other digital traces we leave as we navigate the web offer incredible opportunity for discovery within the social sciences. I am going to take you step by step through the process of trace interviewing – an approach that helps researchers gain richly detailed insights about the social context of that digital data. For those interested in the why as well as the how, Heather Ford and I talk a lot about why we think the method is important and what types of things it can offer researchers (such as validity checks, information about social context, opportunities to join data sets from various platforms) in our paper.

The Study
I want to figure out how people who influence other people on political issues choose their channels of communication (and the impact of those choices). The only way to understand these decisions and their impacts, I think, is a mixed-methods approach. So, I draw on Twitter data for content and network analysis, an online survey and in-depth trace interviews. You can read more about the full work here.
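
As an illustration (not drawn from the study itself), here is a minimal sketch of what the network-analysis step can look like: building a directed mention network from already-collected tweets. The file name and JSON fields are assumptions about the collection format.

```python
# A sketch of building a directed mention network from collected tweets.
# The file name and JSON fields are assumptions (classic Twitter API shape).
import json
import networkx as nx

G = nx.DiGraph()
with open("cdnpoli_tweets.jsonl") as f:  # hypothetical: one tweet per line
    for line in f:
        tweet = json.loads(line)
        author = tweet["user"]["screen_name"]
        for mention in tweet.get("entities", {}).get("user_mentions", []):
            target = mention["screen_name"]
            # Weight edges by how often one account mentions another.
            if G.has_edge(author, target):
                G[author][target]["weight"] += 1
            else:
                G.add_edge(author, target, weight=1)

# Weighted in-degree is one crude proxy for who attracts attention;
# interviewees might then be sampled from across this distribution.
ranked = sorted(G.in_degree(weight="weight"), key=lambda nd: nd[1], reverse=True)
print(ranked[:10])
```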

Trace Interviews
Hey, so great to finally meet you in person. Welcome!
By the time I got to the interview stage, my interviewee and I already knew quite a lot about each other. They had filled out a survey, they knew I had found them because they use the #CDNpoli hashtag, and they had read a project description and signed a consent form in advance.

It was important to form a relationship with my participants well in advance because I needed permission to collect their data. Sometimes trace data is publicly available (for example, tweets made or the list of accounts a Twitter user follows). But even when it is publicly available, I tend to think that giving your participants a heads-up that you have collected or will collect data specifically about them is a good call. The fact is, people don’t always understand what making a public tweet means.

Read More… Trace Interviews Step-By-Step

Democratic Reflection: Evaluating Real-Time Citizen Responses to Media Content



What has always impressed me about this next method for ‘The Person in the (Big) Data‘ series is the way in which research participants were able to develop their own ideals for democratic citizenship, which are then used to evaluate politicians. Giles Moss discusses the evolution of the app through its various iterations and highlights the value of the data it generates for further research. This grounded, value-driven application is an inspiration for people-centred research, and we look forward to more of the same from Giles and the team he worked with on this!

Democratic Reflection is a web app that measures the real-time responses of audiences to media content. The app was developed by a team of researchers from the Open University and the University of Leeds in the UK, as part of a research project funded by the EPSRC, to explore how citizens respond to and evaluate televised election debates (Coleman, Buckingham Shum, De Liddo, Moss, Plüss & Wilson 2014).[1] Accessing the web app via a second screen, research participants are asked to watch live television programming and use the app to evaluate the programme by selecting from a range of twenty predefined statements. The statements are designed to capture key capabilities of democratic citizenship, allowing us to analyse how viewers evaluate media content in relation to their needs as democratic citizens rather than just media consumers. In this post, I describe how we developed Democratic Reflection and what we hope to learn from the data the app generates.

Of course, we’re not the first researchers to develop a technology to measure real-time audience responses to media content. As far back as the 1930s, Paul Lazarsfeld and Frank Stanton developed an instrument called the Lazarsfeld-Stanton Program Analyzer, with which research participants could indicate whether they liked or disliked media content and have their inputs recorded in real time (Levy 1982). More sophisticated variants of the Program Analyzer followed. The Ontario Educational Communication Authority and Children’s Television Workshop created a ‘Program Evaluation Analysis Computer’, which had sixteen buttons with labels that could be altered to include new measures, and the RD Percy Company of Seattle developed VOXBOX, which allowed viewers to respond to content by indicating whether they thought it was ‘Funny’, ‘Unbelievable’, and so on (Levy 1982: 36-37). More recently, Boydstun, Glazier, Pietryka, & Resnik (2014) developed a mobile app to capture real-time responses of citizens to the first US presidential debate in 2012, offering viewers four responses: ‘Agree’, ‘Disagree’, ‘Spin’, and ‘Dodge’.

Democratic Reflection fits into this tradition of real-time response to media content, but it focuses on analysing how viewers evaluate televised election debates in terms of their communicative needs as democratic citizens. In other words, we designed the app not just to explore whether people liked or disliked what they were watching, or agreed or disagreed with it, but how media content related to their more fundamental capabilities as democratic citizens. Our first task, therefore, was to identify the democratic capabilities that media content, and more specifically televised election debates, could affect.

Read More… Democratic Reflection: Evaluating Real-Time Citizen Responses to Media Content

Algorithmic Intelligence? Reconstructing Citizenship through Digital Methods



In the next post for ‘The Person in the (Big) Data‘ edition, Chris Birchall @birchallchris talks us through a variety of methods – big, small and mixed – that he used to study citizenship in the UK. Using some of the dominant tools for studying large data sources in one part of the study, Chris realised that the tools themselves had a significant impact on what can be (and is being) discovered, and that the results differ markedly from the findings reached through deeper, mixed-methods analysis. In this post, Chris asks important questions about whether big data research tools are creating some of the conditions of citizenship today and what, exactly, deeper, more nuanced analysis can tell us.

People talk about politics online in many different ways and for many different purposes. The way that researchers analyse and understand such conversation can influence the way that we depict public political opinion and citizenship. In two recent projects I investigated the nature of this conversation and the forces that influence it, as well as the networks, spaces and resources that link that talk to political action. In doing so, I encountered a methodological rift in which careful, manual, time-consuming approaches produce different types of conclusions from the big-data-driven approaches that are widespread in the commercial social media analytics industry. Both of these approaches could be framed as illustrations of human behaviour on the internet, but their differences show that the way we embrace big data or digital methods influences the understanding of digital publics and citizenship that we gain from the translation of mass online data.

My recently submitted PhD study investigated online public political conversation in the UK. Drawing on the work of previous scholars who have focussed on the deliberative online public sphere (such as Coleman and Gotze, 2001; Coleman and Moss, 2012; Mutz, 2006; Wright and Street, 2007; Graham, 2012), the study acknowledged the importance of interpersonal exchange between participants and exposure to diverse and opposing viewpoints in the formation of preferences and informed opinion. My initial motivation was to ask how interface design might influence people as they talk about politics in online spaces, but this required an examination of the more human, less technologically deterministic factors that are also, and often more significantly, involved in political expression.

Over the course of the study it became obvious that the methodology used to investigate these concepts influences the insight obtained; something that many researchers have discussed in the context of digital methods within social science (Baym, 2013; Boyd and Crawford, 2012; Clough et al., 2015; Gitelman and Jackson, 2013; Kitchin and Lauriault, 2014; Kitchin, 2014; Manovich, 2011; Van Dijck, 2014). Technologically mediated questions can be answered through technology-centric methods to give technologically focussed answers, while questions involving human nature, motivation and interaction can be answered by qualitative, human-centred methods to provide human-centred answers. These approaches represent the divide between the large-scale, quantitative analysis of big data methods and small-scale qualitative approaches. To address this issue, I employed a methodology designed to combine the two through directed iterations of analysis that were initially large scale and quantitative, but increasingly small scale and qualitative.

Read More… Algorithmic Intelligence? Reconstructing Citizenship through Digital Methods
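
To make that funnel concrete, here is a heavily simplified sketch of one such iteration, moving from large-scale clustering to a small qualitative sample. The clustering choices and placeholder corpus are illustrative assumptions, not a reconstruction of the study’s actual pipeline.

```python
# A sketch of one iteration of a large-to-small analysis funnel:
# cluster a corpus quantitatively, then pull a small sample from each
# cluster for close qualitative reading. The corpus is a placeholder.
import random

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [f"placeholder post {i} about an online political topic" for i in range(200)]

vectors = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(vectors)

by_cluster = {}
for post, label in zip(posts, labels):
    by_cluster.setdefault(label, []).append(post)

# The next, more qualitative iteration starts from these close readings.
sample = {label: random.sample(members, min(5, len(members)))
          for label, members in by_cluster.items()}
print({label: len(members) for label, members in by_cluster.items()})
```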

August 2013: Ethnographies of Objects


This month’s edition is co-edited by CW Anderson (@chanders), Juliette De Maeyer (@juliettedm) and Heather Ford (@hfordsa). The three of us met in June for the ICA preconference entitled ‘Objects of Journalism’ organised by Chris and Juliette. Over the course of the day, we heard fascinating stories of insights garnered through a focus on the objects, tools and spaces surrounding and interspersed with the business and practice of newsmaking: about faked photographs through the ages, about the ways in which news app designers think about news when designing apps for mobile devices and tablets, and about the evolution of the ways in which news room spaces were designed. We also heard rumblings – rarely fully articulated – that a focus on objects is controversial in the social sciences. In this August edition of Ethnography Matters, we offer a selection of objects from the conference as well as from an open call to contribute and hope that it sparks a conversation started by a single question: what can we gain from an ethnography of objects – especially in the fields of technology, media and journalism research?

"Hardware"

Hardware. Image by Cover.69 on Flickr CC BY

Why an *ethnography* of objects?

As well as the important studies of body snatching, identity tourism, and transglobal knowledge networks, let us also attend ethnographically to the plugs, settings, sizes, and other profoundly mundane aspects of cyberspace, in some of the same ways we might parse a telephone book.

– Susan Leigh Star, 1999

Susan Leigh Star, in ‘The ethnography of infrastructure‘ noted that we need to go beyond studies of identity in cyberspace and networks to (also) look at the often invisible infrastructure that surfaces important issues around group formation, justice and change. Ethnography is a useful way of studying infrastructure, she writes, because of its strengths of ‘surfacing silenced voices, juggling disparate meanings, and understanding the gap between words and deeds’.

In her work studying archives of meetings of the World Health Organization and old newspapers and law books concerning cases of racial recategorization under apartheid in South Africa, Star ‘brought an ethnographic sensibility to data collection and analysis: an idea that people make meanings based on their circumstances, and that these meanings would be inscribed into their judgements about the built information environment’.

Read More… August 2013: Ethnographies of Objects

Tweeting Minarets: A personal perspective of joining methodologies


David Ayman Shamma

Editor’s note: In the last post of the ‘Ethnomining’ edition, David Ayman Shamma @ayman gives a personal perspective on mixed methods. Based on the example of data produced by the people of Egypt who stood up against the then Egyptian president and his party in 2011, he advocates for a comprehensive approach to data analysis, beyond the “Big Data vs the World” situation we seem to have reached. In doing so, his perspective complements the previous posts by showing the richness of ethnographic data as a way to deepen quantitative findings.
David Ayman Shamma is a research scientist in the Internet Experiences group at Yahoo! Research, where he designs and evaluates systems for multimedia-mediated communication.

____________________________________________________________________________

There’s a problem we face now: the so-called Big Data world has created an overshadowing world of numerical data analysis, leaving everyone else to try to find a coined niche like “small data” or “long data” or “sideways data” or the like. The silos and fragmentation are overwhelming. But really, it’s all just data. Regardless of its form or flavor, there are people who are experts at number-crunching data and people who are experts at fieldwork data. Unfortunately, the speed at which data science moves is attractive, and that’s part of the problem; we don’t get the full picture at speed, and everyone is racing to produce answers first.

A few months ago, in a conversation with a colleague, he told me “you don’t know what you don’t know, especially when it’s not there.” We were looking for a way to automatically surface a community of photographers on Flickr who didn’t annotate their photos. They didn’t use any titles or tags or any annotations whatsoever. But they were clearly a strong and prolific community. If there was some way to automatically identify them, then we could help connect them.

Now, finding metrics for social engagement in unannotated data is not an impossible task when provided with some signal in the data that has some correlation, statistical or otherwise, to the effect you’re trying to surface. But in some cases, it’s just not possible. What you need is simply not there; therein lies the problem. In other cases, it’s much harder to surface features when you don’t know what they look like.
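
To make the problem concrete: where interaction traces (favourites, comments) exist even though annotations do not, a graph-based sketch like the one below could surface candidate groups. The edge list and method are illustrative assumptions, not Flickr’s or Yahoo!’s actual approach.

```python
# A sketch of surfacing candidate communities from interaction traces
# (who favourites or comments on whose photos), with no reliance on
# titles or tags. The edge list is invented for illustration.
import networkx as nx
from networkx.algorithms import community

# (user_a, user_b) pairs meaning "a interacted with b's photos".
interactions = [
    ("ana", "bo"), ("bo", "ana"), ("ana", "cy"), ("cy", "bo"),
    ("dee", "ed"), ("ed", "dee"), ("dee", "flo"), ("flo", "ed"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Greedy modularity maximisation gives a first cut at community structure;
# each group is a set of accounts that interact mostly with each other.
for i, members in enumerate(community.greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(members)}")
```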

When you have a lot of data, finding an unexplainable prediction through algorithmic statistics becomes easier. But it doesn’t explain why, and it doesn’t always work.

Enter Ethnography to answer the why and find out what things might look like—surfacing findings in the age of big data. When I was invited to write a post on Ethnography Matters, I decided to illustrate this through a personally motivated example.

In late January 2011, the people of Egypt stood up against then-President Hosni Mubarak and his National Democratic Party. They wanted employment, a fair government, and an end to the 30-year-long emergency law which had removed most of their civil rights. Undoubtedly, you read about it somewhere. At the time, my mother was in Cairo visiting her 100+ year old mother, so this left me glued to the only source of news I could find—a rather buggy Al Jazeera video stream. U.S. news agencies were slow to start some sparse coverage. Somewhere in-between, it was burning up on Twitter.


A visualization of Twitter activity directed towards Tahrir by aymanshamma
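
A skeletal version of a visualization like the one above might be produced along the following lines; the file name, fields, and timestamp format are assumptions for illustration.

```python
# A sketch of a per-hour tweet-volume plot like the one above.
# The file name, fields, and ISO timestamp format are assumptions.
import json

import matplotlib.pyplot as plt
import pandas as pd

timestamps = []
with open("jan2011_tweets.jsonl") as f:  # hypothetical: one tweet per line
    for line in f:
        tweet = json.loads(line)
        if "tahrir" in tweet["text"].lower():
            timestamps.append(tweet["created_at"])  # assumed ISO timestamps

counts = pd.Series(1, index=pd.to_datetime(timestamps)).resample("1h").sum()
counts.plot(title="Tweets mentioning Tahrir, per hour")
plt.show()
```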

Read More… Tweeting Minarets: A personal perspective of joining methodologies

The Ethnographer’s Complete Guide to Big Data: Conclusions (part 3 of 3)


Statistics House, Kampala, Uganda

As promised here is the final installment of my short series about ‘big data.’ I started out by declaring myself a ‘small data’ person. My intention was to be a bit provocative by suggesting that forgoing or limiting data collection might sometimes be a legitimate or even laudable choice. That contrast was perhaps overdrawn. It seemed to suggest that ‘big data’ and ethnographic approaches were at the opposing ends of some continuum. ‘How much’ is not necessarily a very interesting or relevant question for an ethnographer, but who among us hasn’t done some counting and declared some quantity (1000s of pages of notes, hundreds of days in the field, hours of audio or video recordings) that is meant to impress, to indicate thoroughness, depth, effort, and seriousness?

So the game of numbers is one we all probably play from time to time.

Now to answer my few remaining questions:

1) How might big data be part of projects that are primarily ethnographic in approach?

My first exposure to ‘big data’ came from a student who managed to gain access to a truly massive collection of CDR (call detail record) data from a phone network in Rwanda. Josh Blumenstock was able to combine CDR data with results from a survey he designed and carried out with a research team in Rwanda to gain insights into the demographics of phone owners, within-country migration patterns, and reciprocity and risk management. I was terribly excited by the possibilities of what could be found in that kind of data, since I had been examining mobile phone ownership and gifting in nearby Uganda. I wondered how larger patterns in the data might reflect (or raise questions about) what I was coming to see at the micro-level about phone ownership and sharing, especially its gendered dimensions. Indeed, Josh’s work showed a strong gender skew in ownership, with far more men than women owning phones, and the women who did own phones being more affluent and better educated. My work explained the marital and other family dynamics that put far fewer phones into the hands of women than men.
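
In mechanical terms, that combination is a join between trace-derived features and survey responses, roughly as sketched below. Every file and column name here is a stand-in, not the actual schema of Josh’s data.

```python
# A sketch of joining CDR-derived features to survey responses.
# All file and column names are hypothetical stand-ins.
import pandas as pd

cdr = pd.read_csv("cdr_features.csv")   # hashed_id, calls_per_week, n_contacts
survey = pd.read_csv("survey.csv")      # hashed_id, gender, education

combined = survey.merge(cdr, on="hashed_id", how="inner")

# Trace-level patterns can now be read against demographics, e.g. the
# gender skew in phone ownership and use discussed above.
print(combined.groupby("gender")[["calls_per_week", "n_contacts"]].mean())
```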

However, combining these two approaches is more a standard mixed methods approach than anything new. Is something more innovative than that possible?

Read More… The Ethnographer’s Complete Guide to Big Data: Conclusions (part 3 of 3)

The Ethnographer’s Complete Guide to Big Data: Answers (part 2 of 3)


Statistics House, Kampala, Uganda

I’ve come away from the DataEdge conference with some answers…and some more questions. While I don’t intend to recap the conference itself, I do want to take advantage of time spent with this diverse group of participants and their varied perspectives to try to offer the bigger-picture sense I’m starting to develop of the big data/data analytics trend.

The idea that big data might usher in a new era of automatic research, and with it researcher de-skilling, or that it would render the scientific method obsolete, did not prove to be a popular sentiment (*phew* sigh of relief). The point that data isn’t self-explanatory, that it needs to be interpreted, was reasserted many times during the conference by people who occupy very different roles in this data science world. No need to panic; let’s move along to some answers to those questions I raised in part I.

What is big data? Ok, this was not a question I raised going into the conference, but I should have. Perhaps unsurprisingly there wasn’t a clear consensus or a consistent definition that carried through the talks. I found myself at certain points wondering, “are we still talking about ‘big data’ or are we just talking about your standard, garden-variety statistics now?” At any rate, this confusion was productive and led me to identify three things that appear to be new in this discussion of data, statistics, and analysis.

Read More… The Ethnographer’s Complete Guide to Big Data: Answers (part 2 of 3)

The Ethnographer’s Complete Guide to Big Data: Small Data People in a Big Data World (part 1 of 3)


Statistics House, Kampala, Uganda

Part I: Questions

Research is hard to do. Much of it is left to the specialists, who carry on in school for 4-10 more years after completing a first degree to acquire the proper training. It’s not only hard to do, it’s also hard to read, understand, and extrapolate from. Mass media coverage of science and social research is rife with misinterpretations – overgeneralizations, glossing over research limitations, failing to adequately consider the characteristics of subject populations. Does more data or “big data” in any way, shape, or form alter this state of affairs? Is it the case, as Wired magazine (provocatively…arrogantly…and ignorantly) suggested, that “the data deluge makes the scientific method obsolete” and “with enough data, the numbers speak for themselves”?

Read More… The Ethnographer’s Complete Guide to Big Data: Small Data People in a Big Data World (part 1 of 3)

Qualitative research is not research at all?


Image of building with torn sign reading "Rant"

Rant this way ~ Photo by Nesster, CC BY-SA

Heather pointed out these comments by Bob Garfield from a recent broadcast of On the Media (“Sentiment Analysis Reveals How the World is Feeling“):

I’ve been arguing for years that qualitative research, focus groups and the like, are not research at all. They don’t generate data. It’s statistically insignificant, easily manipulated, and from my perspective just as likely to be exactly wrong as exactly right.

Garfield then adds:

But it seems to me that what you’re dealing with is something that deals with all of my objections, because you’ve got the world’s largest focus group.

Sigh. This is wrong on so many levels, and anyone who is interested in ethnography already knows why, but just to touch on some of the problems:

  • Qualitative research can generate data. The tweets used in Johan Bollen‘s [1] sentiment analysis (the subject of this OTM episode), interview transcripts, field notes, photos, audio recordings, visual recordings: all data. Some research within the qualitative tradition also generates numeric data [2] by, for example, calculating measures of intercoder reliability (a quick sketch follows this list), or in the analysis of card sorting tasks.
  • There is a lot more to statistical testing than statistical significance (and some controversy among statisticians about overuse of significance testing). There is also more to quantitative analysis than statistical testing. Bayesian inference, for example, could be thought of as quantitative analysis that is not necessarily statistical testing.
  • Similarly, qualitative research cannot be reduced to “focus groups and the like”. The purposes, strengths and weaknesses of focus groups are very different from those of other qualitative methods such as [participant-]observation and one-on-one interviews [3].
  • Using statistical testing as a marker for what is or is not research omits work that has formed the backbone of the sciences such as classical experimentation, disconfirmation by example, comparative methods for creating typologies and analyzing artifacts, etc.
  • “Easily manipulated”? Yup, research findings in general can be manipulated. Statistical testing is really easy to manipulate.
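
On the intercoder reliability point above: computing a measure like Cohen’s kappa takes only a few lines once two coders’ labels are in hand. A minimal sketch, with label lists invented for illustration:

```python
# A sketch of computing Cohen's kappa for two coders' labels on the
# same items. The label lists are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["pos", "neg", "neg", "pos", "neutral", "pos", "neg"]
coder_b = ["pos", "neg", "pos", "pos", "neutral", "pos", "neg"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance
```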

Garfield’s statement also suggests either ignorance or dismissal of mixed methods research, which, I would argue, is increasingly becoming a gold standard for research in some fields, such as public health.

There’s a hint at why mixed methods have become so important in public health research in Garfield’s comment about “the world’s largest focus group.” Bollen’s use of a large collection of corpora is well-suited to his purposes, but other purposes can require different or additional kinds of work.

Let’s say I do a giant public health survey. If a minority in my sample doesn’t interpret a word or phrase in the same way that the majority interprets it, if some questions make no sense at all from their perspective, if people writing the survey have no idea what minority members’ concerns or experiences even are much less how they’re relevant to health, then the survey results will be meaningless for that social group.

There is no such thing as a survey that is not culturally informed. Without ethnographic work and awareness, surveys, public health information and campaigns, etc., will likely be culturally informed by those who are most powerful and/or in the majority. Qualitative research is indispensable for addressing structural health inequities affecting the less powerful. Should ethnographic work focused on these inequities be patted on the head and assured that it’s nice, but it’s not-really-research? Fortunately, the NIH does not think so.

Sometimes I wonder if people miss how widespread and useful qualitative work is because it can be invisible (see Tricia‘s related post about the ‘Invisibility of Ethnography‘). A couple recent episodes of On the Media may clarify the kind of research that Garfield is dismissing here, while at the same time (perhaps unknowingly?) depending on it.

On Nov. 4th, Garfield spoke with social media researcher danah boyd about “Parents helping kids lie online.” The paper [4] behind this interview presents quantitative summaries of survey data — “real” research, perhaps, to Garfield.  But hmm, how and why was this survey designed?

Read More… Qualitative research is not research at all?