Democratic Reflection is a web app that measures the real-time responses of audiences to media content. The app was developed by a team of researchers from the Open University and the University of Leeds in the UK, as part of a research project funded by the EPSRC, to explore how citizens respond to and evaluate televised election debates (Coleman, Buckingham Shum, De Liddo, Moss, Plüss & Wilson 2014). Accessing the web app via a second screen, research participants are asked to watch live television programming and use the app to evaluate the programme by selecting from a range of twenty predefined statements. The statements are designed to capture key capabilities of democratic citizenship, allowing us to analyse how viewers evaluate media content in relation to their needs as democratic citizens rather than just media consumers. In this post, I describe how we developed Democratic Reflection and what we hope to learn from the data the app generates.
Of course, we’re not the first researchers to develop a technology to measure real-time audience responses to media content. As far back as the 1930s, Paul Lazarsfeld and Frank Stanton developed an instrument called the Lazarsfeld-Stanton Program Analyzer, which allowed research participants to indicate whether they liked or disliked media content and recorded their inputs in real time (Levy 1982). More sophisticated variants of the Program Analyzer followed. The Ontario Educational Communication Authority and Children’s Television Workshop created a ‘Program Evaluation Analysis Computer’, which had sixteen buttons with labels that could be altered to include new measures, and the RD Percy Company of Seattle developed VOXBOX, which allowed viewers to respond to content by indicating whether they thought it was ‘Funny’, ‘Unbelievable’, and so on (Levy 1982: 36-37). More recently, Boydstun, Glazier, Pietryka, & Resnik (2014) developed a mobile app to capture real-time responses of citizens to the first US presidential debate in 2012, offering viewers four responses: ‘Agree’, ‘Disagree’, ‘Spin’, and ‘Dodge’.
Democratic Reflection fits into this tradition of real-time response to media content, but it focuses on analysing how viewers evaluate televised election debates in terms of their communicative needs as democratic citizens. In other words, we designed the app not just to explore whether people liked or disliked what they were watching or agreed or disagreed with it, but how media content related to their more fundamental capabilities as democratic citizens. Our first task, therefore, was to identify the democratic capabilities that media content and more specifically televised election debates could affect.
Democratic Capabilities & Entitlements
Our research on Democratic Reflection is framed by a general normative commitment to democracy and a belief that all citizens should be able to participate effectively in the political decisions that affect them. Following the philosopher Martha Nussbaum (2011: 34), we view this democratic capability as an entitlement, something which should be made available to all citizens as a matter of social justice. However, we didn’t want to assume before our research began that we knew exactly what citizens might need from media in order to realize this general capability, or what other more specific capabilities might be involved. Instead, we treated this as an empirical question, something to be determined through research rather than specified in advance.
To develop our inductive model of democratic capabilities and entitlements, we conducted a series of twelve focus groups, where we asked a diverse range of voters to reflect back on their experience of the televised debates in the 2010 UK General Election and consider how debates could and should be improved (Coleman & Moss 2016). The focus groups were open-ended and exploratory, allowing participants to respond to questions in their own terms and introduce new topics and themes. Furthermore, the focus groups were designed to promote critical reflection and deliberation among participants. We were conscious of the danger that a normative model of democratic needs derived from people’s existing experiences of political communication could be constrained by the realities and compromises of ‘actually-existing democracy’. Participants were therefore encouraged to think critically and imagine how political communication could be otherwise and to reflect on what is most important for democratic citizenship with other focus-group participants who — despite their other differences — share this political identity with them.
Analysis of the focus-group transcripts suggested there were a number of key capabilities of democratic citizenship that people felt televised debates could affect, either positively or negatively. Given their importance, these capabilities may be viewed as democratic entitlements, things that viewers as citizens are entitled to expect televised debates and those who participate in them to promote. There were five such entitlements. Citizens felt entitled to: (1) be addressed in ways that respect their rationality and independence as decision makers; (2) receive the types of information they need to be able to evaluate political claims and make informed decisions; (3) be part of and engaged by the debates as mediated political events; (4) be recognized and represented by the political leaders; and (5) have the kinds of choices available that allow them to make a difference to what happens in the political world.
Democratic Reflection App
Having identified the five capabilities and entitlements through our focus-group research, our next task was to devise a set of real-time responses that could measure the extent to which they were realized or not during the debates. We arrived at twenty response statements in total, four per capability (see Figure 1). For example, for the first entitlement (which refers to being addressed by political leaders in ways that respect the rationality and independence of citizens), the app included the following four statements: ‘s/he’s speaking to us honestly’, ‘s/he’s speaking fairly and to the point’, ‘s/he’s just saying what people want to hear’, and ‘s/he thinks we’re stupid’.
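To make the statement-to-entitlement mapping concrete, here is a minimal sketch of how the app’s responses could be keyed to the five entitlements. The structure and function names are hypothetical, not taken from the app’s actual codebase; only the four statements for the first entitlement are quoted from the text, and the remaining sixteen are omitted.

```python
# Illustrative data structure: each entitlement number maps to its four
# response statements (twenty statements in total across five entitlements).
STATEMENTS = {
    1: [  # respect for citizens' rationality and independence
        "s/he's speaking to us honestly",
        "s/he's speaking fairly and to the point",
        "s/he's just saying what people want to hear",
        "s/he thinks we're stupid",
    ],
    # 2: information needs, 3: engagement as mediated political events,
    # 4: recognition and representation, 5: meaningful political choice
    # (statements for entitlements 2-5 omitted here)
}

def entitlement_for(statement):
    """Return the entitlement number that a given response statement measures."""
    for entitlement, statements in STATEMENTS.items():
        if statement in statements:
            return entitlement
    raise KeyError(statement)
```

A lookup like this would let analysts aggregate individual button presses up to the level of the five entitlements, rather than treating the twenty statements as unrelated items.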
We ran an experiment using Democratic Reflection with a panel of 242 participants during the first televised debate in the UK 2015 General Election. Participants were asked to complete a survey before and after the experiment, which covered demographic information and included questions about their level of interest in politics, voting intention, and views related to the entitlements. During the experiment, participants were asked to watch the debate and to use the app to register responses by pressing the buttons that most closely corresponded with their views. A large dataset of over 50,000 responses was generated through the experiment.
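A dataset of this kind lends itself to simple temporal aggregation. The sketch below assumes (hypothetically; the project’s actual schema is not described here) that each button press is logged as a record of participant, seconds into the debate, and the statement selected, and shows how responses could be tallied per minute of the broadcast.

```python
from collections import Counter

# Hypothetical records: (participant_id, seconds_into_debate, statement).
responses = [
    (1, 12.4, "s/he's speaking to us honestly"),
    (2, 13.1, "s/he's just saying what people want to hear"),
    (1, 75.0, "s/he's speaking to us honestly"),
]

def counts_per_minute(records):
    """Tally how often each statement was pressed in each minute of the debate."""
    tally = Counter()
    for _participant, seconds, statement in records:
        tally[(int(seconds // 60), statement)] += 1
    return tally

tally = counts_per_minute(responses)
```

Aggregations like this are what make it possible to line up spikes in particular responses against specific moments and performances in the debate.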
Analysis of the dataset generated by the app promises to provide novel insights for media and political communication research. It is possible to analyse overall trends and patterns of response; to compare differences among individuals and groups of respondents (based on demographic information and their responses to questions in the pre- and post-experiment survey); and to examine how specific moments and performances during the debates are correlated with particular responses. We have designed an analytics interface to help visualize and investigate the data generated by the experiment (see Figure 2). We are currently analysing the data, and results from the research will be forthcoming. Ultimately, at a time when various new forms of digital data analysis seek to capture and know media users as consumers, we hope the data generated by Democratic Reflection and future iterations of it will help us to understand them better as citizens.
Boydstun, A., R. Glazier, M. Pietryka, and P. Resnik. 2014. ‘Real-Time Reactions to a 2012 Presidential Debate: A Method for Understanding Which Messages Matter’. Public Opinion Quarterly 78 (1): 330–43.
Coleman, S., and G. Moss. 2016. ‘Rethinking Election Debates: What Citizens Are Entitled to Expect’. The International Journal of Press/Politics 21 (1): 3–24.
Coleman, S., S. Buckingham Shum, A. de Liddo, G. Moss, B. Plüss and P. Wilson. 2014. ‘A Novel Method for Capturing Instant, Nuanced Audience Feedback to Televised Election Debates’. Election Debate Visualisation. Available at: http://edv-project.net/wp-content/uploads/2014/03/EDV-Briefing2014.04.pdf/.
Levy, M. R. 1982. ‘The Lazarsfeld-Stanton Program Analyzer: An Historical Note’. Journal of Communication 32 (4): 30–38.
Nussbaum, M. C. 2011. Creating Capabilities. Cambridge, MA: Harvard University Press.
The research team includes the following members: Professor Stephen Coleman, Dr Anna De Liddo, Dr Giles Moss, Dr Brian Plüss, and Dr Paul Wilson. The research project is supported by the Engineering & Physical Sciences Research Council [EPSRC Reference: EP/L003112/1].