Tag Archives: methods

Taking Stock



In this post for The Person in the (Big) Data edition of EM, we hear from Giorgia Aiello @giorgishka, who demonstrates the ways in which she used both digital and traditional methods to explore the people and practices that characterise the stock photography industry. Giorgia’s stories of photographers attempting to game the algorithms that determine which photographs will come out on top in a search for cheese are compelling and memorable, and they show just how important it is to develop an expanded idea of what ‘data’ is constituted by, even if the dominant discourse is more limited.

Image banks like Getty Images and Shutterstock that sell ready-to-use ‘stock’ photographs online have become the visual backbone of advertising, branding, publishing, and journalism. Also, daily exposure to stock images has increased exponentially with the rise of social networking and the generic visuals used in lifestyle articles and ‘clickbait’ posts. The stock imagery business has become a global industry through recent developments in e-commerce, copyright and social media (Glückler & Panitz, 2013).

However, stock images are most often overlooked rather than looked at—both by ‘ordinary’ people in the contexts of their everyday lives and by scholars, who have rarely taken an interest in this industry and genre in its own right. There are some notable exceptions, dating back to the ‘pre-Internet’ era of stock photography, like Paul Frosh’s work on the ‘visual content industry’ in the early 2000s or David Machin’s critical analysis of stock imagery as the ‘world’s visual language’ (Frosh, 2003; Machin, 2004). As a whole, and compared to other media and communication industries, research on online image banks and digital stock imagery is virtually uncharted territory.

Why, then, should stock images be ascribed any significance or power since people do not particularly pay attention to them? Stock images are not only the ‘wallpaper’ of consumer culture (Frosh, 2003 and 2013); they are also central to the ambient image environment that defines our visual world, which is now increasingly digital and global while also remaining very much analogue and local (just think of your own encounters with such imagery at your bank branch, at your dentist or beauty salon, or on billboards in city streets). Pre-produced images are the raw material for the world’s visual media. Read More… Taking Stock

Democratic Reflection: Evaluating Real-Time Citizen Responses to Media Content



What has always impressed me about this next method for ‘The Person in the (Big) Data‘ series is the way in which research participants were able to develop their own ideals for democratic citizenship, which were then used to evaluate politicians. Giles Moss discusses the evolution of the app through its various iterations and highlights the value of the data generated through its application for further research. This grounded, value-driven application is an inspiration for people-centred research and we look forward to more of the same from Giles and the team he worked with on this!

Democratic Reflection is a web app that measures the real-time responses of audiences to media content. The app was developed by a team of researchers from the Open University and the University of Leeds in the UK, as part of a research project funded by the EPSRC, to explore how citizens respond to and evaluate televised election debates (Coleman, Buckingham Shum, De Liddo, Moss, Plüss & Wilson 2014).[1] Accessing the web app via a second screen, research participants are asked to watch live television programming and use the app to evaluate the programme by selecting from a range of twenty predefined statements. The statements are designed to capture key capabilities of democratic citizenship, allowing us to analyse how viewers evaluate media content in relation to their needs as democratic citizens rather than just media consumers. In this post, I describe how we developed Democratic Reflection and what we hope to learn from the data the app generates.
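To give a concrete, purely hypothetical sense of the kind of data such an app could generate, the short Python sketch below models timestamped responses keyed to predefined statement IDs. The statement wordings, field names and log_response helper are illustrative assumptions, not Democratic Reflection’s actual data model.

# Hypothetical sketch of the kind of record a real-time response app could
# produce. Statement texts, field names and the helper function are
# illustrative assumptions, not Democratic Reflection's actual schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Suppose each of the twenty predefined statements has a numeric ID.
STATEMENTS = {
    1: "The speaker is giving me the information I need to make a decision.",
    2: "The speaker is treating the audience with respect.",
    # ... IDs 3-20 would follow in a real instrument.
}

@dataclass
class Response:
    participant_id: str   # anonymised participant identifier
    programme_id: str     # which broadcast is being watched
    statement_id: int     # which predefined statement was selected
    timestamp: str        # when the selection occurred (UTC, ISO 8601)

def log_response(participant_id: str, programme_id: str, statement_id: int) -> dict:
    """Create a timestamped response record ready to be stored for later analysis."""
    if statement_id not in STATEMENTS:
        raise ValueError(f"Unknown statement id: {statement_id}")
    record = Response(
        participant_id=participant_id,
        programme_id=programme_id,
        statement_id=statement_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

# Example: a participant selects statement 2 during a debate broadcast.
print(log_response("p-017", "leaders-debate", 2))

Records of this shape, aligned against the broadcast timeline, would support the kind of real-time analysis the post describes.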

Of course, we’re not the first researchers to develop a technology to measure real-time audience responses to media content. As far back as the 1930s, Paul Lazarsfeld and Frank Stanton developed an instrument called the Lazarsfeld-Stanton Program Analyzer, where research participants could indicate whether they liked or disliked media content and their inputs would be recorded in real time (Levy 1982). More sophisticated variants of the Program Analyzer followed. The Ontario Educational Communication Authority and Children’s Television Workshop created a ‘Program Evaluation Analysis Computer’, which had sixteen buttons with labels that could be altered to include new measures, and the RD Percy Company of Seattle developed VOXBOX, which allowed viewers to respond to content by indicating whether they thought it was ‘Funny’, ‘Unbelievable’, and so on (Levy 1982: 36-37). More recently, Boydstun, Glazier, Pietryka, & Resnik (2014) developed a mobile app to capture real-time responses of citizens to the first US presidential debate in 2012, offering viewers four responses: ‘Agree’, ‘Disagree’, ‘Spin’, and ‘Dodge’.

Democratic Reflection fits into this tradition of real-time response to media content, but it focuses on analysing how viewers evaluate televised election debates in terms of their communicative needs as democratic citizens. In other words, we designed the app not just to explore whether people liked or disliked what they were watching or agreed or disagreed with it, but how media content related to their more fundamental capabilities as democratic citizens. Our first task, therefore, was to identify the democratic capabilities that media content and more specifically televised election debates could affect. Read More… Democratic Reflection: Evaluating Real-Time Citizen Responses to Media Content

Algorithmic Intelligence? Reconstructing Citizenship through Digital Methods



In the next post for ‘The Person in the (Big) Data‘ edition, Chris Birchall @birchallchris talks us through a variety of methods – big, small and mixed – that he used to study citizenship in the UK. Using some of the dominant tools for studying large data sources in one part of the study, Chris realised that the tools used had a significant impact on what can be (and is being) discovered, and that this is quite different from the findings reached by deeper, mixed-methods analysis. In this post, Chris asks important questions about whether big data research tools are creating some of the conditions of citizenship today and what, exactly, deeper, more nuanced analysis can tell us.

People talk about politics online in many different ways and for many different purposes. The way that researchers analyse and understand such conversation can influence the way that we depict public political opinion and citizenship. In two recent projects I investigated the nature of this conversation and the forces that influence it, as well as the networks, spaces and resources that link that talk to political action. In doing so, I encountered a methodological rift in which careful, manual, time-consuming approaches produce different types of conclusions from the big-data-driven approaches that are widespread in the commercial social media analytics industry. Both of these approaches could be framed as an illustration of human behaviour on the internet, but their differences show that the way that we embrace big data or digital methods influences the understanding of digital publics and citizenship that we gain from the translation of mass online data.

My recently submitted PhD study investigated online public political conversation in the UK. Drawing on the work of previous scholars who have focussed on the deliberative online public sphere (such as Coleman and Gotze, 2001; Coleman and Moss, 2012; Mutz, 2006; Wright and Street, 2007; Graham, 2012), the study acknowledged the importance of interpersonal exchange between participants and exposure to diverse and opposing viewpoints in the formation of preferences and informed opinion. My initial motivation was to ask how interface design might influence people as they talk about politics in online spaces, but this required an examination of the more human, less technologically determinate factors that are also, and often more significantly, involved in political expression.

Over the course of the study it became obvious that the methodology used to investigate these concepts influences the insight obtained, something that many researchers have discussed in the context of digital methods within social science (Baym, 2013; Boyd and Crawford, 2012; Clough et al., 2015; Gitelman and Jackson, 2013; Kitchin and Lauriault, 2014; Kitchin, 2014; Manovich, 2011; Van Dijck, 2014). Technologically mediated questions can be answered through technology-centric methods to give technologically focussed answers, while questions involving human nature, motivation and interaction can be answered by qualitative, human-centred methods in order to provide human-centred answers. These approaches represent the divide between the large-scale, quantitative analysis of big data methods and small-scale qualitative approaches. In order to address this issue, I employed a methodology designed to combine these approaches through directed iterations of analysis that were initially large scale and quantitative, but increasingly small scale and qualitative. Read More… Algorithmic Intelligence? Reconstructing Citizenship through Digital Methods
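Only as an illustration (the post does not describe the actual tooling used), an iterative funnel that begins with large-scale quantitative filtering and narrows to a small sample for qualitative close reading might look something like the Python sketch below; the file name, column names, keywords and sample size are all assumptions.

# Hypothetical sketch of an analysis funnel that starts large-scale and
# quantitative and narrows to a small qualitative sample for close reading.
# The data format, keywords and sample size are illustrative assumptions.
import csv
import random
from collections import Counter

def load_posts(path: str) -> list[dict]:
    """Load a CSV export of online political talk (assumed columns: author, text)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def quantitative_pass(posts: list[dict], keywords: list[str]) -> list[dict]:
    """Large-scale pass: keep only posts mentioning any of the topic keywords."""
    kws = [k.lower() for k in keywords]
    return [p for p in posts if any(k in p["text"].lower() for k in kws)]

def describe(posts: list[dict]) -> Counter:
    """Quantitative overview: how often each author appears among on-topic posts."""
    return Counter(p["author"] for p in posts)

def qualitative_sample(posts: list[dict], n: int = 50) -> list[dict]:
    """Small-scale pass: a reproducible random sample for manual, interpretive coding."""
    random.seed(42)
    return random.sample(posts, min(n, len(posts)))

if __name__ == "__main__":
    posts = load_posts("political_talk.csv")
    on_topic = quantitative_pass(posts, ["NHS", "austerity", "immigration"])
    print(describe(on_topic).most_common(10))   # quantitative overview
    for post in qualitative_sample(on_topic):   # close reading begins here
        print(post["author"], "-", post["text"][:80])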

Thinking with selfies


Kath Albury @KathAlbury continues our edition of ‘The Person in the (Big) Data‘ by talking about her research into young people and sexting. Instead of educating those who worked with young people about social media and the digital, Kath developed an innovative Selfie Workshop with colleagues where she got participants to produce and reflect on their own selfies through the lens of introductory media theory. Instead of telling educators about sexting and social media representation, Kath facilitated an experience in which they would be directly involved. This kind of embodied learning is a wonderful way of generating new data about the social implications of mediation and offers the opportunity to engage directly with and empower the community under study.

Having undertaken a range of research investigations into ‘hot button’ issues such as Australian pornography producers and consumers, young people’s use of social media for sexual health information, young people’s responses to sexting, and selfie cultures, I am regularly invited to address sexual health promotion professionals (including clinical staff and teachers) seeking to better understand ‘what media does to young people’.

In the process, I have become increasingly concerned that while online and mobile media practices are now ubiquitous (if not universal) elements of young Australians’ everyday sexual cultures, many sexuality education and health promotion professionals seem to have had little (or no) access to foundational training in media and communications technologies and practices.

Consequently, the Rethinking Media and Sexuality Education project sought to investigate the desirability and utility of providing sexuality educators and health promotion professionals with an introduction to the theoretical and methodological frameworks underpinning my research on media and sexuality.

Rather than discussing young people’s media practices directly, I shared some frameworks for thinking critically about media, gender and sexuality without seeking to quantify ‘impact’ or ‘effects’, and invited participation in a series of exercises adapted from the Selfie Course, with the aim of offering a prototype toolkit that might be applied across different professional settings and contexts.

How do selfies communicate a desire for intimacy? Participants in the Selfie Workshop are tasked with creating selfies for different audiences and contexts. (Pic used with permission from creator.)

The workshop introduced participants to a range of media theories (including Stuart Hall’s ‘encoding/decoding’ model), followed by hands-on exercises drawn from the Selfie Course, particularly the Sexuality, dating and gender module, which I co-authored with colleagues Fatima Aziz and Magdalena Olszanowski. In the context of the Rethinking Media workshop, I briefly acknowledged the stereotypical ‘duckface selfie’, then moved on to introduce other selfie genres that were clearly read as an expression of ‘identity’ without revealing the photographer’s face. These included the pelfie (a pet selfie), a range of body part selfies (such as the foot selfie, aka felfie), and the shelfie – a self-portrait featuring the contents of the photographer’s bookshelf.

The first activity was adapted from ‘The Faceless Selfie’, which my Selfie Researcher Network colleagues and I described as an exercise exploring the ways that “people navigate the ubiquity of online surveillance while simultaneously wishing to connect with others on social media sites”. This activity invites participants to use their own mobile phones to create a selfie that their friends or family would definitely recognise as them, without showing their faces. Read More… Thinking with selfies

Trace ethnography: a retrospective


Stuart Geiger @staeiou continues our edition of ‘The Person in the (Big) Data‘ with a reflection on his practice of ‘trace ethnography’ that focuses on the trace-making techniques that render users’ activities and intentions legible to each other. Importantly, Stuart argues, we as researchers need to see these traces in the context of our active socialization within the community in question, rather than passively reading traces through lurking.

When I was an M.A. student back in 2009, I was trying to explain various things about how Wikipedia worked to my then-advisor David Ribes. I had been ethnographically studying the cultures of collaboration in the encyclopedia project, and I had gotten to the point where I could look through the metadata documenting changes to Wikipedia and know quite a bit about the context of whatever activity was taking place. I was able to do this because Wikipedians do this: they leave publicly accessible trace data in particular ways, in order to make their actions and intentions visible to other Wikipedians. However, this was practically illegible to David, who had not done this kind of participant-observation in Wikipedia and had therefore not gained this kind of socio-technical competency. 

For example, if I added “{{db-a7}}” to the top of an article, a big red notice would be automatically added to the page, saying that the page has been nominated for “speedy deletion.” Tagging the article in this way would also put it into various information flows where Wikipedia administrators would review it. If any of Wikipedia’s administrators agreed that the article met speedy deletion criteria A7, then they would be empowered to unilaterally delete it without further discussion. If I was not the article’s creator, I could remove the {{db-a7}} trace from the article to take it out of the speedy deletion process, which means the person who nominated it for deletion would have to go through the standard deletion process. However, if I was the article’s creator, it would not be proper for me to remove that tag — and if I did, others would find out and put it back. If someone added the “{{db-a7}}” trace to an article I created, I could add “{{hangon}}” below it in order to inhibit this process a bit — although a hangon is just a request; it does not prevent an administrator from deleting the article.
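As a purely illustrative aside (my own sketch, not a method described in the post), the machine-readability of these traces is easy to demonstrate: a few lines of Python can scan a revision’s wikitext for deletion-related templates such as {{db-a7}} and {{hangon}}. The sample wikitext below is invented; real revisions could instead be retrieved via the MediaWiki API.

# Illustrative sketch only: scanning wikitext for the deletion-related templates
# discussed above, the kind of publicly visible trace data Wikipedians leave for
# one another. The sample revision text is invented.
import re

# Templates that leave legible traces of the speedy-deletion process.
DELETION_TEMPLATES = re.compile(r"\{\{\s*(db-a7|hangon)\s*\}\}", re.IGNORECASE)

def deletion_traces(wikitext: str) -> list[str]:
    """Return the deletion-related templates present in a page's wikitext."""
    return [match.group(1).lower() for match in DELETION_TEMPLATES.finditer(wikitext)]

sample_revision = """{{db-a7}}
{{hangon}}
'''Example Band''' is a garage band from Springfield...
"""

print(deletion_traces(sample_revision))  # ['db-a7', 'hangon']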


Wikipedians at an in-person edit-a-thon (the Women’s History Month edit-a-thon in 2012). However, most of the time, Wikipedians don’t get to do their work sitting right next to each other, which is why they rely extensively on trace data to coordinate and render their activities accountable to each other. Photo by Matthew Roth, CC-BY-SA 3.0

I knew all of this both because Wikipedians told me and because this was something I experienced again and again as a participant observer. Wikipedians had documented this documentary practice in many different places on Wikipedia’s meta pages. I had first-hand experience with these trace data, first on the receiving end with one of my own articles. Then later, I became someone who nominated others’ articles for deletion. When I was learning how to participate in the project as a Wikipedian (which I now consider myself to be), I started to use these kinds of trace data practices and conventions to signify my own actions and intentions to others. This made things far easier for me as a Wikipedian, in the same way that learning my university’s arcane budgeting and human resource codes helps me navigate that bureaucracy far more easily. Read More… Trace ethnography: a retrospective

Verklempt: Historically Informed Digital Ethnography


I’m not one to speak about theory and method in the abstract. But when I am asked about my method, I typically respond that I use historically informed ethnography. However, whenever I say this I think of Mike Myers’ SNL character Linda Richman. On Richman’s public access show, she and her friends talked “about coffee, New York, dawters, dawgs, you know – no big whoop – just coffee talk.” During their discussions Richman would often become “verklempt,” such as in recalling meeting Barbra Streisand; overcome with emotion, she’d turn to her guests with a prompt: “The Prince of Tides is neither about a Prince nor tides – discuss.”

Hence, while I might say “historically informed ethnography,” I think to myself that “my work is neither historical nor ethnographic – discuss.”

Historically informed

As a computer science undergraduate I loved (and minored in) history. I still do love history and find that while I am typically focusing on contemporary communities and how they work together, historical context is important to my developing understanding of the practices of today.

When I went off to graduate school for a PhD, I was very much inspired by a little-known work about Quakers: Michael Sheeran’s Beyond Majority Rule: Voteless Decisions in the Religious Society of Friends. This was an ethnography of their consensus decision-making, but it began with an introduction to their history, one that greatly informs the present day. For instance, Quakers’ decision-making is a reflection of the origins of Protestantism. In short, under Protestantism it was thought that divine will could be discerned via the individual rather than through the church. However, the idea of individual discernment allowed for some unusual (and ill-favored) beliefs, such as those of the Ranters and the messianic Quaker James Naylor. This, in turn, brought increased persecution by the state. Hence, early Quakers faced the problem of how to represent themselves as moderate and nonthreatening. Their solution, in part, was to adopt a position of pacifism and community consensus. This historical context imparted a much richer understanding than if I had only read of their current-day decision making. Accordingly, I tried to do the same thing with respect to Wikipedia collaboration by placing it in the historical context of what I called the pursuit of the universal encyclopedia.

Hence, even when I am focused upon the seemingly faddish phenomena of the digital realm, I challenge myself to ask whether this is truly something never seen before. It rarely is, which then permits me to ask the more interesting and productive question of how it is different from (or a continuation of) what has gone before.

Is this history or ethnography? And at what point, in trawling through online archives, does ethnography become history?

A digital interlude

Much of my quandary about history and ethnography relates to my domain of study. I love being able to immerse myself in the conversations and cultural artifacts of a community. Much of this is likely a reflection of my personality. I can be shy and I enjoy hunting through archives for something that is novel and leads to an insight. I am often happy to work alone as I read through blogs, wiki pages and email archives. Yet, is this history or ethnography? And at what point, in trawling through online archives, does ethnography become history? (When the sources are dead?)

I’m fortunate that I tend to study open communities and geeks. This means that many of my sources are prolific self-documenters, publishing their thoughts and contributions in public. Consequently, I have many primary sources, and I want to share them with my readers. In fact, after a decade of work, I have over four thousand sources, and as I’ve done this work I’ve continued to develop a system by which I can easily document, find, and manage this information. I recently did a screencast of the two tools I’ve developed for this.

Of course, this is not to say that conversations and interviews with community members are not useful. I’ve attended many a conference, Meetup, and un-conference. Many times people have shared with me context and background that has been invaluable to my understanding and portrayals. Sometimes, I delight in a key insight or wonderful quotation I can use from an interview. However, I do take lesser pleasure in an insight communicated to me privately than one I can find publicly. I don’t attempt to rationalize or advocate for this position; it is simply my preference. (I suspect many of the lofty words spent on academic distinctions are meant to justify similar differences in personal sensibilities and social habitus.) Read More… Verklempt: Historically Informed Digital Ethnography

A Psychologist Among Ethnographers: an Interview with Beatriz Arantes of Steelcase


Beatriz Arantes (@beatriz_wsf) is a psychologist and senior researcher based in Paris for Steelcase’s global research and foresight group WorkSpace Futures, providing expertise on human emotion, cognition and behavior to inform organizational practices and workplace design.

Talk to any ethnographer outside of academia, and you will surely find a fascinating tale. In this post for the January EPIC theme, I interview Beatriz Arantes (@beatriz_wsf), who spins a riveting account spanning multiple continents. She recounts to us how she started out as a clinical psychologist and then ended up researching work spaces in Paris at Steelcase. One of the reasons we started Ethnography Matters is because we wanted to make the work that ethnographers do inside companies more public, so we are very happy to feature Beatriz’s research.

Beatriz is currently a senior researcher for Steelcase, a leading provider of workplace settings and solutions for companies all over the world. She is in the WorkSpace Futures group, where she researches workplace behaviors and needs from multi-stakeholder perspectives to inform marketing, design and innovation, and examines how technology is changing these behaviors and needs. She has recently delved into the necessary conditions for worker wellbeing, which you can read about here.

For more posts from this EPIC edition curated by contributing editor Tricia Wang (who gave the opening keynote talk at EPIC this year), follow this link.

 


Steelcase’s 360 Magazine; Issue 67 on Wellbeing

Beatriz, so you work with other ethnographers at Steelcase. So what do you gain by going to EPIC, a conference with more ethnographers?
EPIC was the first conference I ever went to that focused on my specific line of work, which was incredible. Yet within that focus, there was amazing breadth. The world is so big that we can’t each master it all. At Steelcase, we do take a broad look at the human condition and user experience in order to eventually narrow the application down to work situations, but there are definitely topics that are outside our scope. At EPIC, I could just delight in the variety of cultures, approaches, themes and theories. It’s a way to renew my own approach, to find inspiration, and make unprecedented connections. All of this enriches my own work. Besides, at such a conference, there is room to play, as well as to discuss the serious issues that we don’t usually take time for in our day to day.

Anything in particular that stood out for you?
I was also particularly enthralled with the quality of the keynote talks, each bringing profound wisdom on issues that had been gnawing at my mind and providing just the insight I needed. To have that put on a platter in an entertaining format, surrounded by peers… it’s a priceless experience.

Oh like what?
Like Genevieve Bell’s talk on the cultural origins of our visceral reactions to technology and artificial intelligence, and David Howes’ phenomenal critique of marketing’s dash for the privatization of the senses. What these talks all did was apply anthropological lenses to study our own culture’s assumptions – very dominant assumptions that often get the indisputable “science” stamp of approval and that end up clouding our judgment on the possibility of alternative realities. This is important work that challenges the dominant worldview we take for granted, one that remains deeply entrenched; it is powerful because it allows us to really see our assumptions and opens new paths for exploration. That’s why I liked your talk so much.

Why, thank you!
I loved your dissection of the very messy and emotional debate that went into establishing scientific measurement of electricity. Shedding light on the human-ness of measurement is extremely important in this moment in history, where we have never been so widely preoccupied as a society with measuring things as a way to reveal the truth about reality, through algorithms and big data. As if these measures existed in some pure form, waiting to be discovered. Your talk challenged our assumptions with an example of a measurement that we all take for granted. What you reminded us is that measurement is a human cultural production and we cannot put it above as unchallenged law. Scientific findings are constantly being revised, because they are our useful — but crude and fallible — approximations of reality. We can keep raising this caution until we turn blue in the face, but you shared a very elegant demonstration in your talk. This kind of argument provides substance to the debate we really should be having as a society to challenge the supremacy of algorithmic truth. Read More… A Psychologist Among Ethnographers: an Interview with Beatriz Arantes of Steelcase

What We Buy When We Buy Design Research: Bridging “The Great Divide” between Client and Agency Research Teams


Andrew Harder

Andrew Harder (@thevagrant) is a researcher who likes to make things. He specialises in aligning emerging market user insights with shipping software using ethnography, usability testing, product sprint workshops and elbow grease.

Hannah Scurfield

Hannah Scurfield (@theduchess) is a design research manager working for Intel in London. She works with technologists and designers to drive software innovation and strives to institutionalise user empathy.

Editor’s note: This blog post is from Andrew Harder (@thevagrant) and Hannah Scurfield (@theduchess), who ran the workshop What we buy when we buy design research at EPIC 2013. I invited Andrew and Hannah to guest contribute to the January EPIC 2013 theme because their workshop speaks to a much-needed and missing conversation on what exactly clients are buying and what agencies are delivering in design research. Their article allows us to peek into some of the important discussions that emerged from the workshop. They share with us several strategies that should be considered in the execution of design research processes.

Both Andrew and Hannah have a unique background that enables them to speak from the perspective of both agencies and clients. Having moved from agency research to in-house research, they understand the affordances and challenges that boutique firms and large corporations experience. All views expressed are the authors’ own, not those of their employers.

For more posts from this January EPIC edition curated by contributing editor Tricia Wang, follow this link.
Like most good ideas to come out of England, the inspiration for our workshop at EPIC 2013 came from conversations in the pub. In this case, we were talking about “the great divide” between client and agency research teams.

Within a few years of each other, we had both left user experience agencies to work as design research managers inside big companies. Despite having worked in-house previously, this marked a transition in both our careers.

In agencies we both sold design research to large companies. We faced similar challenges: fighting for more budget and time in the field to do more insightful work, and wanting earlier involvement with designers so we could shape their work without compromise.

When we moved in-house, we faced new territory. Suddenly we had all the time we wanted, years of it. We had a research budget, sometimes a lot of it. We could work with designers from the minute they got their brief or in some cases, we were working to shape the design brief.

Yet we were also faced with some hard truths that we hadn’t anticipated. In our experience, working for big consumer product companies means you are a small cog in a large machine, with objectives and dependencies that spread far beyond a specific research project. Couple that with a complex web of product owners and stakeholders and a design team to keep engaged, and you start to see why design research projects often come unstuck.

Often after spending budget on ethnographic research, design teams are still struggling later on, wanting insights that the research did not provide. And sometimes, no matter how clearly the external research agencies were briefed on project objectives, the deliverables unwittingly undermined the project vision, approach or relationships.

For both client and agency teams, keeping a research project on track is an art form in itself. However, when we spoke with our colleagues and friends in research, it confirmed something we had suspected: nobody in industry or academia is openly discussing the process of buying design research. We can study project management styles, but the topic of design research project management has been overlooked. The subject appears to be ‘taboo’, much to the detriment, we believe, of both client and agency research teams. Read More… What We Buy When We Buy Design Research: Bridging “The Great Divide” between Client and Agency Research Teams

Lessons Learned From EPIC’s Mobile Apps & Quantified Self Workshop


Mike Gotta (@Mikegotta) is a Research Vice President for collaboration and social software at Gartner. He has more than 30 years of experience in the IT industry, with 14 of those years spent as an industry analyst advising business and IT strategists on topics related to collaboration, teaming, community-building, and social networking. He has expanded his research to include quantified self trends as well as the business use and organizational value of ethnography. He is currently pursuing a master’s degree in Media Studies at The New School in New York City.

At EPIC 2013, Mike Gotta (@Mikegotta) gave a workshop, Mobile Apps & Sensors: Emerging Opportunities For Ethnographic Research, that examined mobile apps developed for ethnographic research uses. I asked Mike to contribute to the January EPIC theme at Ethnography Matters because his research is always spotlighting some of the most fascinating trends in the tech industry. In this article, Mike provides a wonderful overview of his workshop, but even more interesting is his discussion of all the different ways the dialogue veered away from the original topic of the workshop. Essentially, things didn’t go as Mike had planned. The new direction, however, offered Mike a lot of insights into the future of mobile apps, which led him to reflect on personalized sensors as part of Quantified Self trends and the increasing importance of APIs in future research tools. If you’re a qualitative researcher who wants to know how to make use of the latest mobile apps, this is a must-read article. The second half of Mike’s article can be read on Gartner’s blog.

Mike is currently at Gartner, Inc. (NYSE: IT), which describes itself as the world’s leading information technology research and advisory company. Mike is a familiar face at Ethnography Matters; during his time at Cisco Systems, he contributed a piece that has become one of the most often-cited pieces of research on the role of ethnography in Enterprise Social Networks (ESN).

For more posts from this January EPIC edition curated by contributing editor Tricia Wang, follow this link.

You might wonder – what’s a technology industry analyst doing at EPIC and why deliver a workshop on mobile apps and sensors?

The world of the IT industry analyst is becoming much more inter-disciplinary as societal, cultural, economic, media, demographic, and technology trends become more intertwined. These trends, perhaps, were always entangled in some fashion and we are only now becoming more interested in how the patterns of everyday life are mediated by various technologies.

There was a time when industry analysts could cover technology trends and their business relevance as long as they had an IT background. That might still be true in some cases – maybe – but in my opinion, being well-versed in social sciences is becoming a baseline competency for those in my profession.

Which brings me back to EPIC 2013. I had been looking into synergies across design, ethnography, and mobile and was happy to deliver a workshop for EPIC attendees to look at advances in mobile apps that support ethnographic research. As a group, we identified the pros and cons of mobile apps and discussed how field research could be better supported. The topic was relevant not only to the ethnographic community but also to audiences who interact frequently with industry analysts: digital marketers, innovation teams, design groups, product/service managers, and IT organizations. It struck me that EPIC (as a conference and organization) is in a position to act as a yearly touch point between those in the social sciences and business/technology strategists interested in the same issues. Read More… Lessons Learned From EPIC’s Mobile Apps & Quantified Self Workshop

Technology and Fieldwork: Ethnographic quandaries


John McManus studies Turkish football fans in the diaspora at Oxford University’s Centre on Migration, Policy and Society (COMPAS).

Editor’s note: This event report is the final post in the ‘Being a student ethnographer‘ series. It documents a discussion – the third of the Oxford Digital Ethnography Group’s (OxDEG) events this term – dedicated to ‘technology and fieldwork’. Open to the entire university, OxDEG draws students and faculty from a wide variety of departments but is led by students from the Oxford Internet Institute and the School of Anthropology and Museum Ethnography. In this seminar, participants discussed what technologies are useful for ethnographers studying social activity in digital environments and recognized common concerns.

Technology and Fieldwork, Fieldwork and Technology: this was a massive subject and we had only 90 minutes to discuss it. Throw into the mix a broad range of disciplines (computing studies, ethnomusicology, anthropology), season with some striking subject matter (Wikipedia, the ethnomusicology of the chip music scene, uranium extraction in Tanzania) and what do you get? A frank and wide-ranging debate on ethnography, in fact. The main “take-away” message was: you are not alone. It was heartening to reach across the disciplinary boundaries and see that those in other departments are struggling with very similar theoretical and methodological problems.

Top of the list was the specialist language of computing – with its parsing, programming and algorithms – which sometimes acts as a barrier to ethnographers engaging in innovative research methods. How to get over this hump? One suggestion was a tweaking of anthropological methods training courses. If you’re going to study Turkish village practices, you learn Turkish. Balinese Cockfights? Some sort of Indonesian might come in handy. Why, then, do we rarely hear departments exhorting potential digital ethnographers to go take a course in Python or some other programming language? Read More… Technology and Fieldwork: Ethnographic quandaries
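By way of a hedged illustration of what even an introductory Python course might offer a digital ethnographer, the short sketch below counts the most frequent terms in a plain-text export of online material gathered during fieldwork; the file name and stop-word list are assumptions, not a prescription.

# Illustrative sketch: a few lines of Python a fieldworker might write after a
# first programming course, to get a rough sense of recurring terms in a
# plain-text export of online material. File name and stop words are assumptions.
import re
from collections import Counter

STOP_WORDS = {"the", "and", "a", "to", "of", "in", "is", "that", "it", "for"}

def top_terms(path: str, n: int = 20) -> list[tuple[str, int]]:
    """Count the most frequent non-trivial words in a text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(w for w in words if w not in STOP_WORDS).most_common(n)

if __name__ == "__main__":
    for term, count in top_terms("fieldnotes_export.txt"):
        print(f"{count:5d}  {term}")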