Tag Archives: collaboration

The human-side of artificial intelligence and machine learning


Note from the Editor, Tricia Wang: Next up in our Co-designing with machines edition is Steven Gustafson (@stevengustafson), founder of the Knowledge Discovery Lab at the General Electric Global Research Center in Niskayuna, New York. In this post, he asks what role humans will play in the future of intelligent machines. He makes the case that, for the foreseeable future, artificially intelligent machines are the result of creative and passionate humans, and as such, we embed our biases, empathy, and desires into the machines, making them more “human” than we often think. I first came across Steven’s work while he was giving a talk hosted by Madeleine Clare Elish (edition contributor) at Data & Society, where he spoke passionately about the need for humans to move up the design process and to bring ethical thinking into AI innovation. Steven is a former member of the Machine Learning Lab and Computational Intelligence Lab, where he developed and applied advanced AI and machine learning algorithms for complex problem solving. In 2006, he received the IEEE Intelligent Systems “AI’s 10 to Watch” award. He currently serves on the Steering Committee of the National Consortium for Data Science, based out of the University of North Carolina. In April 2016, he gave the keynote at SPi Global’s Client Advisory Board Summit, titled “Advancing Data & Analytics into the Age of Artificial Intelligence and Cognitive Computing”.

Recently we have seen how Artificial Intelligence and Machine Learning can amaze us with seemingly impossible results like AlphaGo. We also see how machines can generate fear with perceived “machine-like” reasoning, logic, and coldness, producing potentially destructive outcomes through a lack of humanity in decision making. A popular example of the latter is how self-driving cars must choose between two bad outcomes. In these scenarios, the AI and ML are embodied as a machine of some sort, either physical, like a robot or car, or a “brain,” like the predictive crime algorithm made popular in the book and film “Minority Report” and more recently the TV show “Person of Interest.”

I am a computer scientist with expertise in and a passion for AI and machine learning, and I’ve been working across a broad range of technologies and applications for the past decade. When I see these applications of AI, and the fear or hype around their future potential, I like to remember what first inspired me. First, I am drawn to computers because they are a great platform for creation and instant feedback. I can write code and immediately run it. If it doesn’t work, I can change the code and try again. Sure, I can write proofs and develop theory, which has its own beauty and necessity at times, but I remember one of the first database applications I created and how fun it was to enter sample data and queries and see it work properly. I remember the first time I developed a neural network and made it play itself to learn, without any background knowledge, how to play tic-tac-toe. This may be a very trivial example, but it is inspiring nonetheless.

Can a machine write its own code? Can a machine design a new, improved version of itself? Can a machine “evolve” like humans into a more intelligent species? Can a machine talk to another machine using a human language like English? These were all questions that excited me as an undergraduate computer scientist, and that led me to study AI and ML during grad school, and they are all questions that can be answered with a yes! Machines, or computers and algorithms, have been shown in different circumstances to achieve these capabilities, yet both the idea that machines have these capabilities and the idea that machines can learn are, in a general sense, scary concepts to humans. But when we step into each one of these achievements, we find something that I believe is creative, inspiring, and human.

But let me step back for a minute. Machines cannot do those things above in a general sense. For example, if I put my laptop in a gym with a basketball, it can’t evolve a body and learn to play basketball. That is, it can’t currently do that without the help of many bright engineers and scientists. If I downloaded all my health data onto my phone, my phone would not learn how to treat my health issues and notify my doctor. Again, it can’t currently do that without the help of many smart engineers and scientists. So while my machine can’t become human today on its own, with the help of many engineers and scientists solving some very interesting technology, user experience, and domain-specific problems, machines can do some very remarkable things, like drive a car or engage in conversation.

The gap that creative, intelligent, and trained engineers and scientists fill today is a gap that must be closed for intelligent machines that both learn and apply that learning. That gap is also a highly human one: it highlights the desires of our species, the accumulation of knowledge, our ability to overcome challenging problems, and our drive to collaborate and work together to solve meaningful problems. And yes, it can also highlight our failures to do the right thing. But it is a human thing, still.


Digital Visual Anthropology: Envisaging the field


Shireen Walton is a DPhil student in Anthropology at the Institute of Social and Cultural Anthropology, University of Oxford, and a member of the Oxford Digital Ethnography Group. Shireen studies online communities of Iranian photographers with a special focus on photo blogs.

Editor’s note: In this post for our ‘Being a Student Ethnographer‘ edition, Shireen Walton relays a conversation with David Zeitlyn at a special seminar on Digital Visual Anthropology (DVA) in Oxford earlier this month. As someone new to the online field, Shireen has been forced to think rather seriously over the past few years about some of the big questions concerning the visual sub-category of a contemporary digital anthropology. David Zeitlyn is based at Oxford University’s Institute of Social and Cultural Anthropology and has been a key figure in the developing relationship between Social Anthropology and ICT – especially in opening up innovative pathways for the use of multimedia, visualisation and Internet technologies in social anthropological research projects.

The main issue faced by all digital researchers, it seems, is to think first and foremost about how the traditional practice of ethnography translates to the online context. They have to do this in a manner both faithful and rigorous enough to constitute ethnographic research, whilst being adaptable enough to meet fresh challenges stemming from new zones of (online) engagement: a challenging prospect. Leading on from this, anthropologists are then forced to consider what existing methodological tools they might rely on in order to even broach these new topics whilst creatively, and rather bravely, suggesting how they might need updating.

One of the broadest issues we considered in the seminar was whether digital anthropology can these days be regarded as a new, official sub-discipline within mainstream anthropology, as Horst and Miller recently declared in the introduction to their edited volume, Digital Anthropology (Horst and Miller 2012). Following on from this, might we then propose that the visual sub-field of a digital anthropological project could itself constitute a ‘sub-sub-field’? These issues require thinking about where contemporary DVA might sit within the mainstream anthropological canon, including its established methods and epistemological boundaries.

Defining DVA essentially involves two main considerations: DVA as either a site of or a method of research (or both), as Sarah Pink has identified in her seminal article entitled “Digital Visual Anthropology: Potentials and Challenges” (Pink 2011). In the case of my own research, for example, studying the Iranian ‘photo-blogosphere’ constitutes both a site of enquiry, i.e. a visual system of popular Iranian cultural expression on the Internet, and a method of enquiry, using the online medium to access these communities and conduct online participant observation amongst them. I rely on digital and visual technologies, including the Internet, the digital camera, and a digitally curated online exhibition, in order to situate myself in the field and conduct research in a technologically relevant manner which befits the activities of my participants.

What does it mean to be a participant observer in a place like Wikipedia?


The vision of an ethnographer physically going to a place, establishing themselves in the activities of that place, talking to people and developing deeper understandings seems so much simpler than the same activities in multifaceted spaces like Wikipedia. In my latest ethnographic assignment, researching how Wikipedians manage and verify information in rapidly evolving news articles, I sometimes wish I could simply go to the article as I would to a place, sit down and have a chat with the people around me.

Wikipedia conversations are asynchronous (sometimes with whole weeks or months between replies among editors), and it has proven extremely complicated to work out who said what when, let alone to contact the editors and have live conversations with them. I’m beginning to realise how much physical presence is a part of the trust-building exercise. If I want to connect with a particular Wikipedia editor, I can only email them or write a message on their talk page, and I often don’t have a lot to go on when I’m doing these things. I often don’t know where they’re from, where they live, or who they really are beyond the clues they give me on their profile pages.