Cheering up the chatbot


The speech-to-text tool on my phone is convinced that “ethnography” = “not greasy.” (At least “not greasy” tends to be a positive thing?) Generally STT and voice commands work great on it though. You have to talk to it the right way: Enunciate; dramatic pauses between each word; don’t feed it too many words at once. The popular speech recognition application Dragon NaturallySpeaking emphasizes that users train the system to recognize their voices, but there’s always an element of the system training its users how to talk.

For entertainment purposes, it’s best to avoid the careful pauses and smush things together, producing text message gems like “Send me the faxable baby.” It’s the mismatches between human intention and machine representation that can make using natural language interaction tools like STT, chatbots and speech prediction both frustrating and hilarious. When it’s bad, it’s really really good.

I’ve been playing with the game Cheer up the Chatbot the last couple days (from RRRR, “Where the games play you”).

Chatbot has an unusual way of interacting with people, as so many chatbots do.

Screen explaining Chatbot’s mental disorders

Understandably, Chatbot is sad.

Poor chatbot

The goal is to get Chatbot to smile.

Open-ended questions make robots happy

The game is a mix of bot and human-to-human chat: you switch between talking to the game’s bot and to other players, who are each presented to one another as the “Chatbot” speaker. When you hit a moment where there are enough players with different agendas online — including some who don’t know how the game works, some presenting as Chatbot, and some presenting as people — it can get weird.
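
How the game actually does this routing isn’t public, so here’s only a toy sketch of the switch as I understand it; every name here, and the fifty-fifty split, are invented:

```python
import random

def scripted_bot(message):
    """Stand-in for the game's own bot (terrible by design, as the developers admit)."""
    return "I AM HAPPY TO PROCESS YOUR INPUT: " + message.upper()

def route_reply(message, human_players):
    """Pick a responder, but always present the reply as coming from "Chatbot"."""
    if human_players and random.random() < 0.5:
        responder = random.choice(human_players)  # another human, playing the bot
    else:
        responder = scripted_bot
    return ("Chatbot", responder(message))

# One human player mimicking the bot's mimicry of humans:
humans = [lambda m: "BEEP. CUTE ROBOT DETECTED."]
print(route_reply("Why so sad, Chatbot?", humans))
```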

There are so many layers of anthropomorphism, and… robotomorphism (?) at work. Make Chatbot happy! Chatbot suffers from psychological disorders. The bot mimics human language (terribly, as the game designers freely admit), while players mimic Chatbot’s terrible mimicry and present themselves as robots — really anthropomorphized robots on the lookout for other cute robots. Players who think they’re talking to a bot when they’re really talking to a person interpret Chatbot’s replies as “really good AI!” And yet the ways that people use language to mimic the bot can be unconvincing, kind of like a reverse Turing test.

Reading Heather’s interview with Stuart Geiger about robot ethnography brings some of this weirdness into relief. The stereotypical robot is explicitly anthropomorphized and uses some approximation of natural language, which could make a chatbot both engaging and awkward as an example for thinking about robots and culture. It’s an anthropocentric example of robot-ness. And using a program would be insufficient as a way to immerse oneself in robot culture, or as a way to understand digital representations.

But it is interesting to think about anthropomorphism, agency and intent in this context. Reading my own post over, there are a few places where I pause to wonder if I’m using an anthropomorphic metaphor or referring to an agency/capacity to act that “things” can have from some perspectives. For example:

NaturallySpeaking “training its users how to talk”? – Agency, I think. The system enacts change.

“Make Chatbot happy!” – Anthropomorphism.

“Get Chatbot to smile” – A “smile”? Anthropomorphic. But “getting Chatbot to smile” reminds me that, as far as I can tell, there is nothing players can do to make Chatbot smile; it’s the possibility that gets people to chat and strategize.

When I first played the game I thought of smiling as something I could “get Chatbot to do.” But only Chatbot can get Chatbot to smile, and that’s true in some sense even if I could do something to provoke a smile. The meaning I ascribe to Chatbot’s smile has shifted.

What meanings might a robot ascribe? Sam Ladner writes that:

The real essence of ethnography is the study of culture or as Geertz would say, the “webs of significance” or the meaning individual social actors ascribe to objects, events, or people.

Can robots ascribe meaning? A related question from some perspectives might be whether people can ascribe meaning… and when/whether that matters.
