
Why do brands lose their chill? How bots, algorithms, and humans can work together on social media



Note from the Editor, Tricia Wang: The fourth contributor to the Co-designing with machines edition is Molly Templeton (@mollymeme), digital and social media expert, Director of Social Media at Everybody at Once, and one of the internet’s first breakaway YouTube stars. Her piece urges brands to look beyond the numbers in their social media strategy when working in the digital entertainment and marketing industry. Molly gives specific examples where algorithms don’t know how to parse tweets by humans that are coded with multiple layers of emotional and cultural meaning. She offers the industry a new way to balance the emotional labor of audience management with data analysis. Her article draws on her work at Everybody at Once, a consultancy that specializes in audience development and social strategy for media, entertainment, and sports.

@tacobell spent an hour sending this same gif out to dozens of people. The account is probably run by humans (most social media presences today are). And they were following best practice by “replicating community behavior,” that is, talking the way normal people talk to each other (a human Taco Bell fan would definitely send a gif). But when @tacobell sends the same gif out over and over again, it’s uncanny. It’s pulling the right answers from the playbook, but at the wrong frequency.
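To make the frequency problem concrete, here is a minimal sketch in Python of what a more considerate reply helper might look like. It is purely hypothetical (it is not Taco Bell’s tooling, and the class and parameter names are invented for illustration): it picks a gif the way a fan might, but tracks what the account has sent recently and hands the conversation back to a human once the playbook would start repeating itself.

```python
# Hypothetical sketch: a reply helper that "replicates community behavior"
# (sending gifs like a fan would) while rate-limiting how often the same
# asset goes out, so the account doesn't repeat itself at an uncanny pace.
import random
import time
from collections import deque


class ReplyPicker:
    def __init__(self, gif_urls, cooldown_seconds=3600):
        self.gif_urls = list(gif_urls)
        self.cooldown = cooldown_seconds
        self.recently_sent = deque()  # (gif_url, timestamp) pairs

    def _prune(self, now):
        # Forget anything sent longer ago than the cooldown window.
        while self.recently_sent and now - self.recently_sent[0][1] > self.cooldown:
            self.recently_sent.popleft()

    def pick(self):
        """Return a gif not sent within the cooldown window, or None to
        signal that a human should step in with a fresh reply."""
        now = time.time()
        self._prune(now)
        recent = {url for url, _ in self.recently_sent}
        candidates = [g for g in self.gif_urls if g not in recent]
        if not candidates:
            return None  # escalate to a person rather than repeat
        choice = random.choice(candidates)
        self.recently_sent.append((choice, now))
        return choice
```

The design choice that matters here is not the randomization; it is the escalation path: when the varied answers run out, a person steps back in rather than the bot sending the same answer again.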

Why do brands lose their chill?

I think that brands lose their chill when they don’t let their social media managers exercise empathy. The best brands on social media balance the benefit of interaction with the risk of human error – managers are constantly worried about pissing off the organization or the audience, and are ultimately trying to please both sets of real people. Hitting campaign goals and maximizing efficiency are important, but social media managers need to bring humanity to their work. They have to understand the audience’s moods and where they’re coming from, and they have to exercise empathy at every level: customer service, information and content sharing, community management, calls to action, participation campaigns, crisis and abuse management. That is a lot of emotional labor.

With the recent chatter about chat bots on Facebook’s Messenger platform, a lot of people are thinking about how bots can take over communications roles from humans. I’ve been thinking a lot about the opposite: how can machines help people manage the emotional labor of working with audiences? Can bots ever help with the difficult, and very human, task of managing with empathy?

Social media is a business of empathy  

Emotional connections drive social media. When people gather around the things they feel passionate about, they create energy. It’s because of limbic resonance — the deep, neurological response humans have to other people’s emotions. As my colleague Kenyatta Cheese says, it’s that energy that makes participating as a fan on social media feel as electric as it does when you’re part of a physical crowd.

Co-designing with machines: moving beyond the human/machine binary



Letter from the Editor: I am happy to announce the Co-Designing with Machines edition. As someone with one foot in industry, redesigning organizations to flourish in a data-rich world, and another foot in research, I’m constantly trying to take an aerial view on technical achievements. Lately, I’ve been obsessed with the future of design in a data-rich world increasingly powered by artificial intelligence and its algorithms. What started out as a kitchen conversation with my colleague, Che-Wei Wang (contributor to this edition), about generative design and genetic algorithms turned into a big chunk of my talk at Interaction Design 2016 in Helsinki, Finland. That chunk then took up more of my brain space and expanded into this edition of Ethnography Matters, Co-designing with machines. In this edition’s introductory post, I share a more productive way to frame human and machine collaboration: as a networked system. Then I chased down nine people who are at the forefront of this transformation to share their perspectives with us. Alicia Dudek from Deloitte will kick off the next post with a speculative fiction on whether AI robots can perform any parts of qualitative fieldwork. Janet Vertesi will close this edition, giving us a sneak peek from her upcoming book with an article on human and machine collaboration in NASA Mars Rover expeditions. And in between Alicia and Janet are seven contributors, coming from marketing to machine learning, with super thoughtful articles. Thanks for joining the ride! And if you find this to be engaging, we have a Slack where we can continue the conversations and meet other human-centric folks. Follow our Twitter @ethnomatters for updates. Thanks. @triciawang


Who is winning the battle between humans and computers? If you read the headlines about Google’s Artificial Intelligence (AI), DeepMind, beating the world-champion Go player, you might think the machines are winning. CNN’s piece on DeepMind proclaims, “In the ultimate battle of man versus machine, humans are running a close second.” If, on the other hand, you read the headlines about Facebook’s Trending News Section and Personal Assistant, M, you might be convinced that the machines are less pure and perfect than we’ve been led to believe. As the Verge headline puts it, “Facebook admits its trending news algorithm needs a lot of human help.”

The headlines on both sides are based on a false, outdated trope: the binary of humans versus computers. We’re surrounded by similar arguments in popular movies, science fiction, and news. Sometimes computers are intellectually superior to humans; sometimes they are morally superior and free from human bias. Google’s DeepMind is winning a zero-sum game. Facebook’s algorithms are failing by relying on human help, as if collaboration between humans and computers in this epic battle were somehow shameful.

The fact is that humans and computers have always been collaborators. The binary human/computer view is harmful. It’s restricting us from approaching AI innovations more thoughtfully. It’s masking how strongly we are biased to believe that machines don’t produce biased results. It’s allowing companies to avoid taking responsibility for their discriminatory practices by saying, “it was surfaced by an algorithm.” Furthermore, it’s preventing us from inventing new and meaningful ways to integrate human intelligence and machine intelligence to produce better systems.

As computers become more human, we need to work even harder to resist the binary of computers versus humans. We have to recognize that humans and machines have always interacted as a symbiotic system. Since the dawn of our species, we’ve changed tools as much as tools have changed us. Up until recently, the ways our brains and our tools changed were limited by the amount of data input, storage, and processing both could handle. But now, we have broken Moore’s Law and we’re sitting on more data than we’re able to process. To make the next leap in getting the full social value out of the data we’ve collected, we need to make a leap in how we conceive of our relationships to machines. We need to see ourselves as one network, not as two separate camps. We can no longer afford to view ourselves in an adversarial position with computers.

To leverage the massive amount of data we’ve collected in a way that’s meaningful for humans, we need to embrace human and machine intelligence as a holistic system. Despite the snazzy zero-sum game headlines, this is the truth behind how DeepMind mastered Go. While the press portrayed DeepMind’s success as a feat independent of human judgement, that wasn’t the case at all.