The event was put on by the Data & Society think/do tank in New York, organized by danah boyd. Hancock’s talk was on-the-record, and I took a few notes. His remarks tracked closely with what he said in July at a Microsoft “faculty summit,” so I will use that text to help me represent what he said.
Summary of his presentation
Hancock told us he wanted to devote the next few years of his work to moving this discussion forward, by which he meant the ethics and transparency of big data research. He said he was especially concerned about the mistrust of science that the Facebook controversy had kicked up, “which I regret very deeply.” He said he didn’t want others to go through what he went through, a reference to the hate mail and threats directed at him once the study became famous on the internet.
The Facebook happy/sad study (my shorthand) had its origins in Hancock’s earlier work attempting to disprove a thesis in psychology: that “emotional contagion” — where one person “catches” an emotional mood from another without being aware of it — was unlikely to happen through text communication. He disagreed with that thesis. Facebook, he said, had followed his research because text updates are so important to the company.
In 2012 researchers at the company decided to test a claim commonly heard: that when users share happy news on Facebook (“I got a new job!” “We’re getting married!”) it makes others feel down about their own shabby lives. They got in touch with Hancock because they figured he would be interested in collaborating. (As Facebook’s Adam Kramer would later put it: “We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.”)
You can see this history reflected in the abstract of the scientific paper that Kramer, Hancock and Jamie Guillory later published in the Proceedings of the National Academy of Sciences:
We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
And in this finding:
…the fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison
By the way, that number, N = 689,003, tells you a lot about why an academic researcher might want to collaborate with Facebook. Any study with nearly 700,000 participants — actually, “subjects” is a better term because they didn’t know they were participating — is bound to look impressive, because a sample that large lends extra statistical weight to the results.
After reviewing the research design and the findings (showing a “contagion” effect but a very small one) Hancock turned to the reactions he received after news of the study broke on social media and in the press over the weekend of June 27-29, 2014, as well as his own reasoning for why he felt that doing the study was “okay.”
Here, his main point was that he didn’t anticipate the storm to come because in his mind the very slight manipulation of the News Feed met the “minimal risk of harm” test that permits academic researchers to proceed with an experiment even when subjects have not been informed about what is happening. It was minimal, he reasoned, because the Facebook algorithm manipulates News Feeds all the time, in far more dramatic ways than the “contagion” experiment.
The problem, he told us, was that users don’t know how the Facebook algorithm works. They are unaware that Facebook is manipulating and changing it constantly. They think they’re getting everything their friends and family are sending. As he said at the Microsoft event:
I’m not sure whether this means we need to bring in an education component to help people understand that their news feeds are altered all the time by Facebook? But the huge number of e-mails about people’s frustration that researchers would change the news feed indicates that there’s just no sense that the news feed was anything other than an objective window into their social world.
The other thing that stood out for him was just how personally people were taking this! The reactions he got made it clear to Hancock that the Facebook News Feed wasn’t just entertainment or trivia but… something bigger, deeper. Here he drew a contrast between media depictions making fun of social media as “what I had for lunch today,” and what angry emailers told him during the storm. Here’s how he put it in July:
This surfaced a theme that the news isn’t just about what people are having for breakfast or all the typical mass media put-downs of Twitter and Facebook. Rather, this thing that emerged about seven years ago [Facebook] is now really important to people’s lives. It’s central and integrated in their lives. And that was really important for me to understand. That was one of the things that caught me off guard, even though maybe in hindsight it shouldn’t have.
Later, during the question period, Hancock said that if he had a “do-over,” he would not choose to do this study again. His reasoning in July:
I think our study violated people’s sense of autonomy and the fact that they do not want their emotions manipulated or mood controlled. And I think it’s a separate issue whether we think emotions are being manipulated all the time, through advertising, etc. What became very clear in the e-mail was that emotions are special… If we work on one of these special classes or categories of human experience, like emotion, without informed consent, without debriefing, we could do larger harm than just harm to participants.
Hancock described a harrowing experience at the center of the storm. The police came to his home to tell him, “we have to figure out how to keep you safe.” The president of Cornell University received calls demanding that he be fired. His family in Canada was contacted by Russian journalists who were trying to get to Hancock through them. He couldn’t sleep. He wondered if he had done something deeply wrong. The academic journal that published his study was considering whether to withdraw it, which would have been a huge blow to his reputation. He said he began to breathe more freely when a panel of scholarly peers split on whether the study violated research protocol, which said to him: These are tough issues. There is no consensus.
My impressions and reactions
Now I’m going to shift from summarizing what Hancock said to telling you what I think — and what I asked him during the question period.
Disclosure: I have been critical of Jeff Hancock (see here and here). And while I did gain more sympathy for him by hearing about his experience, and a better understanding of his work by learning about his scholarly background, I’m not at all convinced. More on that in a moment.
Still, I give Hancock a lot of credit for coming to talk to skeptical colleagues, for permitting the session to be on-the-record, for admitting that he wouldn’t do the “contagion” study again, for acknowledging other failures of imagination, for being personable and contrite, and for recognizing that lots of people have lots of problems with what he and Facebook did. No one should have to experience threats to personal safety for having conducted an academic study, and none of us can predict how we would react in that situation.
As a fellow faculty member and colleague, I feel I owe Jeff Hancock my considered opinion about his public performance and scholarly reasoning, even as I recognize that in the center of an internet storm we are not only professionals in a field but human beings with fears for ourselves and our families. So here is what I think: I’m not convinced.
I’m not convinced that Hancock knew enough about Facebook and its users to even wander into this territory. It’s really kind of shocking to hear a social psychologist and scholar of communication express surprise that users of Facebook take their News Feeds very personally. That’s like saying: “I learned something from my experience. People are serious about this ‘friends and family’ thing. It’s not just a phone company slogan!” We expect you to know that about people before you start experimenting on them.
The relevant contrast is not between emailers informing Jeff Hancock that their News Feed feels quite personal to them and ill-informed press accounts making fun of social media, which is how he framed it, but between a nuanced and studied understanding of something, a pre-condition for scholarly work, and a lazy, person-on-the-street level of knowledge, which is what he essentially admitted to.
I’m also not convinced that Hancock is the man to be “leading a series of discussions among academics, corporate researchers and government agencies” about putting right what was revealed to be wrong by the Facebook study. His experience may be a case study in the need for change. It does not qualify him to convene the change discussion.
Part of the reason I say that involves his decision-making in the six-week interval between the weekend when the controversy broke, June 27-29, and August 12, when he surfaced as a reformer in this New York Times article. Hancock disappeared from the public sphere during this time, while other players made statements and answered at least some of the questions that angry users and alert journalists were asking. That’s not leadership. That’s the opposite of leadership. And this is what I asked him about.
My question to Hancock
I’m going to reproduce my question here. It’s not verbatim — I have added a few details and links — but it’s essentially the same thing I said in the Union Square Ventures conference room on October 23rd.
Thanks for doing this, Jeff. My question is simple: why do they give us tenure? But it requires some explanation. In June, Cornell sent out a press release about your study. (“‘Emotional contagion’ sweeps Facebook.”) Clearly it was proud of the work one of its faculty members had done. By definition, the purpose of a press release is to invite publicity and discussion in the public sphere. I’m sure no one anticipated how much attention your study would receive, but still: the invitation was there.
When the world heard about your study and found a lot to question in it, you absented yourself from that debate for more than six weeks. But this is the very discussion that you told us — today — you want to lead! One of your co-authors, Adam Kramer, tried to address some of the questions in a Facebook post. The editor of the article you published, Susan Fiske, spoke to the press about her decision-making. Cornell addressed the controversy in a statement it released on June 30. But you said nothing on Facebook, the platform where the research was done. You were silent on Twitter. (I was checking.) You wrote nothing on any blog. You cancelled interviews with journalists.
The issues you say you want to work on over the next few years were very much alive in that six-week period. People were paying attention to them! Now I recognize that a lot of the attention was ill-intentioned, over-the-top, angry and threatening — very far from the ideal of a calm and rational discussion. I recognize that you felt under attack. But still: I don’t understand your decision-making.
So I ask again: why do they give us tenure? What’s the deal? Is it just: we can’t lose our jobs if people hate what we say? Or do they give us tenure precisely so we can participate in the debate when our work comes under scrutiny and a white hot controversy erupts in the public sphere?
In reply to me, Jeff Hancock said he had never really thought about what tenure was for before all this happened. Beyond that, his response came down to: I was freaked out, I had no training or experience with this, and didn’t know what to do. So I kept quiet. He said he asked some colleagues about whether to respond publicly, including danah boyd. He got requests to go on TV but turned them down. He added that he could have posted a public note that he would not be commenting for a while but didn’t.
I can understand all that. I can sympathize with it. I can recognize — as I’ve tried to do several times in this post — that he was in a difficult spot, undergoing a trial that few of us can imagine. Nonetheless, I’m not convinced. Based on what I heard last week, I don’t think he knew what to say back to people who had said to him: “How dare you manipulate my news feed!” (Hancock’s paraphrase.) His thoughts on the matter (how did I dare to…?) were superficial — and unfortunately they still are. He has an account, but he didn’t go on Facebook to explain, as Adam Kramer (untenured) did. Perhaps because his own ignorance of lived experience on the platform would have been revealed. He was happy to promote his work on Twitter…
[Embedded tweet] — jeff hancock (@jeffhancock) June 5, 2014
…unhappy when Twitter turned against him. I’m sorry, but I don’t think this is the deal for professors with tenure and academic freedom operating in the public sphere and conducting research about social media. Unlike most of the American work force, we can’t lose our jobs for speaking up. So we speak up when our work is questioned. If people don’t understand how we do our studies, we try to explain how we operate. When the press is suddenly interested in our research, we pick the right forum and answer the questions as best we can. If a lot of the attacks are in bad faith, we find the critics acting in good faith and respond to them.
And if we don’t have answers when the lights are on and reputations are made, well, maybe we’re not the best people to be leading a public discussion about big data and modern society.
(Other participants who were there may want to add their notes about what Jeff Hancock said or give their impressions. Please use the comments for that. And if you want to correct me about anything, please do.)