The event was put on by the Data & Society think/do tank in New York, organized by danah boyd. Hancock’s talk was on-the-record, and I took a few notes. His remarks tracked closely with what he said in July at a Microsoft “faculty summit,” so I will use that text to help me represent what he said.
Summary of his presentation
Hancock told us he wanted to devote the next few years of his work to moving this discussion forward, by which he meant the ethics and transparency of big data research. He said he was especially concerned about the mistrust of science that the Facebook controversy had kicked up, “which I regret very deeply.” He said he didn’t want others to go through what he went through, a reference to the hate mail and threats directed at him once the study became famous on the internet.
The Facebook happy/sad study (my shorthand) had its origins in Hancock’s earlier work attempting to disprove a thesis in psychology: that “emotional contagion” — where one person “catches” an emotional mood from another without being aware of it — was unlikely to happen through text communication. He disagreed with that thesis. Facebook, he said, had followed his research because text updates are so important to the company.
In 2012 researchers at the company decided to test a claim commonly heard: that when users share happy news on Facebook (“I got a new job!” “We’re getting married!”) it makes others feel down about their own shabby lives. They got in touch with Hancock because they figured he would be interested in collaborating. (As Facebook’s Adam Kramer would later put it: “We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.”)
You can see this history reflected in the abstract of the scientific paper that Kramer, Hancock and Jamie Guillory later published in the Proceedings of the National Academy of Sciences:
We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
And in this finding:
…the fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison
By the way, that number, N = 689,003, tells you a lot about why an academic researcher might want to collaborate with Facebook. Any study with nearly 700,000 participants — actually, “subjects” is a better term because they didn’t know they were participating — is bound to look impressive, because the large number of people whose reactions are being measured lends extra statistical weight to the results.
After reviewing the research design and the findings (showing a “contagion” effect but a very small one) Hancock turned to the reactions he received after news of the study broke on social media and in the press over the weekend of June 27-29, 2014, as well as his own reasoning for why he felt that doing the study was “okay.”
Here, his main point was that he didn’t anticipate the storm to come because in his mind the very slight manipulation of the News Feed met the “minimal risk of harm” test that permits academic researchers to proceed with an experiment even when subjects have not been informed about what is happening. It was minimal, he reasoned, because the Facebook algorithm manipulates News Feeds all the time, in far more dramatic ways than the “contagion” experiment.
The problem, he told us, was that users don’t know how the Facebook algorithm works. They are unaware that Facebook is manipulating and changing it constantly. They think they’re getting everything their friends and family are sending. As he said at the Microsoft event:
I’m not sure whether this means we need to bring in an education component to help people understand that their news feeds are altered all the time by Facebook? But the huge number of e-mails about people’s frustration that researchers would change the news feed indicates that there’s just no sense that the news feed was anything other than an objective window into their social world.
The other thing that stood out for him was just how personally people were taking this! The reactions he got made it clear to Hancock that the Facebook News Feed wasn’t just entertainment or trivia but… something bigger, deeper. Here he drew a contrast between media depictions making fun of social media as “what I had for lunch today,” and what angry emailers told him during the storm. Here’s how he put it in July:
This surfaced a theme that the news isn’t just about what people are having for breakfast or all the typical mass media put-downs of Twitter and Facebook. Rather, this thing that emerged about seven years ago [Facebook] is now really important to people’s lives. It’s central and integrated in their lives. And that was really important for me to understand. That was one of the things that caught me off guard, even though maybe in hindsight it shouldn’t have.
Later, during the question period, Hancock said that if he had a “do-over,” he would not choose to do this study again. His reasoning in July:
I think our study violated people’s sense of autonomy and the fact that they do not want their emotions manipulated or mood controlled. And I think it’s a separate issue whether we think emotions are being manipulated all the time, through advertising, etc. What became very clear in the e-mail was that emotions are special… If we work on one of these special classes or categories of human experience, like emotion, without informed consent, without debriefing, we could do larger harm than just harm to participants.
Hancock described a harrowing experience at the center of the storm. The police came to his home to tell him, “we have to figure out how to keep you safe.” The president of Cornell University received calls demanding that he be fired. His family in Canada was contacted by Russian journalists who were trying to get to Hancock through them. He couldn’t sleep. He wondered if he had done something deeply wrong. The academic journal that published his study was considering whether to withdraw it, which would have been a huge blow to his reputation. He said he began to breathe more freely when a panel of scholarly peers split on whether the study violated research protocol, which said to him: These are tough issues. There is no consensus.
My impressions and reactions
Now I’m going to shift from summarizing what Hancock said to telling you what I think— and what I asked him during the question period.
Disclosure: I have been critical of Jeff Hancock (see here and here.) And while I did gain more sympathy for him by hearing about his experience, and a better understanding of his work by learning about his scholarly background, I’m not at all convinced. More on that in a moment.
Still, I give Hancock a lot of credit for coming to talk to skeptical colleagues, for permitting the session to be on-the-record, for admitting that he wouldn’t do the “contagion” study again, for acknowledging other failures of imagination, for being personable and contrite, and for recognizing that lots of people have lots of problems with what he and Facebook did. No one should have to experience threats to personal safety for having conducted an academic study, and none of us can predict how we would react in that situation.
As a fellow faculty member and colleague, I feel I owe Jeff Hancock my considered opinion about his public performance and scholarly reasoning, even as I recognize that in the center of an internet storm we are not only professionals in a field, but human beings with fears for ourselves and our families. So here is what I think: I’m not convinced.
I’m not convinced that Hancock knew enough about Facebook and its users to even wander into this territory. It’s really kind of shocking to hear a social psychologist and scholar of communication express surprise that users of Facebook take their News Feeds very personally. That’s like saying: “I learned something from my experience. People are serious about this ‘friends and family’ thing. It’s not just a phone company slogan!” We expect you to know that about people before you start experimenting on them.
The relevant contrast is not between emailers informing Jeff Hancock that their News Feed feels quite personal to them and ill-informed press accounts making fun of social media, which is how he framed it. The relevant contrast is between a nuanced and studied understanding of something, a precondition for scholarly work, and a lazy, person-on-the-street level of knowledge, which is what he essentially admitted to.
I’m also not convinced that Hancock is the man to be “leading a series of discussions among academics, corporate researchers and government agencies” about putting right what was revealed to be wrong by the Facebook study. His experience may be a case study in the need for change. It does not qualify him to convene the change discussion.
Part of the reason I say that involves his decision-making in the six-week interval between the weekend when the controversy broke, June 27-29, and August 12, when he surfaced as a reformer in this New York Times article. Hancock disappeared from the public sphere during this time, while other players made statements and answered at least some of the questions that angry users and alert journalists were asking. That’s not leadership. That’s the opposite of leadership. And this is what I asked him about.
My question to Hancock
I’m going to reproduce my question here. It’s not verbatim; I have added a few details and links, but it’s essentially the same thing I said in the Union Square Ventures conference room on October 23rd.
Thanks for doing this, Jeff. My question is simple: why do they give us tenure? But it requires some explanation. In June, Cornell sent out a press release about your study. (“‘Emotional contagion’ sweeps Facebook.”) Clearly it was proud of the work one of its faculty members had done. By definition, the purpose of a press release is to invite publicity and discussion in the public sphere. I’m sure no one anticipated how much attention your study would receive, but still: the invitation was there.
When the world heard about your study, finding a lot to question in it, you absented yourself from that debate for more than six weeks. But this is the very discussion that you told us — today — you want to lead! One of your co-authors, Adam Kramer, tried to address some of the questions in a Facebook post. The editor of the article you published, Susan Fiske, spoke to the press about her decision-making. Cornell addressed the controversy in a statement it released on June 30. But you said nothing on Facebook, the platform where the research was done. You were silent on Twitter. (I was checking.) You wrote nothing on any blog. You cancelled interviews with journalists.
The issues you say you want to work on over the next few years were very much alive in that six-week period. People were paying attention to them! Now I recognize that a lot of the attention was ill-intentioned, over-the-top, angry and threatening and very far from the ideal of a calm and rational discussion. I recognize that you felt under attack. But still: I don’t understand your decision-making.
So I ask again: why do they give us tenure? What’s the deal? Is it just: we can’t lose our jobs if people hate what we say? Or do they give us tenure precisely so we can participate in the debate when our work comes under scrutiny and a white hot controversy erupts in the public sphere?
In reply to me, Jeff Hancock said he had never really thought about what tenure was for before all this happened. Beyond that, his response came down to: I was freaked out, I had no training or experience with this, and didn’t know what to do. So I kept quiet. He said he asked some colleagues about whether to respond publicly, including danah boyd. He got requests to go on TV but turned them down. He added that he could have posted a public note that he would not be commenting for a while but didn’t.
I can understand all that. I can sympathize with it. I can recognize — as I’ve tried to do several times in this post — that he was in a difficult spot, undergoing a trial that few of us can imagine. Nonetheless, I’m not convinced. Based on what I heard last week, I don’t think he knew what to say back to people who had said to him: “How dare you manipulate my news feed!” (Hancock’s paraphrase.) His thoughts on the matter (how did I dare to…?) were superficial — and unfortunately they still are. He has an account, but didn’t go on Facebook to explain, as Adam Kramer (untenured) did. Perhaps because his own ignorance of lived experience on the platform would have been revealed. He was happy to promote his work on Twitter…
Adam Kramer, Jamie Guillory and I just published our paper on emotional contagion on Facebook http://t.co/L3BBFEce9W @KegsnEggs
— jeff hancock (@jeffhancock) June 5, 2014
…unhappy when Twitter turned against him. I’m sorry, but I don’t think this is the deal for professors with tenure and academic freedom operating in the public sphere and conducting research about social media. Unlike most of the American work force, we can’t lose our jobs for speaking up. So we speak up when our work is questioned. If people don’t understand how we do our studies, we try to explain how we operate. When the press is suddenly interested in our research, we pick the right forum and answer the questions as best we can. If a lot of the attacks are in bad faith, we find the critics acting in good faith and respond to them.
And if we don’t have answers when the lights are on and reputations are made, well, maybe we’re not the best people to be leading a public discussion about big data and modern society.
(Other participants who were there may want to add their notes about what Jeff Hancock said or give their impressions. Please use the comments for that. And if you want to correct me about anything, please do.)
13 Comments
One element lacking in the ethics analysis commonly seen in discussions of this episode is that the manipulation was completely unnecessary to obtaining the data. The very same amount and type of data could have been obtained by mining the historical data of unmanipulated feeds: pick out 300,000 happy messages and 300,000 sad messages, using the exact same measures used in the study, and then roll forward with the analysis.
This sort of big data crunching is what professional researchers do all the time in the commercial world, and while I don’t fault academics for being ignorant of this, I do fault the IRB, which seems not to have fulfilled its duty to consider alternative, less intrusive experiments on unknowing subjects.
Hi, Randall. I’m pretty sure the researchers would say in reply: We were testing a small increase in happy or sad inputs, and using a control group with no increase to make sure that the results we got could be associated with the marginal uptick in happy or sad content.
So picking out 300,000 happy messages and 300,000 sad messages, and then looking ahead at what happened would not have been the same data, or the same study.
As someone else who was there, thanks for a thoughtful commentary. My question is how much of this criticism to lay on Hancock himself.
I’m not convinced that Hancock knew enough about Facebook and its users to even wander into this territory. It’s really kind of shocking to hear a social psychologist and scholar of communication express surprise that users of Facebook take their News Feeds very personally.
You’re right, he was remarkably ill-informed about how his subjects behave and what they think. He also made what look, in retrospect, like remarkably naive judgments about how they were likely to react to being experimented on without consent. But this also seems to me to highlight a larger risk of social scientists using data from social media. Unlike in the lab, they’re going into territory they may not know, under conditions they may not have created. So any changes to the protocols about how social science uses such data should take that into account, but isn’t the problem at least partly that such collaborations are pretty new?
I’m also not convinced that Hancock is the man to be “leading a series of discussions among academics, corporate researchers and government agencies” about putting right what was revealed to be wrong by the Facebook study.
Who, then, is? And it seems a little contradictory to criticize Hancock for being too silent after the story broke and then for taking too active a role in the discussion in the aftermath.
I’m sorry, but I don’t think this is the deal for professors with tenure and academic freedom operating in the public sphere and conducting research about social media. Unlike most of the American work force, we can’t lose our jobs for speaking up. So we speak up when our work is questioned.
This is an excellent point and deserves to be made more widely. But given how hard it’s been to persuade even journalists to get out of their ivory towers and engage with the public (your specialist subject, Jay!) it should be no surprise that academics are reluctant, not least since their institutional culture discourages them actively from publishing things that aren’t double-checked, peer-reviewed and very slowly thought out. Communicating complex issues to the public when you’re under attack is an extremely difficult skill — companies hire experts to do it for them. Academics have no training in it and little experience.
In any case, if I understand right, your beef with Hancock is not merely that he didn’t use the power that tenure gave him to speak out, but that when he did speak up, what he said was wanting.
Thanks, Gideon.
It seems a little contradictory to criticize Hancock for being too silent after the story broke and then for taking too active a role in the discussion in the aftermath.
Maybe it seems so on first reading, but I don’t think it is. My argument is: “You didn’t answer the bell. You are not the one to lead us from here.” That may be uncharitable, but it is not contradictory.
On top of that, I don’t think Hancock has enough ‘local knowledge’ of users, the people he’s studying. By local knowledge I mean that kind of familiarity that comes from dwelling among them. He deals with people through experiments and data sets.
I think it would make more sense for him to say, “I’m going to shift my work and devote the next few years to interviewing the users of social platforms to figure out where I went wrong. My plan is to take a more anthropological approach, and then curve back toward experimental psychology, hopefully with some lessons learned.”
But this is not what he told us. So a third reason I don’t think he is the one to lead this discussion is a lack of self-awareness or awareness of the limits of his knowledge.
if I understand right, your beef with Hancock is not merely that he didn’t use the power that tenure gave him to speak out, but that when he did speak up, what he said was wanting.
Yes. Both.
If Hancock never really thought about what tenure means before earning it, perhaps he should step aside in favor of the many, many tenure-track and would-be tenure-track faculty in this country who have given it a great deal of thought. Honestly, that admission just beggars belief.
Yep.
A relevant piece in the Los Angeles Review of Books by Evan Selinger:
http://lareviewofbooks.org/review/okcupids-xoxo-big-surveillance
That links to an extremely relevant piece by James Grimmelmann:
https://medium.com/@JamesGrimmelmann/illegal-unethical-and-mood-altering-8b93af772688
Thanks. Also very relevant:
Why the Facebook Experiment is Lousy Social Science
https://medium.com/@gpanger/why-the-facebook-experiment-is-lousy-social-science-8083cbef3aee
He said he was especially concerned about the mistrust of science that the Facebook controversy had kicked up, “which I regret very deeply.”
What is the cause of Professor Hancock’s deep regret?
a) The mistrust of science
b) The Facebook controversy
c) The idea that b) had kicked up a)
Aha.
I research and publish on Facebook and other social media. I employ a methodology of literary close reading and, crucially, participant ethnography. I am knocked flat by Hancock’s basic illiteracy in the medium / platform / community he is claiming expertise in by doing this work.
Recently, I found myself at the centre of a viral tweet storm. This was unexpected, but it’s my research area, so I spent essentially a whole week doing social media management, and radio and TV and print interviews, and was covered, actually, as far away as Australia. I figure I need to do social media to study social media, and if what I was working on had an impact it was my duty to follow through on any interest the public had. (And let’s be clear: I was tweeting about sexism, and I got a lot of hate mail, too.)
Hm. I’m not convinced, either.
Just curious: if Hancock had been more literate in what Facebook means to real people, do you think he still would have done the study? I guess I mean, do you think it’s possible for someone who does grasp the depth of people’s fondness for Facebook to have still gone ahead and done this study and not have felt any ethical qualms about it? It seems you take issue with Hancock’s conduct and responses to the firestorm (fair enough), but do you think that there is value in this type of study?
I think a certain distance from the user’s actual experience, people as people, was almost part of the design of this experiment. People as data, bits.
Hancock contradicted himself again on November 9th, this time on video:
http://youtu.be/fhHzOsyIp84?t=2m19s