Episode 114 - Ethics in Science, with Special Guest Interview, Dr. Jeffrey Robbins
Recap: This episode was spurred by the #FacebookExperiment, in which researchers tried to influence the moods of ≈700,000 people without their explicit knowledge or consent. Using that as a starting point, I interview my father (Dr. Jeffrey Robbins) about ethics in science, especially in human research. I then give my own take on the subject, bring it back to ethics in general, and contrast that with the ethics of pseudoscientists.
Puzzler for Episode 114: There was no Puzzler in this episode.
Q&A: There was no Q&A in this episode.
Additional Materials:
- References
- Kramer, A.D.I., Guillory, J.E., and Hancock, J.T. (2014) "Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks." Proceedings of the National Academy of Sciences, 111:24, 8788-8790. doi: 10.1073/pnas.1320040111.
- Declaration of Helsinki: via Wikipedia || text via the World Medical Association
- American Psychological Association: Human Research Protections (with both legislation and guidance)
- US Department of Health and Human Services: Code of Federal Regulations, Title 45 (Public Welfare) Part 46 (Protection of Human Subjects)
- Wikipedia: Ethical Research in Social Science
- News Stories:
- Raw Story: UK Regulator Investigates Facebook Over Psychological Experiment on Users
- CNET: Facebook's Emotion Manipulation Study Faces Added Scrutiny
- CNET: Facebook's Mood Study: How You Became the Guinea Pig
- CNET: Facebook Is Always Trying to Alter People's Behavior, Says Former Data Scientist
- Wired: Don't Worry, Facebook Still Has No Clue How You Feel
- Time: Facebook Experiment Shows How Researchers Rely on Tech Companies ("The Real Reason You Should Be Worried About that Facebook Experiment")
- CNN: Opinion: Did Facebook's Experiment Violate Ethics?
- Forbes: Facebook Added 'Research' to User Agreement 4 Months After Emotion Manipulation Study
- Logical Fallacies / Critical Thinking Terms addressed in this episode: Ethics
- Relevant Posts on my "Exposing PseudoAstronomy" Blog
Transcript
Intro: There’s no actual claim for this episode; this is one of those that falls outside the standard, straightforward debunking fare, or even the topics I can really tie to physics, astronomy, or geology. Though I’ll do a bit of that in the wrap-up just to show I can. I was all set to do the Norway Spiral episode when it was announced, around June 28, that Facebook had published a study in a leading science journal about its ability to influence the mood of people by showing them happy or sad news items.
I was really ticked. Seeing that story made me very angry, and I posted several follow-ups to Facebook, or at least to my personal Facebook page, preferring to leave my righteous indignation to those who call me “Friend.” Eric B. suggested, on June 29, that I do a podcast episode about it. I thought about it, and then decided that this is as good a time as any, and it will let me bring in perhaps one of the V-est of VIPs, my dad.
Interview with Dr. Jeffrey Robbins
Bio: Jeff Robbins is currently Professor of Pediatrics at the largest Children's Hospital in the United States. He heads the Division of Molecular Cardiovascular Biology and is Executive Director of the Heart Institute. He has won numerous national and international awards for his research in cardiovascular disease and oversees a number of clinical trials. Though his mother and I are still waiting for him to get a Nobel.
Because this was a live interview, there is no transcript. Here is a list of the questions that we discussed:
- Question: I don’t want to take up too much of your time, so let’s cut right to the chase. First off, in broad brushstrokes, can you describe general rules of ethics in scientific research?
- Question: Ethics in science is something that is typically overlooked, or at least pushed to the wayside, in almost every scientific endeavor EXCEPT when dealing with anything medical, because then we’re dealing with animals and humans. What are the ethics there, and what are the considerations?
- Question: What generally needs to be done to get approval to do human research, and what are the ethical guidelines when doing it? [IRB, informed consent, right to be removed]
- Question: What is it like in other countries?
- Question: What happens when someone is found to not follow the rules?
- Question: Personally, what do you think of what Facebook did? And, do you think that the Facebook employee should be held to a different ethical standard than the two co-authors from Cornell?
- Question: Anything else that you think listeners should know about research ethics, especially as they may pertain to human research?
My Take on the Facebook Study
In the interest of full disclosure, I wrote this part before I interviewed my dad, so my views here are not influenced by his.
With that in mind, I think that the Facebook study was incredibly unethical. And before I start getting those comments, let me say this: I fully recognize that Facebook is not legally bound by the same ethics as university researchers receiving public money. I also fully recognize that Facebook is just one of many internet companies that routinely perform these kinds of tweaks to try to alter user behavior and better sell their products. And I fully recognize that advertisers have been using psychology studies for decades, maybe even centuries, to try to better sell their products. I recognize that the study itself found a pretty minor effect, like observing one word change in 50,000. And I recognize that other researchers who actually do this kind of work have said that the methods used were bad and don't actually show what the authors claim.
I realize all of that. I still think that the study was unethical. I think that the professional researchers involved should have refused on ethical grounds because there was no specific informed consent or ability to opt out.
I also think that the claim that the terms and conditions for Facebook saying that user data may be used for “research” is too vague to ethically cover this kind of study.
At its core, Facebook wanted to see if they could alter human emotions — CHANGE how you are feeling. The methods and results don’t matter; what matters is the intent, and the experiment was designed to do exactly that. And I think the ethics of that MUST err on the side of caution. Again, I realize the end effects were very minor, but given the sample size of nearly 700,000, you can be guaranteed that many people who were part of this study suffered from depression. And, going in, Facebook did not know the magnitude of the effect. Instead of being 0.002%, it may have been 10%. THAT’s why research ethics are in place.
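To put rough numbers on that uncertainty, here is a minimal back-of-the-envelope sketch (my own illustration, using the ≈700,000 sample size and the two hypothetical effect rates mentioned above):

```python
# Back-of-the-envelope: how many users a given adverse-effect rate would
# touch at the study's sample size. The rates are the hypothetical bounds
# mentioned above, not figures from the paper.
sample_size = 700_000  # approximate number of users in the study

for rate in (0.00002, 0.10):  # 0.002% and 10%
    affected = sample_size * rate
    print(f"At a {rate:.3%} effect rate: ~{affected:,.0f} users affected")
```

Even at the low end, that is roughly a dozen real people; at the high end, it is tens of thousands. Nobody knew in advance where on that spectrum the result would land, and that is exactly the situation research ethics are designed for.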
The Facebook non-apology is also rather disturbing. I will read it in full; it was posted on the lead author’s Facebook page. And just so you know, the other authors were instructed to stay quiet about this and direct all inquiries to Facebook, at least in the first few days.
OK so. A lot of people have asked me about my and Jamie and Jeff's recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook. We didn't clearly state our motivations in the paper.
Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody's posts were "hidden," they just didn't show up on some loads of Feed. Those posts were always visible on friends' timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.
And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it -- the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.
The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.
While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.
Nowhere in there is an actual acknowledgment of the ethical issues. It’s an “I’m very sorry for the way the paper described the research and any anxiety it caused.” Not a “We shouldn’t’ve done this without informed consent, and we’ve taken steps to put in an ethics department that will review ALL Facebook research in the future and make sure that it complies with academic standards.”
Also, Forbes pointed out, two days after this all broke in the news-o-sphere, that in Facebook’s Terms and Conditions from when the study was done in January 2012, the “your data may be used for research” line was not present:
In January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that “research” is something that might happen on the platform.
Four months after this study happened, in May 2012, Facebook made changes to its data use policy, and that’s when it introduced this line about how it might use your information: “For internal operations, including troubleshooting, data analysis, testing, research and service improvement.” Facebook helpfully posted a “red-line” version of the new policy, contrasting it with the prior version from September 2011 — which did not mention anything about user information being used in “research.”
And the Forbes article points out that there was no filter to ensure participants were adults, meaning people under 18 were likely part of this without their parents’ consent.
But I’ll say it more explicitly this time: I think an explicit opt-in needed to happen for this kind of study. Burying this in lengthy Terms and Conditions that they KNOW no one reads simply does not pass ethical muster, so far as I’m concerned.
With that in mind, Facebook has its defenders. I disagree with them. And the journal itself has now acknowledged that it should have been more thorough in vetting the ethics of this study when deciding whether to publish it. In the version now online, there is an extra page at the beginning, headlined “Editorial Expression of Concern” and signed by the editor-in-chief, Inder M. Verma. In full, it reads:
PNAS is publishing an Editorial Expression of Concern regarding the following article: “Experimental evidence of massive-scale emotional contagion through social networks,” by Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock, which appeared in issue 24, June 17, 2014, of Proc Natl Acad Sci USA (111:8788–8790; first published June 2, 2014; 10.1073/pnas.1320040111). This paper represents an important and emerging area of social science research that needs to be approached with sensitivity and with vigilance regarding personal privacy issues.
Questions have been raised about the principles of informed consent and opportunity to opt out in connection with the research in this paper. The authors noted in their paper, “[The work] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” When the authors prepared their paper for publication in PNAS, they stated that: “Because this experiment was conducted by Facebook, Inc. for internal purposes, the Cornell University IRB [Institutional Review Board] determined that the project did not fall under Cornell’s Human Research Protection Program.” This statement has since been confirmed by Cornell University.
Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the “Common Rule”). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.
I really hope that Cornell’s IRB takes a lot of flak for this, and that they re-examine their policies.
Bringing It Back to Astronomy
In an attempt to return to the general subject matter of this podcast: do ethics play a role in more of the hard sciences, such as astronomy, geology, or physics? They do, but to a much smaller degree, because we typically don’t deal with people.
But they are always there. For example, when I go to write a grant, I have to answer questions about whether my research will involve human studies. Or culturally sensitive studies. Or anything related to Native Americans. If it does, there is a lot of extra paperwork to file and approvals to get.
Scientific rules of ethics also play a general role in data-gathering and reporting. Simply put, ethics would require that we don’t fake our data. And other than in undergraduate labs sometimes, we don’t. Or very, very rarely, and in those cases, they are almost always rooted out (and Australians listening should stop sniggering). Science is self-correcting, because it requires objective duplication, and when stuff can’t be duplicated, we may start to look at the person who conducted the work.
Take the discussion in Episode 55, the interview with Brian Hynek about ET life, and our discussion of GFAJ-1, the bacterium discovered by a postdoctoral researcher and claimed to have replaced the phosphorus in its DNA with arsenic. The team did not do the tests required to really show that, and it later turned out that their work was kinda shady.
While the bacterium did its job and Got Felisa A Job (GFAJ), that controversy has caused many of us to no longer take her at her word. Looking at her papers, she hasn’t published since 2011, and she hasn’t been first author on a peer-reviewed paper or even a conference abstract since GFAJ-1, back in 2010.
Contrast that with the ethics of pseudoscientists, and with the accusations of poor ethics that pseudoscientists fling at real scientists. I have been accused by at least three pseudoscientists of lying, of cheating in some way, of being a shoddy scientist, or of using paid time at work or work resources to do my outreach stuff, like this podcast.
All of those would be ethical violations (except being a shoddy scientist, and I think my publication record speaks against that). And any of those serious ethical violations would threaten my job or career in general. It is illegal for me to use work resources - because mine were paid for by a NASA grant - to do anything but my actual work. While no one’s REALLY going to care if I spend 2 minutes on Facebook, these kinds of things are actually monitored. And a pseudoscientist simply making that accusation could trigger a serious ethics investigation. Whether they realize it or not, that shows that we, in academia and science in general, take ethics very seriously.
Again, contrast that with the pseudoscientist. Many that I have dealt with really don’t have ethical qualms with lying or faking data. How do I know? Because it’s been pointed out to them, and they have acknowledged receipt of that information, yet they continue to make the claims or use the data.
Alternatively, some simply lie about what people have said or their positions on subjects. Join us on the Facebook page for the podcast, and look at June 24 for an example on that.
Wrap-Up
By way of wrap-up, the purpose of this episode is to explain to those of you who aren't in science some more of what goes on behind the scenes. Yes, science is a human endeavor, and as such the people involved can make mistakes and do stupid things and not play by the rules that the rest of us do.
But such behavior is particularly rare, especially compared with many other fields. When a scientist is found to have faked a major study, it is front-page news. When a politician is found to have lied or broken general ethics rules, it's a normal day.
And it would be nice if pseudoscientists understood that. By way of another example, take climate change. People could legitimately question the data, the methods, or the conclusions. But when the only retort is "FRAUD" on the part of every researcher working in climate research, perhaps now you understand why that's just, generally, a non-starter for other scientists. This is why scientists consistently rank among the most trusted of all professions when polled.
So, in the end, I think that research ethics are important, and most scientists take them very seriously. Many pseudoscientists don’t. Many corporations don’t. Bringing this back to the initial topic: Facebook may think that what it did was fully justified and covered by its Terms and Conditions, but I think the majority of the scientific community does not. And the fact that two researchers from Cornell (one of whom is now at the University of California, San Francisco) participated in this work, and that Cornell’s IRB took a pass, should be very troubling. I hope the academic authors are held accountable in some way.