i was recently engaged in a totally civil disagreement on facebook* about whether things are changing in our field in response to all of the concerns about the robustness of our scientific findings, and the integrity of our methods. i said i was optimistic. someone asked me if i could elaborate on why i am optimistic. this is for you, brent roberts.
1. the obvious reasons
the fact that the open science framework exists, that the reproducibility project exists, that social psychology did a special issue on replications, that there have been workshops on reproducibility at NSF, that SPSP had a taskforce on best practices, that APS had a taskforce that led to changes in submission guidelines and procedures at psych science, that psych science commissioned and published an article and tutorials on effect estimation approaches by geoff cumming, that perspectives coordinates and publishes registered replication reports, and that every conference i've been to for at least the last two years has included some presentations on replicability. everyone already knows all of this, but it's easy to take it for granted. i'm pretty sure that if i had told someone in 2007 that all of this would happen in the next seven years, they would not have believed me.
2. i am not a pariah
i am shocked and amazed that people still tolerate me after i spew my not-always-entirely-well-thought-out opinions on my blog. when i started blogging i never thought that 1,500 people would see my posts. i'd like to think this has something to do with my biting sense of humor, but i know it's because there is a huge amount of interest in this topic. other blogs, like sanjay srivastava's and daniel lakens's, get tons of readers, i'm sure. betsy levy paluck has 1,500 followers on twitter. dorothy bishop has 16,000.** think about it, people are choosing to spend their free time reading about these things. lots of people. that blows my mind.
also, brian nosek, who has arguably done more for The Cause than anyone else in social/personality psychology, is almost universally well-liked. one indicator of this is that brian is the winner of this year's spsp prize for distinguished service to the field of social/personality psychology. perhaps there are people out there who hate his guts, but no one dares say it in public because his reputation is that strong. if nothing else, i am extremely proud of my field for recognizing the value of everything he has done.
3. key people
there are several constituencies whose opinions i think are especially relevant for whether or not optimism is warranted.
the first is the decision-makers.
as suggested by my last post, i think journal editors are among the people with the most power to implement change. and the people who select the journal editors get to choose who wields this power. which brings me to a little story.
recently i was at a reproducibility workshop at the center for open science. in one of the breakout sessions, a group of us were discussing what journals and societies can do to increase the reproducibility of social science research. several editors in chief were there, as well as several executive officers of Very Large professional social science organizations. at one point, one of the executive officers said that it would be difficult to make large-scale changes across all of the association's journals because not all editors would be on board, even with policies that have broad support from members. at this point, i said something i have been thinking for a long time: if a society claims to value X, it should appoint editors who share that value. for example, if a society explicitly values theoretically-rich research, it should select editors in chief for its journals who share that value and will implement it. likewise, if a society values particular research practices, it should make agreement with that value a litmus test for selecting editors in chief. what happened next made me extremely hopeful: the executive officer said that it might make sense to ask candidates for editor in chief of the association's journals to submit a statement on reproducibility. i'm still waiting to see if that will happen. if it does, it will make me very happy. note that i'm not saying every society must value reproducibility (though i think they should). i'm just saying that if a society does explicitly claim to value it, then it makes sense that it should care what its editors in chief think about the issue. and even then, it makes sense to accept a range of views, but some views - those explicitly in tension with the society's values - should probably be dealbreakers.
the second constituency that matters a lot: the youths. i may be wrong, but my strong impression is that there is very broad support for change among junior scientists in our field. in fact, i have been amazed and heartened by grad students' interest in these issues. i am also surprised at how many of the prospective graduate students who contact me mention this issue. these are people who have not yet started their phd. and they are looking for mentors who care about these issues. in my opinion, this interest among early career researchers is going to trickle up. if our graduate students and junior colleagues want to talk about these issues, we are going to talk about these issues. to all of you early career people out there, keep the pressure on. i don't mean be aggressive about it or dogmatic - in fact, if you just joined the field i would encourage you to read and listen a lot before making up your mind - but keep bringing it up, keep asking questions, keep expressing your concern and interest about these issues. your voice matters more than you think, and it gives the rest of us hope.
why it matters that things are changing
why does it matter if optimism is warranted? i think it matters for a few reasons.
first, if things are changing and most people are on board with these changes, that means that writing and talking about these issues, spending some of your research time on them, is not career suicide. it means it's ok for graduate students to spend some (probably not a lot, but some) of their time on this. my limited experience so far suggests that they will still get good jobs. i could be wrong, and this is a dangerous conclusion to come to if i am wrong, but i don't think i am.***
second, and this is perhaps the most important point i want to make in this post, if what we're doing is working, we don't need to change our approach.
so far, i can say that i agree with and stand by most of the efforts that have been made in the name of scientific integrity/reproducibility etc. i am extremely proud of the people who are carefully, rigorously carrying out replication studies (and i don't like it one bit when their motives are called into question - these people are doing a gigantic service to the field and are being incredibly diligent and careful. most of them are the model of scientific rigor).****
i read almost all of the discussions on the facebook ISCON page, and while there are occasionally remarks (on both 'sides') that i find unnecessarily personal or emotional, i find them to be overwhelmingly calm, reasonable, and fruitful. i wish there were more women participating in those discussions, but overall i am proud of the level of discourse there and elsewhere.
what i like about the conversation that is happening is that it's about the science. it is not about the people behind the science, or their motives. those things should not come into it. if you want to vent about specific people and their motives, start a gchat group with your friends. but don't publicly accuse someone of having bad motives, or lying, or whatever.
this goes for criticisms of original research and criticisms of replication studies: don't tell me what you think the authors' motives were, tell me what's wrong with the science.
the flip side of this is that it is completely fair game to criticize people's methods, results, and interpretation of their results. that is not personal. that is not bullying. that is just science, and in fact it's good science. science without critical evaluation is not science. the fact that it sometimes happens post-publication does not make it unscientific or personal. and a personal attack in response to purely scientific criticism is bullying.
here's a good rule: critical comments about research should take the same tone as reviewers' comments during the peer review process. we don't tolerate personal attacks there, and we shouldn't in our public discourse. but we should tolerate, and even encourage, skepticism and criticism of the science. i know that's a nuanced position, but i'm sticking with it.
i am afraid that pessimism is going to drive people i mostly agree with to do things i don't like in the name of a cause i care deeply about. i'm afraid people are going to get impatient, think that our current approach isn't working, and start getting emotional or personal. besides the fact that that would be wrong and unscientific, it would also backfire. it will not convince anyone, and it may turn off people who are still deciding whom to listen to.
so if you are one of those people who want things to change and are getting impatient, i hope you will give the current approach a little more time. every month a new journal adopts new disclosure guidelines, data sharing policies, methodological standards, etc. i know that people have been talking about reforming our research practices for decades, and for a long time nothing changed. but things are changing now. so many things.
No one dares say they hate Nosek's guts in public? Well, let me be the first, if only to help unleash the backlog. Anyone whose last name can be pronounced either as Nosack or Nosex has got to be a douche, amirite?
Posted by: Arina Bones | 02 December 2014 at 07:28 AM
Dr. Bones, you are an embarrassment to yourself.
#hatersgonnahate
Posted by: Brian Nosek | 02 December 2014 at 09:14 AM
I don't appreciate your personal attack on Arina, Brian. Arina's criticism was purely scientific.
Posted by: simine | 02 December 2014 at 09:26 AM
In reference to point 3, key people, I recommend more people throw their hat in the ring for editing. I remember a few years ago (when I was an associate editor at pspb) suggesting to a reproducible-science-minded person that he (it was a he) consider being an editor, and he said that if he did, no papers would be published. That person went on to make huge impacts in other ways. But I hope more people with varied backgrounds seek out editorial positions. It is often thankless (I think I have more enemies now), but it is arguably an influential place to be as standards are changing among authors AND reviewers.
Example: I recently handled (at jesp) a paper where I asked the authors for a direct replication. The authors subsequently reported two attempts, with larger samples, neither one yielding a significant effect or anywhere near the size of the effect from the initial (greatly underpowered) study. The authors still argued for their effect, based on a meta-analysis showing the overall effect to be different from 0 (p was about .045). However, the effect size from the first study (the only one significant on its own) lay OUTSIDE the 95% CI (and excluding that one atypical finding from the meta-analysis would have made the overall effect not different from 0). The response of one reviewer: collapse the data from the three studies into a single study, to avoid the "awkward" finding that the study with the tiniest sample had triple the effect size of the other studies. If this is the kind of issue you or readers of your blog would like to be able to make decisions on, it might be worth considering throwing your hat in the ring for editing.
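To make the arithmetic concrete, here is a minimal fixed-effect (inverse-variance) meta-analysis sketch in Python. To be clear, the effect sizes and standard errors are invented for illustration, not the actual data from that paper; they only mimic the pattern above (one tiny study with a huge effect, two larger replications with much smaller ones):

```python
# minimal fixed-effect (inverse-variance) meta-analysis sketch.
# all numbers below are made up for illustration only.
from math import sqrt
from statistics import NormalDist

def meta(studies):
    """studies: list of (effect, standard_error). Returns (pooled d, pooled se, z, two-sided p)."""
    weights = [1 / se ** 2 for _, se in studies]          # inverse-variance weights
    d = sum(w * eff for w, (eff, _) in zip(weights, studies)) / sum(weights)
    se = sqrt(1 / sum(weights))
    z = d / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return d, se, z, p

tiny_study = (0.90, 0.35)                     # small n, huge effect, significant alone
replications = [(0.15, 0.16), (0.15, 0.16)]   # larger n, much smaller effects

d, se, z, p = meta([tiny_study] + replications)
lo, hi = d - 1.96 * se, d + 1.96 * se
print(f"all three studies: d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.3f}")
# the tiny study's own d = 0.90 falls outside the pooled 95% CI

d2, se2, z2, p2 = meta(replications)
print(f"replications only: d = {d2:.2f}, p = {p2:.3f}")
# without the tiny study, the pooled effect is no longer significant
```

Pooling all three studies gives a just-significant overall effect whose 95% CI excludes the tiny study's own estimate, while pooling only the replications does not reach significance - exactly the pattern that should make an editor suspicious.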
Posted by: Tony Freitas | 03 December 2014 at 09:16 AM
I love you, Dr. Bones! Please write more articles :)
Posted by: realitybites | 05 December 2014 at 09:37 AM