[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]
bear, having recently joined SIPS
i have found scientific utopia.*

sometimes, when i lie awake at night, it's hard for me to believe that science will ever look the way i want it to look,** with everyone being skeptical of preliminary evidence, conclusions being circumscribed, studies being pre-registered, data and materials being open, and civil post-publication criticism being a normal part of life.

then i realized that utopia already exists. it's how we treat replication studies.

i've never tried to do a replication study,*** but some of my best friends (and two of my grad students) are replicators. so i know a little bit about the process of trying to get a replication study published. short version: it's super hard.

we (almost always) hold replication studies to an extremely high standard. that's why i'm surprised whenever i hear people say that researchers do replications in order to get an 'easy' publication. replications are not for the faint of heart. if you want to have a chance of getting a failed replication**** published in a good journal, here's what you often have to do:
- carefully match the design and procedure to the original study. except when you shouldn't. adapt the materials to your local time and culture, but don't change too much. pre-test your materials if possible. get the design approved by people who probably wish you weren't doing this study.
- pre-register everything. be thorough - expect people to assume you were biased (more biased than the original researchers) and that these biases might have led you to inadvertently cherry-pick.
- run a shitload of participants.
- run it one more time - you forgot a potential moderator that the reviewers just thought of.
- make sure that nowhere in your manuscript do you state with confidence that the evidence shows there is no effect. admit that you might have made a type II error. circumscribe all your conclusions and give serious consideration to alternative explanations, even if you think they're very unlikely.
- make all of your materials publicly available. expect people to question your expertise, so provide as much information as possible about all details of your method.
- make all of your data publicly available. this is not optional. you don't get the same liberties as original researchers - open data is an imperative.
- get through peer review, with a decent chance that one of your reviewers will perceive your manuscript as a personal attack on them.
- once your paper is published, be prepared for it to be picked apart and reanalyzed. your conclusions will not be allowed to stand on their own; they will be doubted, and your results will be discounted for a variety of reasons.
- you will be told by well-respected senior scientists that you are harming your field, or wasting your time, or betraying your lack of creativity, or that you must be trying to take people down.

from the tone of my writing, i wouldn't blame you for assuming that i think we are unfair to replication studies. actually, i think we are unfair to original studies. i would like us to treat original studies more like we treat replication studies (except maybe that last point).

imagine, if you can, a world where we hope for the following in original research:
- pour tons of thought and effort into the study design before you think about collecting data. show the design and procedure to skeptics who don't believe your theory. let them pick it apart and make changes. build a study that even they would agree is a strong test.
- think about whether your materials would work in other times and places, so that researchers who want to replicate your study in other contexts know what factors to consider.
- pre-register your study. document everything assuming people will want to rule out the possibility that you got your result by inadvertently exploiting flexibility in the research process.
- run a shitload of participants.
- assume your readers will not believe your first study results, and will propose hidden moderators. repeat it a couple times just to be sure, testing some potential moderators along the way (pre-register these, too).
- draw very circumscribed conclusions. seriously consider the possibility that your result is a type I error. entertain alternative explanations that are not sympathetic to your theory, and do not wave them away. sit with them. cuddle them. let them into your heart.
- make all of your materials and data publicly available.
- tell us about anything in your file drawer, let readers decide if it really belongs in the file drawer.
- expect others to reanalyze your data, criticize your design, analyses, and conclusions, and propose post-hoc alternative explanations. admit that they might be right - we can't know for sure until future research directly tests these explanations.
- be accused of letting your bias creep in, in ineffable ways that cannot be cured by pre-registration and open data, and accept that this is a possibility.
- accept that it will take a lot more to determine if the effect is real. consider an adversarial collaboration.

of course not every study (original or replication) can have all of these features, and we should be flexible about tolerating studies with various limitations. there are important side effects of raising standards in absolute, black-and-white ways. we need room for studies that fall short on some criteria but are important for other reasons. i know, because my own work falls far short of many of these ideals. but these are nevertheless ideals we should strive for and value.

one of the beautiful things about replication studies is that they make everyone see the value of these practices. people who didn't care much for open data, pre-registration, etc., are often in favor of those practices when it comes to replication. people who are resistant to the idea that original researchers' motives and biases could lead to systematic distortions in the published literature are more sympathetic to this concern when it comes to replication researchers' motives and biases.

can you imagine what we'd say if a replication author refused to share their data or materials for no good reason? can you imagine the reaction if we found out they hid some studies in the file drawer?***** can you imagine how we'd react if their conclusions went way beyond the evidence?

there seems to be a consensus that replication studies, including materials, data, and results, belong to all of us. we feel we are owed complete transparency, and we expect to all have a say in how the studies should be evaluated and interpreted.

those are the correct reactions, but not just to replications. these reactions provide an opportunity to build on this new common ground. let's apply those values to all research, not just replications.

* no, i don't mean SIPS. i'm barely even going to mention SIPS in this blog post. or our new website.
with its new domain name: https://improvingpsych.org/

**** not that we know whether a replication will succeed or fail ahead of time (in all the cases i know of, the replicators were genuinely curious about the effect and open to any outcome). but i'm talking about failed replications here because i think they're subjected to more scrutiny. if the replication results come out positive, you face different challenges in getting those results published (some people think it's harder to publish successful replications than failed ones, but the evidence is not clear).

***** hypocrisy is one of my favorite pastimes. for example, just this morning i ate a corndog for breakfast while simultaneously believing people should refrain from eating meat AND should eat a healthy balanced breakfast.
cj and jöreskog, having recently discovered GIFs