[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]
[image: bear, having recently joined SIPS]
i have found scientific utopia.*
sometimes, when i lie awake at night, it's hard for me to believe that science will ever look the way i want it to look,** with everyone being skeptical of preliminary evidence, conclusions being circumscribed, studies being pre-registered, data and materials being open, and civil post-publication criticism being a normal part of life.

then i realized that utopia already exists. it's how we treat replication studies.

i've never tried to do a replication study,*** but some of my best friends (and two of my grad students) are replicators. so i know a little bit about the process of trying to get a replication study published. short version: it's super hard.

we (almost always) hold replication studies to an extremely high standard. that's why i'm surprised whenever i hear people say that researchers do replications in order to get an 'easy' publication. replications are not for the faint of heart. if you want to have a chance of getting a failed replication**** published in a good journal, here's what you often have to do:
- carefully match the design and procedure to the original study. except when you shouldn't. adapt the materials to your local time and culture, but don't change too much. pre-test your materials if possible. get the design approved by people who probably wish you weren't doing this study.
- pre-register everything. be thorough - expect people to assume you were biased (more biased than the original researchers) and that these biases might have led you to inadvertently cherry-pick.
- run a shitload of participants.
- repeat.
- one more time, you forgot a potential moderator that the reviewers just thought of.
- make sure that nowhere in your manuscript do you state with confidence that the evidence shows there is no effect. admit that you might have made a type II error. circumscribe all your conclusions and give serious consideration to alternative explanations, even if you think they're very unlikely.
- make all of your materials publicly available. expect people to question your expertise, so provide as much information as possible about all details of your method.
- make all of your data publicly available. this is not optional. you don't get the same liberties as original researchers - open data is an imperative.
- get through peer review, with a decent chance that one of your reviewers will perceive your manuscript as a personal attack on them.
- once your paper is published, be prepared for it to be picked apart and reanalyzed. your conclusions will not be allowed to stand on their own; they will be doubted, and your results will be discounted for a variety of reasons.
- you will be told by well-respected senior scientists that you are harming your field, or wasting your time, or betraying your lack of creativity, or that you must be trying to take people down.
from the tone of my writing, i wouldn't blame you for assuming that i think we are unfair to replication studies. actually, i think we are unfair to original studies. i would like us to treat original studies more like we treat replication studies (except maybe that last point).

imagine, if you can, a world where we hope for the following in original research:
- pour tons of thought and effort into the study design before you think about collecting data. show the design and procedure to skeptics who don't believe your theory. let them pick it apart and make changes. build a study that even they would agree is a strong test.
- think about whether your materials would work in other times and places, so that researchers who want to replicate your study in other contexts know what factors to consider.
- pre-register your study. document everything assuming people will want to rule out the possibility that you got your result by inadvertently exploiting flexibility in the research process.
- run a shitload of participants (there's a rough sketch of what that can mean right after this list).
- assume your readers will not believe your first study results, and will propose hidden moderators. repeat it a couple times just to be sure, testing some potential moderators along the way (pre-register these, too).
- draw very circumscribed conclusions. seriously consider the possibility that your result is a type I error. entertain alternative explanations that are not sympathetic to your theory, and do not wave them away. sit with them. cuddle them. let them into your heart.
- make all of your materials and data publicly available.
- tell us about anything in your file drawer, let readers decide if it really belongs in the file drawer.
- expect others to reanalyze your data, criticize your design, analyses, and conclusions, and propose post-hoc alternative explanations. admit that they might be right, we can't know for sure until future research directly tests these explanations.
- be accused of letting your bias creep in, in ineffable ways that cannot be cured by pre-registration and open data, and accept that this is a possibility.
- accept that it will take a lot more to determine if the effect is real. consider an adversarial collaboration.
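(a quick aside on "run a shitload of participants": here's a rough back-of-the-envelope sketch of what that can mean in practice, in python. the inputs — a smallish effect of d = 0.2, alpha = .05, 80% power, a simple two-group design — are illustrative assumptions on my part, not numbers from any particular study.)

```python
# rough illustration only: how many participants a well-powered two-group
# comparison can require, under assumed (not prescribed) inputs.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# assumed effect size (cohen's d), significance level, and desired power
n_per_group = power_analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)

print(f"participants needed per group: {n_per_group:.0f}")  # on the order of 400
print(f"total sample size: {2 * n_per_group:.0f}")          # close to 800
```

the exact numbers don't matter; the point is that for small effects, "a shitload" is not an exaggeration.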
of course not every study (original or replication) can have all of these features, and we should be flexible about tolerating studies with various limitations. there are important side effects of raising standards in absolute, black and white ways. we need room for studies that fall short on some criteria but are important for other reasons. i know, because my own work falls far short of many of these ideals. but these are nevertheless ideals we should strive for and value.

one of the beautiful things about replication studies is that they make everyone see the value of these practices. people who didn't care much for open data, pre-registration, etc., are often in favor of those practices when it comes to replication. people who are resistant to the idea that original researchers' motives and biases could lead to systematic distortions in the published literature are more sympathetic to this concern when it comes to replication researchers' motives and biases.

can you imagine what we'd say if a replication author refused to share their data or materials for no good reason? can you imagine the reaction if we found out they hid some studies in the file drawer?***** can you imagine how we'd react if their conclusions went way beyond the evidence?

there seems to be a consensus that replication studies, including materials, data, and results, belong to all of us. we feel we are owed complete transparency, and we expect to all have a say in how the studies should be evaluated and interpreted.

those are the correct reactions, but not just to replications. these reactions provide an opportunity to build on this new common ground. let's apply those values to all research, not just replications.

* no, i don't mean SIPS. i'm barely even going to mention SIPS in this blog post. or our new website. with its new domain name: http://improvingpsych.org/

**** not that we know whether a replication will succeed or fail ahead of time (in all the cases i know of, the replicators were genuinely curious about the effect and open to any outcome). but i'm talking about failed replications here because i think they're subjected to more scrutiny. if the replication results come out positive, you face different challenges in getting those results published (some people think it's harder to publish successful replications than failed ones, but the evidence is not clear).

***** hypocrisy is one of my favorite pastimes. for example, just this morning i ate a corndog for breakfast while simultaneously believing people should refrain from eating meat AND should eat a healthy balanced breakfast.
[image: cj and jöreskog, having recently discovered GIFs]
> that's why i'm surprised whenever i hear people say that researchers do replications in order to get an 'easy' publication.
I would argue that the history of the resistance to change by those in positions of power shows that "inventing something spurious that sounds vaguely plausible and will cause waverers to have second thoughts about whether this change is really such a good idea" is a popular technique.
In this case, have a look at who the people are who are saying this, and consider whether they might just have an interest in things remaining the way they are.
Posted by: STeamTraen | 23 August 2016 at 03:12 AM
"I would argue that the history of the resistance to change by those in positions of power shows that "inventing something spurious that sounds vaguely plausible and will cause waverers to have second thoughts about whether this change is really such a good idea" is a popular technique.
In this case, have a look at who the people are who are saying this, and consider whether they might just have an interest in things remaining the way they are."
I have thought about this, and it puzzles me why "the old guard" seems, at least to me, so reluctant to change.
These people are mostly "established" professors with job security, lots of publications and citations, etc. It's not like the supposedly "bad incentives" that people blame for everything that is wrong in science currently play a role for them.
That is why I wonder if it could have something to do with coming to the realization that they spent their entire lives chasing smoke and have actively contributed to creating a mess of science (e.g., by not submitting their null findings to a journal). That realization might just be too much to cope with psychologically.
I've only published a single "exploratory" article. Thankfully I have not contributed to publication bias, but I now view that publication as not really publication-worthy, because I think exploratory work is not suitable for publication (I reason that it mostly clouds the literature and leads to citations based on very low-quality evidence, which makes it possible to come up with just about any theory/story and "scientifically" back it up).
Even with a single published exploratory article, I have had a very hard time coming to grips with having published what I now consider to be substandard work. I can't even imagine what I would think if I had published 50 or more such articles, with a file drawer to match...
Posted by: Anonymous | 23 August 2016 at 09:09 PM
Thanks for the link to the SIPS page! I've read some of the projects that you all are working on and they look great. I couldn't find a suggestion box on the site, so I thought I'd post the following here:
I like the project concerning writing an overview paper on replicability. That got me thinking. Has SIPS thought about writing a paper about psychological theories?
It may just be me, but I know almost nothing about psychological theories: how to test them optimally, whether psychological science is progressing optimally with regard to evaluating and testing theories, whether these are even the correct terms to use, what influence failed replications could/should have on psychological theories, etc.
I think such a paper might be something that a lot of people could refer to, and use, in various current discussions, and perhaps could be seen as relevant to some of the goals of SIPS.
I don't know if such a project would be useful, but I do know that I haven't been taught anything whatsoever about what exactly psychological theories are, how performing psychological research relates to them, what failed replications could/should imply for psychological theories, etc. I also know that I would be very interested in such a paper.
In the case SIPS hasn't thought about writing such a paper, I thought I'd suggest it here.
Thank you, and all the other SIPS people, for all your efforts!!
Posted by: Anonymous | 24 August 2016 at 04:36 AM