reviewer 2 is not buying it.
i've had a blog post stuck in my head* for a few months now, and the new post on datacolada.org is finally spurring me to write it.
i'll admit it. i'm reviewer 2. i'm the one who doesn't quite believe that you're telling the whole story, or that things are as simple and neat as you make them out to be. that's because we've all been taught to pretty things up (pp. 8-10). it sucks for authors that reviewers are super skeptical, but it's the price we are now paying for all those years of following bem's advice. there are going to be more and more reviewer 2s out there. i'm pretty sure most people have already run into the skeptical reviewer who suspects you had other measures, other data, or that you didn't plan those covariates a priori. if you haven't yet, it's just a matter of time.
there's nothing personal about it. the reason reviewer 2 thinks that you're telling the best possible story you could extract from your data is because why wouldn't you? that's the game we were taught to play, those were the rules until now. so when reviewer 2 asks you to disclose all flexibility in your data analyses, to share your data, to describe all your measures, etc., she isn't accusing you of anything other than having been trained in a pre-revolution era. it's not rude, it's just healthy skepticism.
when i read a paper, i assume the authors are showing the most beautiful result they could get, unless they give me a reason to think otherwise. the fantastic thing about pre-registration is that it's an almost foolproof way of convincing readers and reviewers that you're being honest about what was planned and what wasn't. if you can pre-register, that is definitely what you should do, for all the reasons described in the datacolada post and more.**
what if you can't pre-register? what if the data are already collected? or what if you pre-register one thing, but then you get a new idea about what you can do with the data after you collect them? in other words, when you want to publish something that was in fact exploratory, what's the best you can do to assuage reviewer 2 (and all the skeptical readers)?
first, acknowledge that any solution is second best to pre-registration. as someone who has never pre-registered anything and possibly never will,*** i think it's important to admit this. the fact that some research is really hard to pre-register doesn't mean that it wouldn't be better if it was pre-registered. just like even though i will probably continue to use college student participants, i don't pretend that my research wouldn't be better if i used a more diverse sample. i am perfectly comfortable acknowledging that a certain practice is ideal, and that i lack the resolve to do it. it's called hypocrisy and i'm sticking with it.
second, be transparent. if you want to convince readers that you're not selling them a touched-up version of your results, show them the flaws.**** tell the reader what other analyses you tried, what other studies you ran, how the results look with different decision rules about outliers and covariates, etc. in other words, write paper two in mickey's tale of two papers.
what if being totally transparent takes up too much space? one solution is to put some stuff in an appendix or supplementary materials. i'm not against that, but i don't think it's a good enough solution because most readers won't notice it, so it still ends up letting authors get away with pretending the touched-up version is the whole truth. so, i came up with an additional solution, a new subsection for every results section: your most damning result.
i am pretty enamored of this 'most damning result' idea. i would love to make it a standard section of every paper. i think it would help readers evaluate the strength of your evidence - if you can still convince readers that you have strong evidence for your conclusion, even after showing them the most damning result, that's pretty impressive. like getting a date after posting your best and worst selfie.
i think the "most damning result" section would also encourage us to be more skeptical of our own results. probably most of us have no problem writing off our most damning result as not informative because, in retrospect, of course we shouldn't have expected that manipulation to work/that item to make sense/that cover story to be convincing. but having to at least write up that result like any other might give us some pause. if we look at touched-up data too long, we start to forget what real data look like, and being forced to stare at the blemishes could do us some good.
bottom line:
if you can pre-register, pre-register.
if you can't, be transparent, and show us that you're telling the whole story. like by highlighting your ugliest, most damning result.
* i think it's responsible for my recurring ear infection.
** here's another: if you can get your pre-registered study to work with 25 people per condition,***** i won't harass you about your sample size.
*** i know, i'm a bad, bad person.
**** of course, if showing all the flaws makes your results inconclusive, it's a sign that you should probably go do some more research before writing anything up. transparency is not a silver bullet - if the data don't give a clear answer, transparency won't save you (and will, in fact, make it harder for you to publish your result. but that's a good thing. for science. not for you.)
***** good luck with that.
the nice reviewer. (don't try to pet him.)
I totally agree with footnote 4 (and disagree with Neuroskeptic here). If your worst results mean your data are inconclusive, I guess that means they might not be published, at least not in a top-tier journal. Yes, I understand that we want a complete record of all the data out there, but I'm not yet convinced that all data need to be published. Don't get me wrong, some inconclusive data NEED to be published, especially if they disconfirm previously held beliefs or if the dependent variable (e.g., mortality) is very important. But I'm still not convinced that all inconclusive results are publishable. Great post!
Posted by: Michael Inzlicht | 03 December 2015 at 01:11 AM
I believe that the "most damning result" question was put to the GOP presidential candidates in a debate. It did not result in useful information. Scientists are likely to behave in the same manner; why would they go out of their way to make themselves "look bad?"
Posted by: Chris | 03 December 2015 at 08:48 AM
You know, it's always so easy for professors like you to say these things from your high horse. Then you get applause from all other distinguished professors who think you are doing a fantastic job and making the world a better place. It's one giant circle-jerk.
In the end, you are all focusing on treating the symptoms and not the disease. The focus now is on decreasing questionable research practices by requiring pre-registration, posting data, etc. Those are the symptoms. None of you, none, ever discuss the reason why people engage in such practices. That is the disease. And that disease is the incredibly crushing pressure there is to publish articles. It is no wonder that people cave and start behaving unethically, when getting a job requires X publications. Of course people will cheat to get there! Until that specific issue is addressed researchers will find ways to work the system. This I absolutely guarantee you. You can require pre-registration and someone will find a way around it. It's just a matter of time. I am certainly not saying that pre-registration is not useful, but it is not the vaccine. It is the Tylenol. How to treat the disease requires a much more provocative discussion.
It is reviewers like you, who assume the worst of people, who write mean and unhelpful comments that propagate this vicious cycle.
I will leave a fake email address because I know if I left my real name, it would affect the way reviewers like you view my submissions. That is a fucked up system.
Posted by: Jennifer | 03 December 2015 at 09:46 AM
Hi Jennifer,
Thanks for your comment. I completely agree with you that ultimately we need to treat the disease, which means changing the incentive structure and the way we evaluate people for jobs and promotion. I do think I have spoken out about this, on my blog and in talks, but I agree that's not enough. I also try to emphasize the importance of looking beyond quantity when I am on search committees and when I write tenure letters. Finally, I hope that being involved in the peer review process as editor and reviewer can also help treat the disease, by making it easier for people who do the slow, meticulous kind of science we'd all like to see more of to publish in good journals. I am sure there is more I and people in positions like mine could do, and I would be very interested in hearing ideas about this.
Many of us do care about the unrealistic expectations that junior scholars face, even if we are not always doing enough about this. Here are some papers and blog posts that have touched on these issues:
http://pps.sagepub.com/content/7/6/615.full
http://opim.wharton.upenn.edu/~uws/papers/fewer.pdf
http://psych-your-mind.blogspot.com/2013/06/quality-v-quantity-in-publication.html#more
http://onlinelibrary.wiley.com/doi/10.1111/spc3.12166/abstract (paywall, sorry)
http://sometimesimwrong.typepad.com/wrong/2014/03/having-it-all.html
Although I agree with much of what you write, I am not sure how reviewers who ask for more transparency are propagating this vicious cycle. Perhaps I am naive, but I think reviewers who do this are doing a service to those researchers who want to be transparent and do research without cutting corners, by rewarding that kind of research and holding everyone to that standard. I agree that mean comments in reviews are never a good idea, but we can't equate skepticism with meanness. In my view, skeptical reviewers who push for transparency are part of the solution.
But the bottom line is, I agree the system is messed up. I want to help fix it. I am eager to hear ideas about how we can do this better.
Best,
Simine
Posted by: Simine Vazire | 04 December 2015 at 03:43 AM
Nice post Simine. My only comment is that by presenting one's ugliest and most damning results one also dramatically increases one's chances of the paper being rejected. We've all seen how reviewers gleefully expose "weak" or "unconvincing" findings and that is enough to resoundingly kill one's chances with a prominent journal. As Jennifer says above, this is a cost most heavily felt by junior researchers.
The answer, in my view, is to submit papers to a format that doesn't assess publishability based on results - and that of course (as you know) is Registered Reports. https://osf.io/8mpji/wiki/home/ If our goal is to change the incentive structure we need to change the power structure, and RRs declaw reviewers from sinking papers because they find the results weak, unconvincing, ugly (insert descriptor) etc.
As an editor for RRs at a number of journals, I've seen just how biased peer review actually is. It is becoming thematic that reviewers decide that the method of a pre-registered study was flawed because the reviewer disliked the results, even when the reviewer explicitly *approved* these exact methods before results existed. With RRs, the author is protected against such biased decision making - manuscripts cannot be rejected on these grounds - but with conventional review authors are fully exposed and are easy prey.
That's why I feel that your suggestion of presenting one's ugliest results, while absolutely correct from a scientific and ethical point of view, will never ever (ever ever ever) happen under traditional article formats at prominent journals, where more than anything else, the authors need a good story.
And the moral (for me) is -- everyone get pre-registering and supporting Registered Reports and together we can fix this problem.
Posted by: Chris Chambers | 05 December 2015 at 12:26 AM
I would agree with the point that Jennifer is making, but might take it one step further. Individuals who are drawn to academia are generally very achievement oriented, which can and does influence how they interpret and present data. I would therefore argue that there is as much pressure coming from within the individual as there is from the system.
Also, from a philosophical standpoint, most things in this world are a mix of strengths & weaknesses, and attempts to correct one weakness in a system often lead to another. As a result, it will always be an imperfect process, no matter how you structure it. I have also been surprised on occasion by successes that have occurred, in spite of the poorly structured systems that guide them (i.e., a beautiful mess).
Posted by: Dave Geyer | 05 December 2015 at 03:24 AM