[for flip yourself – part i, see here]
we’ve recently seen a big push to make the scientific process more transparent. opening this process up can bring out the best in everyone – when we know our work will be seen by others, we’re more careful. when others know they can check the work, they trust us more. most of our focus has been on bringing transparency to how research is done, but we also need transparency about how it’s evaluated – peer review has become a core part of the scientific process, and of how scientific claims get their credibility. yet peer review is far from transparent and accountable.
we can help bring more transparency and accountability to peer review by ‘flipping’ ourselves. just like journals can flip from closed (behind a paywall) to open (open access), we can flip our reviews by spending more of our time doing reviews that everyone can see.
one way we can do this is through journals that offer open review, but we don’t need to limit ourselves to that. thanks to preprint servers like PsyArXiv, authors can post manuscripts that anyone can access, and get feedback from anyone who takes the time to read and comment on their papers. best of all, if the feedback is posted directly on the preprint (using a tool like hypothes.is), anyone reading the paper can also benefit from the reviewers’ comments.
closed review might have been necessary in the past, but technology has made open review really simple.* sign up for a hypothes.is account, search your field’s preprint server or some open access journals, and start commenting using hypothes.is. this approach to peer review is ‘open’ in multiple senses of the word. anyone can read the reviews, but also, anyone can participate as a reviewer. evaluation is taken out of the hands of a few gatekeepers and their selected advisors, and out from behind an impenetrable wall.
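reading the open reviews on a preprint is just as easy as writing them. hypothes.is exposes a public search API (`https://api.hypothes.is/api/search`) that returns every public annotation anchored to a given URL. here’s a minimal sketch of querying it from python; the preprint URL in the example is made up, and the shape of the query is the only thing being illustrated:

```python
# a minimal sketch of pulling the public annotations on a preprint via the
# hypothes.is search API. the example preprint URL below is hypothetical.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.hypothes.is/api/search"

def annotation_search_url(preprint_url, limit=20):
    """build a search query for all public annotations anchored to a URL."""
    return API + "?" + urlencode({"uri": preprint_url, "limit": limit})

def fetch_annotations(preprint_url):
    """fetch the matching annotation rows (requires network access)."""
    with urlopen(annotation_search_url(preprint_url)) as resp:
        return json.load(resp)["rows"]

# build (but don't fetch) a query for a hypothetical preprint:
url = annotation_search_url("https://psyarxiv.com/abc12")
```

each row in the response includes the annotation text, its tags, and the annotated document’s URI, so a reader can see the public reviews without even opening the hypothes.is sidebar.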
there are several advantages and risks of open review, many of which have been discussed at length. i’ll summarize some of the big ones here.
advantages:
- less waste: valuable information from reviewers is shared with all readers who want to see it. reviewers don’t have to review the same manuscript multiple times for different journals. everyone gets to benefit from the insights contained in reviews. a few recent examples of public, post-publication reviews vividly illustrate this point: these three blog posts are full of valuable information i wouldn’t have thought of without these reviews (even though all three papers are in my areas of specialization).
- more inclusive: more people with diverse viewpoints and areas of expertise can participate, including early career researchers who are often excluded from formal peer review. personal connections matter less. this will allow for more comprehensive evaluations distributed across a team of diverse reviewers with complementary expertise and interests.
- more credit, better incentives: it’s easier to get recognition for public reviews, and getting recognition for one’s reviews can create more incentives to do (good) reviews.
- better calibration of trust in findings: when a paper is published that’s exactly in my area of expertise, i might catch most of the issues other reviewers would catch (though let’s be honest, probably not). but when we need to evaluate papers even a bit outside our areas of expertise, knowing what others in that area think of the paper can be extremely useful. think of the professor evaluating her junior colleague for tenure based on publications in a different subfield. or the science journalist figuring out what to make of a new paper. or policy makers. instead of relying just on the crude information that a paper made it through peer review, all of us can form a more graded judgment of how trustworthy the findings are, even if the paper is a bit outside our expertise.
risks:
- filtering out abusive comments: one benefit of the moderation provided by traditional peer review is that unprofessional reviews – abuse, harassment, bullying – can be filtered out. they sometimes aren’t, if you believe the stories you hear on social media, but there is at least the threat of an editor catching bad actors and curbing their behavior. there are solutions to this problem in other open online communities (e.g., up- and down-voting comments, and having moderators review flagged comments). perhaps having more eyes on the reviews will lead to a more effective system.
- protecting vulnerable reviewers: many people who may want to participate as reviewers in an open review system could be taking big risks – those with precarious employment, those still in training, or anyone criticizing someone much higher up in the hierarchy. the traditional peer review system allows reviewers to remain unidentified (known only to the editor), which provides more safety for these vulnerable reviewers (if they get invited to review). open review systems should also find a way to allow reviewers to post comments without revealing their identity. this is in tension with the desire to keep out trolls and bullies, though. once again, i think we can look to other communities online to learn what has worked best there. in the meantime, perhaps allowing researchers to post reviews on behalf of unidentified reviewers (much like an editor passes on comments from unidentified reviewers) may be a good stopgap.
- conflicts of interest: the open review system could easily be abused by authors who ask their friends to post favorable comments. conflicts of interest may be more common than we’d like to believe in the traditional system, too, and it would be a shame to exacerbate that problem. in my opinion, all open reviews should begin with a declaration of anything that could be perceived as a conflict of interest, and there should be sanctions (e.g., downvotes or flagging) against reviewers who repeatedly fail to accurately disclose their conflicts.
- unequal attention: if open review is a free-for-all, some papers will get much more attention than others, and that attention will almost certainly be correlated with the authors’ status, among other things. one advantage of the traditional peer review system is that it guarantees at least one pair of eyeballs on every submission (though some form-letter desk rejections leave wide open the possibility that those eyeballs were not aimed directly at the manuscript). of course, status almost certainly affects the way a paper is treated at traditional journals, too. the rich-get-richer “matthew effect” is everywhere, and it will be a challenge for open review. perhaps open review will push the scientific community to more fully acknowledge this problem and develop mechanisms to deal with it.
what now?
i’ve attended many journal clubs where someone, usually a graduate student, asks “how did this paper get into Journal X?” we then speculate, and the rest of journal club is essentially a discussion of what we would’ve said had we been reviewers. often the group comes up with flaws that seem to not have been dealt with during peer review, or ideas that could have greatly improved the paper. the fact that we can’t know the paper’s peer review history, and can’t contribute our own insights as reviewers, is a huge waste. open review can remedy this.
open review has some challenges to overcome. it will not be a perfect system. but neither is the traditional peer review system. it is not uncommon to hold proposals for reform to much higher standards than current policies are held, often because we forget to ask if the status quo has any of the problems the new system might (or bigger ones). one advantage of an open review system is that we can better track these potential problems, and identify patterns and potential solutions. those of us who want open review to succeed will need to be vigilant, dedicated, and responsive. open review will have to be experimental at first, subject to empirical investigation, and flexible.
to start with, i think we need to do what that cheesy saying tells us: be the change you wish to see in the world. here’s what i plan to do, and i invite others to join me in whatever way works for them:
- search for preprints (and possibly open access publications in journals) that are within my areas of expertise.
- prioritize manuscripts that have gotten little or no attention.
- try to keep myself blind to the authors’ identities as i read the manuscript and write my review (this will be hard, but i have years of practice holding my hand up to exactly the right place on my screen as i download papers and scroll past the title page).
- write my review: be professional – no abuse, harassment, or bullying. stick to the science, don’t make it personal. not knowing who the authors are helps with this.
- review as much or as little as i feel qualified to evaluate and have the skills and time to do well. i’ll still try to contribute something even if i can’t evaluate everything.
- after writing my review but before posting it, check the authors’ identities, then declare my conflicts of interest at the beginning of the review (if i have serious conflicts, don’t post the review).
- post my review using hypothes.is.
- contact the author(s) to let them know i’ve posted a review.
- i may also use plaudit to give it a thumbs up, to make it easier to aggregate my evaluation with others’.
- post a curated list of my favorite papers every few months.

this might sound like a lot of work. but it’s not much different from what you’re probably already doing for free for commercial publishers, who take your hard work, hide it from everyone else, give you no credit and little chance of recognition, use your input to curate a collection of their favorite articles, and then sell access to those articles back to your university library, which pays with money that could otherwise go to things you need.
the beauty of open review is that you can do just the bits that are fun or easy for you. if you want to go through and only comment on the validity of the measures used in each study, go for it. if you just want to look at whether the authors made appropriate claims about the external validity of their findings, knock yourself out. if you just want to comment “IN MICE” on the titles of papers that failed to mention this aspect of their study, well, that’s already taken care of. by splitting up the work and doing the parts we’re best at, we can do what closed peer review will rarely accomplish – vet many different aspects of the paper from many different perspectives. and we can help shatter the illusion that there is a final, static outcome of peer review, after which all flaws have been identified.
you’re probably already doing a lot of this work. when you read a paper for journal club, you’re probably jotting down a few notes about what you liked or found problematic. when you read a paper for your own research, you might think of feedback that would be useful to the authors, or to other readers. why not take a few minutes, get a hypothes.is account, and let the rest of the world benefit from your little nuggets of insight? or, if you want to start off easy, stick to flagging papers you especially liked with plaudit. every little bit helps.
want to have your paper reviewed?
by posting a preprint on PsyArXiv, you’re signaling that the paper is ready for feedback and fair game to be commented on. but there are a lot of papers on PsyArXiv, so we could prioritize papers whose authors especially want feedback. if you’d like to indicate that you would particularly like open reviews on a paper you’re an author on, sign up for a hypothes.is account and add a page note to the first page of your manuscript with the hashtag “#PlzReviewMe”.
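a nice side effect of a tag convention like this is that it’s machine-readable: the hypothes.is search API accepts a `tag` parameter, so anyone could query for “PlzReviewMe” page notes and build themselves a review queue. here’s a hedged sketch of that idea – the tag-search endpoint is real, but the sample rows below are invented for illustration:

```python
# sketch: build a queue of preprints whose authors have asked for open review
# by tagging a hypothes.is page note "PlzReviewMe". sample data is made up.
from urllib.parse import urlencode

def tag_search_url(tag, limit=50):
    """search API query for all public annotations carrying a given tag."""
    return "https://api.hypothes.is/api/search?" + urlencode({"tag": tag, "limit": limit})

def review_queue(rows):
    """extract the unique annotated-document URIs from API result rows,
    preserving the order in which they first appear."""
    seen, queue = set(), []
    for row in rows:
        uri = row.get("uri")
        if uri and uri not in seen:
            seen.add(uri)
            queue.append(uri)
    return queue

# invented sample of what the API's "rows" field might contain:
sample_rows = [
    {"uri": "https://psyarxiv.com/abc12", "tags": ["PlzReviewMe"]},
    {"uri": "https://psyarxiv.com/abc12", "tags": ["PlzReviewMe"]},
]
```

a reviewer following the plan above could fetch `tag_search_url("PlzReviewMe")`, run the rows through `review_queue`, and start with the papers that have the fewest other annotations – exactly the “prioritize neglected manuscripts” step.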
constraints on generality
do i think this will be a magic solution? no. it might not even work at all – i really don’t know what to expect. but after many years on editorial teams, executive committees of professional societies, task forces, etc., i’m done waiting for top-down change. i believe that if we start trying new things, take a bottom-up, experimental approach, and learn from our efforts, we can discover better ways to do peer review. i don’t think change will come from above – there are too many forces holding the status quo in place. and to be clear, i’m not abandoning the traditional system – like most people, i don’t feel i can afford to. i’ll keep working with journals, societies, etc., especially the ones taking steps towards greater transparency and accountability. but i’m going to spend part of my newfound free time trying out alternatives to the traditional system, and i hope others do, too. there are many ways to experiment with open review, and if each of us tries what we’re most comfortable with, hopefully some of us will stumble on something that works well.
* if you have a complicated relationship with technology like i do, this guide to annotating preprints will be helpful.
It occurs to me that open review could also reduce the number of reviewer person-hours spent on any given manuscript. If your article is accepted by the 4th journal you submit it to, it has probably had 8 or more reviewers spending time on it. If all reviews were open, the second editor could use the reviews from the first journal to form an opinion about the manuscript (and about whether it's being treated appropriately). Some publishers ask reviewers "Is it OK if we send your review to the editor in the event that the authors resubmit to another of our journals?", but this shouldn't be limited to each publisher's domain.
Posted by: STeamTraen | 03 July 2019 at 09:28 PM