i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others. but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words. i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them). there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.
i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?" here are their answers, in three parts, which i will post in three separate blog posts this week.
find part i of the series here (with a longer introduction)
find part iii of the series here
Part II: Emma Henderson, Anne Israel, Ruben Arslan, and Hannah Moshontz

Emma Henderson

I feel safer than most to embrace open research because I’m not set on staying in academia. However, my lack of trepidation is not the case for most ECRs: there’s a constant background radiation of work-based anxiety amongst those researchers who would, in an ideal world, be uncompromisingly bold in their choices. But they’re hampered by a “publish or perish” culture and a lack of sustainability and security in their jobs (if they have jobs in the first place).
The decision to embrace open research shouldn’t leave us vulnerable and looking for career opportunities elsewhere - taking our skills and enthusiasm with us. Academic freedom is the scaffolding for best research practice: unrestrained exploration, and a loyalty to data, regardless of whether it yields “desired” or “exciting” outcomes.
As a community we have the tools, the talent, and the tenacity to change things for the better, but to sustain these changes we need fundamental reform in both employment and research evaluation. Here we need established, tenured academics to educate publishers, employers, funders, and policy makers, pointing them towards research that prizes integrity over performance. People higher up need to aim higher.

Anne Israel

I am mainly struggling with the experience that collaborative research projects across different psychological (sub-)disciplines are still rare and difficult to implement, even though there is often substantial overlap between key research questions. When I entered my PhD program, I thought the essence of research was to ask and answer smart questions in multidisciplinary teams in order to understand complex phenomena from different perspectives and to make our research understandable to the public as well as to neighboring fields. Instead, I often feel that researchers nowadays spend a large amount of time fighting over limited resources by trying to prove that their way of doing research is the right one, that their questions are the better ones, and that their ideas are superior to those of their colleagues.
Don’t get me wrong - I am aware that collaborating with others can be quite a challenge: we learn different research practices, we speak different research dialects, and in order to work together productively we need to invest a lot of chronically scarce resources (such as money, time, and patience). However, in my opinion, not investing these resources cannot be an option, because one good integrative research project can be worth a lot more than ten isolated ones. Moreover, complex research questions require complex methods and as much competence as we can muster to answer them adequately. Thus, it is about time that we overcome the constraints currently discouraging interdisciplinary work, such as outdated incentive structures that value first authorships over teamwork, or unequal pay across different subdisciplines, to name just a few examples. We shouldn’t forget that the major goal of research is gaining a deeper understanding of important phenomena and providing the public with relevant new insights. In the end, we are the people who build the system. I hypothesize it’s worth it - let’s collect data.

Ruben Arslan

Have you ever decided not to read a friend's paper too closely (or even not at all)?
I have. We need more public criticism of each other's work. I won't pretend I love getting reviews as an author, but I like reading others' reviews when I'm a reviewer. I often learn a great deal. Many colleagues know how to identify critical flaws in papers really well, but all that work is hidden away. The lack of criticism makes it too easy to get away with bad science. No matter how useful the tools we make, how convincing the arguments we spin, and how welcoming the communities we build, good, transparent, reproducible, open science requires more time to publish fewer papers. We cannot work only on the benefits side. There need to be bigger downsides to p-hacking, to ignoring genetic confounding, to salami-slicing your data, to overselling, and to continuing to publish candidate gene studies in 2019, to name a few.
Maybe these problems are called out in peer review more often now, but what do you do about people who immediately submit elsewhere without revisions? Two to three reviewers pulled from a decreasingly select pool. Roll the dice a few times and you will get through.
So, how do we change this? The main problem I see with unilaterally moving to a post-publication peer review system (flipping yourself) is that it will feel cruel and unusual to those who happen to be singled out in the beginning. I certainly felt it was a bit unfair to the two teams whose work I happened to read in a week when I was procrastinating something else. I also had mixed feelings because their open data let me write a post-publication peer review with a critical re-analysis. I do not want to deter data sharing, but then again open data loses all credibility as a signal of good intentions if nobody looks.
I thought unilaterally declaring that we want the criticism might be safer and would match critics with those receptive to criticism, so I put up an anonymous submission form and set bug bounties. So far I have gotten only two submissions through the form and no takers on the bug bounties.
So, I think we really need to just get going. Please don't go easy on early-career researchers either. I'm going on the job market with at least two published critical commentaries on my work, three corrections, and one erratum. Despite earning a wombat for the most pessimistic prediction at this year's SIPS study prediction workshop, I don't feel entirely gloomy about my prospects.
I'd feel even less gloomy if receiving criticism and self-correction became more normal. Simine plans to focus on preprints that have not yet received a lot of attention, but I think there is a strong case for focusing on "big" publications too. If publication in a glamorous* journal reliably led to more scrutiny, a lot more people would change their behaviour.
* I'd love it if hiring criteria put less weight on glam and more on quality, but realistically there will not be metrics reform any time soon, and we cannot build judgments of quality into our metrics if reviews are locked away.

Hannah Moshontz

I think that being new to the field doesn't necessarily give me any insight into truly new issues that others haven't identified or started to address, but it does give me some insight into the abstract importance of issues independent of their history or causes. I also think that being somewhat naive to the history and causes of issues in the field helps me see solutions with more clarity (or naivete, depending on your perspective!). There are two issues that I think people don't pay enough attention to or that they see as too difficult to tackle, and that I see as both critical and solvable.
The first issue is that most research is not accessible to the public. We spend money, researcher time, and participant time conducting research only to have the products of that research be impossible or hard to access for other scholars, students, treatment practitioners, journalists, and the general public. In addition to the more radical steps that people can take to fundamentally change the publication system, there are simple but effective ways that people can support public access to research. For example, individual researchers can post versions of their manuscripts online. Almost all publication agreements (even those made with for-profit publishers) allow self-archiving in some form, and there are a lot of wonderful resources for researchers interested in responsibly self-archiving (e.g., SHERPA/RoMEO). If each researcher self-archived every paper they'd published by uploading it to a free repository (a process that takes just a few minutes per paper), almost all research would be accessible to the public.
The second issue is that there are known (or know-able) errors in published research. I think that having correct numbers in published scientific papers is a basic and important standard. To meet this standard, we need systems that make retroactive error correction easy and common, and that don't harm people who make or discover mistakes. There have been many exciting reforms that help prevent errors from being published, but there are still errors in the existing scientific record. Like with public access to research, there are small ways to address this big, complicated issue. For example, rather than taking an approach where errors are individually reported and individually corrected, people who edit or work for journals could adopt a systematic approach where they use automated error detection tools like statcheck to search a journal's entire archive and flag possible errors. There are many more ways that people can tackle this issue, whether they are involved in publishing or not (e.g., requesting corrections for minor errors in their own published work, financially supporting groups that work actively on this issue, like Retraction Watch).
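To make the idea of automated, archive-wide error detection concrete, here is a minimal, hypothetical sketch of the kind of consistency check a tool like statcheck performs. (statcheck itself is an R package with its own interface; the Python below is only an illustration, assuming t-tests reported in APA style such as "t(28) = 2.20, p = .03", and it recomputes the two-tailed p-value from the reported statistic.)

```python
# Toy consistency check in the spirit of statcheck (illustration only; the
# real tool is an R package). Handles only APA-style t-test reports.
import re
from scipy import stats

PATTERN = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.?\d*),\s*p\s*=\s*(?P<p>\.\d+)"
)

def check_reported_t_tests(text, tolerance=0.01):
    """Yield (reported string, recomputed p) where reported and recomputed p disagree."""
    for match in PATTERN.finditer(text):
        df = int(match.group("df"))
        t_value = float(match.group("t"))
        reported_p = float(match.group("p"))
        # Recompute the two-tailed p-value from the reported test statistic.
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)
        if abs(recomputed_p - reported_p) > tolerance:
            yield match.group(0), round(recomputed_p, 4)

# Example: the second result would be flagged (recomputed p is ~.036, not .01).
sample = "We found t(28) = 2.20, p = .03, and t(40) = 2.17, p = .01."
for reported, recomputed in check_reported_t_tests(sample):
    print(f"possible error: '{reported}' (recomputed p = {recomputed})")
```

Run over a journal's full-text archive rather than a single paper, this sort of check is what turns error correction from a one-report-at-a-time process into a systematic one.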