i was supposed to give a talk at the Metascience 2023 conference, but instead i am a block away in a hotel room with a (very mild, so far) case of covid. i know, so 2022. so i typed up my talk and here it is.
very special thanks to the MetaMelb lab group for their "behind the livestream" text commentary from the conference, and to them and alex holcombe for comments on an earlier version of this talk ♥ (and also to my dinner group last night who i inadvertently exposed, and who let me take two whole leftover pizzas back to the hotel (that i intended to share with my lab, i swear!). and to the Center for Open Science for paying for the pizzas (and also for creating a world where this blog, and my career, are possible). if you ever get stuck in a hotel room with covid, try to make sure you have supportive friends and colleagues nearby, and two whole pizzas in your fridge.)
i’ve been known to be critical of eminence and prestige, but it’s really not prestige itself that i’m against. i’m not totally naïve. one way to think about prestige is just having a good reputation. i know we’re never going to get rid of reputations, and anyways some research is better than other research, and so deserves a better reputation. the goal shouldn’t be to eliminate that – we want good science to have a good reputation. the problem is unearned prestige. i propose there are at least two wide open paths to unearned prestige, and today i’m going to talk about how transparency can help close those.
the first is that a specific research output, like a manuscript, is mistakenly evaluated as being good science even though it's not. in this scenario, the journal sincerely wants and tries to select the best science, but some manuscripts that are not in fact great science are mistakenly evaluated positively. this could happen because of active misrepresentation on the authors' part, of course, but that's not the only way. in fact, i believe most authors sincerely believe their work is very good, and they try to present it as such and convince journals of that because they believe it. the problem is that it's often very hard for reviewers and editors to tell what's genuinely good and what just looks good. and that's because we often don't have all the information we need. as Claerbout is famously quoted as saying, "An article about a computational result is advertising, not scholarship. The actual scholarship is the full software environment, code and data, that produced the result." so of course when we have to judge research based only on the article, we're going to be wrong a lot.

but that's not the only way prestige can be unearned. the other way is when there is a mismatch between a journal's reputation and its actual practices. if a journal has a reputation for selecting for good science, but what it actually selects for is something else, then plenty of research will get a reputation for being good science -- because it's published in that journal -- even when it isn't. and this can happen even when a close reading of the article would have been enough to see that it isn't exemplary science.

in many ways, this is a bigger loophole than the first -- the unearned prestige is bestowed on every article that makes it into the journal. all you need to do to get this unearned prestige is figure out what these journals are actually selecting for, and be good at that. of course that might not be easy or possible for everyone, for example if a journal is selecting for famous authors or fancy institutions. but for some people, that will be a lot easier than producing high quality science.

hopefully it's clear how transparency can help close these paths to unearned prestige -- at least the first path, where articles reporting shoddy science can pass for good science. requiring basic transparency -- by which i just mean sharing the information needed to verify a researcher's claims, to poke and prod at them and see if they stand up to scrutiny -- will help reduce the chances that individual outputs are given a better reputation than they deserve.

what may be less obvious is that the second path to unearned prestige, through journals that have an unearned reputation for selecting good science, can also be addressed by more transparency. but it's a different kind of transparency: transparency in journals' evaluation processes.

transparency: not just for researchers anymore

the efforts towards open science and transparency have focused mostly on individual researchers' practices, and little attention has been paid to the transparency of journals' evaluation processes. but we often outsource evaluations of prestige to journals -- they get to decide which research and which researchers get a good reputation. we hand over prestige to journals, and let them distribute prestige to researchers. yet we expect hardly any transparency from journals.
this is a really big problem, because it means we don't really know whether journals are indeed selecting the best science, or what they are selecting for.

in fact, many of the "top" journals don't even claim to be selecting the best science, at least not if we mean the most accurate and careful science. many top journals openly admit that they are selecting for impact or novelty, and few do much to really ensure the accuracy or reliability of what they publish. if journals really cared about publishing the best science, we would see many more of them invest in things like reproducibility checks, or tests of generalizability or robustness. but most journals, even the pickiest and richest ones, don't do this. what's worse, when other researchers check the accuracy of their published papers, the journals often don't care, don't want to publish the results, or are even annoyed. that is who we are outsourcing our evaluation of prestige to. that's not good enough.

what do i mean by transparency from journals? first, i mean that their policies should be clear about what they are selecting for, and these policies need to match actual practice at the journal. there are a lot of unwritten -- or at least not shared with the public -- practices that journals encourage their editors and reviewers to follow. this was made explicit to me when i became editor in chief of a journal in my field, and within a year was accused by the publication committee of stepping on important people's toes because i was desk rejecting some powerful people's papers. i was naive enough to be shocked, and didn't realize they were just saying the quiet part out loud. when i stubbornly refused to believe that they were telling me not to desk reject famous people's papers, one of the members of the publication committee made it very clear. she sent me an email saying: "Just today I received a complaint from a colleague, who is a senior, highly respected, award-winning social psychologist. He says: 'About a month ago I had an experience with Simine that I found extremely distasteful. The end result is that I will not submit another paper to SPPS as long as she is associated with the journal...'" when i offered to discuss the paper in question and my decision, it became clear that the merit of my decision was not the point.

my view is that famous people should not get special treatment from journal editors, but if they do, this should be stated explicitly. seen this way, PNAS's 'contributed' track, in which NAS members can arrange their own peer review process and enjoy a 98% acceptance rate, is perhaps just a more transparent way of doing what most fancy journals are probably doing secretly. i hadn't thought of it this way before, but i suppose that's a step in the right direction -- their policy is transparent enough that people like me can shout about it on twitter and ask them to abolish it, and that's surely better than doing it in secret.

at a minimum, journals should make their practices transparent to authors, and ensure that what they say they're selecting for is actually what submissions are evaluated on. at Collabra: Psychology, the journal that i'm editor of, i try to do this by making the informal guide that i share with editors publicly available to anyone.

second, journals should make the content of peer reviews and editors' decision letters public, at least for the articles they publish ('transparent peer review').
this would give us better information to assess what journals are actually evaluating during peer review, and how thoroughly they're doing it.

third, journals should not only be more welcoming of metaresearch that examines how strong their published papers are, like replications and reproducibility checks, they should actively solicit, fund, and publish such audits. that would show a commitment to making sure they're living up to their stated aims and values. if they want a reputation for selecting the best science, they should invest in the evidence necessary to back up that claim.

prestige: the hidden curriculum

taken together, what these two paths to unearned prestige mean is that there is effectively a 'hidden curriculum' for prestige. if we think about it, who is going to be better at using these loopholes? it is much easier to create manuscripts that look, superficially, like good science, or that look like whatever journals are actually selecting for, if you have privileged information. to benefit from these paths, you have to understand the discrepancy between what is advertised and what actually matters. this favors those with better connections and more status.

to oversimplify a bit, this creates two classes of researchers: the everyday researchers and the well-connected. while the everyday researcher has to play by the official rules, and work to get a good reputation the hard way, by doing good science, the well-connected researchers get a free ride.

what's worse, the free ride that some researchers are getting is invisible. the lack of transparency in individual research outputs means that we can't see if some research is incomplete or misleading, or if some shortcuts have been taken, inadvertently or not. and the fact that journals' evaluation processes aren't transparent means that we can't see if some authors are getting special treatment, or if the evaluations are emphasizing something other than what the journal claims to be selecting for.

indeed, this special track, the gondola ride, was made clear to me earlier in my career, in the early days of psychology's replication crisis, when norms were changing fast and goal posts kept moving. a professor at a prestigious university said to me, in an exasperated tone, "just tell me what to do so i won't get criticized." that's when i realized that there were two tracks -- one treacherous and unpredictable track for most researchers, where you have to do your best and hope it's what journals want, and one for the more privileged, where you follow a formula and things generally work out in your favor.

of course i'm oversimplifying, but the point i want to make is that there are unwritten rules and hidden advantages, and that open science, and open evaluation, can help bring those to light.

how can transparency help?

so, if the goal is to reduce unearned prestige, how can transparency help? i see a few ways. first, as i've already talked about, transparency can make it easier to tell the good research from the bad, and to tell the journals that are doing this well from those that are doing it badly -- in other words, closing the two loopholes to unearned prestige. but transparency can also help strengthen the good path: the association between good research and a good reputation. the more information we have about what was done, the more calibrated our evaluations can be. in other words, transparency doesn't guarantee credibility; transparency guarantees* we'll get the credibility we deserve (* void where not paired with scrutiny).
but this is only a possibility that transparency opens up -- it doesn't actually guarantee anything. we have to actually use the transparency to poke and prod at the research to see how solid it is.

of course there are a lot of messy details, such as what we mean by 'good science', what we want to reward, and so on. and one important caveat: selecting for the best individuals, or individual outputs, may not be what's best for science as a whole. i don't have time to go into this, but google "chickens" and "group selection", or Richard McElreath's talk "Science is like a chicken coop", or Leo Tiokhin and colleagues' paper "Shifting the level of selection in science".

transparency is scary

now i want to shift gears and talk about how people react to the shifting norms towards greater transparency in science, and i want to distinguish between two superficially similar reactions.

first, any change in norms can lead to uncertainty during the transition, and to startup costs related to adjusting to the new norms. this is true even if the change is welcome and good. we've all gotten used to playing by one set of rules, and we've invested resources into learning how to play that game, and now the rules are changing. in the short run, this makes things harder.

and this short-term cost is greatest for the everyday researchers, especially those with fewer resources and weaker connections. it's harder for them to learn what the new rules are, and to spend the time and money needed to develop the new skills. and the sunk costs invested in the old system will be harder to bear. but i also want to emphasize that this is a short-term problem, and i believe it's one that can be addressed by making accessible and usable tools, templates, training materials, and so on. and as much as changes to the system are painful, the old system is worse, especially, in my opinion, for lower status researchers. it's just hugely unfair.

second, another reason the shift towards transparency is scary is that making things more transparent will shake up the status quo. to the extent that some prestige is unearned, that some people have been benefiting from hidden rules and advantages, greater transparency means that some people and findings stand to lose prestige. and of course this fear is greatest among those who have been doing very well in the old system -- especially those who came by that prestige unfairly.

so, when we hear "transparency is scary", we should ask which of these two fears is likely behind that feeling. is it the very legitimate fear of the short-term costs and uncertainty that less well-resourced researchers are most vulnerable to? or is it the more existential fear of those with unearned prestige who are afraid of losing it? those who don't want the rules to change, or even to be said out loud, because they've been benefiting from all the opaqueness?

and one of the most frustrating things i've seen is when the prestigious group uses the most vulnerable groups as cover to protect their advantage. there are very legitimate fears about changing the rules of the game, but those fears are addressable, and, i think, much less treacherous for the average researcher than the costs of the old system, and they should not be used as a shield to perpetuate inequities and unfair practices.

what can we do?

so what can we do to make the change towards more transparency less painful for everyday researchers?
we can make the new rules explicit and transparent, to make things more predictable and reduce the hidden curriculum. and of course, we want to make those new rules fair and equitable. in addition, we should make new rules for increased transparency not just for researchers, but also for journals. indeed, i think that hidden practices in journal peer review are a huge source of inequity. given how much power we give to journals to decide who gets a good reputation (and a job, and a grant, and a raise, and awards), we should expect a lot more transparency from them.

these points are perhaps obvious, so i want to end on a few maybe less obvious things we can do to increase transparency, and do so in a way that is as painless as possible for researchers who want to do the right thing.

first, perhaps counterintuitively, we should get to a point where transparency is required, rather than optional, as fast as possible. i agree with brian nosek's approach, illustrated in this pyramid he made, that we need to go through all of the other steps before we make transparency required. but let's hurry up, because optional transparency is inequitable, and unsustainable.

let me illustrate with a hypothetical. imagine you're a journal editor and you're evaluating two papers. paper 1 is transparently reported, and you can see that the conclusions are not well-supported by the evidence. maybe you can tell that the results aren't robust to other reasonable ways of analyzing the data, or that the authors made an error in their statistical analysis code, or that what the authors say was planned, or interpret as a confirmatory test, was not actually what was planned. in other words, the transparency allows you to see that the conclusions are on shaky ground. then there's paper 2, which opted out of transparency, and just tells you a neat and tidy story where everything works out. as an editor, you're stuck. of course you're going to tell the authors of paper 1 that their conclusions should be better calibrated to the evidence, but what do you say to the authors of paper 2? you can give them the benefit of the doubt and take their claims at face value, which will have the effect of punishing the authors of paper 1 for their transparency, and of selecting transparent research, and researchers, out of the literature and the field. or you can refuse to give the authors of paper 2 the benefit of the doubt, and tell them that without the information needed to verify their claims, you won't believe them. first of all, that'll make you a really unpopular editor (and possibly not an editor for much longer, if the authors of paper 2 are powerful), but more importantly, if that's what you believe, then you are de facto mandating transparency, and it's not great to do that only in unwritten practice. you should just make it official policy.

a system where transparency is optional cannot function, not for very long. and it will always favor those who choose to be less transparent, who know how to sell their work. the most equitable system is one in which you can't opt out of having your work judged on its merits, poked and prodded at, scrutinized. transparency levels the playing field because it forces everyone to give their critics ammunition.
but only if it's required.

also relevant: i recently read tom hostler's paper "the invisible workload of open research", which argues that "open research practices present a novel type of academic labour with high potential to be mismeasured or made invisible by workload models, raising expectations to even more unrealistic levels." i agree, but especially if researchers who engage in open practices have to compete with researchers who don't. if everyone has to be transparent (again, not sharing everything all the time, but sharing to the extent necessary for others to verify and scrutinize your claims), these problems are greatly reduced.

until transparency is required, though, there is something each of us can do: demand a minimum level of transparency when reviewing for journals. this is the idea behind the Peer Reviewers' Openness (PRO) initiative -- it capitalizes on the power reviewers have to change the system. if a journal wants you to donate your time, it is completely reasonable to ask for the information you need to evaluate the paper. of course the authors of this initiative have thought about the exceptions and grey areas. i encourage you to read it and consider signing up.

finally, another more radical thing we can do to make science more transparent and more equitable is to take back control of how we allocate prestige. we don't need to wait around for journals to reform themselves and open themselves up to scrutiny and accountability. in fact, there's no reason to think journals will ever do that -- most of them are in the business of chasing impact and profits, so why would we expect them to change? it's time to move on.
I agree with most of what you write, but I don't buy this bit: "to oversimplify a bit, this creates two classes of researchers. the everyday researchers and the well-connected. while the everyday researcher has to play by the official rules, and work to get a good reputation the hard way, by doing good science, the well-connected researchers get a free ride."
Yeah, I get that you say you're oversimplifying. But I don't think your description is an oversimplification; I think it's way off. I say this for two reasons:
1. Lots of "everyday," non-well-connected researchers still do bad science and get published in good journals. They're not well-connected, but some way or another they figure out how to do it, and journals publish their work. Just to choose an example, was Satoshi Kanazawa "well-connected" when he published his terrible papers claiming that beautiful parents had more daughters, etc.? I don't think so.
2. Lots of well-connected researchers "work to get a good reputation the hard way, by doing good science."
I agree with you that there are different paths to success. I'm just wary of your framing of the outsiders as the good guys and the well-connected as the bad guys. I say this partly because I'm about as well-connected as you can get (Ph.D. from Harvard, etc.) and partly because one thing we've seen in the past is the old guard attempting to quash dissent and criticism by claiming that the bad science being criticized is being done by vulnerable early-career researchers. The vulnerable early-career researchers who publish bad work are taking space and attention that could be going to vulnerable early-career researchers who publish good work!
Posted by: Andrew Gelman | 13 May 2023 at 12:42 AM