[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]
before modern regulations, used car dealers didn't have to be transparent. they could take any lemon, pretend it was a solid car, and fleece their customers. this is how used car dealers became the butt of many jokes.

scientists are in danger of meeting the same fate.* the scientific market is unregulated, which means that scientists can wrap their shaky findings in nice packaging and fool many, including themselves. in a paper that just came out in Collabra: Psychology,** i describe how lessons from the used car market can save us. this blog post is the story of how i came up with this idea.

last summer, i read Smaldino and McElreath's great paper on "The natural selection of bad science." i agreed with almost everything in there, but there was one thing about it that rattled me. their argument rests on the assumption that journals do a bad job of selecting for rigorous science. they write, "An incentive structure that rewards publication quantity will, in the absence of countervailing forces, select for methods that produce the greatest number of publishable results" (p. 13). that's obviously true, but it's not necessarily bad. what makes it bad is that "publishable result" is not the same thing as "solid study" - if only high-quality studies were publishable, then this wouldn't be a problem.

so this selection pressure that Smaldino and McElreath describe is only problematic to the extent that "publishable result" fails to track "good science." i agree with them that, currently, journals*** don't do a great job of selecting for good science, and so we're stuck in a terrible incentive structure. but it makes me sad that they seem to have given up on journals actually doing their job.
i'm more optimistic that this can be fixed, and i spend a lot of time thinking about how we can fix it.****

a few weeks later, i read a well-known economics article by Akerlof, "The market for 'lemons': Quality uncertainty and the market mechanism" (he later won a nobel prize for this work). in this article, Akerlof employs the used car market to illustrate how a lack of transparency (which he calls "information asymmetry") destroys markets. when a seller knows a lot more about a product than buyers do, there is little incentive for the seller to sell good products, because she can pass off shoddy products as good ones, and buyers can't tell the difference. the buyer eventually figures out that he can't tell the difference between good and bad products ("quality uncertainty"), but that the average product is shoddy (because the cars fall apart soon after they're sold). therefore, buyers come to lose trust in the entire market, refuse to buy any products, and the market falls apart.

it dawned on me that journals are currently in much the same position as buyers of used cars. the sellers are the authors, and the information asymmetry is all the stuff the author knows that the editor and reviewers do not: what the authors predicted ahead of time, how they collected their data, what the raw data look like, what modifications to the data or analyses were made along the way (e.g., data exclusions, transformations) and why, what analyses or studies were conducted but not reported, etc. without all of this information, reviewers can only evaluate the final polished product, which is similar to a car buyer evaluating a used car based only on its outward appearance.

because manuscripts are evaluated based on superficial characteristics, we find ourselves in the terrible situation described by Smaldino and McElreath: there is little incentive for authors to do rigorous work when their products are only being evaluated on the shine of their exterior.
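akerlof's unraveling argument can be made concrete with a toy simulation (the numbers and code here are mine, not from Akerlof's paper or the Collabra article): suppose car quality q is uniform on [0, 1], a seller values her car at q, and a buyer values that same car at 1.5q but can't observe q. buyers can only offer the average value of the cars actually for sale at the going price - and that offer shrinks every round until the market is gone.

```python
# toy sketch of akerlof's "market for lemons" unraveling.
# assumptions (illustrative only): quality q ~ uniform[0, 1];
# a seller values her car at q; a buyer values it at 1.5 * q but can't see q.

def market_price_iteration(offer=1.0, rounds=30):
    """iterate the buyers' offer. at a given offer, only sellers with
    q <= offer are willing to sell, so the average quality traded is
    offer / 2, which is worth 1.5 * (offer / 2) to buyers."""
    history = [offer]
    for _ in range(rounds):
        avg_quality_traded = min(offer, 1.0) / 2   # mean of uniform [0, offer]
        offer = 1.5 * avg_quality_traded           # buyers pay what it's worth to them
        history.append(offer)
    return history

prices = market_price_iteration()
print(prices[:4])   # [1.0, 0.75, 0.5625, 0.421875] -- shrinks by 3/4 each round
print(prices[-1])   # after 30 rounds the offer is near zero: the market unravels
```

even though good cars exist, buyers rationally offer less each round because they can't tell good from bad, the best sellers exit, average quality drops further, and trade collapses - which is the same spiral the post describes for journals that can't see under the hood of a manuscript.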
you can put lipstick on a hippo, and the editor/reviewers won't know the difference. worst of all, you won't necessarily know the difference, because you're so motivated to believe that it's a beautiful hippo (i.e., real effect).

that's one difference between researchers and slimy used car dealers***** -- authors of scientific findings probably believe they are producing high-quality products even when they're not. journals keep buying them, and until recent replication efforts, the findings were never really put to the test (at least not publicly).

the replicability crisis is the realization that we have been buying into findings that may not be as solid as they looked. it's not that authors were fleecing us, it's that we were all pouring lots of time, money, and effort into products that, unbeknownst to us, were often not as pretty as they seemed.

the cycle we've been stuck in, and the one described by Smaldino and McElreath, is the same one Akerlof explained with the used car market. happily, that means Akerlof's paper also points to the solution: transparency. journals have the power to change the incentive structure. all they need to do is reduce the information asymmetry between authors and reviewers by requiring more transparency on the part of authors. give the reviewers and editors (and, ideally, all readers) the information they need to accurately evaluate the quality of the science. if publication decisions are strongly linked to the quality of the science, this will provide incentives for authors to do more rigorous work.

we would laugh at a buyer who buys a used car without looking under the hood. yet this is what journals (and readers) are often doing in science. likewise, we would laugh at a car dealer who doesn't want us to look under the hood and instead says "trust me," but we tolerate the same behavior in scientists.

but, you might say, scientists are more trustworthy than used car dealers!
sure,****** but we are also supposed to be more committed to transparency. indeed, transparency is a hallmark of science - it's basically what makes science different from other ways of knowing (e.g., authority, intuition, etc.). in other words, it's what makes us better than used car dealers.

* if you've listened to paula poundstone on NPR, you might think it's already too late.

** full disclosure: i am a senior editor at Collabra: Psychology. i tried to publish this paper elsewhere but they didn't want it. i do worry about the conflict inherent in publishing in a journal i'm an editor at, and i have been trying to avoid it. my rationale for making an exception here is that publishing in Collabra: Psychology is not yet so career-boosting that i feel i am getting a major kickback, but perhaps i am wrong. also, if it helps, you can see the reviews and action letter from a previous rejection, which i submitted to Collabra for streamlined review. [pdf]

*** the other journals.

**** sometimes i lay awake in bed for hours thinking about this. well, doing that or reading the shitgibbon's tweets and planning my next act of resistance.

***** sorry, used car dealers. i have little reason to smear you. well, except for the time you didn't want to let me test drive a car, then called me a "bold lady" when i insisted that i could be trusted with a manual transmission. (i was 28 years old.) (and i have always driven a stick shift.) (because i am french.)

****** not actually sure.

[image: underwater hippos]
I like your analogy, but i do wonder about the following. You write:
"it dawned on me that journals are currently in much the same position as buyers of used cars. the sellers are the authors, and the information asymmetry is all the stuff the author knows that the editor and reviewers do not: what the authors predicted ahead of time, how they collected their data, what the raw data look like, what modifications to the data or analyses were made along the way (e.g., data exclusions, transformations) and why, what analyses or studies were conducted but not reported, etc. without all of this information, reviewers can only evaluate the final polished product, which is similar to a car buyer evaluating a used car based only on its outward appearance."
I wonder if this analogy can be improved. More specifically, whether it makes sense to view not only the journal as the buyer, but also the eventual reader of the article. Car dealers (journals) only care about selling the cars (impact factor, etc.); they don't necessarily care about the quality of the car, just that it is being sold. Now, the actual buyers of the car from the dealer (readers of the article) have to actually drive it (cite it, build on it, etc.). They could actually care about quality! (and as they reward quality with citations, the journals will start to care).
In other words, to me the most important thing is not that the journal/reviewer can look under the hood, but that the buyer of the car can. Reasoning from there on, i think a possibly more accurate, and useful, analogy would be that the car dealer (journal) buys up cars, which the buyer (reader of the article) then buys from the dealer.
If you only let the car dealer look under the hood, but not the buyer of the car, you're still left with the same problem, only on a different (and arguably more important) level.
If this makes any sense, it becomes clear that hiding things like pre-registration from the reader, and only using them at the journal/editor/reviewer level, could be considered a sub-optimal practice.
Posted by: Anonymous | 04 March 2017 at 08:22 AM
It's interesting that Anonymous (comment above) interprets the analogy with the journal, not the researcher, in the role of the used car dealer. It would appear that there is scope for discussion here.
I'm reminded of a joke from the IT industry that was being passed round offices 30 years ago in the form of grainy photocopies (that was how office humour circulated back then) that probably dated back many more years:
"What's the difference between a used car salesman [sic, this was the 70s] and a computer salesman? The used car salesman knows when he's lying to you"
Posted by: STeamTraen | 05 March 2017 at 05:34 AM
"It's interesting that Anonymous (comment above) interprets the analogy with the journal, not the researcher, in the role of the used car dealer. It would appear that there is scope for discussion here."
Perhaps i misinterpreted the analogy. In the piece i copy-pasted from the blog post, it looks to me that the buyer, who should have access to as much information as possible, is the journal/editor/reviewer, which in my opinion leaves out the most important (other) customer: the reader of the paper (member of the general public, fellow scientist, etc.). I tried to make this clear by extending the car dealer/buyer analogy.
Most importantly, and possibly relevant to real-world examples, i think it could be very important to make the distinction between the journal/reviewer/editor level and the reader level; otherwise you are just left with the same problem on a different (and arguably more important) level.
It looks to me that making this distinction is already relevant for a discussion about transparency/good practices at two journals that are supposed to be all in favour of these issues (but please correct me if i am wrong):
1) At "Comprehensive Results in Social Psychology", the pre-registration information seems not to be available to the reader via a simple link to this information in the final paper; instead, this information seems to be handled only at a journal/editorial level.
2) At "Psychological Science", it seems to me that they have a requirement at the submission level where you confirm that you have disclosed methodological information in the paper, like all independent variables investigated (http://www.psychologicalscience.org/publications/psychological_science/ps-submissions). Now, the reader of the paper again has no direct access to this information. They could have authors include something like the "21 word solution" in the paper to actually make the reader aware of this important information.
If i am correct about all this, i reason that in both cases transparency/good practices are handled only at the level of the journal/editor/reviewer, and not at the level of the (reader of the) final paper, for no apparent reason. To me, this is sub-optimal at best, and possibly sets a dangerous precedent at worst.
Posted by: Anonymous | 05 March 2017 at 07:28 AM