
Comments

Michael Inzlicht

For those of you who want to see my self-analysis on p-checker itself, you can see my first ten papers here: http://bit.ly/1cwjjf1; and you can see my last ten papers here: http://bit.ly/1yw88Nk

R. Chris Fraley

Excellent post!

Erika Carlson

Moving the field forward in theory and in practice. Awesome, Mickey!

Brent Roberts

Huzzah! Great post.

SS

This is great. Thanks so much for posting! It means a lot when known and senior people in the field are honest in this way.

Steven Heine

Way to set the standard for transparency! This is what we all need to do.

Lee Jussim

From The Economist, 10/19/13, in an article titled "Trouble at the Lab," quoting Dr. Bruce Alberts, 12-year President of the National Academy of Sciences and former editor of Science:

“And scientists themselves, Dr Alberts insisted, ‘need to develop a value system where simply moving on from one’s mistakes without publicly acknowledging them severely damages, rather than protects, a scientific reputation.’”

I am slowly coming around to thinking that our worst problem is not getting it wrong the first time (though that certainly happens, and getting less wrong is important). Our worst problem is the failure to self-correct.

According to Google Scholar, since 2014 Bargh, Chen, & Burrows (priming elderly stereotypes/walking slow) has been cited 387 times; the Doyen et al. failure to replicate has been cited 99 times.

Go here to see that the elderly-stereotype-priming/walking-slow phenomenon seems to exist almost entirely on the strength of p-hacking:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381936
(which is not to say "all priming is hogwash" -- the same paper shows otherwise).

Michael, we have never met. But I thank you for modeling the type of scientific behavior that offers hope that we can actually create a self-correcting psychological science.


Sam Schwarzkopf

I apologise for repeating myself, but this is precisely why I think we need a better search engine for scientific studies, be it PubMed, Google Scholar, or something else. You shouldn't take an individual study as final evidence but as the first piece of the puzzle.

In this system, the DOI of any given study (like the Bargh et al. one) should come with a whole tree of links to replications, exact and conceptual, as well as related topics, so you can quickly identify the current state of the evidence. Ideally, the platform should allow meta-analyses of the effects.

Each node in the tree should come with tags that help you determine the directness of the replication and thus refine the search. This would not only let you check whether the effect replicates in general, but also identify possible reasons why subsequent attempts failed, which could inform further experiments that directly test those factors. If most replications are missing a crucial aspect of the original study, there would be good reason to run such a follow-up experiment. If, on the other hand, the picture is very messy, the effect is unlikely to replicate at all.

I think implementing a system like this will take some initial effort, but once things get going I don't think it's a major undertaking. It would be in authors' own interest to ensure their effects are registered properly in the system.
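A minimal sketch of the replication tree Schwarzkopf describes, in Python. Everything here is hypothetical illustration: the StudyNode fields, the tag names, the DOIs, and the effect sizes are invented, and no real registry or API is implied.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudyNode:
    """One study in the tree, keyed by DOI (all fields are illustrative)."""
    doi: str
    effect_size: Optional[float] = None   # e.g., Cohen's d reported by the study
    variance: Optional[float] = None      # sampling variance of that effect size
    tags: List[str] = field(default_factory=list)          # e.g., "direct", "conceptual"
    replications: List["StudyNode"] = field(default_factory=list)

def collect(node: StudyNode, tag: Optional[str] = None) -> List[StudyNode]:
    """Flatten the tree, optionally keeping only nodes that carry a given tag."""
    out = [node] if (tag is None or tag in node.tags) else []
    for child in node.replications:
        out.extend(collect(child, tag))
    return out

def fixed_effect_estimate(nodes: List[StudyNode]) -> Optional[float]:
    """Inverse-variance-weighted mean effect (a simple fixed-effect meta-analysis)."""
    usable = [n for n in nodes if n.effect_size is not None and n.variance]
    if not usable:
        return None
    weights = [1.0 / n.variance for n in usable]
    return sum(w * n.effect_size for w, n in zip(weights, usable)) / sum(weights)

# Hypothetical entries for illustration only; the numbers are made up.
original = StudyNode("10.xxxx/original", effect_size=0.60, variance=0.04, tags=["original"])
original.replications = [
    StudyNode("10.xxxx/rep1", effect_size=0.05, variance=0.02, tags=["direct"]),
    StudyNode("10.xxxx/rep2", effect_size=0.30, variance=0.05, tags=["conceptual"]),
]

# Pool only the direct replications, then everything in the tree.
print(fixed_effect_estimate(collect(original, tag="direct")))
print(fixed_effect_estimate(collect(original)))
```

The tag filter is what would make the "messy picture" diagnosis possible: comparing the pooled estimate over all nodes against the estimate over only the directly tagged ones shows at a glance whether failures cluster in a particular kind of replication.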

Dan Dolderman

Great discussion! It's impressive that you did this... great way to open up honest dialogue in the field!
