
Comments

Anonymous

To me it seems that the only real solution to this is the "Registered Reports" format (https://osf.io/8mpji/wiki/FAQ%203:%20Design%20and%20Analysis/): pre-registration, high power, and no publication bias.

I would love to hear more solutions to the false-positive problem, but so far, if I understand things correctly, the best solution I have read about is the Registered Reports format. I wonder what your, and others', thoughts are on this.

More importantly, I wonder why journals don't adopt this format as the only suitable one.

Sam Schwarzkopf

I understand why the RPP is worrying. The evidential value in most of the studies (including the replications) is pretty low, as Alex Etz's Bayes factor reanalysis shows (https://alexanderetz.com/2015/08/30/the-bayesian-reproducibility-project). Clearly we need more power at the outset of a study, and we need a culture of independent replications. I honestly don't know how to bring about this change. Even with all the developments in preregistration and replication efforts, the incentive structure is still opposed to this.

However, in this discussion I also worry about another thing. There is a lot of talk about how many of the RPP studies failed to replicate. But aren't we missing the point a little here? Surely as scientists we want to actually discover things and increase our understanding, rather than making really, really sure that some specific claim is true.

A while ago I had a look at some of the RPP studies that actually did replicate and which had strong effect sizes. This wasn't very systematic, but I already spotted one or two where I immediately thought, "Well, this is almost certainly wrong." I can't remember what they were (and I realise this isn't very useful of me to say right now :P), but I'll look it up at a later point. The effect replicated beautifully, but the underlying hypothesis is probably still completely false. I do feel the discussion about replicability often misses this issue (though, in their defense, the RPP authors discuss it very clearly in the paper).

In my view the best way out of this mess is to encourage both replication and original discovery. Replicability is critical, but I believe we can do it without slowing down the progress of research.

Anonymous

What I also find interesting to think about is what ego-depletion researchers will do now, in light of the new information from the RRR.

I wonder if they will abandon their line of research. I doubt it. I think we can look forward to multiple low-powered, possibly p-hacked studies by ego-depletion researchers showing all kinds of "moderators" and who knows what else. Then a large replication project will no doubt show that there is really nothing going on. Then multiple new articles will appear again, etc. I fear it will be a never-ending cycle...

Anonymous

Why do you use capital letters so inconsistently and non-standardly? Please explain or it's going to drive me crazy. Thank you!
