
Comments

Brett Buttliere

The carrots just need to be big enough.

Also, people are voting with their behavior. People will not accept something forced upon them if they don't see the value in it. Even speeding tickets -- people, in general, see the value in not speeding too much.

As I said back in 2014, the solutions need to be so good and so obviously useful that people want to adopt them themselves -- nothing else will work. That is how innovation is adopted (e.g., Facebook, cell phones, cars); an innovation that makes life harder or more frustrating rarely stays in place. It just doesn't last.

https://www.frontiersin.org/articles/10.3389/fncom.2014.00082/full

Anonymous

Regarding your 4 scenarios, I think the first two will not happen much. The last two will probably happen more often, and of those I find the fourth scenario the most troubling.

To me, however, that one is easily solved: if, after everything that has happened in the last decades, researchers still think it's fine to label certain analyses as "confirmatory" without giving the reader access to the pre-registration via a 10-15 character link in the paper, I'd say they don't deserve the reader's trust. I sincerely hope "confirmatory analyses" will not simply become the new "we hypothesized that" (cf. John et al., 2012).

Anyway, I am more interested in your "let's make everything open" default and/or in how to reward researchers who are transparent and follow good practices. I think it's hard for journals to "demand" such a thing at this point in time, but perhaps there is an intermediate step/option that I have not heard (enough) about from influential voices (like yourself) in the open science/psychology community: small groups of collaborating researchers.

I reason that researchers who want to perform and publish research with higher standards (e.g., highly powered, pre-registered, replicating, interpreting null results, optimally (re-)formulating and testing theories, etc.) are far more dependent on the "input" of their studies than those who don't adhere to these standards, and are far more vulnerable in the current academic system.

I reason it could be very useful for researchers working on a similar topic/phenomenon/theory who want to perform and publish research with higher standards to cooperate in small groups, e.g., via StudySwap and a format/idea I wrote down here:

http://andrewgelman.com/2017/12/17/stranger-than-fiction/#comment-628652

Anonymous

"I reason it could be very useful for those researchers working on a similar topic/phenomenon/theory and wanting to perform and publish research with some higher standards, if they would cooporate in small groups, e.g. via StudySwap and a format/idea i wrote down here:

http://andrewgelman.com/2017/12/17/stranger-than-fiction/#comment-628652"

Oh, and this format, and StudySwap, don't need several "associate directors" and "committees" the way the Psychological Science Accelerator apparently does (as if it were a giant firm or something).

Just one of many possible benefits compared to, for instance, the Psychological Science Accelerator. See:
http://andrewgelman.com/2018/05/08/what-killed-alchemy/#comment-728195

Carl

"but what else could we do in scenarios 3 and 4? assume that their studies had the same flaws as the studies in scenarios 1 and 2? that doesn't seem fair to the authors in scenarios 3 and 4...we can't assume these non-transparent studies have flaws, and we can't assume they don't. "

With a base rate of non-reproducible p-hacking over 50%, aren't small-sample studies without pre-registration or other anti-hacking assurances usually flawed along those lines?

The evidential value of the generic study (with uncertain p-hacking levels) to a research consumer is far less than for the transparent study, so why not review it accordingly?
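
To put rough numbers on that discount, here is a minimal Bayes sketch in Python. The prior, power, and hacked-hit-rate values are illustrative assumptions of mine, not figures from this thread; the only input taken from the argument above is the >50% base rate of p-hacking.

# A p-hacked study reports a "significant" result almost regardless of
# the truth, so a significant result from a study that may be hacked
# tells the reader much less. All numbers here are illustrative.

def posterior_true_effect(prior, power, alpha, p_hacked, hacked_hit_rate=0.95):
    """P(effect is real | significant result), for a mix of honest and hacked studies."""
    p_sig_if_real = (1 - p_hacked) * power + p_hacked * hacked_hit_rate
    p_sig_if_null = (1 - p_hacked) * alpha + p_hacked * hacked_hit_rate
    p_sig = prior * p_sig_if_real + (1 - prior) * p_sig_if_null
    return prior * p_sig_if_real / p_sig

# Transparent, pre-registered study: p-hacking essentially ruled out.
print(posterior_true_effect(prior=0.3, power=0.8, alpha=0.05, p_hacked=0.0))  # ~0.87

# Generic study with the >50% base rate mentioned above.
print(posterior_true_effect(prior=0.3, power=0.8, alpha=0.05, p_hacked=0.5))  # ~0.43

Under these (admittedly made-up) numbers, the same significant result moves the reader from a 30% prior to roughly 87% for the transparent study, but only to roughly 43% for the generic one -- which is the sense in which the latter deserves a more skeptical review.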

Anonymous

"The evidential value of the generic study (with uncertain p-hacking levels) to a research consumer is far less than for the transparent study, so why not review it accordingly?"

I agree: transparency should be judged differently and be rewarded. However, I fear most journals really don't care about that. If I am not mistaken, the current editor-in-chief of the journal "Psychological Science" has the following to say about pre-registration:

"Just because a study is preregistered does not, in my view, mean that its results warrant submission, let alone publication."

&

"Just because a study was preregistered does not mean that the work was worth doing or informative. It is quite easy to preregister an ill-conceived study."

Source: http://andrewgelman.com/2018/04/15/fixing-reproducibility-crisis-openness-increasing-sample-size-preregistration-not-enuf/#comment-712159

So there you have it, perhaps: for some editors/journals, transparency and evidential value may come in 2nd, 3rd, 4th, or 5th place...
