

Comments

Weskaggs

I'm as Bayesian as they come, but I would argue that there is a difference between the evidence required to publish and the evidence required to change one's views. It takes more evidence to convince me of a counterintuitive claim than of a plausible claim, but that doesn't mean it should take more evidence to publish a paper. I don't generally put massive weight on a single publication even if the claims that it makes are highly plausible. A single paper with findings at the p<0.001 level might not be enough to convince me, but if I see a steady accumulation of papers at that level, it's a different story.
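A minimal sketch of the cumulative updating described here, in Python; the prior odds and Bayes factors are invented numbers purely for illustration, not estimates from any real papers:

```python
# Sketch of cumulative Bayesian updating across studies (illustrative only:
# the prior odds and Bayes factors below are invented, and the studies are
# assumed independent so that their Bayes factors multiply).

def update_posterior_odds(prior_odds, bayes_factors):
    """Multiply prior odds by each study's Bayes factor in turn."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

# A counterintuitive claim might start at long odds against it...
prior_odds = 1 / 99  # a prior probability of about 1%

# ...so a single strong study (BF = 20) still leaves it below even odds,
single = update_posterior_odds(prior_odds, [20])

# but a steady accumulation of studies at that level is a different story.
accumulated = update_posterior_odds(prior_odds, [20, 10, 15, 8])

for label, odds in [("one study", single), ("four studies", accumulated)]:
    print(f"{label}: posterior odds = {odds:.2f}, "
          f"posterior probability = {odds / (1 + odds):.3f}")
```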

Regards, Bill Skaggs

Jennifer T

I think it's important to differentiate findings that are counterintuitive from findings that are paradigm-shifting. It's possible to be in favor of the latter but not the former. There are a lot of potential problems with counterintuitive findings, including the perverse incentive structure that's been built up around sound-bite research in psychology. There's also the problem that they often go against the very principles that make us scientists - you know, being reasonable, logical, and thoughtful. There is a good reason we shouldn't *expect* crazy things to happen. There is also a good reason we should be concerned if crazy things start coming out of our work at greater-than-chance levels (Keep Calm and Read Meehl). These are all reasons why I think the prioritization and overvaluation of counterintuitive findings in psychology have become a real problem.

That's not the same thing as devaluing paradigm-shifting research, though. I think it's possible to see paradigm-shifting research come about via processes we all find logical and reasonable. Let's think about the derivation of my favorite formula, Euler's formula (because who can pass up a chance to talk about that!). When Euler derived this formula, he supposedly claimed that it "proved the existence of God". How's that for paradigm-shifting? It is remarkably parsimonious and elegant, yet it bridges complex concepts (transcendental numbers and trigonometric functions). But it's completely reasonable and logical, and it follows clear pathways in its derivation. So, why can't psychology have paradigm shifts like that? There's no reason to jump ahead to the crazy punchlines and then try to piece together a post hoc pathway that appeals to logic. That's not paradigm-shifting, in my opinion (at least not in a good way!).
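For reference, here is the formula the comment presumably means - Euler's formula, with Euler's identity as its special case:

```latex
% Euler's formula: the complex exponential traces out the
% trigonometric functions.
\[
  e^{i\theta} = \cos\theta + i\sin\theta
\]
% Setting \theta = \pi gives Euler's identity, which links
% e, i, \pi, 1, and 0 in a single equation:
\[
  e^{i\pi} + 1 = 0
\]
```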

I think it’s possible to stay the logical course, to be thoughtful, put bricks in the wall, and still have the possibility of paradigm-shifting work emerging from that. It certainly isn’t easy, and the vast majority of us will not shift paradigms in our lifetime. But it shouldn’t be easy! If we look at the number of labs/researchers producing counterintuitive findings, do we really believe that many people are producing meaningful scientific paradigm shifts in psychology? If we follow Kuhn’s ideas further, then we probably shouldn’t anticipate even knowing who is producing real paradigm shifting work in our lifetime. That knowledge will likely only come later. It’s like the ultimate marshmallow test for psychological scientists. Who is willing to take the boring and tedious path toward greater potential scientific payoff?

Roger Giner-Sorolla

For me it's clear: to be useful, a counterintuitive finding must illuminate and support a theory, as Festinger's $1/$20 findings did.

And, counter whose intuitions? If lay beliefs about our own psychology are tested and found wanting, this is surely as interesting as finding they have merit, although laypeople may find it more "mind-blowing" etc.

It is the *non*-intuitive proposition sans theory that should be entered into warily ... ideas like "sweet tastes make people more ambitious," or whatever. If it fails to be supported, you have zero story, and not even a well-known theory to start debunking with a solid null result. Therefore it is more risky to do and report research on such topics. Perhaps this accounts for the skepticism and suspicions of moral hazard surrounding such research, as well.

Jdottan

I liked the post (and the comments above); I just have a small piece to add:

It is possible (though not always) to set a prior empirically, as demonstrated in my pet favorite recent Bayesian psychology paper: http://onlinelibrary.wiley.com/doi/10.1111/cdev.12169/full
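A minimal sketch of what setting a prior empirically can look like - not the linked paper's analysis, just a standard normal-normal conjugate update with hypothetical numbers, where the prior comes from earlier data rather than intuition:

```python
# Empirically informed prior, sketched as a normal-normal conjugate update
# (my illustration; all numbers below are hypothetical).

def normal_update(prior_mean, prior_sd, data_mean, data_se):
    """Posterior for a normal mean, given a normal prior and a normal
    likelihood with known standard error."""
    prior_prec = 1 / prior_sd**2   # precision = 1 / variance
    data_prec = 1 / data_se**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, post_prec**-0.5

# A meta-analytic estimate (d = 0.30, SE = 0.10) serves as the empirical
# prior for a new study's estimate (d = 0.55, SE = 0.15).
post_mean, post_sd = normal_update(0.30, 0.10, 0.55, 0.15)
print(f"posterior: d = {post_mean:.2f} (SD = {post_sd:.2f})")
```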

PsychChrisNave

Interesting post, Simine. Two points: 1) Why not go Milgram and ask a series of experts on a topic (or maybe the general public?) to predict the results of a proposed study? That would move the question from a fun party trick to actual survey data on what people expect to find. I suppose you could also pre-register the hypothesis, or competing hypotheses, to add more objectivity to judgments of what truly is counterintuitive. The implications of our work could then be put in context and adjusted by how "counterintuitive" our results actually are.

2) One way of looking at "counterintuitive" could be from an empirical standpoint - what flies in the face of past findings on psychological phenomena. Meta-analysis can be useful in creating a Bayesian prior, or a threshold one might hold for evaluating new findings in the context of what we know from past findings. Before meta-analysis, we had to rely on lit reviews and a rough qualitative understanding of the evidence for an effect. I'm cautiously optimistic that with the sophistication of meta-analytic techniques and with increased dissemination of knowledge (e.g., online databases, the OSF), we can empower journal editors, reviewers, and the general public to make an educated assessment of what evidence is needed to challenge or "undo" past findings. (I realize we still have file-drawer issues, but this is not new - Rosenthal and others came up with fail-safe N and other ways to estimate the number of studies needed to overturn an effect decades ago, and I'm sure meta-analysts are coming up with new and improved ways of accounting for file-drawer issues.) Malle's 2006 meta-analysis showing no actor-observer effect can help inform future studies on the phenomenon (interestingly, the actor-observer effect is still largely taught in Social Psychology as a counterintuitive effect - so counterintuitive there is perhaps no evidence of its existence!).
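As a concrete anchor for the fail-safe N idea mentioned above, here is a minimal sketch of Rosenthal's (1979) calculation - the number of unpublished null-result studies needed to pull a combined (Stouffer) z below significance; the z-scores are hypothetical:

```python
# Rosenthal's (1979) fail-safe N: how many unpublished studies averaging
# z = 0 would be needed to drag the Stouffer combined z, sum(z)/sqrt(k),
# below the significance cutoff. The z-scores below are hypothetical.

def fail_safe_n(z_scores, z_crit=1.645):  # one-tailed alpha = .05
    k = len(z_scores)
    sum_z = sum(z_scores)
    # Solve sum_z / sqrt(k + N) = z_crit for N, the number of null studies.
    return sum_z**2 / z_crit**2 - k

zs = [2.1, 1.8, 2.5, 1.9, 2.3]  # hypothetical z-scores from k = 5 studies
print(f"fail-safe N = {fail_safe_n(zs):.1f}")  # ~36.5 null studies
```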

I'm not suggesting we hold counterintuitive findings to a higher threshold at the publication or dissemination level if there is full transparency about the methodology, large N, rigorous analytic strategies, and cautious interpretation of the implications of the work. Raising the bar too high for counterintuitive findings to be published gets us back to making silly, arbitrary decision rules like overly stringent alphas/corrections, or over-concerning ourselves with Type I error at the expense of Type II. We need to be more honest about when our work is exploratory (Sanjay Srivastava eloquently makes this point about work being ground-breaking OR definitive - see link below), be careful in the extrapolations we make from our work, and understand that many of our initial conclusions about human behavior may end up being wrong. (While we're at it, let's continue to de-stigmatize the fact that great, high-quality research can and will be "wrong" after replications are performed - a failure to replicate need not have anything to do with fraud or poor research design.)

Sanjay’s blog/article:
http://spsptalks.wordpress.com/2011/12/31/groundbreaking-or-definitive-journals-need-to-pick-one/

Daniel Lakens

If you want a little sip of the Bayesian kool-aid, and want to know the best-case scenarios for the p-values you'd need, depending on the prior probability you come up with subjectively, see: http://daniellakens.blogspot.nl/2014/05/prior-probabilities-and-replicating.html
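A minimal sketch of the kind of calculation that sits behind this - my illustration with made-up priors and power, not necessarily the linked post's exact setup: the probability that a significant result reflects a true effect, given a subjective prior, alpha, and power (the standard positive-predictive-value formula):

```python
# Probability that H1 is true given a significant result, for a subjective
# prior probability of H1 (alpha and power values are assumptions).

def prob_h1_given_significant(prior, alpha=0.05, power=0.80):
    true_pos = prior * power          # significant and H1 is true
    false_pos = (1 - prior) * alpha   # significant and H0 is true
    return true_pos / (true_pos + false_pos)

# Plausible vs. increasingly counterintuitive claims:
for prior in (0.5, 0.25, 0.05):
    p = prob_h1_given_significant(prior)
    print(f"prior = {prior:.2f} -> P(H1 | p < .05) = {p:.2f}")
# The lower the prior, the less a single significant result should move you.
```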
