

Comments

Nick Brown

Your description of nailing down methods reminds me very strongly of what happens when testing dowsers (people who claim to be able to detect liquids underground).

Dowsers are among the most sincere of believers in pseudoscience (compared to, say, cold readers), but when they fail to detect the effect they're looking for, they tend to blame it on perturbations caused by that car or this person's hat. So experimenters spend a very long time checking that every minor detail of the setup is to the dowser's satisfaction, and the latter agrees not to claim that any of the items they've checked was the cause of a fault in the vortex (etc.). Then they run the experiment, and of course they get a null result. "Oh," says the dowser (always, always, without fail), "it must have been because the vortex (etc.) was perturbed by the bird that flew past/the sun going behind a cloud/whatever."

I'm very interested to see what comes out of Kahneman's "adversarial collaborations", but I don't expect much better, because researchers - especially those who have enjoyed a measure of success up to now - are not about to go back to square one. To this outsider, the failure of psychologists to appreciate that the solid stuff we know about social psychology (starting with cognitive dissonance, confirmation bias, and groupthink) also applies to them is one of the strangest aspects of the field.

simine

thanks for your comment nick, that's really interesting! i agree about how we are ignoring the lessons of our own field (re: motivated cognition etc.). it's bizarre.

re: adversarial collaborations. i think the new 'registered replication reports' at perspectives on psychological science is another really cool way to try to close the methods loophole. i'm very excited about this. (full disclosure: i am about to start helping out with the editing of the RRRs.) (fuller disclosure: i have not done any actual work yet; that is all dan simons and alex holcombe.)

Chris C.

You are right that naive falsificationism is foolish. But so is ANY attempt to make science an enterprise that progresses according to deductive logic. What you point to in Meehl (which he got first from Pierre Duhem and later from W. V. O. Quine) is that pure falsification is not possible.

Instead, we must progress with inductive logic, using reason and argument. Modus tollens is the logic of Popper. But it is not an accurate description of how scientists think, nor of how science progresses.
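
To spell out the point Chris is making, here is a bare sketch of the standard Duhem-Quine argument (my notation, not Meehl's or Popper's): Popper's schema is simple modus tollens, but in practice a prediction follows only from the theory together with auxiliary assumptions, so a failed prediction never singles out the theory itself.

```latex
% Popper's schema: theory H predicts observation O; failing to observe O refutes H.
\[
  H \rightarrow O,\quad \neg O \ \vdash\ \neg H \qquad \text{(modus tollens)}
\]
% In practice, O follows only from H together with auxiliary assumptions A
% (the manipulation worked, the measure is valid, the sample is adequate, ...),
% so a failed prediction refutes the conjunction, never H on its own:
\[
  (H \wedge A) \rightarrow O,\quad \neg O \ \vdash\ \neg(H \wedge A),
  \qquad \text{i.e. } \neg H \vee \neg A
\]
```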

That said, all the suggestions you make serve this latter standard equally well. There is no good substitute for excellent methods, not because they improve falsification, but because they form a strong inductive argument.

Engaging and interesting as always!!!


Vincent JOST

4. Teach stochastic processes (or rather, don't!) to students in the social sciences, so that it'll be much harder to track artificial falsification/creation of experimental data.

5. Throw me under the bus******. Then at last I'll get to ask all my questions to god directly, and I will no longer have to wonder whether Auguste Comte was joking when he argued for scientism, or whether instead we are just misinterpreting how a civilization yearning for scientific miracles should communicate, work and organize itself.*******

******Give me a hug and let's create a spiritual link, and also send me your RSA key before the bus, so you'll have better likelihood peaks regarding the interpretations you may draw from what you might think I'll have been sending you from beyond the rainbow.

*******These two hypotheses are independent, but I found it funny to put them as opposites :)

Sam Schwarzkopf

Dear Simine

You're probably getting sick of my tirades, but I just read this post today and I get the feeling we actually pretty much agree, which makes it somewhat bizarre that we are debating. We clearly must be missing each other's points somehow.

Regarding 1: Improve the methods. I find a failed replication using better methods much more convincing than several successful replications using identical methods. I get the feeling social psychologists tend to think of better methods as "larger samples, Bayesian stats", but I actually believe this lacks imagination. Again, one of my favourite examples is Doyen et al. (2012), who used automated infrared sensors instead of stopwatches to measure timing. A simple but effective methodological improvement.
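
For what it's worth, a quick toy simulation shows why shaving measurement noise can buy more than adding participants: the extra noise from a crude instrument attenuates the observed standardized effect. The numbers below (baseline times, noise levels) are made-up illustrations, not Doyen et al.'s data.

```python
# Toy simulation: how timing-device noise attenuates an observed effect size.
# All numbers are hypothetical illustrations, not taken from Doyen et al. (2012).
import numpy as np

rng = np.random.default_rng(42)

def mean_observed_d(device_sd, n=30, true_diff=50.0, trial_sd=150.0, n_sims=2000):
    """Average observed Cohen's d for a true 50 ms group difference,
    with extra Gaussian noise (in ms) contributed by the timing device."""
    ds = []
    for _ in range(n_sims):
        primed  = rng.normal(700.0 + true_diff, trial_sd, n) + rng.normal(0.0, device_sd, n)
        control = rng.normal(700.0,             trial_sd, n) + rng.normal(0.0, device_sd, n)
        pooled_sd = np.sqrt((primed.var(ddof=1) + control.var(ddof=1)) / 2.0)
        ds.append((primed.mean() - control.mean()) / pooled_sd)
    return float(np.mean(ds))

print("hand-held stopwatch (device sd ~ 150 ms):", round(mean_observed_d(150.0), 3))
print("infrared sensor     (device sd ~   5 ms):", round(mean_observed_d(5.0), 3))
```

The true difference is identical in both runs; only the device noise changes, and the noisier instrument systematically shrinks the effect you can observe.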

Regarding 3: I totally agree, and have said many times, that "hidden moderators" are unfalsifiable and thus unhelpful. This argument should be discounted. However, the onus of testing alternative hypotheses needn't be on the originator. Like Richard said in the other thread, I think we should stop thinking about this as replicators vs. originators (wizards vs. muggles?). If you think of a good alternative explanation, you should test it. It doesn't matter whether you were the one who first published the effect. If you then fail to replicate, you can (and should!) report that. My point is, we could all be doing this all the time.
