

Comments

koen pauwels

Most excellent - I fully agree, and I am not even in psychology! And yes, I would also eat your failed donut :-)

Sam Schwarzkopf

Great post, completely agreed! :)

The answer to *** is eating out in a good restaurant.

Chris Crandall

I think the metaphor is both accurate and inaccurate. Given the same recipe, skilled chefs and unskilled chefs generate VERY different results. In baking this is exacerbated (see obligatory reference to Great British Baking Show/Bake-Off, which demonstrates this point amply). A recipe assumes a vast amount of background knowledge and skill--which is not apparent in the recipe. Ditto registered reports.

If you believe that some people are better at writing grants than they are at conducting research, then you're already on the side that criticizes the notion that a registered report is a sufficient basis for a publication (or whatever) decision. But all that is just dissecting the metaphor.

Some of the argument takes the form of "if you've p-hacked or otherwise mangled your results, the work is bad." This is true, but registered reports don't really solve *that* problem, and there are other, better-suited ways to address it (IMO).

But what really got me going was endorsing the notion that "Wallach (1994, 1998) showed that most "theories" tested in JPSP and JESP papers are almost tautological."

No, they did not. They made the argument. If anyone still cares, we differ:

Schaller, M., Crandall, C.S., Stangor, C., & Neuberg, S.L. (1995). "What kinds of social psychology experiments are of value to perform?" A reply to Wallach and Wallach (1994). Journal of Personality and Social Psychology, 69, 611-618.

Schaller, M., & Crandall, C.S. (1998). On the purposes served by psychological research and its critics. Theory and Psychology, 8, 205-212.

Crandall, C.S., & Schaller, M. (2002). Social psychology and the pragmatic conduct of science. Theory and Psychology, 11, 479-488.

NB: These papers are 15-20 years old, and we might change some of the examples or arguments, but the main thrust remains.

Sam Schwarzkopf

@Chris Crandall:

To me there is a difference between providing a sufficiently detailed recipe and being a great cook. When we say that a method section should provide the necessary information for anyone to replicate an experiment, I don't think anybody seriously means to imply that training is irrelevant. Obviously I won't be able to replicate a particle physics experiment even if you give me access to the LHC.

I think in the past not even that was a given: methods sections, especially in many high-impact journals, were far from detailed enough even for experts to replicate the methods. But that situation is fortunately improving.

The other aspect, I believe, relates to the level of training that should be required. Comparing particle physics and social psychology experiments is obviously a red herring. But shouldn't we expect a psychology researcher to be able to replicate psychology experiments?

I'd concede that even within a field there *can* be differences in expertise that could be an issue. There is probably no blanket judgement here; you need to look at it case by case. Nevertheless, if a finding is strongly susceptible to experimenter effects, at the very least this implies the result is subtle and/or unreliable. Moreover, if it were me, I would really want to know what actually made the difference.
