i don't mean to pile on,* but want to share two quick thoughts about the jens förster case.
1.
first, i think the evidence is pretty overwhelming that the data in the paper in question are not real. in addition to the original report and the LOWI committee's report, further analyses by the data colada guys show that the pattern of results could never have happened without data manipulation.
this case is interesting in part because the evidence for fraud comes from statistical analyses of the published results, rather than from a whistleblower inside the lab, or a confession. this makes some people uncomfortable. i agree that concluding fraud based on probabilities could be problematic - this case makes us wonder what we would think if the odds were, say, 1 in 10,000. how small does the probability have to be for us to conclude fraud?
i don't know, but i know 1 in 508 quintillion is improbable enough. i agree it is worth thinking about what we should do with borderline cases, but it is also important to recognize that this is not a borderline case. 1 in 508,000,000,000,000,000,000 has at least ten more zeros than i need in order to be certain.
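to give a flavor of how a number like that arises (this is a toy sketch with made-up parameters - not the actual LOWI or data colada analysis - assuming 40 three-condition studies, 15 subjects per cell, sd of 1, and a "suspiciously linear" cutoff of 0.05 scale points): even if the true effect were perfectly linear in the population, sampling noise alone makes it wildly unlikely that every observed control mean lands almost exactly midway between the other two conditions.

```python
# toy simulation: how often does sampling noise leave the middle condition's
# mean almost exactly halfway between the other two, in EVERY study, even
# when the true population means are perfectly linear?
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_studies, n, sd = 10_000, 40, 15, 1.0  # made-up parameters

# sample means for each condition; shape (n_sims, n_studies).
# the sampling distribution of a mean of n draws has sd = sd / sqrt(n).
low  = rng.normal(4.0, sd / np.sqrt(n), (n_sims, n_studies))
mid  = rng.normal(5.0, sd / np.sqrt(n), (n_sims, n_studies))
high = rng.normal(6.0, sd / np.sqrt(n), (n_sims, n_studies))

dev = np.abs(mid - (low + high) / 2)           # distance from the exact midpoint
per_study = (dev < 0.05).mean()                # chance one study looks this linear
all_forty = (dev < 0.05).all(axis=1).mean()    # chance ALL 40 studies do

print(f"per-study probability: {per_study:.3f}")       # ~0.13
print(f"all-40-studies probability: {all_forty}")      # 0.0 in any feasible run
print(f"analytic estimate: {per_study ** n_studies:.1e}")  # around 1e-36
```

with these made-up numbers the per-study chance is about 13%, and 0.13 to the 40th power is around 10^-36 - the same ballpark of absurdity as 1 in 508 quintillion.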
2.
another interesting aspect of the förster case is his response. one line in particular jumped out at me:
'[i] never did something even vaguely related to questionable research practices.'
really? i can't go two hours without doing something vaguely related to questionable research practices.
this reminded me of a claim diederik stapel made in his response to the interim report on his fraudulent papers (october 31, 2011 - i can't find the english version online anywhere), in which he stated:
'i must emphasize that the errors i have made were not motivated by self-interest.'
why do these guys insist on completely undermining their credibility? being accused of fraud and responding that not only did you not commit fraud, but you've never done anything even vaguely related to questionable research practices, is like being accused of murder and responding that not only did you not kill anyone, but you've never told a lie.
maybe förster isn't guilty. maybe someone else manipulated his data. (sometimes my dog likes to play around on spss.**) but it is fascinating that, when faced with a serious accusation like this, he makes a claim that makes it impossible to trust anything else he says. shouldn't a social psychologist know better?
* it's possible that i do mean to pile on. i'm not sure.
** the reason i suspect it was förster himself is that if an innocent person was presented with this evidence, i think their first reaction would not be defensiveness but 'holy shit. who is f*&$%ing with my data?'
Not only is it implausible, as you say, that nothing he has done is vaguely related to questionable practices, but his long open letter (http://retractionwatch.com/2014/05/12/i-never-manipulated-data-forster-defends-actions-in-open-letter/) basically admits to one!
He says that when a study doesn't work, he just changes things and re-runs it. Nothing wrong with that, but if you do that repeatedly and don't report the failures, you'll eventually stumble on one that "works." That's a (common) questionable practice -- reporting only the studies that "worked" inflates the false positive rate among the published results.
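Here's a quick toy simulation of that cycle (with made-up numbers: no true effect at all, n = 20 per group, up to 50 re-runs per line of research): nearly every line of research eventually "works," and every one of those reported results is a false positive.

```python
# rerun-until-it-works: each "study" is a two-group t-test on pure noise.
# a researcher keeps tweaking and re-running until p < .05, then reports
# only that final study. the reported literature is then 100% false
# positives, even though each individual test has only a 5% error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_until_it_works(n=20, alpha=0.05, max_tries=50):
    """Re-run a null-effect study until one 'works'; return attempts needed."""
    for attempt in range(1, max_tries + 1):
        a = rng.normal(0, 1, n)  # both groups drawn from the same population
        b = rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            return attempt
    return None  # gave up (happens only ~8% of the time with these numbers)

tries = [run_until_it_works() for _ in range(1000)]
successes = [t for t in tries if t is not None]
print(f"'successful' lines of research: {len(successes)} / 1000")   # ~920
print(f"median re-runs needed for a false positive: {np.median(successes)}")  # ~14
```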
Perhaps his claim that he has done nothing questionable should be interpreted to mean that he doesn't know the full scope of questionable research practices.
Posted by: Daniel Simons | 13 May 2014 at 01:07 AM
this letter (which i hadn't seen when i wrote this post) also provides more food for thought:
http://www.socolab.de/main.php?id=66
(i couldn't get your link to work, dan, might be same letter)
Posted by: simine | 13 May 2014 at 01:18 AM
It's the same letter - here's a corrected link:
http://retractionwatch.com/2014/05/12/i-never-manipulated-data-forster-defends-actions-in-open-letter/
A brief tangent on the issue Dan brings up. In the tweak-and-rerun cycle that Dan describes, you are engaged in exploratory research (whether you realize it or not) with a substantial risk of capitalizing on chance. Upon getting an experiment to "work," the appropriate thing to do would be to set aside that result, re-run the procedure that "worked" with a new sample, and report the results from the second sample. This is the experimental analog of using separate training and test datasets in cross-validation. If someone did that, it would be less of a problem for them not to report the preliminary experiments.
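A rough sketch of the difference (with assumed numbers -- 10 exploratory variants of a null-effect study, alpha = .05): reporting the exploratory "winner" alone gives a false positive rate around 40% per line of research, while requiring a confirmatory re-run on a fresh sample brings it back below the nominal 5%.

```python
# exploratory phase: try 10 variants of a null-effect study and keep the one
# with the smallest p-value (capitalizing on chance). confirmatory phase:
# re-run only that winner on a fresh sample. for simplicity the "variants"
# here are identical null studies, which is enough to show the selection effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha, variants, reps = 20, 0.05, 10, 2000

def one_study():
    """Two-group t-test with no true effect; returns the p-value."""
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    return stats.ttest_ind(a, b).pvalue

exploratory_hits = confirmatory_hits = 0
for _ in range(reps):
    best_p = min(one_study() for _ in range(variants))  # cherry-pick the winner
    if best_p < alpha:
        exploratory_hits += 1
        if one_study() < alpha:  # same procedure, fresh sample
            confirmatory_hits += 1

print(f"false positives, exploratory winner only: {exploratory_hits / reps:.2f}")   # ~0.40
print(f"false positives, with confirmatory re-run: {confirmatory_hits / reps:.3f}")  # ~0.02
```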
(I say this is a tangent because Forster does not say that's what he did, so I assume he didn't do it.)
Posted by: Sanjay Srivastava | 13 May 2014 at 01:35 AM
Let me just mention that probabilities by themselves can never prove fraud, because they always leave open the possibility of unintentional error. And in my view, probabilities less than one in a million are all equivalent, because there is always at least one chance in a million that the probability has been calculated incorrectly.
Best regards, Bill
Posted by: Weskaggs | 13 May 2014 at 05:21 AM