
Comments

Hardsci

I'm going to try to beat Daniel Lakens and the Bayesians to the punch and point out that there are ways to build data peeking into your study design, and do analyses that are not biased by it.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2333729
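
A rough illustration of what such a design can look like (my own Python sketch, not code from the linked paper): a Pocock-style boundary for two equally spaced looks raises the per-look critical value to z ≈ 2.178, which holds the overall two-sided error rate at about .05 even though you test twice.

    import numpy as np

    rng = np.random.default_rng(1)
    N_SIM = 100_000    # simulated null studies
    Z_POCOCK = 2.178   # Pocock critical z for K = 2 equally spaced looks (overall two-sided alpha = .05)

    data = rng.standard_normal((N_SIM, 100))            # null is true; final N = 100 per study
    z_interim = data[:, :50].sum(axis=1) / np.sqrt(50)  # z test (variance known) at the interim look, n = 50
    z_final = data.sum(axis=1) / np.sqrt(100)           # z test at the final look, n = 100
    reject = (np.abs(z_interim) > Z_POCOCK) | (np.abs(z_final) > Z_POCOCK)
    print(f"overall type I error with the peek built in: {reject.mean():.3f}")  # ~ .05

Stopping early when the interim z crosses the boundary is legitimate here, because the raised critical value already pays for the extra look.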

CookieSci

Good points, although I was surprised that you didn't address the fact that this "optional abortion" strategy will still lead to an inflated error rate for the set of studies that you do end up seeing through to completion. Maybe this is understood?

I came up with the following intuition. Imagine that we have a set of 100 studies where we have a directional hypothesis that the mean is positive, but in fact the null is true (mean = 0) for all 100 studies. On average, 5 of these 100 studies would, if we completed them, result in erroneous rejection of the null (i.e., they will end up with large positive means). Now we peek at the data early on and optionally abort studies with observed means that are negative -- if the null is always true, that's half of the studies. The problem is that those 5 error studies are more likely to end up in the non-aborted half than in the aborted half.
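
One way to make this intuition precise (my own sketch, not part of the original comment): for i.i.d. standard normal data under the null, the interim mean and the final mean are positively correlated, so surviving the peek raises the chance of eventual rejection. In LaTeX notation:

\[
\operatorname{Corr}(\bar{X}_n, \bar{X}_N) = \sqrt{n/N},
\qquad
P(\text{reject} \mid \bar{X}_n > 0)
  = \frac{P(\bar{X}_N > c_\alpha,\ \bar{X}_n > 0)}{1/2}
  > \frac{\alpha/2}{1/2} = \alpha .
\]

The inequality holds because the positive correlation makes the two events positively dependent, and since \(\sqrt{n/N}\) grows with the peek point n, a later peek should inflate the error rate more -- consistent with the 7% versus 10% figures in the simulation below.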

I just did a little simulation of studies with final N = 100; each with one optional abortion point at N = 5, 10, ..., or 50; standard normal data; and a one-sided test. The error rate is about 7% for studies where we peeked after N = 5 but decided to keep going, and it increases to about 10% for studies where we peeked after N = 50. Of course I just picked these parameters out of a hat; the point is that it exceeds 5% in general (if we don't do any corrections).
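
Here is a minimal Python sketch of that simulation as I understand it (my reconstruction, not the original code; the seed and the number of simulated studies are arbitrary):

    import numpy as np
    from scipy.stats import t as t_dist

    rng = np.random.default_rng(0)
    N_FINAL = 100      # final sample size per study
    N_SIM = 100_000    # simulated null studies per peek point
    ALPHA = 0.05       # nominal one-sided alpha
    crit = t_dist.ppf(1 - ALPHA, df=N_FINAL - 1)  # one-sided t critical value

    for n_peek in (5, 10, 20, 30, 40, 50):
        data = rng.standard_normal((N_SIM, N_FINAL))  # null is true: true mean = 0
        kept = data[:, :n_peek].mean(axis=1) > 0      # abort any study whose interim mean is negative
        final = data[kept]                            # studies seen through to completion
        t_stat = final.mean(axis=1) / (final.std(axis=1, ddof=1) / np.sqrt(N_FINAL))
        print(f"peek at n={n_peek:2d}: error rate among completed studies = "
              f"{(t_stat > crit).mean():.3f}")

This should reproduce roughly the 7% (peek at N = 5) to 10% (peek at N = 50) pattern described above.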

Of course, there is no divine dictate that the error rate must be 5%. It might be fine to accept a higher error rate for some of the good reasons that you mention in this post. But should we not at least acknowledge that it is above the nominal alpha level of our test?

Chris

You cannot "beat Daniel Lakens and the Bayesians to the punch" because they assumed it in their priors.

Dr. R

Hi,

A recent blog post on dealing with non-significant results after looking at the data seems relevant:
http://rolfzwaan.blogspot.ca/2015_05_01_archive.html

I would also like to add that most research practices are OK if you honestly report them. So, if you feel confident about your methods, just say that you peeked X times and stopped when criterion Y was reached.

