This is CookieSci's Typepad Profile.
CookieSci
Recent Activity
Good points, although I was surprised that you didn't address the fact that this "optional abortion" strategy will still lead to an inflated error rate for the set of studies that you do end up seeing through to completion. Maybe this is understood? Here is the intuition I came up with. Imagine we have 100 studies, each with a directional hypothesis that the mean is positive, but where in fact the null is true (mean = 0) for all 100. At an alpha of 5%, about 5 of these 100 studies would, if we completed them, result in erroneous rejection of the null (i.e., they would end up with big positive means). Now we peek at the data early on and optionally abort studies whose observed means are negative; if the null is always true, that's half of the studies. The problem is that those 5 error studies are more likely to end up in the non-aborted half of studies than in the aborted half.

I ran a small simulation of studies with a final N = 100, each with a single optional abortion point at N = 5, or N = 10, ..., or N = 50, using standard normal data and a one-sided test. The error rate is about 7% for studies where we peeked after N = 5 but decided to keep going, and it rises to about 10% for studies where we peeked after N = 50. Of course I just picked these parameters out of a hat; the point is that the rate exceeds 5% in general (if we don't apply any corrections).

Of course, there is no divine dictate that the error rate must be 5%. It might be fine to accept a higher error rate for some of the good reasons you mention in this post. But shouldn't we at least acknowledge that it is above the nominal alpha level of our test?
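For anyone who wants to check this, here is a minimal sketch of the simulation described above (a reconstruction, not the original code; the use of a one-sided one-sample t-test on the completed studies and the specific variable names are assumptions based on the description):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_final = 100      # final sample size per study
n_sims = 100_000   # simulated studies per peek point
alpha = 0.05       # nominal one-sided alpha

for n_peek in range(5, 55, 5):
    # all studies are null: standard normal data with true mean 0
    data = rng.standard_normal((n_sims, n_final))
    # "peek" after n_peek observations; abort studies whose interim mean is negative
    kept = data[data[:, :n_peek].mean(axis=1) >= 0]
    # one-sided t-test against mean = 0 on the completed (non-aborted) studies
    t, p = stats.ttest_1samp(kept, 0.0, axis=1, alternative="greater")
    print(f"peek after N={n_peek:2d}: "
          f"error rate among completed studies = {(p < alpha).mean():.3f}")
```

Because the abort rule preferentially discards studies headed toward negative means, the studies that survive to the final test reject the null at a rate above the nominal 5%, and later peeks produce more inflation than earlier ones.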