This is Hardsci's Typepad Profile.
Recent Activity
Great post, Simine! I want to quibble with one thing. You wrote: "maybe we’re really good at power planning! maybe we know exactly what size our effect will be! and we can run exactly enough subjects to get our p-value just below .05!" But I don't think that is true. The result of good power planning (for non-null effects) would be that the vast majority of p-values would be very small, not just below .05. That's why adequately powered studies of real effects produce right-skewed p-curves, and that's where the 6:1 ratio you refer to earlier comes from. In fact, even massively underpowered studies will probably produce right-skewed p-curves. I just ran a quick-and-dirty simulation out of curiosity, and with rho = .3, N = 20, which gives 25% power, you still get a right-skewed p-curve. Hopefully I'm not just being a pedantic nitpicker. I've heard people make the argument elsewhere that left-skewed p-curves could result from good power planning alone, and I don't think it is ever correct.
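For readers who want to try it themselves, here is a sketch of the kind of simulation described above. The exact setup is my reconstruction (I'm assuming a two-sided Pearson correlation test on bivariate normal data); the original "quick and dirty" code wasn't posted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, n, n_sims = 0.3, 20, 20_000  # true correlation, sample size, simulations

pvals = []
for _ in range(n_sims):
    # draw n pairs from a bivariate normal with correlation rho
    xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    r, p = stats.pearsonr(xy[:, 0], xy[:, 1])
    pvals.append(p)

pvals = np.asarray(pvals)
sig = pvals[pvals < .05]
power = sig.size / n_sims          # roughly .25 for rho = .3, N = 20

# p-curve: significant p-values binned into five .01-wide bins
bins = np.histogram(sig, bins=np.arange(0, .051, .01))[0]
print(f"power ~ {power:.2f}")
print("p-curve counts:", bins)
```

Even at ~25% power, the first bin (p < .01) holds the most mass and the counts fall off toward .05, i.e., the p-curve is right-skewed, consistent with the point made above.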
I'm going to try to beat Daniel Lakens and the Bayesians to the punch and point out that there are ways to build data peeking into your study design, and do analyses that are not biased by it.
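For concreteness, here is a minimal illustration of that point (my own sketch, not anything Lakens or the Bayesians have posted here): under the null, peeking at the data at several interim sample sizes with an unadjusted alpha inflates the Type I error rate, but building the peeking into the design with a group-sequential correction (here, Pocock's nominal level of .0221 for three planned looks) restores it to about .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
looks = [20, 40, 60]   # planned sample sizes at each interim look
n_sims = 10_000

def sequential_rejects(alpha_per_look):
    """Fraction of null simulations rejected at ANY look (one-sample t-test)."""
    rejects = 0
    for _ in range(n_sims):
        x = rng.standard_normal(looks[-1])  # null is true: mean = 0
        for n in looks:
            if stats.ttest_1samp(x[:n], 0).pvalue < alpha_per_look:
                rejects += 1
                break
    return rejects / n_sims

naive = sequential_rejects(0.05)     # unplanned peeking at alpha = .05
pocock = sequential_rejects(0.0221)  # Pocock bound for 3 looks
print(f"Type I error, naive peeking: {naive:.3f}")  # well above .05
print(f"Type I error, Pocock bound:  {pocock:.3f}")  # near .05
```

The naive strategy rejects true nulls at roughly double the nominal rate, while the corrected per-look threshold keeps the overall error rate at about .05, which is what "building data peeking into your study design" buys you.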
Sam, if I understand what you're calling a "control condition," I think you mean building in a way to validate the methods (both that they are generally valid and that they were correctly implemented in a given experiment). If that is missing from a direct replication, then it was missing from the original. And it is just as much of a problem for the original study, possibly more so.
Commented Apr 9, 2015 on "on flukiness" at sometimes i'm wrong
Sanjay Srivastava, Associate Professor, Department of Psychology, University of Oregon
Hardsci is now following The Typepad Team
Dec 7, 2014