This is PsychChrisNave's Typepad Profile.
Recent Activity
Interesting post, Simine. Two points:

1) Why not go Milgram and ask a series of experts on a topic (or maybe the general public?) to predict the results of a proposed study - that is, move the topic from a fun party trick to collecting survey data on what people actually expect to find? I suppose you could pre-register the hypothesis (or competing hypotheses) and add more objectivity to what truly is counterintuitive. The implications of our work could then be put in context and adjusted by how “counterintuitive” our results are.

2) One way of looking at “counterintuitive” is from an empirical standpoint - what flies in the face of past findings on psychological phenomena. Meta-analysis can be useful for creating a Bayesian prior, or a threshold one might hold, for evaluating new findings in the context of what we already know. Before meta-analysis, we had to rely on literature reviews and a rough qualitative sense of the evidence for an effect. I’m cautiously optimistic that with the sophistication of meta-analytic techniques and with increased dissemination of knowledge (e.g., online databases, the OSF), we can empower journal editors, reviewers, and the general public to make an educated assessment of what evidence is needed to challenge or “undo” past findings. (I realize we still have file-drawer issues, but this is not new - Rosenthal and others came up with the fail-safe N and other ways to estimate the number of null studies needed to overturn an effect decades ago, and I’m sure meta-analysts are developing new and improved ways of accounting for the file drawer.) Malle’s 2006 meta-analysis showing no actor-observer effect can help inform future studies of the phenomenon (interestingly, the actor-observer effect is still widely taught in social psychology as a counterintuitive effect - so counterintuitive there is perhaps no evidence of its existence!).
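The fail-safe N mentioned above is simple enough to compute by hand. Below is a minimal sketch of Rosenthal's (1979) version, which asks how many unpublished null (z = 0) studies would have to sit in file drawers before the Stouffer-combined z of all studies drops below the one-tailed .05 cutoff. The z-scores in the example are made up for illustration, not taken from any study discussed here.

```python
import math

def fail_safe_n(z_scores, alpha_z=1.645):
    """Rosenthal's (1979) fail-safe N: the largest number of unpublished
    null (z = 0) studies that could exist before the Stouffer combined
    z-score, sum(z) / sqrt(k + n), falls below the one-tailed .05 cutoff."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Solve z_sum / sqrt(k + n) >= alpha_z for the largest integer n.
    n = (z_sum / alpha_z) ** 2 - k
    return max(0, math.floor(n))

# Hypothetical z-scores from three published studies:
print(fail_safe_n([2.0, 2.5, 1.8]))  # -> 11
```

With eleven hidden null studies the combined z is still just above 1.645; a twelfth would push it below, so the "effect" here could be undone by a fairly small file drawer.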
I’m not suggesting we hold counterintuitive findings to a higher threshold at the publication or dissemination level if there is full transparency about the methodology, a large N, rigorous analytic strategies, and cautious interpretation of the implications of the work. Raising the bar too high for counterintuitive findings to be published gets us back into making silly, arbitrary decision rules like overly stringent alphas/corrections, or over-concerning ourselves with Type I error at the expense of Type II. We need to be more honest when our work is exploratory (Sanjay Srivastava eloquently makes the point that work can be ground-breaking OR definitive - see the link below), be careful in the extrapolations we make from our work, and understand that many of our initial conclusions about human behavior may end up being wrong. (While we’re at it, let’s continue to de-stigmatize the fact that great, high-quality research can and will be “wrong” after replications are performed - a failure to replicate need not have anything to do with fraud or poor research design.) Sanjay’s blog/article:
Commented Mar 26, 2014 on unbelievable. at sometimes i'm wrong
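The trade-off the comment above points to - overly stringent alphas buying Type I protection at the expense of Type II error - can be made concrete with a little arithmetic. This is a rough, stdlib-only sketch using a normal approximation for a two-sided one-sample z-test; the effect size, sample size, and critical values are illustrative assumptions, not figures from the post.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_one_sample_z(d, n, z_crit):
    """Approximate power of a two-sided one-sample z-test for a true
    standardized effect d at sample size n, given the critical value
    z_crit for the chosen alpha (the negligible far tail is ignored)."""
    return normal_cdf(d * math.sqrt(n) - z_crit)

# Hypothetical example: true effect d = 0.3, n = 100.
# z_crit = 1.96 corresponds to two-sided alpha = .05;
# z_crit = 3.29 corresponds to two-sided alpha = .001.
print(round(power_one_sample_z(0.3, 100, 1.959964), 2))  # -> 0.85
print(round(power_one_sample_z(0.3, 100, 3.290527), 2))  # -> 0.39
```

In this made-up example, tightening alpha from .05 to .001 drops power from about .85 to about .39 - i.e., the Type II error rate roughly quadruples - which is the cost the comment warns about.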
Great set of posts, Simine! I appreciate the nuance that you and others like Sanjay, Laura, David, and Brent are bringing to the “replication/methodology/ethics crisis”. We have years (and years!) of training in methodology, assessment, and statistics, and having to think a little more deeply about how we plan, run, and analyze our studies is not such a bad thing. I also echo your sentiment that running large-N studies using only self-report questionnaires is not as rigorous or ecologically valid as taking the time to obtain peer reports, directly observed behavior, behavioral residue, and/or "life data" (e.g., verifying GPA via transcripts, number of Facebook friends, confirmation of whether someone actually voted). I find it maddening to review manuscripts that use 40 mturkers, pay them $.20 each, and take all of 5 minutes to complete (questionably validated) questionnaires, and I would not jump for joy to read a replication with 400 or 4,000 mturkers using the same methodology. Instead of publishing less, let’s just think more about our study designs, making sure we utilize the methodological tools available to us that make sense for the phenomena we are studying. None of these suggestions (larger N, multiple methods, replication) is anything new to our field, but I’m cautiously optimistic that the renewed attention to these concerns will bring about (thoughtful, methodical) change in our field.
Commented Mar 7, 2014 on having it all at sometimes i'm wrong
PsychChrisNave is now following The Typepad Team
Mar 6, 2014