
Comments

Roger Giner-Sorolla

Nice article. I think it's symptomatic of something wrong in our field that we worry about whether a study is going to "come out" in our favor instead of whether its methods are strong enough to support the knowledge about the world we get from it. I'm pretty happy whenever I design a study that I know will have something of interest to say regardless of its results.

Sanjay Srivastava

Great post.

The best way to evaluate science (and scientists) is for knowledgeable experts to make informed judgments.

The problem is that expertise and time are rare commodities. So people end up making heuristic judgments based on metrics that they can easily see and count. Which just incentivizes people to inflate the metrics. We've all seen the problems generated by the "more publications is better" heuristic. I totally agree that if "more subjects is better" becomes too strongly incentivized, it will create distortions of its own. It is tempting to try to fix it by incentivizing other things, but "more methods is better" or "more dollars spent per subject is better" will create problems too.

So it comes back to expert judgment. And decision-makers who do not have the time or expertise need to trust those who do. If you're a dean and find yourself counting lines on the vitas of tenure candidates in fields you don't know, ask yourself why you aren't trusting the recommendations of the people in the department. Maybe you should. And if you have a good reason not to trust them -- well, you're the freakin' dean, fixing it is your job. They didn't hire you to count things. (I hope.)

PsychChrisNave

Great set of posts, Simine! I appreciate the nuance that you and others like Sanjay, Laura, David, and Brent are bringing to the “replication/methodology/ethics crisis”. We have years (and years!) of training in methodology, assessment, and statistics, and having to think a little more deeply about how we plan, run, and analyze our findings is not such a bad thing. I also echo your sentiment that running large-N studies using only self-reported questionnaires is not as rigorous or ecologically valid as taking the time to obtain peer reports, directly observed behavior, behavioral residue, and/or "life data" (e.g., verifying GPA via transcripts, # of facebook friends, polling confirmation of whether someone actually voted). I find it maddening to review manuscripts that use 40 mturkers, pay them $.20 each, and take all of 5 minutes to complete (questionably validated) questionnaires, and I would not jump for joy to read a replication with 400 or 4000 mturkers using the same methodology. Instead of publishing less, let's just think more about our study design to make sure we are utilizing the various methodological tools that are available to us and that make sense for the phenomena we are studying. None of these suggestions (larger N, multiple methods, replication) are anything new to our field, but I'm cautiously optimistic that the renewed attention to these concerns will bring about (thoughtful, methodical) change in our field.

simine vazire

thanks for your comments!

roger - i agree. i've always been too chicken to run a study that would only be useful if the results came out as i expected. i just don't have very much trust in my ability to come up with correct hypotheses. (incidentally, when i hear people worry about the ethics of wasting participants' time with large samples, i think that's legitimate but in my opinion the participant time that is wasted on studies that 'didn't work' and are relegated to the file drawer is a much bigger problem. at least with large samples, the participants' data are more likely to eventually contribute to the knowledge base, even if each participant's contribution is small.)

sanjay - you're right, we shouldn't use the quantity heuristic, and we should rely on experts to judge quality. i think this is hard for several reasons, one of which is the huge burden that is already placed on experts (as reviewers of journal articles, tenure cases, etc.). (more on that in a future post). another problem is that when we don't use an objective heuristic like quantity, we are left with subjective impressions, which feel more susceptible to implicit (and explicit) bias. i know that the quantity measure is also subject to bias, but somehow the potential for bias is more palpable when relying on people's opinions of quality.

chris - thanks for your comment! i'm optimistic, too. let's hope we're right!
