[DISCLAIMER: The opinions expressed in my posts, and guest posts, are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]
Guest post by Shira Gabriel
Don’t go chasing waterfalls, please stick to the rivers and the lakes that you're used to.
I haven’t always been the most enthusiastic about all the changes in the field around scientific methods. Even changes that I know are for the better, like attending to statistical power and thus running fewer studies with more participants, I have gone along with grudgingly. It is the same attitude I have towards eating more vegetables and less sugar. I miss cake and I am tired of carrots, but I know that it is best in the long run. I miss running dozens of studies in a semester, but I know that it is best in the long run.
It is not like I never knew about power, but like many other people in the field, I never focused on it. I had vague ideas of how big my cell sizes should be (ideas that were totally wrong, I have learned since) and I would run studies using those vague ideas. If I got support for my hypotheses -- great! But when I didn’t, I would spend time with the data trying to figure out what went "wrong" -- looking for something I could learn, some other pattern in the data that could tell me why I didn’t find what I predicted and perhaps clue me into some other interesting phenomenon.
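For the curious, here is a minimal sketch of just how wrong those vague ideas were (in Python, using statsmodels; the effect sizes and the rule-of-thumb cell size in the comments are conventional illustrations, not numbers from my lab):

```python
# Minimal power-analysis sketch: participants needed per cell for a
# two-group between-subjects design tested with an independent-samples
# t-test at alpha = .05 and 80% power. Effect sizes are illustrative.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for label, d in [("large (d = 0.8)", 0.8),
                 ("medium (d = 0.5)", 0.5),
                 ("small (d = 0.2)", 0.2)]:
    n_per_cell = power_analysis.solve_power(effect_size=d, alpha=0.05,
                                            power=0.80)
    print(f"{label}: ~{n_per_cell:.0f} participants per cell")

# Output:
#   large (d = 0.8): ~26 participants per cell
#   medium (d = 0.5): ~64 participants per cell
#   small (d = 0.2): ~394 participants per cell
# The once-common "20 per cell" habit is adequately powered only for
# effects much larger than the typical social psychology finding.
```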
You know, like Pavlov trying to figure out why his saliva samples were messed up and discovering classical conditioning. That was me, just waiting for the moment when I would discover my own version of classical conditioning.[1]
I am going to be honest here: I love doing that. I love looking at data like a vast cave filled with goodies where one never knows what might be found. I love looking for patterns in findings and then thinking hard about what those patterns mean and whether, even though I had been WRONG at first, there was something else going on -- something new and exciting. I love the hope of discovering something that makes sense in a brand new way. It is like detective work and exploration and mystery, all rolled into one. I’m like Nancy Drew in a lab coat.[2]
Before anyone calls the data police on me, the next step was never publication. I didn’t commit that particular sin. Instead, if I found something I didn’t predict, I would design another study to test whether that new finding was real or not. That was step two.
But this movement made me look back at my lab and the work we have done for the past 17 years[3] and realize that although this approach has worked for me a couple of times, I have also chased a lot -- A LOT -- of waterfalls that have turned into nothing. In other words, I have thrown a lot of good data after bad.[4]
And looking back, the ones that did work -- that turned into productive research areas with publishable findings -- were the ones that had stronger effects that replicated across different DVs right from the start.
When I chased something that wasn't as strong, I wasted huge amounts of my time and resources and, worse yet, the precious time of my grad students. That happened more than is comfortable for me to admit.
So, I think a big benefit for me of our new culture and rules is that I spend less time chasing waterfalls. My lab spends more time on each study (since we need bigger Ns) but we don't follow up unexpected findings unless we are really confident in them. If just one DV or one interaction looks interesting, we let it go for what it likely is -- a statistical fluke.
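(Why treat one interesting DV out of many as a likely fluke? A back-of-the-envelope sketch, assuming independent tests at the conventional alpha of .05 -- the DV counts are illustrative:)

```python
# Familywise false-positive rate: the chance that at least one of k
# independent tests comes up p < .05 when every null is true.
alpha = 0.05

for n_dvs in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** n_dvs
    print(f"{n_dvs:2d} DVs: {p_at_least_one:.0%} chance of a fluke 'finding'")

# Output:
#  1 DVs: 5% chance of a fluke 'finding'
#  5 DVs: 23% chance of a fluke 'finding'
# 10 DVs: 40% chance of a fluke 'finding'
# 20 DVs: 64% chance of a fluke 'finding'
```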
And we don't do that just because it is what we are now supposed to do; we do it because, empirically speaking, I SHOULD have been doing it for the past 17 years. I spent too much time chasing after patterns that turned out to be nothing.[5]
So I think my lab works better and smarter now because of this change.
As long as I am being so honest, I should admit that I miss chasing waterfalls. Just last week, one of my current PhD students[6] and I were looking at a study that is part of a really solid research program of hers, one that thoughtfully and carefully advances our science. And in her latest dataset, I felt the mist of a far-off possible waterfall in an unexpected interaction. Could this be the big one? It was tempting, but we aren’t going to chase the waterfall. As much as it seems fun and exciting, our science isn’t built on the drama and danger of waterfalls. To paraphrase the wise and wonderful TLC, I am sticking to the rivers and lakes that I am used to. That is how science advances.
Footnotes
1. Still waiting, in case you were wondering.
2. I don’t wear a lab coat. More like Nancy Drew in yoga pants and a stained sweatshirt, but same difference.
3. I am really old.
4. Or is it bad after good? I can never remember which way it is supposed to go.
5. How can you tell if an unexpected finding is a waterfall or classical conditioning? You can’t. But here are the four things I now look for: are the effects consistent across similar DVs; can we look back and find similar things in old datasets; do we have a sound theoretical explanation for the surprising findings; and, finally, can that theoretical explanation lead to other hypotheses that we can test in the existing data. Only if a good chunk of that works out will we move on to collect more data. And yeah, "good chunk" is not quantifiable. Sometimes Nancy Drew has to follow her instincts.
6. Elaine Paravati. She rocks.