many people have written about how bem's esp paper was one of the major factors that triggered the latest scientific integrity movement in social/personality psychology. that is an interesting story. but that is not the bem paper i want to talk about today.

i have come here today to discuss bem's chapter, 'writing the empirical journal article' (2003), which i - and i suspect many others - used to assign to every undergrad and graduate student taking our psychology research methods classes. there are many extremely wise points in that chapter. but there are also many pieces of advice that seem entirely antiquated today. if the bem chapter is no longer the gold standard for how to write an empirical article, what is? (see also: laura king's article for the spsp dialogue (pdf, p. 6).)

i was reminded of the complexity of this question when a historian friend of mine suggested i read 'the question of narrative in contemporary historical theory' by hayden white (1984). i will share a few quotes with you:

'a discipline that produces narrative accounts of its subject matter as an end in itself seems methodologically unsound; one that investigates its data in the interest of telling a story about them appears theoretically deficient.'*

'but it is precisely because the narrative mode of representation is so natural to human consciousness, so much an aspect of everyday speech and ordinary discourse, that its use in any field of study aspiring to the status of a science must be suspect. for whatever else a science may be, it is also a practice which must be as critical about the way it describes its objects of study as it is about the way it explains their structures and processes.'

what struck me about these passages is that white is struggling with the tension between science and storytelling. this is precisely the tension that has led me to stop assigning bem's article to my research methods classes. bem is a pretty strong advocate of the narrative/storytelling approach to writing empirical papers. i don't completely disagree, but he is not conflicted enough about this approach for my tastes.

first, i want to say that there are many things i love about bem's chapter. (his explanation of 'that' vs. 'which' changed my life.) he is an amazing writer and there is a massive amount of great advice in the chapter. but there are problematic parts.

let's start with this: bem recommends writing 'the article that makes the most sense now that you have seen the results' (rather than 'the article you planned to write when you designed your study'). it is pretty clear from the rest of this section that bem is basically telling us to invent whatever a priori hypotheses turn out to be supported by our results. indeed, he says this pretty explicitly on the next page:

**'contrary to the conventional wisdom, science does not care how clever or clairvoyant you were at guessing your results ahead of time. scientific integrity does not require you to lead your readers through all your wrong-headed hunches only to show - voila! - they were wrongheaded.'**

actually, science does care. and scientific integrity does require something along those lines. not that you tell us about all your wrong ideas, but that you not claim that you had the right idea all along if you didn't. if science didn't care, pre-registration and bayesian statistics would not be enjoying the popularity** they are today.

this alone is enough to make me stop assigning this chapter, but there are other worrisome bits. specifically, bem advocates exploring the hell out of your data ('analyze the sexes separately. make up new composite indexes [...] reorganize the data to bring them into bolder relief.'), which i have no problem with at all, if people are transparent. but then he seems to be telling us to present these exploratory results as confirmatory. he talks about the distinction between the context of discovery and the context of justification, but then completely blurs the line a few paragraphs later. he says:

'when you are through exploring, you may conclude that the data are not strong enough to justify your new insights formally, but at least you are now ready to design the "right" study. if you still plan to report the current data, you may wish to mention the new insights tentatively, stating honestly that they remain to be tested adequately. alternatively, the data may be strong enough to justify recentering your article around the new findings and subordinating or even ignoring your original hypotheses.'

if it weren't for that last sentence, that would be great advice. either run a new study (and pre-register your hypotheses!), or publish the current study but explain that the analyses were exploratory and should be taken with a grain of salt. but why recommend ever 'recentering' your article around the p-hacked, exploratory results as if they were predicted a priori? maybe i am reading too much into what he wrote, but in combination with the other bolded quote (above), i think my concerns are justified. certainly i would not blame an undergrad or graduate student for interpreting him to be saying this. in fact, at the end of this section he writes:

'think of your dataset as a jewel. your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it.'

this can no longer be considered an appropriate metaphor. it is just too blatantly unscientific. the question is not what is the best setting (or the best story you can tell), but what is the presentation that most advances scientific knowledge, that is most true to the evidence?

that is a tough question to answer. a beautifully carved story that omits a bunch of important details is not the answer, but a dry list of brute facts does not necessarily do the most to advance scientific knowledge, either.

it is the duty of scientists to overcome our natural tendency to fit everything to a narrative, and to try to describe things as they actually are.
yet, as a reader of psychology articles i do not want just the brute facts. it's just not realistic to expect other scientists, much less anyone else, to pay attention to our research results if we do not put any effort into creating a coherent story around them. we can't leave all of the interpretation and meaning-making up to the reader.

the tension here is between sticking to the (messy) facts, and doing some polishing and interpreting for the reader so she doesn't have to wade through a giant pile of unstructured findings. i will give bem credit: he seems aware of this tension. later in the chapter, he writes:
'[...] be willing to accept negative or unexpected results without a tortured attempt to explain them away. do not make up long, involved, pretzel-shaped theories to account for every hiccup in the data.'

so he acknowledged that the results don't need to be perfect (though he seems more worried that the story you would need to tell to explain the imperfections would itself be ugly and tortured than about the importance of disclosing the imperfections for the sake of scientific rigor).
back to the question: how do we balance the mandate to present the facts in all their messy glory with the need to give some structure and coherence to our observations?
one easy step is to clearly and honestly label exploratory analyses as such. ideally these would be followed up with confirmatory studies, but sometimes exploratory results are exciting enough (and/or hard enough to collect new data about) to publish on their own.
another (related) step is to stop claiming to have had a priori hypotheses that we didn't actually have. this step is trickier, however, because editors and reviewers still want to see the 'correct' hypothesis up front. there are some good reasons for this - it is kind of a waste of time to read an entire intro building up to a hypothesis that turns out to be false. one handy solution is to select research questions that are interesting whichever way the results come out - then the intro can present the two competing hypotheses and explain why each result would be interesting/informative.***

i am told that in some areas of research this is just not possible. all the more reason for everyone to become a personality psychologist. come join us, we never know which way our results will turn out!

so what is a better metaphor than the jewel-cutting metaphor? more broadly, what is the right way to teach our students to write empirical journal articles?

i don't know. i'm asking you.

* these quotes are both from page 1. i stopped understanding the essay after about page 4. in my defense, the footnotes are in french.

** in the academic world, 'popularity' just means it gets nerds excited. dan ariely's ama was more 'popular' on my facebook feed than any of the completely outrageous basketball upsets this weekend. seriously. sorry spartans.

*** do not attempt this in a grant proposal. i did once and was told to come back with a directional hypothesis. i flipped a coin and resubmitted the grant. it was funded. (eventually. i think it took four tries.)

(photo: my dog, longing for answers.)

(actually she is longing for the attention of a newfoundland off to the left.)