

Comments

Nick Brown

When talking to experimental psychologists, I find that they are almost universally receptive to the idea of having 5x fewer psychologists and 5x more resources per study. Perhaps they all imagine that they, personally, would all make the cut (cf. Kruger & Dunning, 1999).

Anyway, fewer articles and bigger samples mean fewer PIs. Just sayin'.

Luke

Bit worried that your N>200 suggestion ignores the experimental design and the predicted effect. That would be an absurd N for, e.g., some visual search study where everything is within-subjects and few individual differences are expected.
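Luke's worry can be made concrete with a quick power calculation: for the same raw effect, a within-subjects design with highly correlated conditions needs far fewer participants than a between-subjects comparison. The effect size (d = 0.4), the correlation between conditions (r = .9), alpha, and the target power in the sketch below are illustrative assumptions, not numbers from the thread.

```python
# Rough sketch: participants needed for 80% power in a between-subjects
# two-group design vs. a within-subjects (paired) design on the same raw
# effect, assuming highly correlated repeated measures.
from statsmodels.stats.power import TTestIndPower, TTestPower

d_raw, alpha, power = 0.4, 0.05, 0.80   # assumed raw effect, alpha, target power
r = 0.9                                  # assumed correlation between conditions

# Between-subjects: solve for participants per group.
n_between = TTestIndPower().solve_power(effect_size=d_raw, alpha=alpha, power=power)

# Within-subjects: the paired effect size grows as the correlation rises,
# d_paired = d_raw / sqrt(2 * (1 - r)), so far fewer participants are needed.
d_paired = d_raw / (2 * (1 - r)) ** 0.5
n_within = TTestPower().solve_power(effect_size=d_paired, alpha=alpha, power=power)

print(f"between-subjects: ~{n_between:.0f} per group (~{2 * n_between:.0f} total)")
print(f"within-subjects:  ~{n_within:.0f} participants total")
```

With these assumptions the between-subjects version lands near N = 200 total, while the within-subjects version needs only a dozen or so participants, which is the sense in which a blanket N>200 rule ignores design.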

Chris Crandall

Is this an example of Simpson's Paradox? A skeptical reader might say "personality psychologists have, as a major goal, parameter estimation. As a result, to meet those goals, they must use larger N's. Social psychologists, on the other hand, do not usually care about careful parameter estimation--they have other theoretical fish to fry, and as a result, do not seek so assiduously to generate a narrow confidence interval."


From the article "This was most clearly illustrated in considering the three sections of JPSP. The overall NF for the Attitudes and Social Cognition section was 79, whereas the overall NF for the Individual Differences and Personality Processes section was 122. This was the case even though previous meta-analyses indicate that the typical effect sizes examined in these subdisciplines are comparable."

This is *exactly* what one would expect if parameter estimation is quite often a major goal of personality research, but not very often a major goal of social psychology research.
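One way to see the estimation point is to look at how the width of a confidence interval around a typical correlation shrinks with N. The sketch below uses the two NF values quoted from the article (79 and 122) plus a larger sample; the r = .20 value is an illustrative assumption, not a figure from the paper.

```python
# Minimal sketch: 95% CI for the same observed correlation (r = .20, assumed)
# at the two NF values quoted above and at a larger N, via Fisher's z.
import numpy as np
from scipy import stats

def r_confidence_interval(r, n, conf=0.95):
    """Confidence interval for a Pearson correlation using the Fisher z transform."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = stats.norm.ppf(1 - (1 - conf) / 2)
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

for n in (79, 122, 250):
    lo, hi = r_confidence_interval(0.20, n)
    print(f"N = {n:3d}: r = .20, 95% CI [{lo:+.2f}, {hi:+.2f}], width = {hi - lo:.2f}")
```

Under these assumptions, at N = 79 the interval runs from roughly zero to about .40, so the sample says something about whether an effect exists but little about its size; roughly tripling N cuts the interval width nearly in half.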

simine

thanks for the comments!


Nick - i'm not sure i follow. why would there have to be fewer PIs? there is still just as much research being conducted (e.g., the same number of total subjects run); it just gets published in fewer, better papers. standards for hiring and promotion would have to change, but i'm not sure why replacing every set of six small studies with two big studies would mean there would be fewer PIs.


Luke - absolutely. i tried very hard to avoid all nuance (hence the 'blah blah blah'). within-subjects designs are exempt. and i'm sure there are other exceptions as well. and of course hard and fast rules are always ridiculous. the reason for my overly blunt approach is that although i think hard and fast rules are a problem, an even bigger problem is getting so wrapped up in the nuances and exceptions that you get paralyzed and decide not to do anything. in social/personality psych, a sample size of 200 is very often reasonable and/or necessary. more nuanced rules would be better in many ways, but i think that using 200 as a default and asking authors to justify smaller samples is a good start. and i hate to see the perfect be the enemy of the good.


Chris - yes, i think the point you make is likely one of the reasons that personality studies often have larger Ns. and perhaps we need larger Ns. however, putting personality psychology aside (we're still far from where we need to be anyway), the Ns in social psych are still way too small even if you don't care about parameter estimation. if you're concerned about type II error and about false positives (through file drawers and other QRPs), these sample sizes are too small even for just determining the direction or existence of an effect.
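To put a number on the type II error point, here is a small simulation of what selective publication from small samples can do to a literature: with a modest true effect, most studies miss it, and the ones that reach p < .05 overestimate it (and occasionally get its direction wrong). The true effect (d = 0.2), per-group n, and number of runs are arbitrary assumptions for illustration.

```python
# Small simulation sketch of the file-drawer concern: with a modest true effect
# and a small sample, the studies that clear p < .05 systematically overestimate
# the effect, and a few get its sign wrong.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, runs = 0.2, 30, 20_000   # assumed inputs

observed_d, significant = [], []
for _ in range(runs):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    observed_d.append((treatment.mean() - control.mean()) / pooled_sd)
    significant.append(p < 0.05)

observed_d = np.array(observed_d)
significant = np.array(significant)
print(f"power at this N: {significant.mean():.2f}")
print(f"mean d among significant results: {observed_d[significant].mean():.2f} (true d = {true_d})")
print(f"significant results with the wrong sign: {(observed_d[significant] < 0).mean():.1%}")
```

Under these assumptions power is only around .11, so most runs land in the file drawer and the significant subset gives a distorted picture of both the size and, occasionally, the direction of the effect.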

Nick Brown

Simine, I agree, if all we were doing was taking articles where the PI currently decides to run six underpowered studies and replacing those with two studies with higher N, then "statistically significant" results might also mean something, and PI employment levels would be unaffected. But think of all the single-study, N=60 articles that would no longer be possible because the budget, or number of grad students who can be co-opted to conduct the experiments, doesn't extend to N=200.

Maybe three colleagues can amalgamate their resources and run a single study instead of three, and then all be "co-PIs" on a hybrid study that doesn't really represent any of their true interests, but I don't see that being very popular. In my extensive experience of working in bureaucracies of various sizes, I have found that giving up status is about the last thing that people are prepared to do, even if the alternative is giving up some of the resources that are needed to do the job properly. (Why that should be, even among very intelligent people, would probably make an interesting study.)

A further problem is that if fewer articles are being published (hooray!) for us to wade through each month looking for the power problems, it means that fewer "scientific findings" are being made (especially if we continue to not publish and/or ignore null results). I'm not sure if the whole self-sustaining industry of funding, researchers, journals, press releases, media, and institutional prestige is ready for the implications of that. I suspect that the number of studies being conducted has a near-lawful relationship with the number of people with an economic interest in those studies taking place, such that a reduction in one implies a near-proportional reduction in the other. As someone with no dog in this fight (I'm a retired non-academic), I have no problem either describing that or thinking about its implications, but I realise that that probably doesn't apply to most people who are likely to be involved in this discussion.

Rubenarslan

Nick – molecular genetics does pretty well with forming consortia. And if you cannot find two colleagues to agree on a suggested study design, maybe the study shouldn't be done?

All that aside, almost everybody I've encountered in science so far has access to __too much__ data and too little time. So maybe running fewer, better studies would let more of the collected information leave the file-drawer.

Aaron Weidman

Hi Simine, this is a great post, and a really cool paper from you and Chris on the N-Pact Factor. But I'm really curious--how do you foresee (realistically or ideally) the N-Pact factor paper being used? I can think of several possibilities, but certainly there are more.

- Editors implement official sample size policy changes at the journals you coded, or other journals in psychology.

- Reviewers or editors cite the paper to justify rejecting a paper with a small sample, but otherwise sound theory and methods.

- Young eager graduate students (and faculty of course) feel a bit more empowered to question sample size-related decisions when attending talks or informally discussing research at conferences.

- Hiring committees use the paper, and subsequent NF data, to help determine the quality of a job candidate's publication record (e.g., lots of pubs at high NF journals, vs. lots of pubs at low NF journals). This would perhaps be the most drastic use.

I'd love to hear your thoughts!

Zerschmetterling

"*** sample sizes do not decrease all kinds of mistakes (i.e., systematic error is not reduced), but they reduce random error and they don't increase other kinds of error."

well. consider p. 28 in

Ramscar, Michael, et al. "The myth of cognitive decline: Non-linear dynamics of lifelong learning." Topics in Cognitive Science 6.1 (2014): 5-42.
http://psych.stanford.edu/~michael/papers/Ramscaretal_age.pdf

where a higher sample size was associated with a clear trend in subjects' performance. i generally agree with you, but bear in mind that there *is* a difference between screening 20 subjects and 200, which shouldn't just be ignored as it can have an effect.
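The footnote Zerschmetterling quotes can be illustrated with a short simulation: raising N shrinks random error (the spread of estimates across studies) but leaves a fixed systematic error untouched. The true value, bias, and sample sizes below are arbitrary assumptions, not quantities from the post.

```python
# Small simulation sketch: bigger N reduces random error but not systematic error.
import numpy as np

rng = np.random.default_rng(1)
true_mean, bias, runs = 0.50, 0.15, 10_000   # assumed inputs

for n in (20, 200, 2000):
    # each simulated study measures a quantity that is systematically off by `bias`
    estimates = rng.normal(true_mean + bias, 1.0, size=(runs, n)).mean(axis=1)
    random_error = estimates.std()                    # shrinks roughly as 1/sqrt(n)
    systematic_error = estimates.mean() - true_mean   # stays near `bias` at every n
    print(f"n = {n:4d}: random error = {random_error:.3f}, "
          f"systematic error = {systematic_error:.3f}")
```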
