i recently applied for the editor in chief position at Psychological Science. i didn't get it, but i got far enough to be asked to write a vision statement, responding to eight prompts. it was a fun exercise to think about what i would've liked to do had i been editor in chief of Psych Science, so i thought i'd share my vision statement here. one of the main reasons i was very interested in this position is because Eric Eich and Steve Lindsay have done a great job, as editors in chief, of making the journal more and more credible as a source for interesting and rigorous psychological science. i hope the journal keeps moving in this direction.
this version has been lightly edited to fix some, but surely not all, typos. (in my defense, Srivastava is hard to spell.)
VISION STATEMENT
- Overall Vision
What would be your overall vision for the journal?
Psychological Science is thriving, and a priority for the new editor will be to uphold its high standards and strong reputation, and to keep it on its impressive trajectory. Key features of Psychological Science are its breadth; its high standards with respect to rigor, relevance, and transparency/openness; and the short report format. One of my top priorities would be to preserve these distinctive features of the journal. I believe I am in a strong position to do so, having been an editor at a broad range of journals (including discipline-wide journals such as Perspectives on Psychological Science, AMPPS, and Collabra: Psychology), an expert in research methods and a leader in the open science movement (including co-founding the Society for the Improvement of Psychological Science), and the outgoing Editor in Chief of another selective, short-report journal (Social Psychological and Personality Science).
I expect that the biggest challenge in the next five years will be the move, across the sciences, towards open access journals and preprints. My close ties to the open science community give me a clear sense of what these challenges will look like. In my view, the open access movement will put a lot of pressure on traditional journals to make a compelling case for the added value they provide. I believe that Psychological Science is in a uniquely strong position to do so.
For example, Psychological Science has embraced many advances in journal publishing, such as moving away from journal impact factors and relying on more informative metrics. In addition, Psychological Science has a well-deserved reputation for transparency in its editorial practices and policies, thanks in large part to the publication committee and editors’ open communication through the APS website and social media.
Most importantly, Psychological Science has a strong track record of providing high quality peer review by active researchers with specialized expertise. Unlike mega journals that often cannot find editors or reviewers with appropriate expertise, Psychological Science can match every article to an editor and reviewers who are willing and highly qualified to evaluate the research. Few journals can boast such a broad and deep pool of qualified experts who are very invested in the journal. As a result of this unique strength, Psychological Science is able to attract high quality submissions and do a careful, thorough job in selecting rigorous and relevant articles for publication.
In short, Psychological Science will continue to be in a strong position to compete in the publishing landscape if it continues to:
1) adopt progressive and innovative policies, and be accountable and transparent about the journal’s practices and policies;
2) deliver fair, high quality peer review; and
3) maintain high standards for the rigor and relevance of published articles.
Psychological Science will have to continue making changes to stay competitive in this fast-changing landscape. I see three areas in particular where Psychological Science can build on its strengths:
- Breadth: The areas of cognition and perception, and, to a lesser extent, experimental social psychology, have historically been well-represented at Psychological Science. By reaching out more to other subdisciplines, and welcoming non-experimental research, Psychological Science can better serve the entire discipline of psychological science. I describe my vision for doing this in sections 2 and 4 below.
- Openness: Psychological Science is one of the leaders in open and reproducible science, and will need to keep moving forward to maintain that position. As a thought experiment, I imagine a group of meta-scientists doing another project like the Reproducibility Project: Psychology (RP:P, OSC, 2015) in ten years, and I ask myself: what can Psychological Science do to ensure that the results of such a project will increase the scientific community’s confidence in the journal? I describe my vision for doing this in sections 3 and 6 below.
- Inclusivity: The aim of a prestigious journal like Psychological Science is to publish the best research in the field regardless of who conducted it. This means making an active effort to encourage authors from all institutions and parts of the world to submit high quality work, and committing to providing a fair process regardless of the authors’ identity. I have been an advocate for various policies that aim to do this, such as blinding or de-emphasizing authors’ identities during the peer review process, and making editors more accountable for their decisions. I describe my vision for achieving greater fairness and inclusivity in sections 2, 4, and 8 below.
In describing my vision for Psychological Science throughout this statement, I present a lot of new ideas. However, none of these are changes that I would insist on implementing – I am open to discussion and compromise on any of these, and would value the feedback of the APS publication committee, the editorial team, and the APS community more broadly. I present these ideas in order to give the search committee a sense of the kinds of ideas and energy I would bring to the position of EiC, but I do not mean to imply that I expect to implement all of them.
- Breadth and Diversity
Psychological Science covers the entire spectrum of the science of psychology. How would you attract the best and most relevant articles and authors across all areas of psychological science? How would you ensure that the diversity of perspectives and voices present in psychological science are represented in Psychological Science?
Breadth
One important way to signal that Psychological Science welcomes submissions from all areas of psychology, using a broad range of methods, is to reflect that breadth in the composition of the editorial team and editorial board. My goal would be to assemble an editorial team that is diverse with respect to sub-discipline and methodological approach, as well as demographic characteristics. I address how I plan to achieve that diversity in sections 4 and 8 below.
In my view, one of the biggest areas for improvement at Psychological Science is to make authors from sub-disciplines outside of cognitive psychology and experimental social psychology feel that Psychological Science is for them. As APS has grown, more and more scientists from developmental, industrial/organizational, clinical, and personality psychology, as well as biopsychology, have joined, but their representation within the pages of Psychological Science still lags. Related to this, Psychological Science has historically published a disproportionate number of papers using experimental, hypothesis-testing approaches. While these are a core part of psychological science, I believe the journal should also be a home for the best and most important work that uses correlational, longitudinal, and descriptive approaches. While these approaches have their limitations (as do experimental hypothesis-testing approaches), some very exciting work is happening using these techniques, and I would like to see Psychological Science be the outlet of choice for those researchers’ best work. This can best be achieved through representation on the editorial board, and open communication emphasizing the journal’s scope.
Diversity
There are many obstacles to equal representation in science, and journals play a vital role in leveling the playing field and providing an arena where science is judged purely on its merits, without attention to hierarchy, status, or other personal characteristics of the authors. We know, from psychological science itself, that there are many biases that threaten the goal of fair evaluation and equal representation. In my experience, and based on my reading of the evidence (e.g., Ross et al., 2006; Tomkins, Zhang, & Heavlin, 2017), the biggest threat to fair peer review is status bias. Status bias takes many forms, and can disadvantage people based on age, career stage, network, fame, prestige of institution, and geographic location, among other factors. There are steps journals can take to minimize status bias, and I propose to implement several of them.
First, while keeping reviewers blind to authors’ identities is increasingly difficult with the growth of preprints and other online communication, I nevertheless believe it is important for journals to send out blinded versions of manuscripts for peer review. At the very least, review requests should not include any information about the authors’ identities or institutions, and the manuscript sent to reviewers should not include a title page with authors’ names and institutions. While I do not think it makes sense to ask authors to go to great lengths to create a blinded version of their manuscripts, simply providing a version of the manuscript without authors’ names and affiliations should be easy to do. In addition, I would include a statement in the instructions to reviewers asking them not to seek out the authors’ identities before they submit their review. Simply communicating to reviewers that we feel authors’ identities are irrelevant and should not be part of the review process may be enough to keep most reviewers from seeking out such information. The evidence regarding the effectiveness of blinding is mixed – some studies find a positive effect and some find no effect – but I have yet to see any evidence or argument that it harms the integrity of the peer review process. Most importantly, we should be open and transparent with reviewers about what information we think is and is not relevant to their evaluation, and giving them information about authors’ identities implies that we expect them to use this information. Instead, I believe we should be communicating with reviewers that they should avoid this information and base their review purely on the value of the manuscript.
Another important step Psychological Science can take to increase the breadth and diversity of submissions is to collect and analyze data about submissions and decisions. Examining the types of manuscripts submitted could point to gaps in Psychological Science’s reach, and comparing the pattern of submitted vs. accepted manuscripts could reveal biases in the peer review process that need to be addressed. This kind of analysis is something I would be happy to have my university-provided research assistant help with (see my statement on “Support and resources”). Having this information would allow the Psychological Science team and APS to allocate their resources more effectively to increase representation among underrepresented groups, and to address any biases identified.
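As a minimal sketch of what such an analysis could look like – assuming a hypothetical CSV export from the submission system, with one row per manuscript and illustrative column names – something like this would surface gaps in reach and disparities in acceptance rates:

```python
import pandas as pd

# Hypothetical export from the submission system; the file name and
# column names (subdiscipline, decision, manuscript_id) are illustrative.
subs = pd.read_csv("submissions.csv")  # one row per manuscript

by_area = subs.groupby("subdiscipline").agg(
    submitted=("manuscript_id", "count"),
    accepted=("decision", lambda d: (d == "accept").sum()),
)
by_area["acceptance_rate"] = by_area["accepted"] / by_area["submitted"]
by_area["share_of_submissions"] = by_area["submitted"] / by_area["submitted"].sum()

print(by_area.sort_values("share_of_submissions", ascending=False))
```

The same grouping could then be repeated by region, career stage, or methodological approach, and the submitted-versus-accepted comparison would flag any stage of the process where particular groups fare worse.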
A journal as prestigious as Psychological Science faces a unique challenge in convincing authors that their work is good enough to be published in its pages. I have heard many psychological scientists doing excellent work – especially early career researchers and researchers from outside the US, Canada, and Western Europe – say that they did not submit their work to Psychological Science because they thought it had no chance. One message I would work very hard to get out is that no one should self-select out of submitting to Psychological Science simply because of their career stage, geography, or status – the quality, fit, and importance of the work are the sole criteria that should affect researchers’ decision to submit. I would spread this message on social media, at conferences, in an editorial, and on the journal website.
- Open science and reproducibility
The journal has been at the forefront of open science and reproducibility. In general, what do you see as the role of Psychological Science in the open science and reproducibility efforts? How would you build on the standards put into place for improved research and methodological practices?
As Editor in Chief of Social Psychological and Personality Science (SPPS) for the last four years, I have been uniquely positioned to see the impact of the policy changes at Psychological Science. The trickle-down effects of these policies were vivid to me when I received submissions that had very likely been rejected from Psychological Science – many had open data, open materials, and pre-registrations. Seeing how willing authors were to adopt these practices – even though SPPS had no policies or badges regarding these practices – was a testament to the power of Psychological Science’s nudges. Moreover, because Psychological Science took the lead on these policies, I saw other journals take similar risks. Thus, the impact of Psychological Science’s policies reaches far beyond its own authors and reviewers. If a journal can be a role model, Psychological Science is a role model for journals in psychology and beyond.
Thus, Psychological Science’s continued leadership on this front is vital not only for its own success, but for the field of psychology as a whole. An emphasis on openness and reproducibility has now become part of Psychological Science’s – and indeed APS’s – brand, setting it apart from APA journals, and from other prestige journals (e.g., Science, Nature, PNAS) that have not made as much progress on this front. I remember the meetings I attended in 2012 with Eric Eich and others, led by Roddy Roediger, during which we planned some of the policy changes that soon went into effect. Psychological Science took a risk being such an early adopter of policies promoting open and reproducible science, and I believe that risk has clearly paid off. Not only is consensus emerging that these practices are a clear improvement for science (Birke et al., 2018), but Psychological Science’s reputation seems stronger than ever.
What next? The standards for open and reproducible science are shifting rapidly. Steve Lindsay has done a tremendous job of keeping pace with these changes, evolving the existing policies and introducing new policies (e.g., Registered Reports for replication studies). I would follow very much in Steve’s footsteps, making iterative, incremental changes to the policies that aim to stay a step or two ahead of mainstream practices in the field. The goal of my policies would be to nudge researchers to take steps to become increasingly open and rigorous in their research, focusing mostly on the low hanging fruit (i.e., practices that require little effort and for which user-friendly infrastructure exists), while making sure not to implement changes that disproportionately disadvantage some types of research or researchers.
In discussion with the publication committee of APS, I would suggest we consider the following policy changes. No single change is a dealbreaker for me, and I would of course be receptive to feedback regarding the feasibility and potential drawbacks of these changes:
- Move towards making sharing of materials and data the default.
In his most recent editorial, Lindsay (2017) strongly encourages authors to make their data and materials available, both during the review process and after publication. Communicating this norm is important, and I believe it will go a long way toward getting most authors who can legally and ethically comply to do so. However, I think the field is ready for a slightly stronger norm – that if there are no legal or ethical barriers to sharing data and materials, they should almost always be shared. I can imagine some exceptions due to extreme hardship, and a lot of details that would need to be worked out (how raw do the data need to be? which materials need to be shared?). I would welcome the opportunity to discuss with the publication committee what we can do next to continue to work towards making open data and materials the default.
- Optional open review – making the content of reviews public.
One of the latest innovations to sweep through scientific publishing is open/public review – where the content of reviews (but not the identity of reviewers, unless reviewers choose to sign their reviews) is made public upon publication of a manuscript. I envision that within five years, this will be the norm at the top journals. In the meantime, a stepping stone that is very low risk is to make open review an option that authors can select at the time of submission. If authors select open review, reviewers are told that their reviews will be published if the manuscript is published (and reviewers can still choose whether to sign their reviews or not). The benefits of open review are that readers of the published manuscript can see what issues were raised during the peer review process, and can benefit from the reviewers’ and editor’s insight. This also makes the journal more accountable because if peer review standards drop (e.g., the reviews are glowing and uncritical, even though there are obvious flaws in the manuscript), readers can detect this trend and bring this up with the journal or society. There are some concerns that this option may make it harder to find reviewers willing to review manuscripts for which authors have chosen open review, but there is no evidence that this is the case, and some evidence that this does not happen (Chawla, 2019). Indeed, because reviewers can still choose to remain anonymous, there seems to be little cost to reviewers.
- Visualizing and describing data.
A small and easy step to make towards greater transparency is to ask authors to provide basic descriptive statistics and, when applicable, data visualizations that provide a transparent view of the raw data. One way to do this would be to ask AMPPS to solicit a tutorial on best practices for visualizing data for common research designs, and then include this tutorial in the instructions to authors, asking them to follow these best practices as much as possible.
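To illustrate the kind of figure I have in mind – a toy sketch with randomly generated data, not a prescription for any particular design – a plot that shows every raw observation alongside the condition means might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Toy data standing in for two experimental conditions.
groups = {"control": rng.normal(5.0, 1.2, 60), "treatment": rng.normal(5.6, 1.2, 60)}

fig, ax = plt.subplots()
for i, values in enumerate(groups.values()):
    x = i + rng.uniform(-0.08, 0.08, values.size)          # horizontal jitter
    ax.scatter(x, values, alpha=0.4)                       # every raw observation
    ax.scatter(i, values.mean(), color="black", zorder=3)  # condition mean
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups))
ax.set_ylabel("outcome")
plt.show()
```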
- Pre-registration.
While pre-registration took a bit longer to catch on than open data, I believe we are about to see an explosion in submissions with at least one pre-registered study. This will come with new challenges. First, I believe there are two aspects of pre-registration that should be separated: 1) is a pre-registered plan available? and 2) do the main conclusions from the manuscript follow from detailed procedures and analysis plans described in a pre-registration? This change would help to develop clearer communication around the various meanings of “this study was pre-registered.” Knowing whether or not a plan was pre-registered is important, and separate from knowing whether or not the conclusions in the manuscript follow directly from the pre-registration.
A manuscript could have #1 but not #2 (i.e., the authors are transparent about the fact that they deviated from their plan and so their conclusions do not follow directly from their plan; or, the authors pre-registered an exploratory analysis and so their conclusions do not follow from a detailed plan that constrained opportunities for flexibility) and this would still be laudable. Indeed, Psychological Science can play a leading role in setting the norm that studies that deviate from the pre-registration can still be worthy of publication (as can non-pre-registered studies) – the key, as with many other aspects of research design, is calibration between what the study design and analyses allow, and the strength of the claims made in the paper.
In addition, the scientific community will come to expect peer review to verify claims of pre-registration, and I believe this is something that Psychological Science should do before awarding badges or publishing work presented as pre-registered. Of course there will be some subjectivity in verifying that 1) a useful pre-registration exists, and 2) it constrained the researcher’s decisions and was followed, but I am confident that a reasonable verification plan can be implemented. I am open to various ways of approaching these changes.
- Registered Reports.
I would like to explore the possibility of opening up the Registered Reports submission option to novel studies. I realize this would require expanding the editorial team’s capacity, but I hope it would be doable. Registered Reports are an excellent way to give authors an avenue for publishing work that is methodologically sound and tackles an important question, regardless of the results. I believe some of the best new psychological science is being conducted as novel Registered Report studies, and Psychological Science risks missing out on publishing that work if this option is not expanded beyond replications.
- Openness from the journal.
Finally, I believe journals should practice transparency themselves. Psychological Science has been ahead of the curve on this front, thanks in part to Steve Lindsay’s openness to feedback and engagement with the community on social media and at conferences. I have been impressed with his humility and transparency, and I believe his actions have enhanced the reputation of the journal. If appointed, I would continue in this vein, and aim to make the behind-the-scenes practices as transparent as possible (while realizing some information cannot be shared, of course). I would also endeavor to be receptive to feedback and learn from my mistakes. (I once saw Lindsay write “I feel like an idiot” in response to an author who appealed a decision when Lindsay had made an error in a decision letter. I was impressed. We are all idiots sometimes, and a good editor is prepared to admit when that happens.)
Related to this, I would encourage Psychological Science to track and share metrics that are important and valued by the scientific community (e.g., see my proposal to track submission and decision patterns). I would be happy to work with SAGE and APS on this, and to contribute some of my university-provided research assistant’s time to this.
- Editorial Board
What sort of editorial board would you assemble? What areas of science would you want represented by the editorial board in order to span the many disciplines within the field and uphold the journal’s high research and methodological standards? How would you ensure that the editorial board reflects the diversity of psychological science?
In my experience as Editor in Chief of SPPS, the best way to assemble a strong editorial team is to seek input from many people with experiences, perspectives, and networks different from my own. By seeking input from a broad range of advisors, I was able to assemble a team of Associate Editors and Editorial Board members that is diverse with respect to areas of expertise, methodologies, geography, and career stage (post-tenure for Associate Editors), while maintaining high quality control. In addition to seeking input from diverse perspectives, I also used a data-driven approach to identifying potential editorial board members – I asked SAGE to compile data for me on the number of reviews, timeliness, and reviewer score for every reviewer over the last few years. This ensured that people whom I and my network might not think of would get recognition for their valuable work.
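To make the data-driven piece concrete, here is a minimal sketch of that kind of screen, assuming a hypothetical report from the publisher with one row per completed review (the column names and cutoffs are illustrative, not the actual SAGE schema):

```python
import pandas as pd

# Hypothetical reviewer report from the publisher; one row per completed review.
reviews = pd.read_csv("reviewer_history.csv")

stats = reviews.groupby("reviewer_id").agg(
    n_reviews=("manuscript_id", "count"),
    median_days=("days_to_complete", "median"),
    mean_quality=("editor_rating", "mean"),  # e.g., a 1-5 rating by the handling editor
)

# Surface frequent, timely, highly rated reviewers as board candidates.
candidates = stats.query("n_reviews >= 5 and median_days <= 21")
print(candidates.sort_values("mean_quality", ascending=False).head(20))
```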
Another avenue for increasing diversity and discovering untapped talent is to solicit nominations, including self-nominations for positions on the editorial board. I have not done this in the past, but have seen others (e.g., Roger Giner-Sorolla, at Journal of Experimental Social Psychology) do this successfully. This would give an opportunity for people to be considered even if they lack the connections to make it onto my radar.
To get concrete, I would aim for the following rough quotas for the editorial team: 50% female, at least 30% from outside the US, at least 25% non-experimental scientists, and multiple AEs from each of the following core sub-disciplines: cognitive, social, personality, industrial/organizational/management, clinical/abnormal, developmental/lifespan, psychophysics, and comparative psychology. In addition, I would ensure representation of interest areas that are closely related to or cut across these core areas, including: affective science, educational psychology, quantitative psychology, marketing, applied psychology, communications, meta-science, data science, and evolutionary science. Orthogonal to subject area, I would also ensure representation of various methods and approaches, including: experimental, correlational, longitudinal, neuroscience, psychophysiology, field studies/ecological, and big data/machine learning. I would also require that all AEs are highly competent to evaluate research methods and statistics in their area of expertise, and I would continue the practice of having a team of statistical advisors for the journal.
One challenge of assembling an editorial team is balancing the need to go beyond one’s personal network with the desire for quality control. Appointing Associate Editors and Senior Editors requires a great deal of confidence that they will uphold the high standards and values of the journal. To ensure this, I would examine each candidate’s own scholarship for evidence of their expertise and standards, and I would write a detailed invitation letter outlining my expectations so that potential AEs can decline if they are not willing or able to meet these expectations.
- Special issues
What special sections/issues would you pursue?
I am open to revising my view on this, but my sense is that Psychological Science does not need to publish special issues or sections. I edited three special issues in my four years as EiC of SPPS (one on new advances in research methods, one on the science of recent geo-political events, and one on underrepresented populations), but I chose to do this because SPPS is a relatively new journal, and I wanted to signal our values and priorities through the use of special issues. Psychological Science is established and very visible, and I believe the community has a clear sense of what it stands for. One exception would be to use special issues to try out a potential policy change, as described in section 6 below.
- Changes
Might there be aspects of the journal you would wish to change?
In addition to the incremental changes proposed in sections 2 and 3 above, I would like to put one big change on the table for consideration by the publication committee. As with the changes I have proposed above, I am not wedded to this idea, but I would be excited to pursue this possibility if the publication committee is open to it. One possibility would be to try out this change as a pair of special issues first, and decide how to proceed based on their success.
I believe Psychological Science could better achieve its mission by publishing articles in two sections: one for more speculative work that is groundbreaking but quite uncertain and one for important incremental advances that are relatively definitive. I have not yet identified good labels for these two sections, but for the sake of brevity, and consistent with Srivastava’s (2011) essay, I will use the label “groundbreaking” for the first section and “definitive” for the second section (though “definitive” is certainly too strong). I am confident we can come up with better labels. (This distinction is similar to the distinction between the “context of discovery” and “context of justification” in philosophy of science.)
I have been writing and lecturing about the “credibility revolution” in psychological science for seven years, and in that time I have come to believe that the problem we are facing is not as simple as a “replicability crisis.” The problem is not that many findings are failing to replicate; the problem is that many of these failed replications were unexpected (e.g., classic findings that are taught in undergraduate classes and promoted in the media). A science as young as psychology should expect to produce some false leads – that comes with the territory of exploring and taking risks. If we become too afraid of publishing irreplicable science, we will become too risk-averse. However, to be a credible science, we must be aware of, and open about, when we are taking risks and when we are on more solid ground. Much of the debate about the replicability crisis is about whether we should tolerate a higher Type II error rate for the sake of lowering the rate of Type I errors. But this is a false dichotomy. We can do both, so long as we clearly signal when we are aiming to maximize discovery and lower Type II errors (“groundbreaking” results) and when we are aiming to maximize certainty and lower Type I errors (“definitive” results). Thus, my proposal is to explicitly flag articles as being in one category or the other.
Advantages: First, this approach would be more transparent and would increase the “truth in advertising” of the Psychological Science seal of approval. In my experience there are typically two reasons why a paper gets accepted in a selective journal: either it is an exceptionally definitive test of a question with theoretical importance (in which case it is usually the next in a long line of studies on a given topic), or it is an exceptionally innovative and novel study (in which case it is rarely definitive). Srivastava (2011) made a strong case that it is very rare for a study (or small set of studies) to be both groundbreaking and definitive. In my experience as an editor, I usually have a clear idea of whether I am accepting a manuscript because of its groundbreaking-ness or definitive-ness. Creating two sections would allow editors to communicate to readers the value they see in the paper, and avoid having the paper misinterpreted (“Psychological Science published this, they must think it’s definitive!”).
Another advantage of this approach is that it would free up editors to take risks on truly innovative manuscripts that we know have a decent chance of turning out to be wrong. Of course that risk always exists, but there are times when we know a manuscript is particularly risky, but we believe it should be published because of the importance of the ideas, or because the preliminary data are valuable but very difficult to collect, or because publishing the inconclusive finding will stimulate important new research. One example of this type of paper is Olson, Key, and Eaton (2015, Psychological Science). In my experience, even when such papers are written in a very careful and calibrated way, it is very easy for the press and even other scientists to draw overly strong conclusions from the paper. As an editor, I would feel more confident accepting these types of paper for publication if I knew that it would be very clear to readers that we deemed it important but far from definitive.
Related to this, another advantage of clearly labeling these two types of articles is that we can better protect the journal against reputational damage if and when studies published in Psychological Science fail to replicate. Imagine that the Reproducibility Project: Psychology (RP:P) were repeated in ten years, and the rate of successful replications turned out to be 80% in the “definitive” section and 35% in the “groundbreaking” section. The average replicability rate would be around 67% (e.g., if roughly 70% of papers appeared in the “definitive” section, 0.7 × 80% + 0.3 × 35% ≈ 67%), which may be lower than some would like for a top journal. However, the fact that the journal was appropriately calibrated, and that the papers in the “definitive” section were highly replicable, would buffer against these criticisms. Moreover, the low replicability rate of papers in the “groundbreaking” section would reflect the risks we knew we were taking.
Disadvantages: The main disadvantage of creating these two sections is that, in reality, research falls on a continuum from discovery to confirmation, and forcing articles to fit in one section or the other is somewhat artificial. There will surely be some articles that fall right in the middle of this continuum. This is a serious concern, but one that I think can be addressed. If authors select which section to submit to, they can frame their paper to emphasize, for example, the definitive aspects of their paper more, while still reporting the groundbreaking results, or vice versa. This way, the section label (“groundbreaking” or “definitive”) would apply to the central claims of the paper, but readers could still learn about the other evidence presented. Moreover, because Psychological Science publishes only short reports, I think the “in-between” papers would be rarer there than would be the case at journals that publish longer papers with many more studies.
Another potential disadvantage is that the “definitive” section would be filled with studies that make very rigorous but very small incremental contributions (what Cutting (2012) called “brick in the wall” studies). This can easily be addressed by clearly communicating to authors, reviewers, and editors that even “definitive” studies need to represent significant advances (e.g., resolving theoretical debates, or speaking to important practical problems).
- Lessons from past editorial experiences
What lessons or principles have you developed from your past editorial experience(s)?
I have been very lucky to work with excellent editors in chief at the journals I have edited for, including Laura King, Rich Lucas, Bobbie Spellman, and Dan Simons, and many excellent associate editors at SPPS. I have learned a great deal from them and from my hands-on experience. I have also had extensive conversations with Steve Lindsay about his experience at Psychological Science, specifically. These experiences have prepared me well for the very challenging job of Editor in Chief of Psychological Science. I could fill a book with the lessons I have learned, so I will focus on the top five here.
- Be a decider.
The most common mistake I see handling editors make is to treat their job as that of a vote-counter rather than a decision-maker. In my view, the purpose of reviews is to catch as many of the important (positive and negative) points about a manuscript as possible. The editor then makes a careful, considered decision on the manuscript, taking all of these points into account. However, the editor’s final judgment may very well contradict one or more of the reviewers’ evaluations.
There are three problems with passing the buck in the way that “vote-counting” decisions do. First, the editor needs to be accountable for their decision, and communicate to the author the reasons for that decision. Simply saying “see the reviews” avoids taking responsibility. Second, the reviewers may very well contradict each other, and the editor should give the author guidance about their take on these contradictions. Even when there are no contradictions, it is helpful for authors if the editor communicates which points were most central to the final decision. Third, sometimes reviewers are wrong. Even when all reviewers agree, it may be the case that the correct decision is one that goes against the reviewers’ bottom-line evaluation. I have not often gone against a unanimous set of reviews, but it has happened occasionally, and I believe an editor has a responsibility to consider this option.
- Treat a revise & resubmit decision as a contract.
One of the more frustrating experiences for authors is when manuscripts go through many rounds of revision with new points – or sometimes even new reviewers – brought up each round. This is sometimes necessary, but it should be rare. Particularly at a journal like Psychological Science where most manuscripts are short and clear, and where a quick turnaround is expected, a revise and resubmit (R&R) decision should, most of the time, be a straightforward process that results in an accept or reject decision soon after the revision is submitted. Specifically, I attempt to write R&R decision letters such that it is very clear to authors what they would need to do to get an acceptance. In these cases, I can then usually make a decision on the revision without sending the revision back out for review.
This approach depends on the editor having the appropriate expertise to judge the revision without outside input. Thus, to encourage this approach to R&R decisions, I would need to make sure that my editorial team has a lot of breadth, and that I assign manuscripts to the most appropriate handling editor. I have devised strategies for maximizing the fit between manuscript and handling editor at SPPS that I plan to implement if I become EiC at Psychological Science (e.g., asking each editor to send me keywords that match their expertise, and asking them to tell me when I assign them something outside their core area of expertise). When it’s not possible to get a strong fit between the editor and the manuscript, or when the revision includes major changes or changes that require very specialized expertise to evaluate, it will sometimes be necessary to send revisions out for review, and/or go through multiple rounds of revision.
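As a toy sketch of the keyword-matching step (the editor names and keyword lists below are invented for illustration):

```python
# Each editor supplies keywords describing their core expertise.
editor_keywords = {
    "editor_a": {"memory", "attention", "psycholinguistics"},
    "editor_b": {"emotion", "close relationships", "social cognition"},
    "editor_c": {"personality", "longitudinal methods", "measurement"},
}

def rank_editors(manuscript_keywords: set[str]) -> list[tuple[str, int]]:
    """Rank editors by how many of the manuscript's keywords they cover."""
    overlap = {
        editor: len(keywords & manuscript_keywords)
        for editor, keywords in editor_keywords.items()
    }
    return sorted(overlap.items(), key=lambda item: item[1], reverse=True)

print(rank_editors({"personality", "measurement", "emotion"}))
# [('editor_c', 2), ('editor_b', 1), ('editor_a', 0)]
```

In practice the assignment would remain a human judgment call; a ranking like this simply flags poor fits before they happen.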
- Admit mistakes.
One important lesson I have learned is that being an editor will mean confronting the limits of my competence, very often. As editors, it is important to be humble about our abilities and expertise, and willing to correct our mistakes. This means seeking advice from others, being transparent about the process, and keeping an open mind when challenged. Of course, author appeals can take up a lot of time, and it is sometimes important to be firm and shut down conversations that have become unproductive (e.g., authors continuing to appeal, without good justification, after an editor has carefully reviewed the decision). However, it is equally important to acknowledge that even excellent editors make mistakes, and even top journals have policies that may not be optimal or may have unintended side effects. Thus, as EiC of Psychological Science, I would encourage open communication with the APS community, and be open to making changes or reversing my positions.
- Avoid conflicts of interest.
I have seen firsthand many opportunities for conflicts of interest in the peer review process. One common problem I have run into is authors recommending reviewers who are clearly conflicted. I have tried to discourage this by making the criteria for conflicts of interest prominent in the submission portal, and giving authors feedback when I notice this behavior.
Another tricky issue is how much editors should publish in their own journals. As editor in chief of SPPS, I have had a private policy of not submitting to my own journal, and I would continue that policy if I were to become EiC at Psychological Science. I do not have the same policy for Senior Editors or Associate Editors, however, as it is easier to minimize the conflicts of interest when handling their manuscripts. As an APS board member, I successfully lobbied for a policy change at Psychological Science such that manuscripts submitted by Senior or Associate Editors would not be handled by other editors at Psychological Science. I am proud of this policy, and would continue it as EiC.
- Provide feedback to the authors, even in the case of desk rejection.
One of the most frustrating experiences one can have as an author is getting a form rejection letter that does not give any specific reasons for the decision. This happens commonly with desk rejections, but also happens with rejections after review, when the editor does not specify which of the reviewers’ comments factored into her decision. I believe editors have an obligation to give authors at least one or two of the main reasons for their decision, and I feel that generic reasons like “not a good fit” or “better for a specialized journal” are not sufficient. As EiC of SPPS, I wrote a substantive decision letter for every manuscript I rejected, including desk rejections. This was often appreciated by the authors, as indicated by the 73 emails I’ve received in which authors expressed gratitude for desk rejection letters (other editors tell me it is uncommon to get thank-you emails in response to desk rejections). More importantly, this makes the editorial decision process accountable – if an editor misreads a paper or makes a mistake, the authors have a chance to appeal. I have rarely reversed my decision after an appeal, but it has happened a few times, and I feel better knowing that if I do make a mistake, the authors have a chance to catch it.
I realize that the volume of submissions, and especially desk rejections, at Psychological Science will make it difficult to continue this practice. However, I think it would be feasible to compose desk rejection letters that are a hybrid between generic form letters and personalized letters. For example, I could imagine a system in which the editor selects from a menu of reasons that then imports pre-written paragraphs into the decision letter. This would give authors a better sense of the reasons for the decision, without dramatically increasing the burden on editors.
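A minimal sketch of such a menu-driven letter, with invented reason codes and paragraph text:

```python
# Pre-written paragraphs keyed by reason code; both are invented here.
REASON_PARAGRAPHS = {
    "scope": (
        "The question addressed in your manuscript falls outside the scope "
        "of the journal's broad readership."
    ),
    "incremental": (
        "The contribution, while sound, appeared too incremental to compete "
        "for the journal's limited space."
    ),
    "design": (
        "The study design does not support the strength of the claims made "
        "in the manuscript (e.g., sample size, potential confounds)."
    ),
}

def desk_rejection_letter(author: str, title: str, reasons: list[str]) -> str:
    """Assemble a desk-rejection letter from the editor's selected reasons."""
    body = "\n\n".join(REASON_PARAGRAPHS[r] for r in reasons)
    return (
        f"Dear {author},\n\n"
        f'Thank you for submitting "{title}". After careful reading, I have '
        "decided not to send it out for review, for the following reasons:\n\n"
        f"{body}\n\nSincerely,\nThe Editor"
    )

print(desk_rejection_letter("Dr. Example", "A Hypothetical Study", ["scope", "design"]))
```

The editor could still edit the assembled letter before sending it, so the menu would set a floor on informativeness rather than a ceiling.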
When a manuscript is rejected after review, I believe the editor should likewise specify which reasons factored most heavily into their decision. Once again, this could be very brief, but needs to be slightly more specific than a generic form letter telling the author to refer to the reviewers’ comments.
- International scope
We would also like you to reflect upon increasing the international scope of this journal, for example by considering the following questions: How will you increase the number of submissions from authors outside of the US? How will you ensure international representation on the editorial board of the journal?
First, I anticipate that I will be accepting a job as Professor in the Department of Psychology at the University of Melbourne, Australia, starting around July 2020. [I have since accepted the offer.] Thus, if I were appointed EiC of Psychological Science, my geographical location would help with the internationalization of the journal. I am also French and Iranian (in addition to American), which may help increase visibility of the journal in those countries. Anecdotally, I have the impression that the rate of submissions from Iran at SPPS increased substantially during my tenure.
Second, I travel extensively to give talks and attend conferences around the world, and I will continue to do so after I move to Melbourne [...]. I would use this platform to raise awareness about Psychological Science and encourage authors to consider submitting there.
Third, the policy changes I proposed above to reduce the emphasis on author identity and status would very likely help encourage more submissions from authors outside the US (who may rightly feel that they are disadvantaged in a system that makes author identity salient; Ross et al., 2006), and ensure a fairer process for those submissions, resulting in more published articles by authors outside the US. With more published articles from outside the US, potential authors from outside the US may be more likely to believe that Psychological Science is an appropriate outlet for them, and that they will get a fair hearing.
Fourth, at the most recent APS board meeting, I suggested that APS invest some of its resources in providing authors with language/writing services. In my experience at SPPS, a major dilemma with internationalization is that, for some international (and US-based) submissions, the quality of the research is very high, but the quality of the writing is very low. In these situations, it is difficult to ask reviewers to volunteer their time to evaluate a manuscript that is very hard to understand. One solution to this problem is to offer subsidized (ideally free) language/writing services to authors if their manuscript passes a preliminary quality check by the editor. This would also demonstrate in a very vivid way that APS and Psychological Science put the income from journal subscriptions back into the community, and that internationalization and inclusivity are a top priority. While this service would cost APS money, I believe some of that cost would be offset by new members in non-English speaking countries who now feel that Psychological Science is a more viable outlet for their work.
Fifth, to ensure international representation on the editorial board, I will use the approaches described above in section 4 (reaching out to others for input, using a data-driven approach, and possibly nominations/self-nominations). I will also seek data from other journals regarding who has published and reviewed for them, to broaden the pool beyond those who are already in the Psychological Science database.
References
Birke, D., Christensen, G., Littman, R., Miguel, T., Levy Paluck, E., & Wang, Z. (2018). Open Science Practices are on the Rise Across Four Social Science Disciplines. Talk given at the annual meeting of the Berkeley Initiative for Transparency in the Social Sciences. Slides retrieved from https://osf.io/kvbnh/
Chawla, D. S. (2019, February). Rare trial of open peer review allays common concerns. Nature News. Retrieved from: https://www.nature.com/articles/d41586-019-00500-7
Cutting, J. (2012, November). Reflections on five years as editor. APS Observer.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716.
Ross, J. S., et al. (2006). Effect of blinded peer review on abstract acceptance. JAMA, 295, 1675-1680.
Srivastava, S. (2011, December). Groundbreaking or definitive? Journals need to pick one. SPSP Connections.
Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single- versus double-blind peer review. PNAS, 114, 12708-12713.