Oct 27 2011 Published by under reviews and reviewing

A reader who is working on some reviews for a high-impact Journal of the One-Word Name wants to know how to avoid being the kind of horror-story reviewer that writers of, and commenters on, blogs like to describe in scathing, gory detail.


Do you or your readers have any advice on what to do to not become one of those anecdotes, beyond the obvious stuff like don't steal ideas?  What is reasonable, as far as requests for additional data, when reviewing for a journal with essentially unlimited space for supplemental materials and very high standards?

Also, this journal asked me to review multiple papers by different groups working on similar problems.  They often publish multiple papers on the same topic in a single issue, with some accompanying commentary, to make it a theme issue.  I have been explicitly asked to compare the papers to each other, to ensure that similar work is reviewed at a similar standard.  This is not something that I've done before. Any thoughts?  To me it seems straightforward, and maybe even fairer than most processes, because it's more likely that similar work will meet similar standards, but I've heard horror stories about weird things happening when these journals want a theme issue.  Maybe there are some fairness issues that I'm overlooking.

I suppose that the simplest way to avoid becoming anybody's horror story is to recommend publication, because then the authors will have no reason to complain, but that approach has some rather obvious problems.

Indeed. Let's assume that this person is semi-joking about that last comment. Clearly you have to give the best and most honest judgment you can, based on what is in the paper(s) under review.

And that's the key to the whole thing: Give your best judgment. Be critical, but polite and constructive, no matter what the journal.

As to the issue of proposing a lot of new research: Editors are ultimately the ones to blame for this, not (just) the reviewers. I could propose that the authors of a manuscript I am reviewing do 2 more years of intense data collection on the most expensive and inaccessible machines in the universe before the paper would meet my standards, but the editor doesn't have to take that seriously.

Editors can ask authors to explain why such requests are unrealistic/unnecessary, or can use their own judgment and say "I know that Reviewer 2 proposed that you do a series of expensive and time-consuming new experiments/analysis (or whatever), but you can ignore that comment."

Or editors can concur with these recommendations by reviewers, in which case, you can try to argue with them (politely and briefly) or you can just take your awesome paper to another journal.

But back to what a reviewer can do:

When I review a manuscript that does seem to have a gap that could/should be filled, I think very carefully about how strongly I word my recommendation about new work. Options are:

Unambiguous/strong statement: This work is unpublishable without the following ....

More ambiguous but still quite strong: This work would be greatly improved and the conclusions much more believable if you did the following...

Passive-aggressive in a mild way: Although it would have been useful/better if you had [done this and that], I think that your interpretation/conclusion is quite/mostly reasonable given the data/analysis presented.

Nicest: I am not suggesting at all that you do this because I think the manuscript is publishable with the existing dataset, but I wondered if, in future research on this topic, you would be able to do [this other interesting thing that would help answer some additional important questions].

The issue of supplying supplemental material is also a major concern for authors. You need to provide sufficient documentation of your work, but at some point it becomes absurd if most of the content of the paper is in the supplement, leaving only some cryptic text (that can't be understood without the supplementary info) in the main body of the article. Reviewers should request only essential supplementary material that is not already provided, following the norms of their field for archival material.

In the end, it's the editor's call on whether to use or ignore the reviewers' comments about adding more material to the paper and/or doing more research to include in the paper. All you can do as a reviewer, if you want any hope that your time and effort will be worthwhile, is to write a thorough, constructive, interesting review that helps improve the paper and helps the editor weigh the various review comments and make a good decision. [I have not reviewed a series of papers on a theme before, but perhaps others can chime in on that topic.] This is true whether you are reviewing for Journal of the One-Word Name or Journal of the Most Obscure Topic in the World.

13 responses so far

  • DrugMonkey says:

    Editors can ... use their own judgment and say "I know that Reviewer 2 proposed that you do a series of expensive and time-consuming new experiments/analysis (or whatever), but you can ignore that comment."

    Of course, they cannot do this if they do not have the experience of running a research program. Which is one of the problems with non-scientist professional reviewers. They are under the Glamour of what they imagine to be the top scientists. So they just accept and reinforce the problem, and then hide their inability to make proper editorial decisions by saying "well the reviewers set the standard".

    Also, since the allegiance is exclusively bound to the corporate bottom line and competitive advantage against other One-Word-Journals, the professional editor cares not one whit about the advance of science for its own sake.

  • neurowoman says:

    Agree with DM, I doubt very much an editor (especially at one of these journals) would preemptively tell an author to ignore any reviewer comments regarding experiments.

Prospective reviewer, please, please: if you are going to ask for more experiments, distinguish between fundamental controls that may be missing, or a story that is seriously incomplete, versus wouldn't-it-be-nice-if experiments. There is a temptation to puff up claims and conclusions when submitting to hot journals, and sometimes the data don't quite match up. Push authors to temper or qualify their claims rather than demanding the experiments needed to match them. And if the tempered conclusion isn't sufficient for the hot journal, reject. Enough of this business of including 15 person-years' worth of experiments in a single paper and 15 pages of supplementary material... just my opinion.

    by the way, I've received a review of "looks great, publish" in one of these journals, and it was very nice!

  • RespiSci says:

Ah, reviewerzilla! How we all complain about reviewers when our paper is being reviewed, and yet do we turn into one ourselves when reviewing others?

I definitely concur with Neurowoman about the need to distinguish between missing controls and asking for "nice-to-have" additional experiments. I think that many reviewers feel that their responsibility is to propose more or future studies. This is wrong. Your purpose is to determine whether the research supports the claims proposed by the authors. You don't need to start to design their entire research program.

I also second DrugMonkey's comment about editors who may rely too heavily on reviewers' comments. I have had the experience where it was evident that one reviewer had clearly missed the point of the research and so demanded additional experiments that had nothing to do with our experimental test but instead focused on the 'gold standard' control we used (whose effects have been previously published and were cited within our manuscript). The editor rejected the paper, even though the other two reviewers, who clearly understood the manuscript, proposed only minor comments and edits. The editor in this case assumed that the reviewer who wrote the most must have understood things best.

As for the series of papers for a theme issue, I would want to hold all papers to the same standard as long as it is scientifically possible (for example, one paper may focus on in vitro research with the mechanism of action clearly elucidated for its cell system, while the other uses an animal model that may have limitations... or vice versa). If one paper isn't up to snuff yet but perhaps could be with rewrites, then propose the changes; maybe it would be worth a month or two of delay to have them published together in a theme issue. However, if it is clear that the science isn't solid and additional experiments are needed, don't lower your standards just to have all the papers published together. It isn't fair to the readership.

  • The reviewer who wrote to FSP says:

    First, thanks for posting this, FSP.

    As to this:
    Which is one of the problems with non-scientist professional reviewers. They are under the Glamour of what they imagine to be the top scientists.

    I'm flattered if the editor thinks that I'm a top scientist. Next time I submit a grant I should recommend this editor as one of the reviewers. Trust me, I'm far from being in the elite circle of my field.

  • It depends says:

Beware of after-the-fact "we already knew that" attitudes. Often an idea is in the air, and people kind of know it's there, but there's a big difference between being aware of something and actually having a properly developed theory and set of experiments to back it up. So yeah, you might already know that compound X interacts with bacterium Y, but if you never ran the experiment, developed a model of how this interaction takes place, and submitted the paper, it doesn't count, so don't mention it in your review.

Paul Krugman says that several of the papers that led to his Nobel Prize were first rejected with reviews of "this is wrong" and then, on a second try, "ah, this is right, but we already knew it."

  • Drugmonkey says:

    The notion of "a complete story" is one of the most corrosive delusions in all of scientific publishing.

  • profguy says:

    I have seen reviews that say "this might be ok for society journal but is not exciting enough for one-word journal" which, reading the rest of the review, really means "I would be too jealous because I work on this topic and don't have a paper in one-word journal so no one else should either".

Or a reviewer adds a bunch of BS criticisms that get the paper rejected (even if the other reviews are good), and then the authors don't have a chance to tell the editor why the criticisms are BS.

    It's very frustrating, esp. when one then reads a lot of crap papers in the same journal - low quality, but high "newsworthiness". I think we all have our bad peer review stories and yet can probably agree that the system works about as well as it can on average in most decent journals. But the one-word ones are a special case. I don't mind that they exist, but it frustrates me that they have as much power and influence as they do.

  • I abhor the tendency to put the content of the paper into "supplementary material", which should only be used for things like raw data and unpublishable auxiliary analysis.

    I routinely reject papers that have hidden the meat in the supplementary material, and I urge other reviewers to do so also. I make it clear that a rewrite in which the important material is in the main body of the paper would make the paper acceptable (if that is the case).

    Part of the problem here is journals that have strict page limits (or enormous page charges) for the main article, but no limits on supplementary material. This tempts people to put an entire thesis into the supplementary material and try to publish the abstract as a paper. This does no reader any favors.

  • Anonymous says:

Why is it so bad to have long supplementary material? I'm worried to hear it: this morning I submitted to a glamour mag a manuscript whose supplementary material is maybe six times the length of the main text. The main text summarizes the argument and presents the key evidence/figures, and the supplement provides the mounds of sensitivity analysis (no effect of a, b, c,...) and tedious details behind the more straightforward (but nonetheless previously undemonstrated) claims in the main text. There are harsh page limits for the main text, and I thought most readers would prefer the distilled version anyway. Putting it all in the main text would've made sections of the article too boring for words. I probably could've written the manuscript as two articles, but I prefer more complete stories. Isn't the literature fragmented enough?

  • Dan says:

I agree with much of the advice others have given. The maxim that I try hard to use when reviewing is:

    Don't try to make this into the paper that you would have written.

    In other words, if you feel like the claims the authors make are unsupported, then definitely suggest experiments. But if you just think it would have been cooler if they had done experiments to address another question, then keep it to yourself.

  • David says:

    I'm an editor-in-chief of a newish journal that is not one of the ultra-high impact all-stars (but we are quite competitive in our field), so maybe my experience doesn't match up with that for Nature, Science, or the equivalent, but....

Yes, editors do periodically tell authors that they can ignore Rev. X's call for experiments A, B, and C, when the editors feel -- as suggested by FSP -- that the experiments are either unnecessary to rigorously make the point of the paper, or are things that would have been nice, or will in fact be nice for the 2nd, 3rd, nth round of investigations. No one can do the perfect experiment (in some fields it is much harder than in others), and science progresses by steps, some of which are of modest size.

    FSP has given a spectrum of responses by reviewers and such a spectrum is also used by editors. If the work submitted has big holes it may be necessary to say either do more experiments whose data end up closing these holes or forget it. Sometimes it is enough to encourage the authors to point out the limitations of their data, but stand by their material within those limitations or constraints.

As usual, FSP is right on target. Give your best judgment -- they chose you to be a reviewer because they think you know something about the topic, so your best judgment is valuable [not perfect, though] -- and remember that you also can make confidential comments to the editor that are not seen by the authors (at every journal I have reviewed for, at any rate). This is where you can give all your misgivings or considerations. These help the editors gauge where you are coming from and how to mix your opinions in with those of the other reviewers.

  • Catherine says:

As an editor myself, and indeed one of the professional editors that people like DrugMonkey love to hate, I second David's comments. At the journal I work for, we routinely tell authors which of the 8 zillion experiments requested by referees are critical, which are useful but not required, and which can be ignored. It would be extremely helpful if referees followed FSP and neurowoman's advice and focused on suggesting experiments that are required for the conclusions of the manuscript, not random interesting tangents that would (almost undoubtedly) make the paper more interesting but really belong in the next paper(s). Editors certainly should use their brains, but it's disingenuous for referees to pretend they don't have any responsibility in the process.

  • The letter writer says:

    Well, one of the papers was so flawed that I wouldn't be able to recommend its publication anywhere. The other one has some great stuff that should be published. I did have to ask for more data to support one key point, because without that data we won't know if they've done something that is a product of very special circumstances or is in fact (as they claim) a very general phenomenon. However, I think it will be easy for them to provide this additional data.