Archive for the 'reviews and reviewing' category

Fuzzy COI

May 21 2012 Published under reviews and reviewing

A reader who is/was acting as guest-editor for a special issue of a journal wrote to ask some questions about whether s/he could solicit manuscripts from certain colleagues, advisors (past/present) etc. My opinion: s/he could solicit manuscripts from colleagues etc. but not act as editor for manuscripts involving them. Another editor should handle those cases. I know some journals don't worry so much about conflicts of interest of that sort, particularly in small fields in which everyone knows everyone, but I think it is best to avoid such real and perceived conflicts of interest (COI) with advisors, close colleagues, and so on if at all possible.

The question got me thinking (again) about some of the fuzzier types of COI. Although funding agencies and journals may have detailed definitions of what constitutes a COI, there are some situations that may not be *official* conflicts, but maybe sort of are, depending on the situation/people. What are these, and what to do about them?

If you have an official, unambiguous COI, you should not review/edit the manuscript, review the proposal, etc., but if you have a sort-of-maybe (fuzzy) COI, you can reveal it to the program directors (for example, NSF provides a little box for this very thing in its review forms) or in a confidential message to an editor. What are some examples of these?

For discussion purposes, here is a partial list of situations in which I had a 'connection' of some sort to an author or proposal PI -- perhaps not a strict COI but still a connection that in some cases may need to be revealed (or not, as the case may be).

- Some of the undergrads in the classes/labs I taught as a grad student teaching assistant are now professors. I have encountered a few of them professionally over the years. I don't consider this a conflict of interest, although there are some ways in which the TA-student interaction has affected my opinion of these people. In one case, I declined to review a manuscript because the primary author had been a very obnoxious and high-maintenance undergraduate. I thought my personal dislike might interfere with my review, so I declined the review. I didn't give a reason, so in this case I managed my own COI.

- Some of the undergrads I have taught as a professor are now professors. If they just took a class from me and I didn't advise them in research or have any particular professional interaction other than as teacher and student in a class, I might reveal the connection if it seemed relevant, but it wouldn't stop me from doing reviewing/editing unless (as in the case above) I had some particular unobjective opinion. That opinion isn't necessarily negative. For example, if an undergrad dove into a raging river to rescue drowning kittens, I would have a very high opinion of that person and might be unable to be objective about their scientific work.

- Some of the undergrads I advised in research are now professors. I have been sent their papers and proposals to review etc. In one case, the NSF program director, whom I consulted, said that I should do the review if I felt I could be objective and not do it if I felt I couldn't. S/he said that I should mention the possible COI in the confidential box for revealing such things, if I felt so inclined.

- Some of my husband's collaborators, former students, and postdocs show up in my particular corner of the Science universe from time to time, although we are in different subfields. For example, not long ago I was sent a proposal to review by someone with whom I had no known COI, and only once I got deep into the proposal did I realize that I did in fact have a major COI with this proposal. If the proposal was funded, my husband would benefit financially, even if indirectly. Having a Significant Other in the same field opens up COI possibilities all over the place. When one of us has served on an NSF panel, the other one has to provide a list of COIs so that the panel-spouse will not deal with his/her COI-in-laws. Some of those COIs might seem fuzzy (for example, if I have no idea my husband is working with a particular person), but in fact these can be quite unfuzzy.

- and then there are these miscellaneous ones that plot at various locations on the COI fuzziness spectrum: I have been sent manuscripts/proposals by members of my PhD committee (revenge opportunity?!!), former grad students I have helped advise (formally or informally) at other universities, someone who married a friend of mine from college (I introduced them!), former summer interns, and close science friends with whom I have never collaborated. I have declined to review/edit in each of those cases except one: that involving my former committee members, and in those cases I tried to decline but the powers-that-be were quite insistent that they wanted my reviews and would take into account my sort-of COI.

As we get older and our networks of collaborators and science friends and former students expand, opportunities for COI can increase dramatically with time. Eventually it may get so that we can only review or edit things by the 12 people we have never even heard of, in which case we might then have to fight against the unfair and unobjective thought that "If I haven't heard of them, they can't be any good." Well, I know people who think that way, but I am not there yet and hope I don't go anywhere near there.

Meanwhile, in terms of managing the plethora of COIs that I encounter in my career, I will continue to do what most of us do: make it up as I go along. OK, I will do a bit more than that: I will attend the required but useless 'ethics' training sessions, get advice from respected colleagues, and try to do the right thing, or at least what seems to be the most right thing, or the less worse thing. (And no, I don't think turning down any and all review requests and quitting as a journal editor is a reasonable option.)


10 responses so far


Oct 27 2011 Published under reviews and reviewing

A reader who is working on some reviews for a high-impact Journal of the One-Word Name wants to know how to avoid being the kind of horror-story reviewer that writers of, and commenters on, blogs like to describe in scathing, gory detail.


Do you or your readers have any advice on what to do to not become one of those anecdotes, beyond the obvious stuff like don't steal ideas?  What is reasonable, as far as requests for additional data, when reviewing for a journal with essentially unlimited space for supplemental materials and very high standards?

Also, this journal asked me to review multiple papers by different groups working on similar problems.  They often publish multiple papers on the same topic in a single issue, with some accompanying commentary, to make it a theme issue.  I have been explicitly asked to compare the papers to each other, to ensure that similar work is reviewed at a similar standard.  This is not something that I've done before. Any thoughts?  To me it seems straightforward, and maybe even fairer than most processes, because it's more likely that similar work will meet similar standards, but I've heard horror stories about weird things happening when these journals want a theme issue.  Maybe there are some fairness issues that I'm overlooking.

I suppose that the simplest way to avoid becoming anybody's horror story is to recommend publication, because then the authors will have no reason to complain, but that approach has some rather obvious problems.

Indeed. Let's assume that this person is semi-joking about that last comment. Clearly you have to give the best and most honest judgment you can, based on what is in the paper(s) under review.

And that's the key to the whole thing: Give your best judgment. Be critical, but polite and constructive. No matter what the journal.

As to the issue of proposing a lot of new research: Editors are ultimately the ones to blame for this, not (just) the reviewers. I could propose that the authors of a manuscript I am reviewing do 2 more years of intense data collection on the most expensive and inaccessible machines in the universe before the paper would meet my standards, but the editor doesn't have to take that seriously.

Editors can ask authors to explain why such requests are unrealistic/unnecessary, or can use their own judgment and say "I know that Reviewer 2 proposed that you do a series of expensive and time-consuming new experiments/analysis (or whatever), but you can ignore that comment."

Or editors can concur with these recommendations by reviewers, in which case, you can try to argue with them (politely and briefly) or you can just take your awesome paper to another journal.

But back to what a reviewer can do:

When I review a manuscript that does seem to have a gap that could/should be filled, I think very carefully about how strongly I word my recommendation about new work. Options are:

Unambiguous/strong statement: This work is unpublishable without the following ....

More ambiguous but still quite strong: This work would be greatly improved and the conclusions much more believable if you did the following...

Passive-aggressive in a mild way: Although it would have been useful/better if you had [done this and that], I think that your interpretation/conclusion is quite/mostly reasonable given the data/analysis presented.

Nicest: I am not suggesting at all that you do this because I think the manuscript is publishable with the existing dataset, but I wondered whether in future research on this topic you would be able to do [this other interesting thing that would help answer some additional important questions].

The issue of supplying supplemental material is also a major concern for authors. You need to provide sufficient documentation of your work, but at some point it becomes absurd if most of the content of the paper is in the supplement, other than some cryptic text (that can't be understood without the supplementary info) published in the main body of the article. Reviewers should only request essential supplementary material that is not already provided, following the norms of their field for archival material.

In the end, it's the editor's call on whether to use or ignore the reviewers' comments about adding more material to the paper and/or doing more research to include in the paper. All you can do as a reviewer, if you want any hope that your time and effort will be worthwhile, is to write a thorough, constructive, interesting review that helps improve the paper and helps the editor weigh the various review comments and make a good decision. [I have not reviewed a series of papers on a theme before, but perhaps others can chime in on that topic.] This is true whether you are reviewing for Journal of the One-Word Name or Journal of the Most Obscure Topic in the World.

13 responses so far

Cite Me

Jun 14 2011 Published under publishing, reviews and reviewing

A reader wonders:

When reviewing a manuscript submitted to a journal, is there any good way to recommend to an author that they add a citation of your own work?

This issue is wrapped up with that of reviewer anonymity, so there are a couple of sub-questions here:

- If you are concerned about anonymity but you really really think your paper(s) should be cited, can you disguise your suggestion (to the author, but not the editor) as being from a disinterested and totally objective observer?

- Even if you don't care about being anonymous, how do you suggest that your paper be cited and not come off as a self-promoting citation-monger (assuming you even care what people think)?

To get the discussion rolling, I have encountered the following cite-me situations just in the past couple of months:

1. I was reviewing a paper that used what I thought was an unnecessarily convoluted approach to a particular topic. What they did was OK, but if they had used my elegant method (the topic of a paper published in the last few years), the paper would be better.

In this case, I decided not to suggest that they use (and cite!) my work. What they did was not a major flaw of the paper, and I considered the issue in question to be more one of style and clarity. Of course, style and particularly clarity are important for papers, but the problem was not so grave in this case that I felt compelled to suggest that they cite my paper. I mentioned only that Method A was unnecessarily complex (with some brief elaboration of why), but left it to the authors (and editor) to agree or disagree with that, and find a different method if they chose (mine or someone else's).

2. In my role as editor of another journal, I was handling a manuscript on a topic on which there are very few published articles, but one of those few published articles happens to be from my research group. The manuscript under review did not cite our paper, and in fact didn't even cite any of the other recent papers on this topic, but instead cited only some 20+ year old, tangentially-related studies. Hooray for not forgetting about old papers, but why ignore highly relevant work published in the last 5 years?

Even trickier than making a cite-me comment in a review is making this comment as an editor. Reviewers suggest; editors decide, so we have to be very careful. I think if the author had cited some of the other recent studies but just not our paper, I would have let it go and merely been a bit puzzled as to why an obvious and relevant paper was not cited. As it was, I thought the lack of any citations of the most relevant literature severely undermined the paper, particularly in the introduction and discussion. Without being too heavy-handed (I think/hope), I suggested that the author consider the literature on Topic X a bit more broadly, and gave a few more specific suggestions of topics (but not particular papers) to consider. Even a brief search on a few keywords will lead to my paper and a few others.

3. Also in my role as editor, I handled two recent manuscripts in which two different reviewers took two different approaches to stating that it would be appropriate for an author to cite their papers. Neither reviewer was shy about making their identity known -- in fact, each considered it central to having their review comments taken seriously by the authors.

One reviewer was very emphatic that the manuscript under review was fatally flawed without citation of his published work. I agreed that it was surprising that his work was not cited, and that the paper would be better for the citations (and the accompanying contextual discussion), but I think "fatal flaw" was an exaggeration. Unfortunately, "fatal flaw" did apply to the data/methods, a situation perhaps indirectly related to the incompleteness of the citations.

The other reviewer very politely and tentatively and circuitously said that although he hated to suggest that his own work be cited, the authors might want to take a look at his 2006 paper and an earlier paper, and they would then see that their statement that no one had ever before proposed Idea Z or used Method Y to do X was in fact not correct. Again, I agreed with the reviewer that a more correct and complete citation of the literature, including these specific papers by the reviewer, was appropriate.

In fact, in most cases that I have seen as editor, I have agreed with the cite-me comments of reviewers. From what I have seen, it is rare for a cite-me review comment to be frivolous and obviously craven. I am sure it happens, but I think it may be more common for there to be other citation lapses, such as:

- authors who cite their own work heavily and perhaps not very relevantly;

- mis-citation of papers;

- non-citation of relevant papers.

So now we are back to the original question. If I think that citation of a not-yet-cited paper of mine is useful to the paper under review, I won't stress out (too much) about appearing like a jerk and will politely mention the paper(s) that seem relevant and explain my reasoning. If I care about staying anonymous in the review, I may not bother to mention the missing citation, or -- if I feel strongly about it -- I could mention it only to the editor.

Of course the whole reason why we are discussing what might seem like a trivial issue is the increasing reliance on citation numbers as a measure of scholarly "quality". Numbers like the h-index now regularly appear in tenure and promotion files and letters of recommendation. If a paper that could/should be cited (but is not) in a paper under review is one with citation numbers just below the cut-off for your h-index, it can set off an internal struggle of the sort mentioned in the original question.

If you have asked yourself this same question whilst reviewing a manuscript, what did you do?

18 responses so far

Militantly Ignorant

Feb 15 2011 Published under reviews and reviewing

Consider these two examples of a certain type of reviewer:

A few years ago, I wrote a paper that added some new information and discussed new ideas about a phenomenon that was discovered by others decades ago and that has been much discussed in the literature. This phenomenon is related to the observation that purple kangaroos can leap extremely high. Before the initial discovery, it was thought that only green kangaroos could leap extremely high, but now it is well established that both types of kangaroos can do this. The early inferences, which were quite compelling, have been confirmed by observation.

In my paper, I wrote a few context-establishing sentences in the introduction, mentioning the high-leaping by purple kangaroos [citation] before moving on to set up the particular focus of the paper. One reviewer of the paper wrote in their review "What is the evidence for high-leaping purple kangaroos?" and went on to express great doubt that this ever occurred.

The reviewer was unable to get over his/her shock and disbelief about the purple kangaroo phenomenon and recommended rejection. The paper was initially rejected, but was ultimately published.

Another example: A proposal involving a recently developed but well-known (and trendy!) method -- the kind that you could only not know about if you had not read any journals and not gone to any conferences in the past 5 years -- got this review comment: "I have never heard of [that method] so I am not sure if this research is [doable/worthwhile]." The grant was awarded anyway; lucky for us the other reviewers were up on the topic and liked our ideas.

Such comments are not rare, although I thought these particular incidents were extreme. This post is not, however, a rant about how some editors and program directors must look under rocks to find certain reviewers (perhaps that is what it takes to find enough reviewers in some cases). Instead I want to muse about other aspects of the phenomenon of Hard-Core Ignorant Reviewers.

I know the answer to the obvious question:

Don't these reviewers know they are ignorant? No, they don't. Anything they don't know is not worth knowing, or doesn't exist.


Why don't these reviewers know they are ignorant? This is a rhetorical question. Nevertheless, I wonder whether these people are never told by anyone that they are ignorant, or whether they have repeated evidence (direct or indirect) of this but ignore it, as they do many other things. Both are likely. In the examples described above, the reviewers did not hesitate to admit their ignorance in their reviews, and they interpreted their lack of knowledge as a problem with my work. These people are very comfortable in their ignorance.

Which leads me to my real question: Can someone become like these reviewers, or is it an inherent trait that is evident early on, or at least by mid-career?

Worrying about this would have been unimaginable to me in my academic youth, but as I get older and more established, I see more examples of situations in which I previously would have been held to a higher -- perhaps even an impossible or unfair -- standard of proof for statements I make or ideas that I propose. So perhaps encountering reviews from the hard-core ignorant serves a useful purpose of keeping me from becoming one of them. Maybe it keeps me on my toes and prevents complacency (?). This is a hypothesis. Feel free to reject it.

For me, the prospect of becoming like these militantly ignorant reviewers is one of those "Shoot me if I ever get like this" kinds of things. Or at least tell me. But would I listen?

19 responses so far