Observations about the Review Process
Out of the blue I get an email from my old friend Roger Martin who says “Steve, I thought perhaps you could talk about what you expect from reviewers in the process – how to evaluate papers, how to write reviews, and how to improve at those tasks. This has always been a hot topic at the NFC because participants are coming out of Ph.D. programs in which they’ve been taught to be harsh critics of papers but they get to do that with no repercussions – so they often need guidance on how to transition to be constructive reviewers of research.”
Never being one to walk away from a “hot topic,” I accepted the gig. Then I had to figure out what I would say. I started by reflecting on our “instructions to reviewers” letter that I had updated when I became Editor (in-Chief) of Contemporary Accounting Research (CAR). It says:
In this technological age, it is difficult to protect the double-blind review process. While knowing the author’s identity does not preclude you from serving as a reviewer, please do not actively attempt to determine the author’s identity. Furthermore, should you perceive a conflict of interest on your part (e.g., you have a competing manuscript), please let the editor know as soon as possible to determine if he/she wishes to reassign the review. CAR promotes a constructive, thorough and responsive (i.e., timely) review process, so whether the news is good or bad, please word your review as if you are the recipient. …
Remember that a timely review encourages the authors to be timely reviewers themselves resulting in a virtuous cycle of timely reviews for all, whereas late reviews can reinforce a destructive cycle of even later reviews as authors “get even” with the “system”. Thank you again for your support for CAR by agreeing to review this manuscript.
There are some pretty strong statements in this letter to reviewers. It reminds reviewers to respect the double-blind review process to the extent they can, given our technological age where working papers are dispatched with a click of a button. It asks for conflict-of-interest declarations where the reviewer has a competing manuscript, so that the Editor can make an informed decision about whether the reviewer should continue with the manuscript. Note, we do not say the reviewer cannot continue because of a competing manuscript, but we want to know up front if there is an issue. Then we emphasize a constructive, thorough and responsive (i.e., timely) review process, asking reviewers to word their reviews as if they themselves were the recipients. Finally, we remind reviewers that by our choices we either support a virtuous cycle where timely reviews beget timely reviews, or reinforce a destructive cycle where reviews get later and later as reviewers seek to settle old scores against “the system.”
This is the background against which I began my blog in June 2010, shortly after I became Editor (in-Chief) of CAR. To the extent possible I wanted the blog to be an informal means for the Editor to communicate with CAR’s extended readership, including its large Editorial Board, authors, potential authors, and reviewers. It was also meant to serve young faculty who had questions but did not want to appear “silly” by raising them at the various doctoral and new faculty consortia that are held by virtually every national and international academic accounting association, and indeed within the American Accounting Association by many, if not most, sections and some regions.
So what follows is an edited version of my blog entries about the review process, written over the past nine months. I opine on many things in my blog, but this is one of my “pet” concerns. The blog entries have been edited for spelling and grammar, and in some cases for length. I have also re-ordered the entries so that they make for a coherent linear read.
The review process
Reviews – not very mysterious
I was surprised at how little knowledge folks at the American Accounting Association (AAA) had about reviews and the review process. I guess their advisors did not think this was important enough to tell them about in detail. So here is a quick tutorial about selecting reviewers at CAR:
1. The Editor (in-Chief, i.e., me) determines whether he will keep the manuscript himself or parcel it out to a more knowledgeable member of the Gang of 24 (the twenty-one Associate Editors (AEs) and the three Consulting Editors (CEs)). Indeed, it is possible an Ad Hoc Associate Editor might be appointed. These Ad Hoc AEs are often former CAR Editors and AEs with particular knowledge of the subject matter.
2. The paper’s assigned editor/AE picks up to four reviewers, who are contacted in the order indicated by the editor/AE. Note there is NO involvement by the Editor (in-Chief) in selecting reviewers for the AEs. Staff does monitor for excessive use of some reviewers and alerts the AE when this occurs.
3. Each AE/editor has their own basis for choosing reviewers. They are asked to consider having at least one member of the EB as a reviewer. They are also asked to consider having one specialist reviewer who knows the area in depth and, as CAR is a general interest journal, one reviewer who is known in the field but may not have as great a depth as the specialist. AEs are free to ignore this advice, as that is all it is, advice.
4. Most AEs I know do read the reference lists but ignore acknowledgements when selecting reviewers (what purpose would it serve to know whom the author highlights as having read the paper, in this day and age of SSRN when hundreds of people are likely to have read it?). However, AEs also tend to consider point 3 above as well.
5. Hopefully the choice of four reviewers was enough, as occasionally some or all of them will turn you down. Then it is on to looking for more reviewers.
6. At CAR, Editorial Board members are expected to do three to six reviews of new manuscripts a year, plus revise-and-resubmits. An EB member who continually turns down reviews will be removed from the EB.
So that is it in a nutshell.
From submission to review
One question I often get asked is how does a manuscript get processed?
First, the author has to go through the joy of learning how to submit via our Editorial Manager system. As most new authors do not read the tutorial on how to do it, they make it harder on themselves than it actually is.
Second, my assistant Nancy prints out the manuscript (MS) for me (yep, I am old-fashioned in that I like to do my editing on paper) and I carry it around for up to 72 hours reading it, figuring out whether there is an appropriate Associate Editor or not, and deciding whether to keep it myself or to send it to an Ad Hoc Associate Editor. Generally, I try to minimize the latter, as we have a broad team of AEs who, along with our consulting editors, have great breadth and depth. However, the manuscript workload is on the uptick in some areas, hence I might call on well-qualified ad hoc AEs.
Nancy then assigns the manuscript to the AE/Editor, and each then picks their first-choice and second-choice reviewers. Generally this happens within another 72 hours, often much less. Nicole, the CAR editorial assistant, then contacts the reviewers via email the same day to find out if they will do the review. Here is a point where there is slippage in the process: how long will it take a reviewer to respond? Are they ignoring the email? Did it not go through? Generally we give reviewers up to a week to respond and then go on to the next choice. So anywhere from four days to two weeks after you submit, the paper is in the hands of reviewers, who then have 45 days to carry out the review.
And that’s how your manuscript gets to the point of being reviewed.
Review time to you
So what’s the rest of the story? When I left off, we had accounted for about 50 to 55 days, assuming the reviews are on time at 45 days and up to ten days to get reviewers assigned. The final step is the editor/associate editor, who normally takes about a week to ten days to reach a decision.
So if the world is aligned correctly, the number of days a manuscript should take to process is somewhere in the low 60s. Given that a large minority of reviewers do not start their review until it is “late,” this adds anywhere from 10 to 60 days to the review process. Hence, days under submission in the low 70s are more likely, but one can see how it can take over 100 days if the reviewers are slow, the AE is slow either in assigning reviewers or finalizing their decision, or (something that rarely happens) I am slow in assigning the MS to an AE.
So, in the best of all possible worlds where everyone does as promised, we can achieve a turnaround time of 60 days (the most frequent response in the web survey). More realistically, 70 to 80 days (which is the current modal response on the web survey) is likely, but most journals are a long way from that.
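The timeline arithmetic above can be laid out as a quick back-of-the-envelope calculation. This is only a sketch of the estimates quoted in these two entries (the function and its parameter names are mine, not part of any CAR system):

```python
# Back-of-the-envelope turnaround estimate, using the stage durations
# quoted in this and the previous entry. All figures are the blog's
# rough estimates, not official CAR statistics.

def turnaround_days(assign_days, review_days, decision_days, reviewer_lateness=0):
    """Total days from submission to decision letter."""
    return assign_days + review_days + reviewer_lateness + decision_days

# Best case: reviewers assigned in ~10 days, the 45-day review done on
# time, and an editor/AE decision in about a week.
best = turnaround_days(assign_days=10, review_days=45, decision_days=7)

# More realistic: reviewers start late (adding 10 to 60 days; 15 used
# here) and the decision takes the full ten days.
typical = turnaround_days(assign_days=10, review_days=45, decision_days=10,
                          reviewer_lateness=15)

print(best)     # 62 -- the "low 60s" best case
print(typical)  # 80 -- the realistic 70-to-80-day range
```

Stretching the lateness toward its 60-day extreme, or adding slow assignment and a slow decision, is how a manuscript drifts past the 100-day mark.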
Can we do better?
Following up on my last entry about review times, can we do better than 100 days consistently? Sure; in the past CAR has had 75% of manuscripts reviewed and returned to authors in less than 100 days, but that leaves too big a tail (and unfortunately the tail has been larger than that in three of the last four years).
Some say (e.g., the Journal of Accounting and Economics (JAE) and the Journal of Accounting Research (JAR)) that paying reviewers is the way to go. CAR already offers an incentive: two reviews on time earns you a free submission to the journal, which has a cash value of $150. Note our submission fee is much lower than the competitors’, except for Accounting, Organizations and Society (AOS). So I guess one way to make the incentive to review on time larger is to charge more per submission. Hmmmmmm… I’ll have to think about that one.
What I am hoping to do is set some targets that are achievable but still a stretch (i.e., see a management accounting textbook on goal setting): something like 75% of all manuscripts in 75 days, 80% in 80 days, and 95% of manuscripts in less than 100 days.
I still want a couple more months of data before I decide, and time to consult with the AEs. After all, I am an empiricist, not a theoretician.
Acceptance rate stories
Rejecting acceptance rate tales
Contrary to popular wisdom (i.e., “tales”), at least the Association journals (and AOS, as we discovered in San Francisco at the 2010 AAA Annual Meeting) do not monitor rejection rates with some sort of quota in mind that must be reached each month, quarter or year. Rejection rates, or to put a positive spin on a small number, acceptance rates, are something we observe at the end of the reporting period when we make our accountability reports. So if you asked me today what my acceptance/rejection rate is, I have no idea. I can tell you what CAR’s was from the early 1990s through 2009, because we have always made this data public at CAR. But beyond year-end reports, there is no management of rejection rates, as the authors, with the aid of the reviewers, seem to readily take care of that themselves. As I said in San Francisco at one of the editors’ panels, “you are us and we are you.”
Do Editors manage acceptance rates?
NO! Authors and reviewers do it for us.
You may think that is a flippant response in an industry where 85%-plus rejection rates are the norm at top journals and acceptance rates at the elite journals are less than 10%, but it is true. If an editor adopted a policy of only allowing revisions where BOTH reviewers agreed that the paper should be revised, he or she would have a very, very small journal. (See the size of journal issues during Robert Magee’s editorship of The Accounting Review (TAR) or Anthony Atkinson’s editorship of the Journal of Management Accounting Research (JMAR); both used variants of that decision rule, so I am told reasonably reliably.)
More often than not the editor (or associate editor) gets one revise-and-resubmit and one reject (in journals with two reviewers, this is the most common first-round combination for eventually published papers) and has to make a decision about which review to follow.
Those reviewers who truly believe the paper should be revised need to give serious thought to, and communicate clearly to the editor and the authors, what set of conditions would constitute a strong likelihood of the paper being suitable for publication. All too often I get revise recommendations without any suggestion as to what it will take to make the paper publishable. Only rarely is that lack of direction justified, and then only in cases where the authors have left substantial ambiguity in the paper about what exactly they did. This observation completes the circle, showing how authors help self-manage rejection rates.
So, NO, I do not need to manage acceptance rates! I do not need to act in the role of a gatekeeper as authors and reviewers do it for me. Indeed, I only calculate acceptance rates once a year for my annual report to the Canadian Academic Accounting Association (CAAA). And you can bank on that.
Civility in all things
One of the major promises I made both to myself and to others when I became CAR Editor is that, above all, we strive to be a civil journal, one that respects differences of opinion about subject matter while acknowledging that the quality of a project does not reflect the quality of the individual researchers behind it. In other words, we may think your research paper “sucks” but you still “rock.” Blunt but true.
The vast majority of AEs, all the CEs and most of the Board members buy into the mantra that we need to strive for civility. However, mistakes will always be made, words will be taken as personal rather than as comments on the research itself, some will go overboard in their rejection of research ideas, etc.
So I encourage everyone to chill and give the benefit of the doubt. None of us wants to consciously devalue anyone; however, some research ideas are better than others in thought, in execution, or in both. Often the recipient of bad news can read it as a rejection of them as a person rather than of the particular research project as not being a strong contribution to the literature.
As an experienced editor (when I was an associate editor under Gord Richardson I edited almost 150 papers, and with ad hoc AE work for Auditing: A Journal of Practice & Theory (AJPT), JMAR and CAR the count keeps climbing), I find some reviews rather frustrating to read. This happened last night as I was working on a decision letter that was much overdue because the previous Associate Editor was somewhat tardy.
To wit: the stream-of-consciousness review. Let’s be clear, a review is not really finished until the reviewer has separated the “wheat from the chaff” among his or her comments. Some like to say, well, I write only “wheat.” My response: get your colleagues to read one of your reviews and give you their feedback. You will be unpleasantly surprised.
So please, from both an editor’s and an author’s perspective, take those extra few minutes (perhaps as little as ten) to cut and paste your thoughts into order and denote them as major or minor. There will be many blessings on your head for doing this act of charity.
Writing reviews that move editors
What are characteristics of good reviews?
- they are focused, no more than 4 to 6 pages in total
- they separate the “key” issues from the minor factors
- if there is a “fatal flaw” they identify it in the first round
- they follow an organizational scheme that the authors can refer to if they need to make responses to the reviewers in a revise decision
- if the reviewer is going to suggest a revise decision to the editor, the reviewer clearly communicates to the authors in the review what hurdles they need to overcome
What is a “fatal flaw”? It differs by paradigm (theory testing versus theory development versus theory interpretation in the field), but I believe there are some general ideas one can put forward about “fatal flaws”:
- the theory posited does not make logical sense even if there is statistical significance in the testing
- the variables employed are not good proxies for the theoretical constructs, either dependent or independent variables
- there are control variables or covariates that have not been employed and appear to be impossible to obtain but are vital to the analysis
- the field work is too limited to tell much of a story (i.e. for field research) either through a positivistic or interpretive lens
- the analytical model makes assumptions such that there is no possibility of them holding in any rational economic framework, or as a behavioral conjecture being put into such a framework.
Steve Kachelmeier, Senior Editor of TAR, cites an interesting statistic: less than 25% of manuscripts are rejected on the grounds of a “fatal flaw” in the research. Most of the rest are rejected based on “insufficient contribution.” I have some thoughts later on why this is so.
Lack of consensus in reviews
One of the things that I find most fascinating about the review process is the lack of consensus there is among academic accounting reviewers as to what is publishable research. Even on papers that have gone three and four rounds it often remains up to the Editor (or Associate Editor as the case may be) to break the “tie.”
How can it be that, after the two rounds of re-writing, several workshops, conference presentations and comments from colleagues that a typical second-round paper at a major journal has gone through, this disparity of opinion remains?
Is it the lack of systematic reviewing by reviewers? I am not certain, but I feel that this may be at least part of the reason. Setting aside the issue of “contribution” for the moment, one would think that academics would be able to agree on whether appropriate theory guided the research, reasonable proxies or operationalizations were made, and correct econometric models or statistical analyses were used! (Note, I set aside math-based analytical models and qualitative methodology research for purposes of this discussion.)
All too often I read as an Editor, and I see as an author, what I call “stream-of-consciousness reviews,” where the reviewer does not read the entire paper before starting to type in his/her review comments. You know the kind of reviews I mean: ones that linearly follow the paper, do not separate major from minor points, and at times even show that the reviewer has had a “gestalt” moment where he/she understands something that eluded them earlier.
I think every paper needs one clean, uninterrupted read before the review writing starts. Certainly we would expect no less from our students. Ensuring that we consider the “Reviewer Suggestions” on the next page and give the paper one clear reading before we start to critique might go a long way towards obtaining a greater degree of consensus in the review process.
“Reviewer Suggestions (version 3.0)”
We are in the process of developing what we hope will be useful reviewer suggestions. In particular, these suggestions are designed to aid those who are relatively new at the reviewer role or new at reviewing for CAR. Please consider the following suggestions to be a beta version, which we will be actively revising based on feedback.
In general, we suggest that high-quality reviews clearly delineate the most serious problems or concerns with the paper and, if possible, make constructive suggestions about ways these concerns could be addressed. Minor issues should be identified as such in a separate listing. Numbering or lettering the different issues the author needs to respond to in a revision is always helpful. Remember, even if you are recommending “reject”, the review should be set up so as to facilitate author revision in case the other reviewer and the editor send the manuscript out for revision. Also note that just because the manuscript has been sent out for revision despite your “reject” recommendation does not mean that the ultimate disposition will be “accept” or that the Editor is disregarding your opinion. The “revise and resubmit” process allows the reviewers to see each other’s (and the Editor’s) reactions to the paper, potentially leading to revisions in how each sees the paper.
We suggest that reviewers not automatically dismiss a study because it is a “replication”. Often the boundaries of our knowledge are narrower than we recognize, and a contribution can be found in a study that delimits such conditions. Further, just because there is a competing working paper on the same subject as the paper you are reviewing, or you know one to be under review at another journal, it does not mean the current paper should be recommended for rejection on the grounds of incremental contribution to knowledge. Competing working papers and papers under review at other journals are just that: under consideration for publication. Finally, if the primary concern with a paper is “lack of contribution”, we suggest that the reviewer consider the extent to which this perception reflects personal taste as to how the research question should be addressed.
In thinking about your recommendation to the editor (i.e., “accept with minor revisions” whose implementation you leave to the editor to supervise, “reject,” or “revise and resubmit”), we suggest that one way to frame your thoughts is to consider whether a plausible set of conditions exists such that diligent and responsive authors could make revisions and publish the paper in CAR. So in making your private recommendation to the Editor (and we stress that it is inappropriate to reveal that recommendation to the authors as part of the review), we suggest that you provide comments to the Editor as to whether you believe there is a path of revision that will deal with your concerns and result in a paper publishable in CAR. If you cannot foresee such a set of conditions and you are recommending “revise and resubmit”, please explain to the Editor why you believe this is the best way to proceed with the manuscript. In other words, consider how you would react as an author to receiving a similar review report that does not provide some suggestions for directions that could lead to publication.
Negative outcomes – two types
The “fatal flaw” rejection
Next I want to deal with the two major types of rejections: “fatal flaw” and “insufficient contribution.” As I noted previously, Steve Kachelmeier at TAR tracks this and concludes that, by about a 3:1 margin, “insufficient contribution” is the main reason papers get rejected at TAR.
I would like to deal with “fatal flaw” rejections first. If you receive one as an author, first you need to be able to recognize it. This by itself can be difficult, as some reviewers are indirect in their reviews. Essentially, you are looking for hints that the reviewer is saying the paper does not “measure what it purports to measure” or “test what it purports to test.” Second, you may want a second opinion, but after that STOP sending the paper out for review, as you are wasting your own time rewriting it and the editors’ and reviewers’ time evaluating it.
I committed this “sin” once early in my career. I thought I had found something, and every journal I sent it to told me the same thing: I had not measured what I thought I had measured. But it took five journals before I gave up the quest to publish the paper, mainly because there were no more journals I could think of that might publish the piece and for which I would get some credit at my university.
Bottom line, “fatal flaws” are normally “fatal” to the paper. When fatal flaws are pointed out, however indirectly, by reviewers, you can bet that a very high percentage of the time they are indeed “fatal.” You may want to get a second opinion, but if the result is the same, let the paper die a quick death rather than linger for years as a constant draw on your life support system (including time to do better work) and on the life force of the academy, its reviewers and editors.
Contribution insufficient: Return to sender
One of the most dreaded editor’s letters that authors receive is the “insufficient contribution, hence reject” one. Surely the authors did not labour collectively for hundreds of hours on something that was not intended to be a contribution to the academic community. Yet there it is in black and white: “INSUFFICIENT CONTRIBUTION.”
Why? One reason for “insufficient contribution” rejections is that the papers are too practically oriented (among other problems). How can research be too practically oriented in a practice-oriented discipline like accounting? I think the difference lies in the fact that CAR and similar journals are interested in basic research about a practical discipline, accounting, whereas “too practically oriented” accounting articles are ones that use rigorous methods to study problems that only a narrow niche of practicing accountants/managers/investors are interested in, and/or that could be studied by less rigorous methods with no loss of content.
In many cases the use of elaborate methodology does more to hide what the research has found than to illuminate it, as rigorous methods were invented to study basic research problems. So why carry out a carefully controlled laboratory experiment or an econometric study when a simple (but well executed) survey and associated descriptive statistics would answer the question? Such a research approach will likely not lead to a CAR publication (NOTE: this does not mean that rigorously motivated survey research does not belong in CAR; I have published it there as well as in JAR and AJPT), but it sure would be faster for the authors and likely lead to the same result in the end, publication in a practice-oriented journal.
So that’s my first comment about “insufficient contribution”. More as I encounter them along the way.
Conclusion and the “Golden Review”
So those are my thoughts so far on my journey as an Editor about the review process. Certainly the standard two-reviewer, double-blind review process has its problems. But in many ways it is like what former British Prime Minister Winston Churchill is reputed to have said about democracy: “It has been said that democracy is the worst form of government except all the others that have been tried.” I do not see a great deal of agitation to move towards a single-reviewer system beyond those journals that have it already, nor do I see a call for “unblinding” the review process to the extent it is currently blinded, just calls for greater respect of blind review. Finally, my view is that we all need greater respect for, and adherence to, the “Golden Rule,” no matter how it has been expressed over the past 4,000 years, and that we should make it part of the “Golden Review”:
- The Tale of the Eloquent Peasant, Ancient Egyptian: “Do for one who may do for you, that you may cause him thus to do.” (circa 1800 BCE).
- Socrates, Greece: “Do not do to others that which would anger you if others did it to you.” (circa 5th century BCE)
- T’ai Shang Kan Ying P’ien, Taoism: “Regard your neighbor’s gain as your own gain, and your neighbor’s loss as your own loss.” (circa 4th century BCE)
- Plato, Greece: “May I do to others as I would that they should do unto me.” (circa 4th century BCE)
- Mahabharata, 5:1517 Hinduism: “Do naught unto others which would cause you pain if done to you.” (circa 4th century BCE)
- Analects 15:23 Confucianism: “Do not do to others what you do not want them to do to you.” (circa 4th century BCE)
- Epictetus: “What you would avoid suffering yourself, seek not to impose on others.” (circa 100 CE)
- Talmud, Shabbat 31a. Judaism: “What is hateful to you, do not to your fellow man. This is the law: all the rest is commentary.” (circa 200 CE with a much older oral tradition dating back to 7th century BCE)
- Luke 6:31, King James Version. Christian: “And as ye would that men should do to you, do ye also to them likewise.” (circa 1610 CE with originals dating to 100 CE)
- Imam “Al-Nawawi’s Forty Hadiths.” Islam: “None of you [truly] believes until he wishes for his brother what he wishes for himself.”