Despite the geographical differences, conference season appears to be nearly the same all over the world. Europe (this year June, but it can be as early as April), Canada (June), the USA (August), Australia–New Zealand (July), and others all hold their “national” conferences in the period May to August, along with a large number of targeted conferences ranging from those based on journals, to those based at universities, to broader functional area conferences like GMARS (Global Management Accounting Research Symposium) and ISAR (International Symposium on Audit Research).
So how do you select conferences to attend, and what should you do there as a junior scholar (the term I will use to denote both junior faculty and PhD students)? I will outline some general considerations and follow up with specific points over the next couple of weeks.
- Smaller conferences generally allow you better exposure. It sounds counter-intuitive, but a conference of 120 to 300 people allows you to seek out and meet with both peers and senior faculty members much more easily than larger conferences like the AAA and EAA. Of course, going to a smaller conference means much greater preparation and investigation to ensure that the people you want to meet will be there.
- Functional area conferences generally have higher quality papers and players involved than association conferences or minor journal conferences. I cannot totally explain it, but based on my observations at GMARS, ISAR, EARNet, the Illinois Audit Symposium, Illinois Management Accounting Emerging Scholars, the Kansas Audit Symposium, etc., these sorts of conferences seem to attract both senior people and emerging junior stars – exactly the types of people junior scholars want to meet. Further, the papers tend to be earlier in their development and hence closer to the cutting edge of research in the area.
- For PhD students, conferences with an attached doctoral consortium are to be favored, as long as you are able to attend the consortium. This makes the EAA Conference much more valuable for those students who are invited to the doctoral meeting.
- “National” conferences are a great place to begin to understand the vagaries of the review process, to have a chance to present, and maybe, if you are lucky, to get a little feedback on your work. They tend to be more open to new scholars, as they have many more slots available and the threshold for acceptance is generally the top 50% of papers – so the screen is low. But these conferences take even more preparation if you are going to benefit beyond the line item on your vita.
- Minor journal conferences are big gambles for junior scholars. If you have a paper accepted there, fine – go and enjoy. However, unless you have a paper, it is not clear what you will get from the conference. Often the only senior faculty present are those associated with the journal, and they attend only if it is convenient for them. The papers accepted have generally been rejected by multiple more senior journals, so the value added from the conference depends critically on the insights of the discussant, not the paper presenter. Unless these conferences are low cost to attend (in both time and money), a junior scholar needs to consider the cost-benefit tradeoff carefully.
So now that I have insulted a bunch of people by giving my candid assessment of the overall picture for conference season – next to come is what a junior scholar does to prepare for a conference.
As BRIA Senior Editor I have been surprised at the number of submissions I have been receiving that do not have the experimental instrument (or survey, questions used as probes in interviews, etc.) attached to the paper submission. I think it is pretty clear from the website that it is required, and for the life of me I cannot understand the resistance to providing it. Indeed, my understanding is that almost all accounting journals that publish this sort of research require the submission of related research instruments.
The only time in my career as an editor where I waived this requirement led to one of my most embarrassing moments as an Editor (in-Chief). I was the Editor in charge of one of Jim Hunton’s papers at CAR (one of only three he published there, thank goodness). The submission stated that the questions used in that paper were embedded in a proprietary survey done for the sponsor of the workshop where he collected the data. Hence, he could not provide me with a copy of the complete survey instrument. So, to my shame, I accepted the submission. Further, the paper was presented at the 25th Anniversary CAR Conference and published in the 25th Anniversary CAR issue, the only Conference issue in the life of the journal. While I can offset that shame by noting I was the only person/editor who publicly referred the entire corpus of Hunton research to Bentley University, which led to the investigations/retractions, as far as I was concerned I failed as an editor in not insisting on full disclosure.
So imagine my surprise when, having recently been asked to review a paper and having requested that the journal get a copy of the survey instrument, I was fired as a reviewer because the authors did not want to provide it! SHOCKING – have we not learned our lesson yet? I complained to the senior editors at the journal, who expressed some surprise and dismay at this happening (or at least at my calling them out over it happening – one can never be certain, but knowing the editors I think it was genuine dismay). More anon.
What I am about to say is meant with the greatest of kindness, but at the same time it can be seen as a slap in the face. The well-known research on the millennial generation has shown that many of them have never been told “no” or been turned down for something they sought. Now let me say this is an “on average” statement that does not apply to every member of that generation, and to say it does would be to repeat the same problem most capital markets researchers have! (Have to get a dig in somewhere here.)
But “rejection” is part of the business of research. Not every “great” idea will work out, leading to journals rejecting your research, especially if you try to put B-level research into A-level journals. But what prompts this entry is the rise in the number of appeals that I hear senior journals are receiving. In my years at CAR (six with Gord Richardson and three on my own) we rarely had appeals – maybe one or two during Gord’s six years, and I cannot recall any in my three years. Yet I hear that multiple appeals a year are being received at the top journals these days. I think it has something to do with my observation above.
More direct evidence came my way recently when I heard from our PhD admissions folks about a large spike in applicants asking why they were denied admission to our PhD program, some asking multiple times for clarification. The admissions folks told me it had been building up slowly in recent years, but this year it was at a peak! Again, experience with rejection is not common for this generation, and hence disbelief at being turned down from something you want causes an over-reaction.
So folks, chill out! Accept that “you can’t always get what you want,” as a famous song from the baby boomer generation says.
As our knowledge base increases, so does the need for the scholarship of integration and synthesis. Reviews by method are key for novice researchers and those returning to active scholarship after a period of time off for whatever reason. Reviews by subject matter are key for textbook writers (at least those that are evidence-based rather than standards-based), regulators, standard setters, and practitioners.
But standards for reviews must improve, and editors need to enforce those standards. I recently saw a paper rejected by one journal and published in another, with all the problems that led to its rejection at the first journal still present in the second. The biggest problem: you could not replicate the review based on the authors’ description. Maybe they did a thorough search of the literature, maybe not. We simply cannot tell. They did not disclose what journals they included (or, more importantly, excluded, albeit you could guess based on the journals cited), how they searched to find the articles (i.e., what the inclusion/exclusion criteria were), what the population of articles they found was, or how they narrowed it down to what was included in the review. And that was simply the start of the problems.
Now, if you are interested in a “biased” literature review based on the “great man” theory of selection of literature – these are the reviews for you. However, in most other academic disciplines they would get laughed out of the community, and any journal that published such an “unscientific” review would be regarded with disdain.
The bottom line is that there are standards for reviews, just as there are standards for almost everything else. Editors need to obtain competence in this area just as in any other. Being a journal editor should not be solely a “learning on the job” exercise; new editors need to be mentored and given feedback. They have to know it is okay to ask for help from their more experienced brethren. To do any less is to devalue the research that is being reviewed!
At BRIA, we love methods-based reviews of the literature! Be the topic auditing, accounting information systems, financial accounting, managerial accounting, or tax, we review them all!
However (with rare exceptions), the focus on method as the basis for selecting articles to review also limits the audience to researchers or those who want to be researchers (i.e., research-based students). Regulators, standard setters, and practitioners are, rightfully, rarely interested in what a single method has to say about a topic. They want evidence from all research methods that have insights relevant to their problem. So methods-based reviews, almost by necessity, have to be tailored to other researchers.
What are the rare exceptions? Where the issue of interest is method bound. These are increasingly difficult to find as accounting researchers expand the set of methods they employ. For example, fraud brainstorming could be considered a psychology-based experimental topic, but what about experimental markets, field studies, interview studies . . . ? Market efficiency would historically have been considered a topic studied solely with archival markets data. But what about experimental markets, simulation studies, questionnaire studies, etc.?
My message is simple: if you want to influence your fellow researchers, do a methods-based review. BRIA is a natural home for those reviews in our domain.
However, if you want to influence regulators, standard setters and practitioners, ensure your team can carry out a review of all the literature in an area – not just a subset!
One of the pioneers of positivist accounting research (not positive accounting research) passed away in early February. Dr. Dopuch was a pioneering accounting researcher who was loved (and hated) for moving accounting research out of its armchair phase and into the evidence collection and evaluation phase. See his obit on the AAA page.
It is tough for one in the Radical Centre to evaluate his influence. He opened the doors to an evidence-based professoriate, and in his early days he let “one thousand flowers bloom” in his editorship of the Journal of Accounting Research. After all, Anthony Hopwood, Joel Demski, Gerry Feltham, Ray Ball, and Philip Brown, among many others, made their early marks under his editorship.
However, later in his career he became increasingly focused on boundary protection, declaring certain methods and methodologies to be beyond the pale of accounting research in North America. The one area that I can highlight readily is the insidious effect of Gonedes and Dopuch (1974), which for the most part ended, for almost 25 years, any serious examination of financial accounting from other than a markets perspective.
At the same time, he was a revolutionary, coming to his PhD in the last days of normative accounting research and fathering many new streams of research, including capital markets, experimental economics, behavioral auditing, and tax, among many others. He promoted women academics on a scale that was unknown to most in his generation. His mentoring was legendary. As an individual he was a courteous gentleman of the old school, of a kind we will likely see no more.
Rest in peace, Nic.
One of my frequent correspondents pointed out to me the strange case of the incredibly unchanging tables paired with a greatly changed description of the variables! While it is not a journal I am overly familiar with, Econ Journal Watch published a commentary by Alex Young on Bird and Karolyi’s 2017 Accounting Review piece, documenting how the specification as described in the text had changed from the 2015 working paper, while the tables were identical (coefficients to three decimal places).
There are two serious issues here: (1) Why did it take the Accounting Review Senior Editor over six months to reply to Young’s initial questions? Indeed, it appears that Professor Barth replied only in light of the fact that Econ Journal Watch was publishing the Young commentary. (2) Why did Bird and Karolyi NOT respect the academic process and reply substantively to Young’s questions? Do we have a culture of ignoring requests for clarification of our work? Their non-response comment in Econ Journal Watch is a reflection of an extremely poor culture of openness to debate!
Then there is the really substantive question: did the Accounting Review review process blow it again? At the heart of the issue is not the table presentation, but the implication that Young claims to have documented – that further testing does not support the interpretation given to the results. More than anything, this latter point is at the heart of the problem: if egregious conclusions were drawn in the original study, how can we claim to have a self-correcting literature if no one engages in this discussion – just to get the facts on the table, let alone figure out what the facts mean?
Clearly, “he haunts us still” . . .
JH I mean.