Musings on Accounting Research by Steve

A week in BRIA . . . .

Day 1

Clarifying what should be submitted in a BRIA replication study:

Manuscripts reporting on replications should clearly identify the study or studies being replicated. The purpose of such a study is to demonstrate the robustness and inferential value of prior findings by incorporating broader use of the scientific method in our field. The manuscript should highlight any differences from the original research study (e.g., measurements, manipulations, participants, etc.) and how these differences inform the literature (e.g., validity/robustness of the construct). Relative to an original research article, the introduction and hypothesis development sections should be substantially scaled back. The goal is that the text will be around 10 pages in final print form, and the use of tables and figures should be limited as well. This suggests a double-spaced manuscript submission, including tables, of no more than 22-25 pages. While the review process will be the same as for main articles, please indicate in your submission that your manuscript is a replication.


Saturation is not spelt 12 – Myth 2

The second myth I encountered at my recent EARNet conference is that

2.  You cannot generalize from 22 interviews; rather, you need 100s! Or at least 30, per the central limit theorem – an argument I am not certain comes close to applying to a non-randomly selected purposive sample.

Here is the opposite side of the "12 is enough" coin, made by some. Just as I poured cold water on the "12 is enough" myth in the previous post, I also need to pour cold water on this one!!!!

Generalization, as always, depends on theory and its interplay with the data. If the theory is based on expertise and there are only a limited number of such experts, 12 might be a substantial portion thereof in many audit applications. If theory suggests that generalizability depends on demonstrating the existence of the phenomenon with hard-to-access participants (like real dyads in auditor–client management negotiation – see, for example, my 2008 negotiation dyad study in AOS based on 5 dyads), and the findings can be confirmed with other data, then 10 might be enough to generalize.

One thing is certain: you do not need 100s of audit partners to generalize to the audit partner population. Heck, even the US presidential pollsters had it right: 48% Clinton, 46% Trump (though because of a strange quirk of the US system – one person, one vote it is NOT – Trump won the electoral college with 3 million fewer votes than Clinton). These polls of the US population generally had 1,200 to 2,400 interviews – enough that one can generalize to the 130 million Americans who voted and have a reasonably accurate breakdown by gender, ethnicity, etc. So do not tell me we need 100s of interviews of audit partners to generalize in qualitative studies, unless it is a purely inductive study with no theorizing at all. Even then I would find it hard to believe that audit partners are so heterogeneous that more than 30 or 40 are needed.
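To see why 1,200 to 2,400 interviews suffice for a population of 130 million, note that for a simple random sample the margin of error depends on the sample size, not the population size. A minimal sketch, using the standard 95% margin-of-error approximation (and assuming simple random sampling, which real polls only approximate and which purposive qualitative samples do not satisfy at all):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n, at observed proportion p (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes typical of the US presidential polls mentioned above
for n in (1200, 2400):
    print(n, round(100 * margin_of_error(n), 1))  # roughly 2.8% and 2.0%
```

Note that the population size (130 million) never enters the formula – which is precisely why pollsters do not need 100s of thousands of interviews, and, by analogy, why "100s of interviews" is not a defensible blanket requirement.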

Saturation is not spelt 12

There are two extreme reactions to field studies that I find especially bothersome.

  1. Based on a reasonably thorough study of the literature, a qualitative meta-analysis concluded that MOST studies reach saturation after about 12 cross-sectional interviews, suggesting that somewhere between 15 and 20 interviews should be enough in a qualitative accounting field study!!!!

WRONG or right!  It depends.  In some studies that number will never be reached, as there are not 15 to 20 people in the position to interview (see my work on Central Research Units in the 1990s. There was only one CRU director per then-Big 6 firm plus 1, and I interviewed 6 of them; not surprisingly, as we later learned, Arthur Andersen would not cooperate.)

Other studies are still achieving new insights when they reach 20 or 30 or, rarely, 40 or 50 interviews.  Saturation's first criterion is several interviews in a row with no new insights!!!!  The more homogeneous the population, the smaller the number of interviews needed to achieve saturation.  The more heterogeneous the population, the more interviews needed to achieve saturation.
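The "several interviews in a row with no new insights" criterion can be expressed as a simple stopping rule. A minimal sketch – the run length of 3 consecutive insight-free interviews is purely an illustrative assumption, not a published standard:

```python
def reached_saturation(new_insights_per_interview, run_length=3):
    """Stopping-rule sketch: declare saturation once `run_length`
    consecutive interviews yield zero new insights (codes/themes).
    Input is the count of NEW codes identified in each interview."""
    streak = 0
    for count in new_insights_per_interview:
        streak = streak + 1 if count == 0 else 0
        if streak >= run_length:
            return True
    return False

# New codes per successive interview:
print(reached_saturation([5, 3, 2, 1, 0, 0, 0]))  # True: three dry interviews in a row
print(reached_saturation([5, 3, 0, 0, 2, 1, 0]))  # False: new insights keep surfacing
```

The point of the sketch is that saturation is a property of the data stream, not a fixed number: a homogeneous population may hit the stopping rule early, while a heterogeneous one keeps resetting the streak.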

So the correct answer, class, is RIGHT and WRONG, depending on when saturation is really reached.

Tomorrow myth 2.

“No contribution” rejection???

At the EARNet doctoral symposium I was asked to comment on rejection for "no contribution or lack of incremental contribution"!  To be honest, I have never had one, but that, I suspect, is because they are much rarer in the experimental and field study areas than in archival research.  Furthermore, when I do archival research I do it on mostly hand-collected Canadian data, which, while somewhat limiting where I can publish (CAR and international journals), makes it unlikely that "no contribution" can be found.

I suspect most rejections in the social and behavioral world (i.e., experimental, field studies, and surveys) are due more to internal validity concerns (the easy ones to reject on) or external validity concerns (the harder ones).

Personally, my main rejections (from the top 6 journals) have been due to:

  1. Not measuring what I think I am measuring on the independent variable – this happened in my foray into the expertise of audit committee members.  An internal validity issue.
  2. Creating a new construct that is not in accord with conventional wisdom and not doing enough to convince readers of its validity – an archival measure of negotiation taking place.  Internal validity concerns.
  3. Using overly complex experimental manipulations, so that one small difference in parallel wording leads to speculation about alternative stories, even ones I could rule out with my extensive manipulation checks.  Many researchers are taught to use as few word differences as possible in their manipulations.  However, when dealing with expert partners and other experienced people, you have to give several cues to trigger the mindset you are hoping to tap into.  Hard to do that in eight words or less.  This pits internal validity concerns against external validity.
  4. Use of Canadian data when attempting to publish in the American 3.  It can be done (i.e., you can publish non-American data in the 3), but it has to be carefully phrased, and I am not so good at phrasing to American tastes.  This is external validity from an American-exceptionalism view of the world (i.e., we can publish American data in foreign journals, but do not try to publish your data in our journals).
  5. Using atypical methods like theory-informed surveys and positivistic field research.  It is hard to write a paper and teach a method at the same time.  It can be done, and I have done it, but it is hard to do.  This relates to internal validity – convincing reviewers that you have met the standards.

Note that all of these, while troubling, are justified concerns.  However, at JAR, many of my rejections have been due to inappropriate reviewers – but what can you expect from a journal with at most two experimental researchers on its editorial board?  Indeed, my favorite JAR rejection, after the Katherine Schipper era, was "The author clearly knows nothing about auditor client management negotiation."  That came after I had co-authored 1 JAR, 2 CAR, 1 AOS and 1 AJPT paper on negotiations.  Guess all those journals had incompetent reviewers, eh??????

Bristol, Leuven et al

While experiments are rarely done in the UK – more's the pity – I presented an experimental paper (most of the work done by doctoral student Yi Luo) at Bristol last week.  Maybe they were overawed by my reputation, but they were very open to learning about the experimental method.

It is a pity that the method used most commonly in the English-speaking world outside of archival markets studies is hardly deployed at all in the UK (note to my interpretivist readers – I am limiting myself with this comment to the parts of the English-speaking world with active research communities: Australia/NZ, Singapore, North America – no slight to field research meant).  I am open to ideas as to how to change that!!!!!

Off to Europe (and to the UK)

Heading out this week to Europe (and, just in case it does not consider itself part of Europe any more, the United Kingdom).  Visiting the University of Bristol, working on some research with a co-author, being part of the faculty at the EARNet (European Auditing Research Network) doctoral consortium, and presenting my research there.  It is a heavy trip, not a boondoggle.  Two presentations, four discussions, an editors' panel and a round table, plus my own research.  So while it might sound glamorous, there is real work involved in preparing and while on site.

However, it will be great to get back to Leuven, which I consider one of the best-kept secrets in accounting research in the world.  A world-class faculty in auditing and management accounting with some excellent doctoral students.  Looking forward to KU Leuven again.

Respect the process

For years it was an article of faith among American researchers that their three journals – The Accounting Review, Journal of Accounting Research and Journal of Accounting and Economics – were the most rigorous, most insightful, most everything journals in accounting.  When you pushed them for evidence, the most common response was: well, look at the SSCI journal reports.  The Social Sciences Citation Index (part of the Web of Science, which is part of the Thomson Reuters empire) was, and in many circles still is, considered the ultimate in quality control for citations.  Year after year that index said the same thing: they were the most cited, with the odd year that Accounting, Organizations and Society would displace one of the three (and it consistently would displace one of the three if one used the five-year citation index instead of the two-year impact factor).

Okay, so I got it then!  But today things have changed: the American Three are still contenders, but year after year other journals are consistently getting more cites than one or two of these three, Management Accounting Research being the top example.  But instead of adjusting the top-three list, or admitting that maybe there are more than three journals that matter, as the Australians and Europeans do, many American academics put the blinders on and say "we only look at citations for journals we consider important and then rank them by citations."

Hmmmmmmm, another example of American exceptionalism at work.  After all, this must be FAKE