The application of professional scepticism isn’t something that can be directly observed. The solution lies in a better understanding of psychology and, in particular, cognitive bias
This article was first published in the February 2017 international edition of Accounting and Business magazine.
By the nature of their work, auditors need to be sceptical. No independent auditor can or should accept at face value everything they are told. Professional scepticism – defined in International Auditing Standards as ‘an attitude that includes a questioning mind, being alert to conditions that may indicate a possible misstatement due to error or fraud, and a critical assessment of evidence’ – is at the very foundation of the audit profession.
Yet in recent years instances of regulators questioning auditors’ professional scepticism have increased. Take this comment from the Australian Securities and Investments Commission in 2014: ‘Our reviews of audit files showed, in our view, insufficient professional scepticism was applied, particularly in relation to fair value measurement, impairment testing, and going concern assessments. In particular, we found examples where auditors appeared to have: (a) been over-reliant on, or readily accepted, the explanations and representations of the management of audited entities without challenging matters such as key underlying assumptions; or (b) sought out evidence to corroborate estimates or treatments rather than appropriately challenging them.’
This is not the only example; regulators around the world are arguing that a lack of professional scepticism among auditors is a major issue in audit quality. If auditors were more sceptical, they reason, more misstatements would be uncovered.
But is that really the case? A soon-to-be-released ACCA report looks closely at the issue of professional scepticism, and its conclusions are fascinating.
The problem for the audit profession in trying to defend itself, the report points out, is that professional scepticism is a state of mind that cannot be directly observed. The sceptical state of mind feeds into the judgments that an auditor makes, which drive the auditor’s actions, which the auditor then documents. And audit oversight bodies tend to give a lot of weight to documented evidence – in which scepticism is invisible.
ACCA believes that the solution lies in a better understanding of psychology and, in particular, research into cognitive bias carried out by Amos Tversky and Daniel Kahneman. ‘The literature on cognitive biases is particularly helpful in understanding how all stakeholders in the financial reporting process use information to make decisions in practice,’ says the report. If all parties involved understand how cognitive bias can affect judgment, it adds, all stakeholders can work together to improve audit quality. ‘A system of standards and a regulatory process that does not take account of the psychology literature on human decision-making cannot be as effective as one that does.’
Tversky and Kahneman believed that human decision-making is affected by a number of cognitive biases (see first box), which have developed because they are evolutionarily beneficial: some serve as shortcuts to decision-making (a quick, less accurate decision can be preferable to a slow, more accurate one), some allow us to make a decision when faced with uncertainty, and some smooth social interaction. In other words, we are all human, even auditors. And while some biases can be limited, it is simply not possible to eliminate cognitive bias entirely from the audit process.
The report does discuss whether technology and computer algorithms could reduce cognitive bias in the audit process – for example, by testing all transactions rather than a selection chosen by the auditor. ‘The involvement of computers does not guarantee a lack of bias,’ it concludes. ‘Indeed, it may codify bias. For example, the algorithms used may themselves introduce bias inadvertently due to the way they were coded, or there may be cognitive biases in the ways data is acquired, cleaned and queried, or reports interpreted.’
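The report’s point about codified bias can be made concrete with a hypothetical sketch (the threshold, field names and figures below are invented for illustration, not drawn from the report): a data-cleaning rule written to discard ‘immaterial’ records can systematically exclude exactly the items a sceptical auditor would want to examine, such as a run of payments sitting just under the cut-off.

```python
# Hypothetical illustration of bias codified in a cleaning step:
# a rule intended to drop "immaterial" noise silently excludes a
# pattern of repeated just-under-threshold payments.

MATERIALITY_CUTOFF = 5000  # invented threshold, for illustration only

transactions = [
    {"id": 1, "amount": 4999, "payee": "Vendor A"},
    {"id": 2, "amount": 4999, "payee": "Vendor A"},
    {"id": 3, "amount": 4999, "payee": "Vendor A"},
    {"id": 4, "amount": 12000, "payee": "Vendor B"},
]

# Biased cleaning rule: only "material" items survive to be tested.
cleaned = [t for t in transactions if t["amount"] >= MATERIALITY_CUTOFF]

# The repeated just-under-cutoff payments - a classic red flag for
# split invoices - never reach the audit tests at all.
excluded = [t for t in transactions if t["amount"] < MATERIALITY_CUTOFF]

print(len(cleaned))   # items that will be tested
print(len(excluded))  # suspicious items silently dropped
```

The bias here is not in any individual line of code but in the judgment embedded in the rule – precisely the kind of choice the report suggests can carry an auditor’s cognitive biases into an automated process.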
Instead, the report argues that better awareness of cognitive biases among all stakeholders in the audit process could lead to systems and processes becoming more resilient to them. Auditors need to be aware of the extent to which they may be subject to biases when designing and performing audit tests; standard setters need to make sure that they do not create systems that are susceptible to bias; preparers should aim to prepare reports that are transparent; and audit committees should question auditors throughout the audit to identify where cognitive bias may exist.
There are lessons in cognitive bias for everyone in the reporting supply chain. The report makes recommendations for auditors and standard setters that could integrate a better understanding of cognitive bias into standard setting (see box). It also highlights the concern that if regulators continue to criticise auditors for insufficient professional scepticism when in fact their concerns stem from their own cognitive biases, this may result in auditors being required to undertake procedures that don’t necessarily improve audit quality.
Liz Fisher, journalist
"A regulatory process that does not take account of the psychology of decision-making cannot be as effective as one that does"