
This article was first published in the May 2020 China edition of
Accounting and Business magazine.

Artificial Intelligence (AI) has the potential to change our lives beyond recognition. The capacity for machines to learn through exposure to examples and large sets of data could bring breakthroughs in medicine, reduce the cost of essential services and transform policing. The potential benefits for business, and the accountancy profession, are equally revolutionary – AI is already being used to rapidly and accurately complete routine tasks and spot discrepancies in data and transactions.

But AI brings its own challenges, not least the opacity of its output. The addition of a cognitive layer to automation brings insight, but it is not always clear how an algorithm has reached its conclusions. AI is often described as a ‘black box’ because the complexity, speed and volume of AI decision-making obscure the process; to gain insight into the factors that influence an algorithm’s output, we need to shine a light on its inner workings in a way that humans can understand.

This challenge is at the centre of a new report from ACCA, Explainable AI: Putting the user at the core. It points out that, until now, the focus has been on refining the quality of AI outputs by developing ever more complex algorithms, rather than on explaining how the answer was reached. But as AI matures, the report says, explainability is becoming increasingly important ‘both for decision-making within a business, and post-fact audit of decisions made’. Auditable algorithms are those that are explainable; explainability, in other words, is a checks-and-balances mechanism for AI.

Explainability vs accuracy

The challenge is that there is a clear trade-off between the accuracy of AI models and their explainability. The most accurate algorithms tend to be the most complex, and this complicates explainability. But there are also specific challenges for the accountancy profession because AI is not fully autonomous – it is being used to augment, rather than replace, the human role. The opacity of AI means that professional accountants are less able to trust the technology and to be confident that it is being used ethically. Explainable AI helps to improve the understanding of AI and manage unrealistic expectations around the technology, and provides a level of comfort and clarity to those harbouring doubt.

A central problem is that explainability – or a lack of it – affects the ability of professional accountants to display scepticism; in a recent survey of members of ACCA and the IMA (Institute of Management Accountants), 54% agreed with this statement. ‘Professional accountants frequently refer to the idea of scepticism as a north star to guide their ability to deliver for their organisations,’ says the report. ‘Scepticism involves the ability to ask the right questions, to interrogate the responses, to delve deeper into particular areas if needed and to apply judgment in deciding if you are satisfied with the information as presented.’

Over-fitting issue

The report uses an example to illustrate how explainability might improve finance professionals’ understanding of the limitations of AI, and the quality of AI itself: a machine learning model for identifying suspicious transactions that need further investigation. ‘Over-fitting’ occurs when a model produces good results with the historical data used to train the algorithm but struggles with wider data sets. In this case, the model observed during the training phase that a high proportion of suspicious transactions occurred outside normal office hours.

As a result, the model attached a higher weight to the time-stamp of the transaction as a predictor of suspicious activity. When the model was applied more widely across all transactions, though, most of the out-of-hours transactions flagged by the algorithm turned out to be legitimate. Closer examination revealed that the training data comprised transactions handled by the organisation’s core, full-time staff, while the wider trial involved all staff, many of whom were shift employees working outside normal office hours. The algorithm was not putting the time-stamp in its proper context, thereby producing a large number of false positive results.

This is of course a highly simplified illustration, but analysing the time-stamp of the transaction relative to the contractual hours of the worker inputting the data would have produced a far more accurate result. As the report points out, rather than only being presented with data showing which transactions were suspicious, an explainable approach would highlight the components affecting each prediction and help the user to spot that time-stamps were over-represented in flagged transactions. ‘In the noise, volume and complexity of scaling a model with hundreds of features, details get lost or misinterpreted,’ says the report, ‘and finding the reasons might feel like looking for a needle in a haystack.’
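None of the code below comes from the report; it is a rough, hypothetical Python sketch of the mechanics, built on entirely synthetic data with invented feature names and proportions. It trains a simple scikit-learn model on core-staff transactions only, shows the flood of false positives once shift workers enter the picture, and then prints the model’s weights – the kind of simple, feature-level view that would have exposed the over-weighted out-of-hours flag.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_transactions(n, shift_worker_share):
    """Synthesise n transactions with two features: a scaled amount and an
    out-of-hours flag. In this toy set-up every suspicious transaction
    happens out of hours, and shift workers legitimately work out of hours too."""
    amount = rng.normal(0.0, 1.0, n)
    shift_worker = rng.random(n) < shift_worker_share
    suspicious = rng.random(n) < 0.05                  # ~5% truly suspicious
    out_of_hours = suspicious | shift_worker
    features = np.column_stack([amount, out_of_hours.astype(float)])
    return features, suspicious.astype(int)

# Training data: core full-time staff only, so the out-of-hours flag looks
# like a near-perfect predictor of suspicion.
X_train, y_train = make_transactions(5_000, shift_worker_share=0.0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Wider roll-out: 30% shift workers, whose out-of-hours activity is legitimate.
X_all, y_all = make_transactions(5_000, shift_worker_share=0.30)
flagged = model.predict(X_all) == 1
false_positives = (flagged & (y_all == 0)).sum()
print(f"flagged {flagged.sum()} transactions; {false_positives} are false positives")

# A minimal explainability check: the weight on the out-of-hours flag dwarfs
# the weight on the amount, pointing straight at the over-weighted time-stamp.
for name, weight in zip(["amount", "out_of_hours"], model.coef_[0]):
    print(f"{name:>13}: weight {weight:+.2f}")

Even this crude read-out of the weights is a form of explainability: it tells the user which inputs the model leans on, which is exactly the question a sceptical reviewer would want answered before trusting the flagged transactions.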

The report makes a number of recommendations, including the importance of embedding explainability into enterprise adoption and the need to stay aware of evolving trends in AI.

Policymakers, it adds, should emphasise explainability as a design principle in product development. ‘There is the opportunity here for a virtuous cycle – one where explainable AI improves sales for the developer, value for the user and compliance for the regulator,’ it says.

As AI enters the mainstream, the report concludes, governance, risk and control mechanisms become even more important. ‘Human responsibility doesn’t go away, but explainability tools will be the support mechanism to augment the ability of professional accountants to act ethically.’

Liz Fisher, journalist