In our series of Smart Finance articles, EY’s Manik Bhandari describes the need for CFOs to become risk managers for artificial intelligence across their organisations
This article was first published in the June 2019 Singapore edition of Accounting and Business magazine.
Governments and businesses are looking to harness data analytics technologies to make more accurate and effective decisions across the entire organisation. At the 2018 ACCA and EY Smart Finance conference, CFOs discussed how real-time and accurate insights are enabling them to become more effective business partners.
While CFOs are better informed today, their time is still monopolised by reviewing and approving routine decisions. Applying artificial intelligence (AI) technologies to data analytics systems could relieve humans of some of these decision-making duties. But implementing autonomous decision-making carries risks as well as opportunities.
Unlike other technologies, AI mimics the learning function of the human brain, meaning it can be deliberately or accidentally corrupted and can even adopt human biases, potentially resulting in mistakes and unethical decisions. The public is increasingly concerned about AI systems, and control over them, falling into the wrong hands, particularly after the recent cyber attacks on Singapore’s public health database systems. Any AI system failure could have profound ramifications for security, decision-making and credibility, and may lead to costly litigation, reputational damage, customer revolt, reduced profitability and regulatory scrutiny.
In view of the potential impact of AI, the Singapore government’s Personal Data Protection Commission has published A Proposed Model Artificial Intelligence Governance Framework. The framework is guided by two principles: the AI decision-making process must be explainable, transparent and fair; and the AI solution should protect the interests of human beings.
Creating a framework for using AI and managing its risks may sound complicated, but it is similar to the controls, policies and processes already used for humans. We already evaluate human behaviour against a set of norms; the challenge now is to design and deploy AI solutions that are aligned with the nation’s laws, the organisation’s corporate values, and societal and ethical norms.
This will impact the very strategic purpose of the system, the integrity of data collection and management, the governance of model training, and the rigour of techniques used to monitor system and algorithmic performance. While all technologies require supervision, the dynamic and learning nature of AI means that its behaviour will continue to evolve even after it has been implemented, demanding a new level of agility and vigilance in its governance.
This responsibility to implement checks and balances in an AI system will require the active oversight and input of the entire leadership, including the CFO. Based on our observations of clients, leading CFOs are setting up multidisciplinary advisory boards to provide independent guidance on the ethical considerations in AI development; proposing governance and accountability mechanisms in the AI code of conduct; and rolling out regular, independent audits of AI ethics, design and risk to test and validate these systems.
As organisations continue on their intelligence transformation journey, the CFO’s role will evolve from adopter of AI technology to risk manager of it. In the age of AI, CFOs will need to ask their business: what is intelligence without trust?
Manik Bhandari is Asean partner – data and analytics leader at EY.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organisation or its member firms.