This article was first published in the November/December 2019 International edition of
Accounting and Business magazine.

The processing of personal data ought to be lawful, fair and transparent and should meet the reasonable expectations of the individuals concerned. This ethical principle is not only behind the European Union’s General Data Protection Regulation (GDPR), which became law last year, but is also a guide to robust ethical corporate behaviour generally.

Organisations that come within GDPR’s jurisdiction need to be able to demonstrate they have a lawful basis for processing data – for example, obtaining the consent of the individual, fulfilling the terms of a contract or pursuing the legitimate interests of the organisation. They also need to be able to explain in clear terms what the processing involves, including the logic behind any automated decision-making. Where a processing activity presents a high risk to individuals or society, a data protection impact assessment should be carried out and appropriate measures put in place to mitigate the risk.

As a rule of thumb, quite apart from any legislative requirements, data should be sourced and shared responsibly. If organisations are unaware of the provenance of data and unsure whether data is properly protected when shared with third parties, the risk of data breaches rises.

Global reverberations

In the wake of GDPR, the ethical use of data (or ‘digital ethics’ as it is known) has become a global public policy topic. Tim Cook, CEO of US tech company Apple, gave a keynote speech at an international conference of data protection regulators in Brussels last year on the topic, during which he said: ‘At every stage of the creative process, then and now, we engage in an open, honest and robust ethical debate about the products we make and the impact they will have. That’s just a part of our culture. We don’t do it because we have to. We do it because we ought to.’

Cook’s speech was widely reported at the time and pointed to an aspect of ethical data use that has caught the attention of regulators: just because you can process data, even in creative and innovative ways, that does not mean that you should. Elizabeth Denham, the UK Information Commissioner, referred to this point in a speech last year. ‘Fairness is where we crash against the questions of should we do this, rather than can we do this. Fairness is a legal principle in the GDPR, and we cannot expect it to be silent against should-type questions.’ She added, ‘Every time we make a ruling on the fairness principle, we reduce the real estate available for those who operate in the ethically questionable but legally acceptable realm.’

If the way that data is processed is considered ethically questionable, results in unfair outcomes for individuals, or has an adverse effect on society, then it may infringe GDPR’s fairness principle. And while notions of fairness can be difficult to frame, guidance is emerging. The European Commission issued a set of ethics guidelines for trustworthy artificial intelligence earlier this year. Key elements of the guidelines include transparency (processing decisions should be explainable), diversity and non-discrimination (unfair bias must be avoided), and accountability (design processes should be assessed and auditable).

The Council of Europe also published guidelines on artificial intelligence and data protection this year. These advise the adoption of a values-based approach when products and services are being designed, and recommend that applications give users meaningful control over data processing.

Where to start

So what steps can organisations across the world take to embed an ethical approach to data processing? Their activities should be reconciled with wider issues of corporate responsibility, and the approach taken should reflect the values of the business, which could include customer outcomes and the wellbeing of society. Many organisations publish statements on human rights, environmental protection and ethical supply chains. Some may wish to consider whether ethical data use and data supply chains belong in the same category.

On a more practical level, organisations operating under the requirements of GDPR should carry out a data protection impact assessment for high-risk data processing, which could include supplementary questions on outcomes for customers and society. The data protection officer (a mandatory appointment for public bodies and for certain types of data processing activity) could offer advice on ethical processing, as could a stakeholder group set up to provide direction on the organisation’s approach.

Finally, measures of success should be put in place to track the outcomes of the approach taken. For example, an organisation could record the number of complaints relating to the fair use of data, or note whether it has attracted adverse regulatory or media attention over its data use.

There is no one-size-fits-all approach to embedding data ethics, but the steps outlined here should help to address the ethical issues that the data-driven world continues to raise.

John Bowman is a senior principal at IBM consultancy Promontory Financial Group, working in the privacy and data protection practice.