Partner, Financial Services Markets Leader, PwC Switzerland
The amount of data processed is growing exponentially. While the laws and regulations have largely adapted to cover how data must be handled, the ethical question of how it should be handled is coming into focus. More and more governments, regulators and customers are seeking transparency and good practice in terms of data ethics.
Between 2010 and 2020, the amount of data processed on a global scale increased from 1.2 trillion gigabytes to a staggering 59 trillion gigabytes (Forbes, 2021), a growth of close to 5,000%. According to predictions, this trend will continue in the coming years, given that every business is creating, capturing, copying or consuming increasing amounts of data in some shape or form.
Naturally, there are a number of things to consider when treating, handling and processing data. A wave of new and revised data privacy laws and regulations swept across the globe in the late 2010s, forcing companies that handle data to revisit this subject and improve their governance and standards. Regulations like the EU GDPR introduced a set of principles and rules governing how personal data must be processed. They required companies to develop and maintain an understanding of which personal data they collect as well as how they store and use it.
But it’s important to understand that the scope of such privacy laws covers personal data only; the processing of non-personal data is barely regulated. And even within the regulatory boundaries of privacy laws, there’s a certain amount of leeway in how data can be treated. Both types of data have one thing in common: while the law and regulations define how (personal or non-personal) data can be treated, the question arises as to how such data should be treated. This is where data ethics comes into play.
In essence, data ethics poses the following question: from a moral perspective, is what we do with the data we have right? This central question affects almost everyone within a firm, but is of particular importance to fields like data science and data analytics. It also goes beyond the processing of data by people, and includes outcomes generated by automated processes and those where artificial intelligence (AI) plays a role.
The topic of data ethics has gained traction throughout all industries in recent years, as data on individuals, automated decision-making and data analysis have become a feature of everyday life. More and more governments and regulators have created task forces, committees or initiatives to investigate this topic, including Switzerland and the EU Commission.
“The aim of data ethics is to promote responsible and sustainable use of data for the benefit of people and society and ensure that knowledge obtained through data is not used against the legitimate interests of an individual or group while identifying and promoting standards, values and responsibilities that allow us to judge whether decisions or actions are appropriate, ‘right’ or ‘good’.” (Central Digital & Data Office UK, 2020)
To make sure data is treated ethically, it’s important to formulate guiding principles that establish a solid foundation on which policies, governance, culture and behavioural standards can be developed. Unlike the GDPR, which lists a number of principles that must be adhered to, there are currently no comparable requirements regarding data ethics. But a number of stakeholders, such as scholars, private institutions and professional services firms, have formulated their own sets of principles they consider suitable.
For instance, the Central Digital & Data Office of the UK has defined a framework based on three overarching principles: transparency, accountability and fairness. These principles are expected to underpin all the actions and aspects of a data processor. While this framework has been developed specifically for use by the public sector, it certainly provides suitable guidance for the private sector too and is well worth looking into.
The UK’s Central Digital & Data Office defines AI as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence. AI is constantly evolving, but generally it (a) involves machines using statistics to find patterns in large amounts of data, and (b) enables repetitive tasks with data to be performed without constant human guidance.
The principle of transparency requires you to make your actions, processes and data available for inspection, by publicly sharing the corresponding information in a complete, open, understandable, easily accessible and free format.
How not to do it! An existing female client of a global insurance company would like to add a new insurance policy to cover her new car. She submits the corresponding forms, but later learns that the insurance request has been declined. After inquiring about the reason for the rejection, she isn’t given any further information or an explanation of how the decision was made.
Finally, there’s the principle of fairness. The main goal here is to make sure there’s no unintended discriminatory effect on any particular individual or social group. For instance, bias which may influence a model’s outcome should be eliminated, to make sure that the outcomes respect the dignity of all individuals, are non-discriminatory and are consistent with public interests, including human rights and social values.
How not to do it! Two male friends move to the same town in Switzerland. They both want to start their own, separate businesses and independently apply for a loan at the same bank. Both friends share identical backgrounds in terms of financial standing and demographics, and are seeking loans of equal value to start their ventures. The only difference between the applications is the religion of the two applicants. To their surprise, they receive different interest rates from the same client advisor of the bank.
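A common first step towards operationalising the fairness principle is to measure whether outcomes differ across groups. The sketch below is a minimal illustration, not an established PwC or regulatory tool: the decision records are invented, and the 0.8 threshold is the widely cited “four-fifths” rule of thumb for disparate impact.

```python
# Hypothetical loan decisions; group labels and outcomes are invented for illustration.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Return the approval rate per group."""
    totals, approved = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
print(disparate_impact(rates))  # 0.5 here, well below the common 0.8 rule of thumb
```

A check like this doesn’t prove discrimination, but a low ratio is a signal that a decision process, whether human or automated, deserves closer scrutiny.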
Accountability seeks to ensure that effective governance and oversight mechanisms have been implemented. It’s intended to make sure that the company’s activities are in line with the stated objectives and meet the interests of the stakeholders concerned. In the event of errors or wrongdoing, responsibility can be clearly assigned to a person or area within the firm.
How not to do it! An employee of an up-and-coming fintech company is able to access and download a large amount of client data to a hard drive. Once the fintech company learns about the data theft, it isn’t able to determine who is responsible for it.
While there are certain aspects of laws like the Data Protection Act (Datenschutzgesetz, DSG) that provide some guidance with regard to data ethics, there’s no regulation or law in Switzerland dedicated specifically to this area.
In September 2019, a digital summit took place in Geneva with many high-profile representatives from the public and private sector, including former Swiss President Ueli Maurer and CEOs from companies like UBS, Credit Suisse, Swisscom and Migros. This summit formed the basis for the Swiss Digital Initiative (SDI), which is led by the SDI foundation.
The SDI foundation, formed under Swiss private law, defines its purpose as an effort to establish a long-term, sustainable process to safeguard ethical standards in the digital world, by bringing together academia, government, civil society and business to find solutions to strengthen trust in digital technologies and in the actors involved in ongoing digital transformation.
Much like the UK Data & Digital Office, the SDI has defined a number of key principles, which are: inclusiveness, awareness, transparency, agility and flexibility, responsiveness, sustainability, benevolence and accountability. In addition, the SDI together with its partners has developed a Digital Trust label.
The Digital Trust label denotes the trustworthiness of a digital service in clear, visual and plain, non-technical language for everyone to understand, and is intended to provide consumers with assurance about the digital service they consume. The label is granted after evaluating 35 attributes, split into four categories.
Germany established a ‘Data Ethics Commission’ in 2018, which is tasked with obtaining an understanding of the current situation with regard to data ethics and formulating central questions on the topic. The Commission has reviewed and discussed the subject in detail and in October 2019 it submitted a report containing the key questions and recommendations to the German Federal Council.
Similar to Switzerland, the EU has no dedicated data ethics regulation or law in force today. But in 2021, the European Commission published a draft for a law on artificial intelligence, aiming to set the right conditions for trustworthy AI. While this draft contains several legal and technical aspects, it also considers the ethical perspective by requiring companies to follow ethical principles. The legislative text specifically mentions the following topics: respect for human autonomy, prevention of harm, fairness and explicability.
In the current draft, non-adherence to the new law will be subject to fines of up to EUR 30m or 6% of global annual turnover, whichever is higher. So it goes even further than the EU GDPR does and, if sanctioned, a company may face severe financial penalties. Similar to the EU GDPR, it’s expected that the new law on artificial intelligence will be subject to extra-territorial application, which means it will apply to Swiss companies as well under certain circumstances.
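The “whichever is higher” rule from the draft can be made concrete with a quick calculation; the turnover figures below are invented for illustration only.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine under the draft: EUR 30m or 6% of
    global annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# For a company with EUR 1bn turnover, the 6% rule dominates:
print(max_fine_eur(1_000_000_000))  # 60000000.0
# For a company with EUR 100m turnover, the EUR 30m floor applies:
print(max_fine_eur(100_000_000))    # 30000000
```

In other words, for any firm with more than EUR 500m in global annual turnover, the percentage-based cap exceeds the fixed EUR 30m amount.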
When it comes to data ethics, there are a couple of aspects that should be considered. Below we’ll introduce a selection of potential considerations, including ideas on how they may be addressed.
It may very well be the case that data is being used in an unethical manner without the company or the responsible person realising it. For instance, decision-making algorithms developed on the basis of a readily available or even out-of-date dataset may not properly reflect the company’s client population, or the wider population as a whole. This harbours the risk that the model consistently returns decisions with an unintentional bias towards certain clients, prospects or groups.
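One practical way to surface this risk is to compare the composition of the training data against a reference population before a model is built. The sketch below is a simplified illustration: the age bands, shares and 10-point tolerance are all invented assumptions, not figures from the article.

```python
# Hypothetical age-band shares; both distributions are invented for illustration.
training_share = {"18-30": 0.10, "31-50": 0.70, "51+": 0.20}
population_share = {"18-30": 0.25, "31-50": 0.45, "51+": 0.30}

def representation_gaps(sample, population, tolerance=0.10):
    """Flag segments whose share in the training data deviates from the
    reference population by more than `tolerance` (absolute difference)."""
    return {
        seg: round(sample[seg] - population[seg], 2)
        for seg in population
        if abs(sample[seg] - population[seg]) > tolerance
    }

print(representation_gaps(training_share, population_share))
# {'18-30': -0.15, '31-50': 0.25} — two bands exceed the 10-point tolerance
```

Checks like this don’t remove bias by themselves, but they make under- and over-represented segments visible early, before a skewed model reaches production.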
Companies store and process a huge amount of data. In certain cases, a company may not maintain a clear understanding of all the data that’s being collected and stored and how it is used. Or it may not have implemented proper governance and safeguards. In either case, data may be used in an unethical manner without the company realising it.
Unlike a mathematical problem, an ethical problem has no single ‘correct’ answer. There’s always a certain amount of leeway in how an ethical problem can be answered, which makes it difficult to deal with, so governance, oversight and the underlying culture of an organisation are essential when making decisions.
Ethical standpoints can vary widely across geographical areas. As historical and cultural aspects have a considerable effect on ethical views, a decision may be deemed ‘ethical’ in one geographical area but considered ‘unethical’ in another. Dealing with such cultural differences in how data may be used or how models are developed therefore relies on overarching principles and standards established by the organisation, which take precedence over local and regional ‘norms’.
Unethical handling of data may also be caused by a lack of the necessary specialist knowledge, such as legal and regulatory expertise. Due to the high complexity and the very specific nature of the matter, external and independent advice may be necessary in order to obtain market insights and conduct benchmarking, in addition to ensuring adherence to all relevant requirements.
Having briefly looked at a few selected considerations, the obvious question to ask would be: how can these challenges be addressed?
In order to determine a corresponding strategy and roadmap to mitigate the associated risks, there are a number of related fundamental questions that may act as a starting point:
While all the above questions are highly relevant, the last one is often given less attention than it deserves! Inclusion of data ethics within the code of conduct reflects the company’s values and demonstrates that this topic is a priority.
The question of how data should be handled ethically is becoming increasingly important for customers as well as regulators. While there’s some guidance on the subject, it remains a complex one that’s still in its infancy when it comes to legislative texts and industry best practices. It’s crucial, though, that you closely track the latest developments and launch your own initiatives to tackle the topic – given that a regulation is being drafted in this area and all signs point to severe fines. We believe it’s worth going one step further and using the ethical treatment of data as a competitive advantage!
We also believe it’s essential to partner with a professional firm that can bring dedicated experts to the table, contribute the relevant expertise and is familiar with the latest developments, the industry and the market. Considering our extensive experience in regulatory change and our dominant role in the Swiss market, we’re confident that PwC will be the right partner for all your data ethics needs.
Mark Hussey
Director, Blockchain, DLT and Token Business Advisory Lead, PwC Switzerland
Tel: +41 79 549 0759