How can we control the spirits we summoned?

The use of artificial intelligence (AI) walks a fine line between value creation and responsibility.

Ralf Hofstetter
Director for Trust & Transparency Solutions
PwC Switzerland

Since becoming widely available in November 2022, OpenAI’s AI-based dialogue system ChatGPT has repeatedly made headlines – both positive and negative. Although the language software is equipped with protective mechanisms, it still regularly produces misinformation and discriminatory or formulaic answers.

ChatGPT tests the limits of technology

Just because machine-learning applications often far surpass human capabilities doesn’t mean they’re infallible. Their failures stem neither from limited technological potential nor from any supposed propagandistic intent on the part of the OpenAI research team. Rather, ChatGPT is still in an early phase of development – one in which users’ enormously high expectations collide with the limits of the technology, and the consequences are highly publicised.

AI is only as strong as its data basis

The recognition rate, and thus the added value, of AI language software such as ChatGPT depends largely on the quantity and quality of its training data and training processes. Errors or quality deficiencies in these data, as well as unavoidable generalisations, can affect the decision-making behaviour of the application and cause it to draw erroneous conclusions. In other words: machine-learning AI is only as good as the data it has been – and will be – trained with. ChatGPT draws on a pool of data going back to 2011, so that’s ‘only’ 11 years of knowledge and experience in data form.
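
To make this dependence concrete, here’s a minimal sketch in Python (not from the article; the dataset, model and noise rates are illustrative assumptions) that trains the same classifier on progressively more error-ridden labels and measures how its conclusions degrade on clean test data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a real training corpus (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_rate):
    # Flip a fraction of the training labels to mimic errors or
    # quality deficiencies in the data basis.
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    # Evaluate against clean test data: how erroneous are the conclusions?
    return accuracy_score(y_test, model.predict(X_test))

for rate in (0.0, 0.1, 0.3, 0.45):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.3f}")

As the share of flipped labels grows, test accuracy typically falls accordingly – the software faithfully learns the deficiencies in its data basis.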

Regulation lagging behind progress

It’s understandable that calls for clear rules on the implementation of AI are growing louder. There’s a lot going on in this area at the moment: legislative bodies have already intervened in the digital marketplace with laws such as the EU General Data Protection Regulation (GDPR) and the revised Swiss Data Protection Act (nDSG), and further government and private initiatives as well as various seals of approval for AI applications are under development. With the ‘Proposal for a regulation laying down harmonised rules on artificial intelligence’, the EU wants to strengthen confidence in AI. And the Algo.Rules give developers a practical orientation aid.

Companies must take digital responsibility

Despite these (self-)regulatory initiatives, companies still have a duty of their own. After all, anyone who creates digital value must make sure that it serves the public interest. But economic efficiency and ethics often pull in opposite directions. For example, respect for privacy in the handling of personal data can diminish commercial success. Automated recruitment processes can affect the diversity of the workforce. And the use of a chatbot can disappoint customers who would rather interact with other humans.

Between economic efficiency and ethics

Digital ethical solutions are neither black nor white, but rather a fragile balance between business value creation and responsible entrepreneurship. To maintain this balance, companies must align their actions with digital ethical values and develop an in-house code of values. Only then can they put their capacity for technological innovation at the service of people and commit to using technologies and data in a secure, sovereign, traceable and responsible manner.

A digital ethics strategy emerges from a company’s vision. It’s based on the business strategy and the corporate core values – beginning with integrity. With such an integrated strategy, a company can make sure that people, processes, products and technologies are operating within the boundaries of its digital ethics at every stage of the cycle. A modular framework facilitates the strategy process (cf. Figure 1), as the areas of responsibility can be adapted to the company-specific context. 

Figure 1: Modular digital ethics can be integrated into corporate strategy and business processes in a targeted manner. Source: PwC Germany

Digital ethics audit benefits businesses 

While AI applications aren’t subject to product liability, they’re still subject to accountability. This is why it’s important for a company’s own digital ethics guidelines to stand up to scrutiny at all times. This is where the audit comes in. An audit team can examine the governance of the company and make sure that procedural and technological processes comply with the defined ethical principles, are implemented correctly and are disclosed in a comprehensible manner. With an audit accompanying development, it can ensure that an innovative digital product fits the ethical strategy. Finally, auditors can monitor algorithmic decisions and thus keep their susceptibility to error in check.
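
As an illustration of this last point – a hypothetical sketch, not PwC’s audit methodology – such monitoring could, for example, compare a model’s error rate across groups of affected people and flag conspicuous differences for review:

import numpy as np

def group_error_rates(y_true, y_pred, groups):
    # Error rate of the model's decisions, broken down per group label.
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical audit sample: true outcomes, model decisions, group tags.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for group, err in group_error_rates(y_true, y_pred, groups).items():
    flag = "  <- review" if err > 0.25 else ""
    print(f"group {group}: error rate {err:.2f}{flag}")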

Shaping digitalisation with trust

The ethical trustworthiness of self-learning AI begins with its development and data-based training. It’s therefore all the more crucial that a company defines digital ethical principles beforehand and bases its innovation process on them. With a credible commitment to digital ethical guidelines, confirmed by an audit team, a company helps shape the digital transformation responsibly and in the public interest. It prevents reputational damage and makes a significant contribution to the lasting trust of its stakeholders and society in value-adding innovations like ChatGPT.

Trust & Transparency Solutions 

PwC helps build transparency and trust to meet compliance requirements, stay competitive and enable long-term growth.

Contact us

Ralf Hofstetter

Partner, Sustainability Assurance, PwC Switzerland

Tel: +41 58 792 56 25

Cristian Manganiello

Partner, Digital Assurance & Trust, PwC Switzerland

Tel: +41 58 792 56 68