Navigating the risks

Harnessing AI and large language models responsibly in business

Philipp Rosenauer
Partner Legal, PwC Switzerland

Fatih Sahin
Director, TLS Artificial Intelligence & Data Lead, PwC Switzerland

The digital revolution is in full swing, with artificial intelligence (AI) and large language models (LLMs) such as ChatGPT standing at the forefront. These sophisticated technologies promise immense potential for businesses, including automation of mundane tasks, provision of personalised customer experiences and enhancement of strategic decision-making. However, along with these exciting opportunities, they also bring new and complex risks that companies must navigate.

One such risk is data privacy violations. LLMs are trained on enormous quantities of public data. While providers such as OpenAI state that they take steps to exclude personal data from their training sets, privacy concerns remain. For instance, an LLM could unintentionally generate outputs that resemble sensitive or personally identifiable information (PII), potentially leading to significant privacy breaches and regulatory repercussions.
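A common first line of defence is to screen generated text before it reaches users. The following sketch is a minimal, purely illustrative Python example: the pattern set and the `screen_output` helper are our own assumptions, and a production system would rely on dedicated PII-detection tooling with far broader coverage rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- real PII detection needs far broader
# coverage (names, addresses, IDs) and typically a dedicated service.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone_ch": re.compile(r"\+41\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?[A-Z0-9]{4}){3,7}\b"),
}

def screen_output(text: str) -> list[str]:
    """Return the PII categories detected in a model's output."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

response = "You can reach Ms Example at jane.doe@example.com or +41 58 792 00 00."
findings = screen_output(response)
if findings:
    print(f"Blocked: output resembles PII ({', '.join(findings)})")
```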

The second risk lies in algorithmic bias. Since LLMs learn from the data they're fed, biases in the input data can lead to skewed outputs. This risks perpetuating harmful stereotypes or unfair practices, which could tarnish a company's reputation, damage its relationships with customers and even expose it to legal liability.

The third key risk involves misuse. LLMs, if not properly controlled, could be exploited to generate harmful, false or misleading content. The impacts of such misuse could extend beyond individual companies to society at large, contributing to the spread of misinformation or harmful narratives.

Mitigating these risks necessitates a strong framework. Companies should establish a responsible AI governance structure built around a multidisciplinary team. This team, ideally composed of data scientists, ethicists and legal experts, should understand the intricacies of the technology, its ethical implications and the associated legal landscape.

In the realm of data privacy, companies should put robust policies and practices in place. They should be transparent about the data being used and take stringent measures to prevent any PII from being incorporated into the models. Investing in AI auditing can help verify compliance with data-use regulations, while techniques like differential privacy can provide an additional layer of privacy protection.
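To make differential privacy concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. The `laplace_count` helper and the parameter values are illustrative assumptions; a real deployment would also track the cumulative privacy budget across queries.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    For a counting query, adding or removing one individual changes the
    result by at most 1, so noise drawn from Laplace(0, 1/epsilon)
    yields epsilon-differential privacy.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: report how many customer records match a filter without
# letting the exact figure identify any individual.
print(laplace_count(true_count=1_203, epsilon=0.5))
```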

To tackle algorithmic bias, businesses should carefully curate their training data, ensuring it's diverse, representative and free from prejudiced patterns. Moreover, companies should look into ‘explainability’ solutions to provide more transparency into the black-box workings of AI systems. By understanding why and how AI makes certain decisions, businesses can better ensure fairness and accountability.
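As one simple illustration of an explainability technique, the sketch below applies permutation importance from scikit-learn to a toy classifier: shuffling each input in turn and measuring how much the model's accuracy degrades reveals which features actually drive its decisions. The synthetic dataset stands in for real decisioning data and is an assumption for demonstration only; LLM-scale systems call for more specialised tooling.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a real decisioning dataset.
X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much accuracy drops -- a model-agnostic view of which inputs matter.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```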

Addressing misuse requires robust policies and continuous monitoring. Users should be educated about appropriate and inappropriate uses of the technology, while detection mechanisms should be in place to identify and promptly respond to any misuse.
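As a simple illustration of such a detection mechanism, the sketch below pairs a phrase-based policy check with audit logging. The policy categories, trigger phrases and `check_prompt` helper are hypothetical; in practice, companies would typically combine a trained classifier or a provider's moderation service with human review.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_misuse_monitor")

# Illustrative policy categories and trigger phrases -- a real deployment
# would use a trained classifier or a moderation service, not a blocklist.
BLOCKED_TOPICS = {
    "malware": ["write ransomware", "keylogger source code"],
    "fraud": ["fake invoice template", "phishing email that looks like"],
}

def check_prompt(prompt: str, user_id: str) -> bool:
    """Return True if the prompt may be processed, False if blocked."""
    lowered = prompt.lower()
    for topic, phrases in BLOCKED_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            logger.warning(
                "Blocked prompt from %s at %s (policy: %s)",
                user_id,
                datetime.now(timezone.utc).isoformat(),
                topic,
            )
            return False
    return True

print(check_prompt("Draft a phishing email that looks like our bank", "user-123"))
```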

To operationalise these mitigation strategies, companies need to:

  • Regularly conduct comprehensive AI risk assessments: Such assessments will help identify potential vulnerabilities and pave the way for strategic risk management plans.
  • Invest in comprehensive AI ethics training: By ensuring employees understand the ethical implications and limitations of AI, companies can foster more responsible use of the technology.
  • Develop and implement transparent AI policies: Clear guidelines on the use of AI technologies can significantly contribute to risk mitigation while building trust among employees, customers and stakeholders.
  • Establish a robust incident response mechanism: This mechanism should be capable of swiftly handling any instances of breach or misuse, minimising damage and providing learnings to prevent future occurrences.
  • Engage actively with external stakeholders: Regular interactions with regulators, non-profit organisations and AI ethics experts can keep businesses updated on the evolving landscape of AI ethics.

Implementing responsible AI practices is a challenging task that requires proactive and continuous effort. However, with a robust governance framework, rigorous policies and diligent action, businesses can maximise the benefits of AI and LLMs while minimising the associated risks.

Artificial intelligence is a complex and rapidly evolving topic. Do you have any questions? We are here to help you.

Talk to our experts!

Contact us

Feel free to contact us if you’d like to talk about the specific challenges you face and how we might help you overcome them.



Philipp Rosenauer

Partner Legal, PwC Switzerland

Tel: +41 58 792 18 56

Matthias Leybold

Partner, Cloud & Digital, PwC Switzerland

Tel: +41 58 792 13 96

Yan Borboën

Partner, Leader Digital Assurance and Cybersecurity & Privacy, PwC Switzerland

Tel: +41 58 792 84 59

Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

Tel: +41 58 792 48 28

Sebastian Ahrens

AI Center of Excellence Leader, PwC Switzerland

Tel: +41 58 792 16 28