Trust, supported by AI: human-centric auditing

Yan Borboën
Partner, Leader Digital & Trust Assurance and Cybersecurity & Privacy, PwC Switzerland

Morgan Badoud
Director, Digital Assurance & Trust, PwC Switzerland

As generative AI transforms industries, organisations must balance AI-related innovation with compliance and upskill their workforce to adapt to changing demands. By improving audits through summarisation, report generation, and expert assistance, AI can reshape workflows and create new efficiencies. However, its adoption requires robust, responsible AI governance to address ethical concerns and prevent bias. Ultimately, only collaboration between humans and technology will unlock the full potential of generative AI.


The rapid advancement of generative AI (GenAI) is disrupting entire industries, transforming the way we work and gain insights. In practice, however, this technological revolution paints a more nuanced picture of adoption, as the findings of PwC’s 27th Annual Global CEO Survey (Swiss edition) suggest.

While GenAI promises to increase profitability for its users, it is expected to have less of an impact on revenue growth. But pressure on organisations is mounting, with 61% of Swiss leaders anticipating intensified competition due to GenAI. Employees are also affected, as 70% would delegate work to AI to reduce their workload1. The survey shows that 53% of CEOs believe GenAI will significantly reshape their value creation processes, while 43% expect it to improve product quality. But only 16% of respondents have actually applied AI solutions within their organisations, highlighting a significant gap between understanding the potential of GenAI and realising its benefits.

Despite this technological upheaval, auditors need to remain committed to their core mission of delivering trust and value by focusing on what matters most to stakeholders such as clients, boards, regulators, and society: quality, speed, insights, experience and assurance. How can GenAI solutions take auditing into the future?

How to use GenAI in audits

GenAI can significantly enhance audit processes through four main use cases: creation, improvement, summarisation, and Q&A support. It can draft audit memos based on findings, review source code relevant to key audit areas, and create meeting minutes of interviews with auditees. GenAI also aids in reviewing and translating audit reports, summarising key client directives, and preparing executive summaries. Furthermore, it acts as a knowledge base for audit guides and methodologies and assists in comparing financial statements across different years.

However, caution is needed: the role of AI in audits has not yet been formally approved by regulatory and professional bodies. Safeguards must be put in place to ensure that the use of GenAI in audits maintains quality and complies with regulations.

One thing is clear: people remain at the core of all audit processes. The human element – our skills, mindset, values and behaviours – cannot be replaced by technology. While GenAI excels at data processing, pattern recognition, and performing repetitive tasks, humans are needed to understand the business, exercise scepticism and judgement, provide insight, and build relationships. This cooperation between humans and AI is not about one replacing the other, but rather about each complementing the strengths of the other to achieve more.

Embracing GenAI: a call for new skills

Such a partnership – and successful GenAI adoption – requires us to learn new capabilities and adapt our mindsets to embrace the opportunities that AI presents. 63% of Swiss executives say that GenAI will demand new skills from the majority of their workforce over the next three years2.

This upskilling includes specific capabilities such as digital and data literacy, analytical skills, and AI prompting (the practice of giving a GenAI tool precise inputs to guide conversations or extract information). But it’s also about fostering the right mindset among employees: intellectual curiosity, bias detection, agility, entrepreneurship, and – last but not least – empathy.
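To illustrate what precise prompting means in practice, a summarisation prompt can be assembled from explicit building blocks: a role, a task, and constraints, followed by the source text. The function below is a hypothetical sketch for illustration only – the structure, not the wording, is the point, and real audit use would of course require approved tools and safeguards.

```python
def build_summary_prompt(report_text: str, audience: str = "audit committee") -> str:
    """Assemble a structured prompt: role, task, constraints, then the source text.

    Purely illustrative -- not an approved audit workflow.
    """
    return (
        "You are an experienced financial auditor.\n"
        f"Task: summarise the report below for the {audience}.\n"
        "Constraints: maximum 5 bullet points; flag any figures that "
        "could not be verified; do not invent numbers.\n\n"
        f"Report:\n{report_text}"
    )

# A vague request like "summarise this" leaves the model to guess audience,
# length, and tone; the structured version pins all three down.
prompt = build_summary_prompt("Revenue grew 4% year on year ...")
```

The same pattern – role, task, constraints, source – carries over to the other use cases mentioned above, such as drafting memos or preparing executive summaries.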

PwC: from learning hub to AI buddy network

At PwC, we have launched an AI learning hub for our entire workforce, covering AI awareness, ethics, and accountability through formats such as e-learning, videos, and podcasts. We have rolled out AI tools, both internally and externally developed, such as MS365 Copilot and ChatPwC, and integrated specific training in prompt engineering to maximise the effectiveness of these tools.

As a second step, we established the AI Circle to promote AI knowledge and adoption across the firm and with our clients. And finally, as we are convinced that adoption and innovation must come from the leaders, we have developed an AI Buddy Network for our partners. AI buddies are experts who help our partners integrate and use GenAI solutions effectively.

Responsible AI development

Since ‘AI in trust’ requires ‘trust in AI’, it is paramount that all AI use cases are developed responsibly. GenAI in audit will require a responsible AI framework to build trust among all stakeholders. The integration of AI into many work processes is progressing swiftly, and businesses must prioritise cloud adoption and robust data governance while focusing on responsible AI development. That development is, in turn, shaped by EU and Swiss regulations.

The EU AI Act, the first comprehensive legal framework for AI, applies globally to companies whose AI systems are used in the EU or affect EU citizens. It categorises AI systems by risk – from minimal and limited up to high and unacceptable – imposing strict requirements on high-risk systems due to potential safety or rights concerns, and banning unacceptable-risk systems such as social scoring. Providers of high-risk AI must undergo conformity assessments and maintain a quality management system.
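The Act’s tiered logic can be sketched in a few lines of code. The sketch below is a deliberate simplification for illustration – the tier names follow the Act, but the obligation lists are abbreviated examples, not legal advice or a compliance tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

def obligations(tier: RiskTier) -> list[str]:
    """Map a risk tier to a shortened, illustrative list of core obligations."""
    return {
        RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        RiskTier.HIGH: ["conformity assessment", "quality management system",
                        "human oversight", "logging and documentation"],
        RiskTier.LIMITED: ["inform users they are interacting with AI"],
        RiskTier.MINIMAL: [],
    }[tier]

# The higher the tier, the heavier the obligations attached to it.
print(obligations(RiskTier.HIGH))
```

The design point is that obligations scale with risk: a minimal-risk chatbot plugin and a high-risk system affecting safety or fundamental rights face very different burdens under the same law.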

AI governance: trust through frameworks

In Switzerland, the financial regulator FINMA identified challenges in the use of AI in its Risk Monitoring of November 2023. Firstly, it emphasised the importance of clear governance and responsibility, with defined roles and sufficient AI expertise. Secondly, AI models must be robust and reliable, producing accurate results that can be verified. Thirdly, transparency and explainability are essential so that the results of AI are clear to stakeholders. Lastly, non-discrimination must prevent unfair bias. Frameworks such as ISO 42001, the NIST AI Risk Management Framework, and Germany’s AIC4 make it possible to define a proper risk and control environment that addresses legal requirements (such as the EU AI Act or the AI expectations set out in FINMA’s Risk Monitoring 2023). Effective AI governance is critical and includes comprehensive policies, use-case-specific guidelines, and security, legal, and lifecycle-related measures. Ultimately, accountability for decisions lies with the individual, not the AI. It’s important to view GenAI as a tool, balancing its potential with human judgement – and auditors can pave the way for the responsible use of AI.


1 Microsoft WorkLab Work Trend Index

2 Swiss edition of ‘PwC’s 27th Annual Global CEO Survey’
