Governance is the foundation of responsible AI

If applied in a responsible manner, artificial intelligence (AI) offers vast opportunities. However, these benefits can only come from understandable and ethical AI that your stakeholders can trust. A sound end-to-end governance framework helps ensure that your AI applications and systems reach their full potential.
Trust your artificial intelligence with proper governance

Digital transformation is progressing rapidly as humans and machines collaborate ever more closely. AI applications are no longer a concept of the future but are becoming mainstream, and their staggering potential is gaining more and more attention. This potential, however, goes hand in hand with risks – at the forefront being the issue of trust and explainability. AI models often provide data-driven decisions but not the explanations behind them; the answer to the «why» is missing. As we wrote in blog 1 of our series on responsible AI, «If AI isn’t responsible, it isn’t truly intelligent»: companies must build safe applications that stakeholders can understand and trust in order to truly capitalise on the opportunities of AI.

AI requires people to trust algorithmic recommendations in business, such as the most suitable investment product or the most accurate sales forecast. Autonomous vehicles take this trust even further, with passengers entrusting their lives to a machine. This is a glaring example of the reputational risk of AI, as any malfunction or crash can make headline news. The risk is present in other forms of AI as well, such as customer engagement robots that acquire biases through intervention or even through standard training. It is therefore of the utmost importance to establish a governance framework and governance processes right from the beginning of the development of an AI solution in order to minimise such risks.

Our Responsible AI Toolkit addresses the five key dimensions of AI applications – governance, ethics and regulation, interpretability and explainability, robustness and security, and bias and fairness – and allows companies to deal systematically with the key issues at hand. The key to responsible AI, however, is end-to-end enterprise governance, which serves as the basis for all five of these dimensions. Comprehensive governance provides support at each step of your organisation’s AI implementation, anticipating risks and providing quality controls along the way.

AI governance needs to have answers to critical questions

Historically, governance functions have only had to deal with static processes. But one important characteristic of AI processes is that they are dynamic and adaptive – and thus AI governance must be as well. Such a system requires cohesive strategy planning across the organisation, as well as plans to best utilise existing capabilities within the current vendor ecosystem. In addition, it is imperative to consider model monitoring and compliance throughout the model development process.
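
Model monitoring of this kind can be made concrete as automated threshold checks that run throughout development and operation. The sketch below is purely illustrative – the metric names (`prediction_drift`, `error_rate`) and threshold values are hypothetical assumptions for the example, not part of any specific framework:

```python
from dataclasses import dataclass

@dataclass
class ModelCheck:
    """One governance check on a deployed or in-development model."""
    name: str
    value: float       # measured metric for the current model version
    threshold: float   # maximum value allowed by governance policy

    def passes(self) -> bool:
        return self.value <= self.threshold

def review_model(checks: list[ModelCheck]) -> list[str]:
    """Return the names of checks that breach their governance thresholds."""
    return [c.name for c in checks if not c.passes()]

# Hypothetical metrics for a single model review
checks = [
    ModelCheck("prediction_drift", value=0.02, threshold=0.05),
    ModelCheck("error_rate", value=0.12, threshold=0.10),
]
print(review_model(checks))  # ['error_rate']
```

In practice, a breached check would feed back into the governance process – for example by triggering a review, retraining, or escalation to the accountable owner.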

At its highest level, AI governance should enable an organisation to answer critical questions about the results and the decision-making of AI applications, including:

  • Who is accountable?
  • How does AI align with the business strategy?
  • What processes could be modified to improve the outputs?
  • What controls need to be in place in order to track performance and pinpoint problems?
  • Are the results consistent and reproducible?

The ability to answer such questions and respond to the outcomes of an AI system requires a more flexible and adaptable form of governance than many organisations are currently accustomed to. Whereas governance systems in the past worked well with predictable processes, the continued introduction of AI demands flexibility in governance in order to maintain accountability and clarity for all stakeholders. Beyond this increased adaptability, the governance framework must also span the entire AI lifecycle, capturing every single step and process so that the organisation can react quickly when needed.
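
The question of consistent and reproducible results can be supported technically by fixing random seeds and fingerprinting each run’s outputs in an audit trail. The following minimal Python sketch illustrates the idea only – `run_model` is a hypothetical stand-in for a real training or inference run:

```python
import hashlib
import json
import random

def run_model(seed: int) -> list[float]:
    # Stand-in for a real training/inference run; seeded so it is repeatable.
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(3)]

def audit_record(seed: int, outputs: list[float]) -> dict:
    # Fingerprint the outputs so two runs can be compared in an audit trail.
    digest = hashlib.sha256(json.dumps(outputs).encode()).hexdigest()
    return {"seed": seed, "output_sha256": digest}

first = audit_record(42, run_model(42))
second = audit_record(42, run_model(42))
print(first == second)  # True: same seed, same output fingerprint
```

Stored alongside model version and data lineage, such records let an auditor verify that a reported result can actually be reproduced.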

PwC’s AI governance framework

Governance is mostly about adhering to regulatory requirements and company principles. With regard to AI, however, it is much more than that: it is the core function that enables a company to build AI solutions that are ethical and that customers and employees can trust.

PwC’s enterprise governance framework encompasses your whole AI journey, regardless of the industry you are in or the size of your AI solution:

  • strategy – industry standards, internal policies
  • planning – delivery approach, programme oversight
  • ecosystem – technology roadmap, sourcing, change management
  • development – solution design, data management, model building
  • deployment – integration, execution, evaluation
  • operating and monitoring

With PwC’s framework for AI, you can continue to build trust in your solutions and products by ensuring that there is accountability and intention behind each part of your AI journey.

Governance Framework (source: a practical guide to Responsible AI)

For your own copy of PwC’s practical guide to Responsible AI, please click here

Learn more about PwC’s Responsible AI Toolkit here.


Take our free Responsible AI Diagnostic
