Opportunities and Risks of Artificial Intelligence in the Financial Services Industry

Alexander Schultz-Wirth, Partner, Leader Customer Transformation, PwC Switzerland – 05 Nov 2018

The motto of the 5th Swiss International Finance Forum, hosted by NZZ, was «Collaboration – Courage – Trust». In a joint workshop featuring case studies, PwC and UBS addressed the opportunities and risks concerning the use of Artificial Intelligence in the financial industry.

Artificial Intelligence explained

New technologies are developing rapidly. Artificial Intelligence (AI) and blockchain will be key technologies with a significant influence on the financial industry over the next few years. All but a few major banks are experimenting with various machine learning methods and developing new solutions. But are the risks of these technologies sufficiently known?

Artificial Intelligence is defined as the theory and development of computer systems that perform tasks normally requiring human intelligence, such as hearing, speaking, understanding or planning. In AI, algorithms equip machines with cognitive functions, enabling them to perceive their environment and turn inputs into actions.

AI is used in companies in four main ways: as assisted, augmented, automated and autonomous intelligence. Assisted intelligence refers to systems that help humans make decisions or take actions, while augmented intelligence enhances human decision making and continuously learns from its interactions with humans and the environment. Automated intelligence automates existing manual and cognitive tasks that are routine. Autonomous intelligence, in turn, refers to systems that can adapt to different situations and act without human assistance. These different types of AI not only offer opportunities for financial services companies but also need to be addressed differently from a risk perspective.

Understand risks to increase acceptance

After a prolonged period of stagnation in AI, the key driving forces have gained significant momentum in recent years. Today, staggering amounts of data are available for collection and analysis – within the constraints of the respective legal and regulatory frameworks. Enabled by cloud computing, storage capacity has grown, and computer processing power has increased exponentially.

In the financial services industry, all domains and processes may be affected by AI – from customer service and engagement to investment and trading, cyber risk and security, regulatory affairs and compliance, and operations such as recruiting, contract analysis, IT support and infrastructure management. However, the maturity curve has not yet reached its peak, and most areas of AI are still years away from enterprise readiness. The recent hype around emerging technologies such as AI therefore contrasts sharply with today’s business reality.

In order to increase acceptance of this new technology, its risks and implications must be understood, especially in the highly regulated financial services industry. Innovations go hand in hand with new risks. The use of AI in banks entails performance, security and control risks as well as societal, economic and ethical risks. These can translate into both financial and non-financial risks, leading to reputational damage or financial losses. For AI to be employed in financial institutions, a framework of policies, key procedures, controls and minimum enterprise requirements has to be put in place, addressing the risk categories mentioned above. The application of this framework then needs to be calibrated to the criticality of the individual AI use cases.

Get inside the black box

One of the key concerns and barriers to acceptance in the context of AI is the fact that the technology itself – and the results it produces – is not always explainable. The implications are manifold. From a business point of view, AI needs to be able to explain its decisions in specific applications, e.g. in transaction monitoring. Many AI algorithms are beyond human comprehension, and some AI vendors refuse to reveal how their programs work in order to protect their intellectual property. In both cases, when AI makes a decision, its end users will not know how that decision came about.

From the regulator’s perspective, the EU General Data Protection Regulation (GDPR), for instance, provides a «right to explanation». Users and clients can ask for an explanation of an algorithmic decision that was made about them. The Financial Stability Board (FSB) expresses concern that the lack of interpretability or auditability of AI and machine learning methods could become a macro-level risk. If AI-based decisions cause losses to financial intermediaries, there may be a lack of clarity around responsibility.

A look inside the black box of AI demands a degree of interpretability. This encompasses three core requirements: transparency, to understand how an AI model makes its decisions; explainability, to understand the reasoning behind each individual decision; and provability, i.e. the mathematical certainty behind those decisions. While interpretability may be less important for activities such as targeted marketing, it is imperative for services such as AI-driven robo-advice.
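To make these requirements more tangible, the sketch below illustrates one widely used approach to explainability: fitting a transparent surrogate model (a shallow decision tree) to the predictions of a black-box classifier, so that its behaviour can be summarised as human-readable rules. This is a minimal, purely illustrative Python sketch; the data, the feature names (amount, hour, country_risk) and the choice of surrogate technique are our own assumptions, not a description of any specific bank or vendor solution.

    # Illustrative sketch: approximating a black-box model with a
    # transparent surrogate. All data and feature names are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(42)

    # Hypothetical transaction features: amount, hour of day, country risk score
    X = rng.random((1000, 3))
    y = ((X[:, 0] > 0.7) & (X[:, 2] > 0.5)).astype(int)  # toy "suspicious" label

    # The black box: accurate, but hard for humans to interpret
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # The surrogate: a shallow tree trained to mimic the black box's decisions
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Human-readable rules that approximate the black box's behaviour
    print(export_text(surrogate, feature_names=["amount", "hour", "country_risk"]))

A surrogate of this kind only approximates the original model, so in practice its fidelity to the black box would have to be measured before its rules are relied on as an explanation.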

Despite all the risks to be addressed, we believe that the combined power of man and machine is better than either on its own. The financial services industry can benefit from AI along the whole value chain. We therefore recommend embracing the power of AI in a responsible manner.

Key findings

  • Artificial Intelligence has become increasingly important. Staggering amounts of data, refined techniques, growing storage capacity and exponentially increasing computer processing power are the driving forces behind this development.
  • AI will have a significant influence on the financial services industry over the next few years.
  • AI is used in companies in four main ways: assisted, augmented, automated and autonomous intelligence. Depending on the type of use, the risks need to be addressed differently.
  • There is a gap between the hype about emerging technologies and business reality. To foster acceptance of AI, its risks need to be understood and addressed. We differentiate between performance, security and control risks as well as societal, economic and ethical risks.
  • To mitigate such risks, we recommend putting an AI framework and governance in place that covers policies, procedures, controls and minimum enterprise requirements, and that scales with the criticality of the individual use cases.


Contact us

Alexander Schultz-Wirth

Partner, Leader Customer Transformation, PwC Switzerland

Tel: +41 58 792 47 97