How can you potentially detect if an AI system has been trained on a biased or incomplete dataset?

Whenever an organisation bases its decisions on the output of a machine learning model, it is essential to understand the model's underlying decision processes. An organisation that cannot explain its actions loses the trust of the public and its employees, and in certain cases may even run into regulatory issues.

The following fictional case, based on true data, demonstrates the potential consequences of machine learning models trained on biased or incomplete datasets.

Investigating the gender gap in salary setting

The University of Demonstration Purposes (UDP) decided to make use of its existing data to automate salary estimation, with the aim of making the process more streamlined, systematic and fair. For this purpose, the HR department of UDP implemented a machine learning solution that uses historical data to automate the calculation of salary raises for employees.

For the past three years, both Anna and John have been working as assistant professors in the Department of Medical Robotics at UDP. Besides giving lectures, their daily work mainly consists of working in the laboratory and writing papers. Anna and John are good friends and have worked on many projects together, for which their supervisor has praised them both for their excellent work. In general, the profiles of Anna and John are almost identical: both started working at the university at almost the same time, and both have more or less the same publication rate.

Recently, both Anna and John got a raise for their excellent work. When they happily shared the details of their raises, Anna was surprised to find out that John got a 10% higher raise than she did, even though they have almost identical qualifications and have been performing equally well over the past few years. Although Anna was happy for John, she wanted to find out why she got a lower raise, so she contacted the university's HR department and asked for an explanation. The HR department took her request seriously and launched an investigation to understand the underlying decision processes of the algorithm they had in place.

Why did John get a higher raise than Anna?

PwC was hired to provide an independent third-party opinion of the university's model automating the calculation of salary increases. For this assignment, PwC used its in-house developed "Machine Learning Black Box Illuminator" to explain the decision process of the university's HR solution that Anna questioned.

During the assignment, PwC used the Machine Learning Black Box Illuminator to inspect the model using various advanced¹ interpretability methods, and was able to reveal the underlying forces governing the model's decisions. PwC noticed that gender had played an essential role in deciding Anna's pay raise compared to John's.

The chart above shows how the Machine Learning Black Box Illuminator reveals the weight the algorithm gave to each variable when estimating the raises for Anna and John. Positive values indicate that a feature pushes towards a higher raise, whereas negative values push towards a lower one. In this case it became clear that gender plays an essential role: in Anna's case, gender has a negative impact on her raise, while it has a slightly positive impact on John's.

Presented with the results, the university took immediate action to improve and re-train the model to ensure that gender doesn't impact salary increases. The university went one step further to re-establish trust among its employees by engaging PwC to verify the model on a regular basis.
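
For readers curious what such a per-prediction attribution looks like in practice, below is a minimal, purely illustrative sketch. The Black Box Illuminator itself is proprietary, so the sketch instead uses the open-source SHAP library mentioned in the footnote, scikit-learn, a synthetic dataset with a deliberately biased gender signal, and hypothetical feature names; it is not the university's actual model or data.

# Purely illustrative: synthetic HR data with a deliberate gender bias baked in,
# a simple regressor standing in for the salary-raise model, and SHAP to attribute
# each prediction to individual features. None of this is the university's real setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features; "gender" is encoded as 0 = female, 1 = male.
X = pd.DataFrame({
    "years_at_university": rng.integers(1, 15, n),
    "publications_per_year": rng.poisson(3, n),
    "teaching_hours": rng.integers(50, 300, n),
    "gender": rng.integers(0, 2, n),
})

# The "historical" raises reward experience, publications and teaching,
# but also carry a biased bonus for one gender.
raise_pct = (
    0.5 * X["years_at_university"]
    + 0.8 * X["publications_per_year"]
    + 0.01 * X["teaching_hours"]
    + 1.5 * X["gender"]                     # the biased signal the model will learn
    + rng.normal(0, 0.5, n)
)

model = GradientBoostingRegressor().fit(X, raise_pct)

# Two near-identical profiles that differ only in gender.
employees = pd.DataFrame(
    [[3, 4, 120, 0],
     [3, 4, 120, 1]],
    columns=X.columns,
    index=["Anna", "John"],
)

# TreeExplainer decomposes each prediction into one additive contribution per feature.
explainer = shap.TreeExplainer(model)
contributions = pd.DataFrame(
    explainer.shap_values(employees),
    columns=X.columns,
    index=employees.index,
)
print(contributions.round(2))

On this synthetic data, the printed contributions should show a clearly negative value in the "gender" column for the first profile and a positive one for the second, which is exactly the kind of pattern the chart in the case study makes visible.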

Summary

  • If AI is to gain people’s trust, organisations should make sure they can account for the decisions that AI makes, and explain them to the people affected.
  • Making AI interpretable can foster trust, enable control, and make the results produced by machine learning more actionable.

We can help our clients build trust in Artificial Intelligence

PwC Switzerland is among the leading service providers globally when it comes to making machine learning models more reliable and improving decision-making quality.

We provide services that help our clients explain both overall decision-making and individual choices and predictions, tailored to the perspectives of different stakeholders.

Our strength is that we have the people and tools in-house, including our own proprietary Machine Learning Black Box Illuminator, which we can trust because we developed it ourselves. And the Black Box Illuminator is part of something bigger: explore how PwC's Data Science Machine can help you at each step of the AI journey.

¹ Interpretability methods like ELI5, SHAP, and LIME
