Trust and transparency
Even if a project manager is educated in the core principles of AI, the question remains whether we can trust an advanced AI project management system, and its underlying machine learning algorithms, to do what they were designed to do. For advanced AI in particular this question is hard to answer, because technologies such as machine learning and deep learning are difficult to grasp even for domain experts. Moreover, the behaviour of an AI system is heavily shaped by the data it was trained on, and any bias inherent in that data will be reflected in the deployed models. More striking still, many advanced-AI-based project management systems will potentially be used in zero-failure-tolerance engineering and construction projects (aircraft, automotive, life sciences, energy, military, etc.). The question of trust is therefore inevitable and central to any further effort to spread the use of AI algorithms in such vital project environments.
Building trust in AI-based project management systems involves being clear about which algorithms are selected and which data are used to train them. This requires mathematicians and data scientists to independently assess the algorithm and data selections for bias, fairness and inclusion, so that the system is programmed and trained without human biases and is prevented from developing biases of its own over time. One way of increasing transparency is to have independent AI algorithm inspections conducted by a third-party trust provider that specialises in AI and has the skills to assess such a complex environment.
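To make the idea of an independent bias assessment concrete, the sketch below shows one simple check a reviewer might run on training data: comparing positive-outcome rates across groups and computing a disparate impact ratio. The function name, field names and the illustrative "four-fifths" threshold are assumptions for this example, not a specific tool's API.

```python
# Minimal sketch of a disparate impact check on hypothetical training data.
# A ratio well below ~0.8 (the common "four-fifths" rule of thumb) would
# flag the dataset for closer inspection before training a model on it.

from collections import defaultdict

def disparate_impact(records, group_key="group", outcome_key="selected"):
    """Return (ratio, per-group rates), where ratio is the lowest
    positive-outcome rate divided by the highest across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += 1
        positives[rec[group_key]] += int(rec[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical task-assignment records used to train a scheduling model
data = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 1}, {"group": "A", "selected": 0},
    {"group": "B", "selected": 1}, {"group": "B", "selected": 0},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

ratio, rates = disparate_impact(data)
print(rates)   # per-group positive-outcome rates
print(ratio)   # here group B is selected far less often than group A
```

A real assessment would of course go further (statistical significance, proxy variables, intersectional groups), but even a check this simple makes the bias conversation auditable rather than anecdotal.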
Building trust in AI also involves creating a transparent accountability and responsibility matrix, together with ethical standards within the organisation, that clearly outline what happens when an AI system fails at an assigned task. Which entity should be held responsible for an undesirable consequence caused by the programming code, the input data, improper operation or other factors?
Furthermore, because the project management data used to train AI algorithms often includes personal and private data (e.g. timesheets or HR records), security and privacy standards and requirements should be defined within the organisation. To prevent misuse and malicious use, the responsible data stewards and data owners must manage this data properly. To keep the data safe, every action performed on it should be recorded in detail by the AI-based project management software.
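The audit trail described above can be sketched as a simple append-only log that records who did what to which dataset, and when. All names here (`AuditLog`, `record`, the example users and datasets) are illustrative assumptions, not the interface of any particular product.

```python
# Minimal sketch of an append-only audit trail for actions on personal
# project data, exportable for review by a third-party trust provider.

import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, user, action, dataset):
        """Append one immutable entry describing an action on a dataset."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,    # e.g. "read", "export", "train"
            "dataset": dataset,
        }
        self.entries.append(entry)
        return entry

    def export(self):
        # Serialised copy for independent inspection
        return json.dumps(self.entries, indent=2)

log = AuditLog()
log.record("data_steward_1", "read", "timesheets_2023")
log.record("ml_pipeline", "train", "hr_records")
print(log.export())
```

In practice such a log would live in tamper-evident storage with access controls of its own; the point of the sketch is only that "every action on the data" is a small, enforceable contract, not an abstract aspiration.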
This means that project managers of the future who use advanced AI in their projects will need, in addition to their core project management knowledge, new skills in mathematics, data science and compliance, so that they can assess the risks and limitations of the dark side of AI.