Legal considerations when procuring AI tools

  • Blog
  • 5 minute read
  • 07/11/24

Philipp Rosenauer

Partner Legal, PwC Switzerland

The rapid evolution of the artificial intelligence space has led to the emergence of numerous innovative AI products that are revolutionising the way we work. With these advances, however, organisations face new challenges when procuring AI tools, as procurement brings with it a range of legal implications that require careful consideration. In this blog post, we provide a framework for companies on what to consider when buying AI tools from third parties.

It is crucial for companies to prioritise strategic alignment when adopting new AI applications. This means ensuring that the intended use fits the existing technology strategy. Consulting the relevant stakeholders within the organisation is essential to ensure that the new AI technology supports the company’s focus and goals regarding AI. Equally important is compatibility with the existing technological ecosystem and infrastructure, enabling seamless integration with other applications and data sources so the tool delivers added value. Additionally, a comprehensive understanding of all costs involved, such as licensing and transaction fees, is necessary for making an informed decision.

It is crucial to assess the AI technology supplier in order to evaluate their reputation, experience and expertise in the field. Gathering information from the supplier, such as case studies, customer testimonials and references, can provide valuable insights into their capabilities and reliability. Furthermore, it is essential to weigh the pros and cons of the new AI application, considering factors such as its functionality, scalability and potential impact on existing processes. This evaluation should also include an assessment of any potential risks associated with the procurement, such as data security vulnerabilities or regulatory compliance issues. Additionally, conducting a test phase or pilot programme within the organisation can help identify any unforeseen challenges or limitations before full-scale implementation.

When conducting supplier due diligence, it is crucial to focus on the following topics: 

  • Information security is highly important when procuring AI technology. Organisations should have a clear understanding of the data that will be processed by the AI system and prioritise the implementation of robust security measures. Transparency from the supplier regarding the data they will process is necessary, as it allows the organisation to verify that strong security protocols are in place.
  • Ensuring an AI system supplier adheres to ethical standards is vital for mitigating reputational risks associated with partnering with an external party. Additionally, assessing the potential impact of the technology is important, especially considering its effects on stakeholders, customers and society. 
  • Organisations need to consider multiple operational risks that could have a negative impact on business practices. It is important to assess whether the AI supplier has the capabilities and infrastructure to reliably support operational demands over time. This involves evaluating the potential reliance on the provider and identifying alternative solutions to reduce the impact of possible dependencies.

As some providers of AI models are currently facing lawsuits over their usage of proprietary data for training purposes, IP infringement risks extend to firms procuring these models. To reduce this risk, organisations should seek contractual assurance of indemnification for third-party claims and non-infringement. This is often already offered by larger AI firms, though another way to mitigate this risk could be the inclusion of features that flag and block outputs resembling training data. 
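
For illustration only, here is a minimal sketch of how such a flag-and-block feature might work in its simplest form. It is a hypothetical Python example: the function names, similarity threshold and list of protected passages are assumptions made for illustration, and commercial products rely on far more sophisticated matching against much larger reference corpora.

```python
# Hypothetical sketch: flag and block model outputs that closely resemble
# known protected source material. Names and thresholds are illustrative only.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.85  # illustrative cut-off, not a legal standard


def resembles_protected_text(output: str, protected_passages: list[str]) -> bool:
    """Return True if the output closely matches any known protected passage."""
    return any(
        SequenceMatcher(None, output.lower(), passage.lower()).ratio() >= SIMILARITY_THRESHOLD
        for passage in protected_passages
    )


def filter_output(output: str, protected_passages: list[str]) -> str:
    """Withhold any flagged output, e.g. pending human review."""
    if resembles_protected_text(output, protected_passages):
        return "[Output withheld: resembles protected source material]"
    return output
```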

Besides the previously mentioned risks, firms also need to be mindful of critical contractual considerations when adopting new AI technology from suppliers. These issues need to be evaluated based on contract requirements, the nature of the data involved and the local regulations. 

  • Regarding the input, agreements for AI products should include terms that guarantee the company’s ownership of its data, including information used to prompt and fine-tune the technology. These agreements should also include data protection provisions that mitigate the risk of data being used or accessed without authorisation by external parties. The supplier should not be able to use the firm’s data, except for improvements that solely benefit the firm. Additionally, companies need to make sure that their contracts include robust security requirements and that these requirements also apply to any external vendors or suppliers that handle the company’s data.
  • If possible, organisations should have contractual ownership rights over the generated output. Suppliers are often hesitant to grant such IP rights, both because of the uncertainty over whether an AI’s output can be protected under IP law and because multiple users may produce identical output. At a minimum, suppliers should therefore disclaim any rights to the output in the contract. Additionally, companies need to ensure that suppliers apply confidentiality and security measures to AI-generated output similar to those applied to the input data. This needs to be established in the contract terms, which should specify the prevention of unauthorised usage.
  • To mitigate legal risks and potential liabilities arising from a supplier’s non-compliance with AI regulation, companies should request a warranty of compliance. This warranty should be accompanied by an indemnity that holds the supplier accountable for any legal repercussions arising from non-compliance. Additionally, it is beneficial for firms to secure a commitment from the AI supplier to reasonably assist them in complying with relevant laws related to the use of the AI model. This ensures that the firm is adequately protected. 
  • The use of AI can raise ethical concerns as it has the potential to perpetuate and increase biases existing in its training data, resulting in discriminatory outcomes for various demographic groups. Moreover, the deployment of AI can give rise to worries regarding issues such as misinformation and privacy. To address these concerns, it is important for companies to require AI suppliers to adhere to responsible and ethical AI standards. These standards should cover aspects such as transparency, explainability, fairness and non-discrimination, and suppliers themselves should be obligated to comply with their AI policies. If the supplier requests specific commitments from companies, these requests should be carefully evaluated by the relevant members of the firm.

When procuring AI technology, careful attention must be given to its technical, legal and ethical dimensions, while also ensuring that expectations and responsibilities are aligned among all parties involved. In order to make well-informed and responsible decisions, firms should consider the insights provided in this guidance along with their specific intended use of AI technology and the relevant local requirements and regulations.

Contact us

Philipp Rosenauer

Partner Legal, PwC Switzerland

+41 58 792 18 56
