The Legal Risks of AI Systems in Technology Services
In our previous post, we provided an introduction to the budding technology of generative AI systems. As with any new technology, widespread understanding of the risks generally lags behind the technology itself. When the technology industry began its push “to the cloud,” many customers were concerned about issues such as loss of control over data, security risks, and performance problems. In response, sophisticated customers carefully addressed those issues in their contracts with cloud service providers.
A similar dynamic is likely to play out with AI technology. The market will ultimately determine how AI risk is allocated, but at the moment, we see several risks and issues for AI adopters to consider carefully:
Confidentiality and Security of a Customer’s Data
A common term in commercial contracts generally, and technology service provider contracts specifically, is an obligation for each party to maintain the confidentiality of data or information provided as part of the engagement. In addition, and particularly for customers in heavily regulated industries, the most robust confidentiality terms are imposed on service providers that have access to or host a customer’s data.
Interfacing with a service provider that uses AI should be no different. With respect to contractual protections, customers should ensure that service providers offering AI tools agree to appropriate obligations (i.e., both traditional confidentiality terms and more robust technical security requirements) to protect the confidential nature of data and information. Customers should also pay close attention to how “customer data” is defined and ensure that all data they provide to the service provider is subject to the confidentiality and security obligations, including information derived from the data provided as part of the engagement.
Risk can also be mitigated outside the contract. For example, customers should consider implementing internal procedures that limit exposure, such as restricting users from sharing personal or proprietary information, or requiring encryption, redaction, or other security measures before the data ever reaches the AI system.
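To make this concrete, the sketch below illustrates one way such an internal procedure might be automated: scrubbing obvious identifiers from a prompt before it leaves the customer’s environment. This is a minimal, hypothetical Python example, not a reference to any particular provider’s tooling; the patterns, function names, and sample prompt are illustrative assumptions, and a real deployment would rely on a vetted data-loss-prevention or PII-detection solution.

```python
# Hypothetical sketch: redact common identifiers from text before it is
# sent to an external AI system. Patterns below are illustrative only;
# production systems need a broader, vetted rule set.
import re

REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[REDACTED_PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def redact(text: str) -> str:
    """Replace strings matching known identifier patterns with placeholders."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def prepare_prompt(raw_prompt: str) -> str:
    """Gate every outbound prompt through redaction before it is sent."""
    return redact(raw_prompt)

if __name__ == "__main__":
    prompt = "Draft a memo for jane.doe@example.com, phone 555-123-4567."
    print(prepare_prompt(prompt))
    # -> Draft a memo for [REDACTED_EMAIL], phone [REDACTED_PHONE].
```

Because the redaction runs inside the customer’s own environment, sensitive identifiers never reach the AI system at all, which complements (rather than replaces) the contractual protections discussed above.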
We recommend that customers review their current engagements with AI providers to ensure that (1) “customer data” includes all of the data, information, and materials a customer provides to the service provider, as well as all materials derived from such data, and (2) the confidentiality and security obligations clearly apply to all such data processed via an AI system.
Commercial Value of Customer Data
In addition to protecting the confidentiality and security of data, customers should take care to protect the commercial, proprietary value of their data and its derivatives. AI products use huge amounts of data to learn and improve their models. If a customer owns the input data, and that data has commercial value, the customer may want to restrict how service providers use it to improve their AI products. That said, improved models provide much of the value of an AI product, so service providers will likely negotiate this issue heavily, and such negotiations can be rather complex.
Customers purchasing AI products should consider including express contractual terms under which the customer retains ownership of all pre-existing materials. In addition, customers should establish a clear position on how service providers may use customer data.
Third-Party Liability
As noted above, AI systems learn from a wide variety of data sources. Service providers selling and licensing these systems must have the appropriate rights and consents to use data from all of those sources. If an AI service provider has not secured the appropriate rights, a customer could face claims of infringement or misappropriation from a third party based on the customer’s downstream use.
There is still a great deal of uncertainty around how the generative AI tools available for public use handle the scraping of proprietary information. Late last year, a class action lawsuit was filed against a number of AI system service providers asserting that the providers’ scraping of licensed code to create AI-powered tools violated the licensing terms applicable to code repositories. The suit was dismissed for lack of injury and failure to state a viable claim, but the rumblings over how these companies leverage data scraped from “public” sources are worth noting.
Given this uncertainty, customers should insist that an AI provider bear the risk associated with the customer’s use of the AI system. A customer should consider including indemnity obligations that cover third-party claims arising from IP or privacy violations, and ensure that liability for such claims is not unduly limited by a cap on damages.
Irrespective of the above risks, contracting for technology services that include AI systems assumes we can trust those systems to perform the tasks we ask of them. In the next installment of this series, Trusting AI Systems, we will explore the risks around the efficacy and accuracy of AI systems.
For questions on the legal risks associated with AI systems, and addressing such risks in your contracts with service providers, Pillsbury’s global sourcing and technology transactions professionals can assist.
Related Articles in the AI Systems and Commercial Contracting Series
AI Systems Adoption: Finding a Balance in Regulated Industries
Earning Your Trust: The Need for “Explainability” in AI Systems
Artificial Intelligence Systems and Risks in Commercial Contracting