The Competition and Markets Authority (CMA), the UK’s competition regulator, announced this month that it plans to publish an update in March 2024 to its initial report on AI foundation models (published in September 2023). The update will draw on a “significant programme of engagement” the CMA has launched in the UK, the United States and elsewhere to seek views on the initial report and its proposed competition and consumer protection principles.
The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1–2 at Bletchley Park, bringing together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss mitigating those risks through internationally coordinated action.
A decision of the High Court of the United Kingdom earlier this year is an important reminder that the limitation of liability clause remains a crucial piece of any high-value or complex contractual arrangement. Such a clause seeks to restrict a party’s financial exposure in the event of a lawsuit or other claim and, when enforceable, can “cap” the amount of potential damages incurred. The issue considered in the High Court’s decision was whether a party could rely on a single liability cap rather than being subject to multiple liability caps for multiple claims. The decision hinged largely on the wording of the contract clauses and serves as a reminder of key considerations when drafting limitation of liability clauses.
The UK and U.S. Governments have now formalized the UK-U.S. Data Bridge. The U.S. Attorney General designated the UK as a “qualifying state” for the purposes of Executive Order 14086 on September 18, 2023, and the UK regulations implementing the Data Bridge are scheduled to take effect on October 12, 2023. From that date, the Data Bridge will operate as an extension of the EU-U.S. Data Privacy Framework (DPF), enabling the unrestricted movement of personal data between the UK and certified U.S. entities. For more information about the DPF, see our earlier briefing here.
The use of generative AI tools, like ChatGPT, is becoming increasingly popular in the workplace. Generative AI tools include artificial intelligence chatbots powered by “large language models” (LLMs), which learn from (and share) vast amounts of accumulated text and interactions (often snapshots of large portions of the internet). These tools can interact with users in a conversational, iterative way, with a human-like personality, and perform a wide range of tasks, such as generating text, analyzing and solving problems, translating languages, summarizing complex content or even generating code for software applications. For example, in a matter of seconds they can provide a draft marketing campaign, generate corresponding website code, or write customer-facing emails.
Innovation has historically been driven by companies in regulated industries—e.g., financial services and health care—and some of the most intriguing use cases for generative AI systems will likely transform these industries.
At the same time, regulatory scrutiny could significantly hamper AI adoption, despite the current absence of explicit regulations against the use of AI systems. Regulators will likely focus on confidentiality, security and privacy concerns with generative AI systems, but other issues could arise as well. Companies operating in key regulated industries appear to be anticipating regulatory scrutiny, which is why adoption of the newest generative AI systems will likely be slow and deliberate. In some cases, AI systems are being banned outright.
AI systems seem like an exciting, effective new tool. But, as we have seen with Google’s recent struggles with accuracy and Microsoft’s trouble with seemingly sentient, unhinged chatbots, not all of the kinks in these tools have been worked out.
In our last post, we discussed the legal risks of entering into agreements with AI vendors and the related contractual mitigants, but perhaps a more pressing question is whether one can trust AI systems in the first place.
In our previous post, we provided an introduction to the emerging technology of generative AI, or AI systems. As with the implementation of any new technology, widespread understanding of the risks generally lags behind the speed of the technology itself. When the technology industry began its push “to the cloud,” many customers were concerned about issues such as giving up control of their data, security risks, and performance. In response, sophisticated customers carefully addressed these issues in their contracts with cloud service providers.
Though the use of artificial intelligence has grown steadily over the past decade, the recent release of OpenAI’s generative AI system, ChatGPT, has led to a sharp increase in the attention and publicity accompanying the rise of powerful generative AI systems.
With these generative AI systems come mounting issues and concerns around their use by technology service providers.
On February 8, 2023, the U.S. Department of the Treasury released a report presenting its “findings on the current state of cloud adoption in the sector, including potential benefits and challenges associated with increased adoption.” Treasury acknowledged that cloud adoption is an “important component” of a financial institution’s overall technology and business strategy, but also warned the industry about the harm a technical breakdown or cyberattack could cause to the public, given financial institutions’ reliance on a few large cloud service providers. Treasury also noted that “[t]his report does not impose any new requirements or standards applicable to regulated financial institutions and is not intended to endorse or discourage the use of any specific provider or cloud services more generally.”