The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1–2, 2023, at Bletchley Park, with the purpose of bringing together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss mitigating those risks through internationally coordinated action.
The use of generative AI tools, like ChatGPT, is becoming increasingly popular in the workplace. Generative AI tools include artificial intelligence chatbots powered by “large language models” (LLMs) that learn from (and share) vast amounts of accumulated text and interactions (often snapshots of much of the internet). These tools are capable of interacting with users in a conversational and iterative way, with a human-like personality, and of performing a wide range of tasks, such as generating text, analyzing and solving problems, translating languages, summarizing complex content or even generating code for software applications. For example, in a matter of seconds they can provide a draft marketing campaign, generate corresponding website code, or write customer-facing emails.
Innovation has historically been driven by companies in regulated industries—e.g., financial services and health care—and some of the most intriguing use cases for generative AI systems will likely transform these industries.
At the same time, regulatory scrutiny could significantly hamper AI adoption, despite the current absence of regulations explicitly targeting the use of AI systems. Regulators will likely focus on confidentiality, security and privacy concerns with generative AI systems, though other issues could arise as well. Companies operating in key regulated industries appear to be anticipating this scrutiny, which is why adoption of the newest generative AI systems will likely be slow and deliberate. In some cases, however, AI systems are being banned outright.
AI systems seem like an exciting, effective new tool. But, as we have seen with Google’s recent struggles with accuracy and Microsoft’s trouble with seemingly sentient, unhinged chatbots, not all of the kinks have been worked out of these tools.
In our last post, we discussed the legal risks of entering into agreements with AI vendors and the related contractual mitigants, but perhaps a more pressing question is whether one can trust AI systems in the first place.
In our previous post, we provided an introduction to the budding new technology of generative AI, or AI systems. As with the implementation of any new technology, widespread understanding of the risks generally lags behind the speed of the technology itself. When the technology industry began its push “to the cloud,” many customers were concerned about issues such as loss of control over their data, security risks, and performance. In response, sophisticated customers carefully addressed these issues in their contracts with cloud service providers.
Though the use of artificial intelligence has grown steadily during the past decade, the recent release of OpenAI’s generative AI system, ChatGPT, has led to a sharp increase in the attention and publicity accompanying the rise of powerful generative AI systems.
These generative AI systems bring with them mounting issues and concerns around their use by technology service providers.
When it comes to artificial intelligence, a lack of transparency in process and poor underlying data are two of the issues most hampering adoption in the boardroom. In AI: Black boxes and the boardroom, colleagues Tim Wright and Antony Bott examine how the resulting lack of trust can make companies wary of AI technology despite its many potential benefits, and suggest some basic steps one can take to alleviate those concerns.
Famously dramatized by the disembodied voice of HAL in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, artificial intelligence has been the subject of humanity’s existential angst for decades. Although Elon Musk warns that those fears may be justified, one of the biggest pushes for advancing artificial intelligence to date has been to market it for purposes of day-to-day corporate efficiency. Nearly every IT vendor is seeking to make a name for its proprietary AI tool by offering AI as a Service to the majority of businesses that are not developing AI in-house but want to leverage the benefits of AI’s automated decision-making, data analytics and cost savings. In the business context, AI has yet to pose the threat of refusing to “open the pod bay doors,” but customers face the challenge of exposing the vendor’s AI to data amassed by their entire enterprise, thereby allowing the algorithms to learn from and evolve based on information that may be private, proprietary and heavily regulated. The most common solution to this conundrum is for customers to contract for ownership of all machine learning models, bots and other outputs that result from the AI’s presence in the customer’s environment and its processing of customer data. But what are the implications of this “all for one” approach to ownership of the fruits of machine learning? What, if any, innovations in AI are lost when lessons learned are retained by a single entity?