AI Systems Adoption: Finding a Balance in Regulated Industries
Companies in regulated industries, such as financial services and health care, have historically driven innovation, and some of the most intriguing use cases for generative AI systems will likely transform those same industries.
At the same time, regulatory scrutiny could significantly hamper AI adoption, despite the current absence of explicit regulations against the use of AI systems. Regulators are likely to focus on confidentiality, security, and privacy concerns with generative AI systems, but other issues could arise as well. Companies operating in key regulated industries appear to be anticipating regulatory scrutiny, which is why adoption of the newest generative AI systems will likely be slow and deliberate. In some cases, AI systems are being banned outright.
Despite the lack of a concrete and coherent regulatory regime, various governmental entities have begun the process of roughly outlining some guidance. The federal government recently established the National AI Initiative, which provides for a “coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security” and aggregates input and resources from federal agencies. In addition, the U.S. Department of Commerce, via NIST, recently released the AI Risk Management Framework, in order to “equip organizations and individuals … with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time,” which may signal the risks the federal government is focused on and the practices it considers best when leveraging AI systems. Aside from this general guidance, each industry has only begun exploring the benefits and risks of AI adoption.
AI and machine learning are not new to the health care industry. In 2021, the World Health Organization published a report on the ethics and governance of artificial intelligence for health, which cited various uses of AI, including: (i) replacing clinicians and human decision-making in diagnosis and record analysis; (ii) health research and drug development; (iii) health system management; and (iv) public health surveillance. Though there is not presently a national U.S. regulatory regime geared specifically towards AI, the FDA and HHS have developed preliminary guidance and strategy. (This guidance predates the onslaught of the newest AI systems.)
Similarly, the financial services industry is no stranger to AI, and financial regulators have begun exploring some of their concerns with the use of AI in the industry. (We have provided further details on this topic here and here.) But most of this guidance predates the use of more advanced AI systems, and many industry leaders are left balancing an unknown regulatory landscape against a growing impetus to incorporate AI systems quickly. Without a clear set of legal requirements, many financial institutions are proactively developing a set of protective standard terms that they include both in their AI-specific agreements and, more generally, in their IT engagements.
Of course, existing legal and regulatory constraints around privacy, intellectual property, discrimination, and data protection still apply and may need to be taken into consideration in the use of AI products.
Given the lack of general consensus on AI regulation and uncertainty regarding the manner in which existing legal and regulatory regimes will be retrofitted to accommodate AI systems, actors in regulated industries that begin incorporating AI systems into their IT environments may want to consider some of the following best practices as preemptive measures to mitigate the associated risks:
- Consider piloting AI systems rather than entering into long-term agreements to leverage them, but beware of agreeing to standard, non-negotiated, or online terms.
- Create a walled-off development or testing environment for the business and operational people to “play” in until they can come up with concrete use cases. A sandbox environment of this kind can serve as a technological mechanism to reduce the risk of the AI system receiving sensitive data or information.
- Ensure that the definition of data owned by the customer includes derivatives of data provided by the customer, including AI derivatives.
- Scrutinize contractual terms around ownership of feedback to ensure they are not unduly broad. Suppliers frequently assert ownership over improvements to their technology resulting from customers’ input or feedback, and language permitting such ownership could be used to make an end-run around the customer’s rights in “improvements” to an AI system that are based on its inputs.
- Ensure internal policies clearly indicate whether and how employees and contractors can use AI systems. If Microsoft, a significant investor in OpenAI (the maker of ChatGPT), is telling its employees not to put sensitive information into ChatGPT, other companies likely want to heed that advice as well.
- Develop standard contract terms that address risks around bias, reliability, transparency, and explainability. These may include clear scope descriptions, representations and warranties, and indemnities.
- If custom terms have not been negotiated and vetted for a particular AI system, review the available terms and conditions to determine how data is protected.
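To make the sandbox and data-protection points above concrete, the sketch below shows one hypothetical technical control: a pre-submission filter that redacts obviously sensitive tokens before a prompt leaves the organization’s environment (for example, before it is sent to an external AI system). The patterns and function names are illustrative assumptions, not a vetted solution; a real deployment would rely on a dedicated data loss prevention tool and policy review, not ad hoc regular expressions.

```python
import re

# Illustrative patterns for obviously sensitive tokens (assumption:
# email addresses, U.S. Social Security numbers, and payment card
# numbers are the data types the organization wants to keep out of
# external AI systems). A production control would be far broader.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive matches with labeled placeholders before the
    text leaves the sandbox environment."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarize: contact jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# → Summarize: contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

A filter like this is a complement to, not a substitute for, the contractual protections discussed above: it reduces what sensitive data can reach the supplier in the first place, while the contract governs what happens to whatever data does.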
With all of this in mind, organizations operating in regulated industries can ultimately balance relative caution in implementing AI systems, and a particular focus on protecting their data, with the curiosity and flexibility to try new technology, enabling responsible use of AI systems.
Related Articles in the AI Systems and Commercial Contracting Series
Earning Your Trust: The Need for “Explainability” in AI Systems
Artificial Intelligence Systems and Risks in Commercial Contracting