
Legal Definitions of AI: Considerations and Common Threads

By now, we all know what AI is. Some of us use ChatGPT as our search engine, confidant, secretary, travel agent, and much more. Others, at least, are acutely aware that AI exists, because everyone else is talking about it, possibly making money from it, or losing their jobs to it.

But when it comes to drafting a contract that accounts for the risks and issues related to AI, regulating the provision and use of it, or governing our organizations’ AI journeys, do we know what exactly we’re talking about?

From a legal perspective, “AI” is often defined too broadly and imprecisely, leading to unintended consequences, unnecessary obligations, and ambiguity. This post explores best practices for defining AI in legal settings, the risks of overbroad definitions, and how companies can strike a balance that appropriately reflects their needs and risk tolerance.

Why Getting the AI Definition Right Matters
Defining AI inappropriately in contracts, particularly within service or SaaS agreements, can create significant operational issues and legal risks. Some businesses (and many people) view AI as any system that automates decision-making, but this overgeneralization can sweep in tools and technologies that don’t carry the same risks or complexities as true AI systems. In a contractual context, this can lead to:

  • Unnecessary Compliance Burdens: Overly broad definitions may subject simple automation tools to complex obligations. For example, an automated rule-based system, like an email filter or a static report generator, could be classified as AI under a broad definition, triggering unnecessary audits, liability clauses, or other compliance procedures.
  • Increased Contractual Liability: Vague definitions of AI can also expose businesses to unnecessary liabilities. When AI is not clearly defined, parties to the contract may have different interpretations of the scope of services or the responsibilities for AI-related risks, such as algorithmic errors or data misuse.
  • Missed Opportunities for Tailored Risk Management: Without a precise definition, businesses might miss the opportunity to craft specific risk management strategies. Tailored provisions that address AI-specific risks—such as the use of machine learning algorithms in decision-making—can be overlooked if the contract language lumps all technologies under a broad AI banner.

In the context of laws and regulations, the definition of AI obviously determines the scope of what is regulated. Overbroad definitions can unintentionally capture traditional software systems or rule-based automation, creating compliance obligations where none were intended. For example, under the EU AI Act, classifying a system as AI could trigger stringent documentation, transparency and risk-management requirements. If a company or product is inaccurately swept into this regime due to an imprecise definition, it may face unnecessary regulatory burdens or enforcement risk. Conversely, overly narrow definitions can leave genuinely high-risk systems unregulated, undermining the policy goals of AI legislation.

AI risk governance within organizations also hinges on a clear and accurate definition. Internal policies, model inventories, audit protocols and board-level reporting often use “AI” as a trigger for enhanced scrutiny. If that trigger is too vague, governance frameworks can become either over-inclusive—bogging teams down in needless bureaucracy—or under-inclusive, allowing high-impact systems to operate without sufficient oversight. A precise, context-appropriate definition helps risk, legal, compliance and technical teams align on what systems matter most and how they should be managed. It also enables proportional governance that scales with actual risk.
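
To make this concrete, the sketch below shows one way a governance team might encode its chosen AI definition as a trigger in a model inventory. It is a minimal, purely illustrative example; the field names, criteria and thresholds are assumptions for illustration, not drawn from any particular framework or regulation.

```python
# Illustrative sketch only: encoding an organization's AI definition as a
# governance trigger in a model inventory. Field names and thresholds are
# hypothetical assumptions, not a prescribed schema.
from dataclasses import dataclass


@dataclass
class InventoryEntry:
    name: str
    learns_from_data: bool     # adapts behavior based on training data or feedback
    autonomous_outputs: bool   # produces decisions/content with minimal human review
    business_impact: str       # e.g., "low", "medium", "high"


def requires_enhanced_review(entry: InventoryEntry) -> bool:
    """Route a system to enhanced governance only when it meets the chosen
    AI definition and carries meaningful business impact."""
    is_ai_in_scope = entry.learns_from_data and entry.autonomous_outputs
    return is_ai_in_scope and entry.business_impact in {"medium", "high"}


# A learning, decision-driving system trips the trigger; a static rule-based
# filter does not.
print(requires_enhanced_review(InventoryEntry("resume-screener", True, True, "high")))       # True
print(requires_enhanced_review(InventoryEntry("email-keyword-filter", False, True, "low")))  # False
```

The point of the sketch is proportionality: the same definitional trigger that routes a high-impact learning system into enhanced review keeps routine rule-based tools out of it.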

Drawing clear distinctions allows leadership to discern whether there’s actually additional risk to be accounted for. Just as we’ve all heard of AI, we’ve all heard of the long list of risks that make it distinct from general technology. Infringement concerns related to AI inputs and outputs; biased, discriminatory, inaccurate or unethical outputs; privacy concerns; and the danger of third-party claims based on the foregoing are in some senses unique to a subset of AI: specifically, machine learning that learns dynamically to create unique outputs. Defining that subset correctly ensures that we are unlocking the right processes, restrictions and mitigations. The kind of AI that we’re concerned about generally tends to exhibit emergent behavior, produce outputs that may be difficult to explain or predict, and replicate or transform data that might be copyrighted, confidential or otherwise sensitive.

(For a more detailed explanation of the risks associated with the subset of AI mentioned above, please see our earlier articles on Legal Risks of AI Systems and on the need for “explainability” in AI Systems.)

Internal vs. External AI Definitions in Contracts
One of the best practices for defining AI within an organization is distinguishing between internal and external applications of AI. While it’s tempting to apply a single broad definition across all legal settings, different business contexts call for different approaches.

Internal Definitions for In-House AI Tools
Internally, businesses can afford to define AI more narrowly. When a company develops or customizes its own AI tools, it has a clearer understanding of the technology’s functionality, inputs and decision-making parameters. In this case, an internal definition might focus specifically on systems that involve dynamic learning or machine learning components. For instance, an internal contract could define AI as “any system capable of dynamic learning from unstructured training materials that can, as a result of its learning, make autonomous decisions based on data inputs.” This narrower definition allows a company to focus its resources and compliance efforts on higher-risk systems while excluding more routine automation tools that don’t pose significant risks.

External Definitions for Outsourced or Third-Party AI Tools
In outsourcing and service agreements, the definition of AI should be broader. When relying on external vendors, businesses often don’t have complete visibility into the inner workings of the AI tools being deployed. Therefore, it is prudent to define AI more expansively to cover a wider range of technologies, ensuring that the business is protected from potential risks associated with third-party AI systems. Even so, the definition should not be so expansive that it sweeps in routine, rule-based tools that raise none of the risks the agreement is meant to address.

For example, an external definition might include “any system or tool that processes data autonomously and generates decisions, recommendations or predictions with minimal human intervention.” This ensures that even if a third-party system doesn’t involve advanced machine learning or deep neural networks, it still falls within the contractual scope if it performs decision-making functions based on data analysis.

Common Threads in AI Definitions: Best Practices for Clarity
Various industries and organizations have published definitions of AI that can serve as a guide for drafting contracts. For example, the EU AI Act (the first comprehensive regulation on AI by a major regulator anywhere) and OECD AI Policy Observatory (the AI hub for the intergovernmental Organization for Economic Cooperation and Development) each provide a thoughtful definition of “AI System”:

  • EU AI Act: “AI System” is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
  • OECD: “AI System” is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

While these definitions offer a starting point, they must be refined for contractual use. Here are key elements to consider when drafting AI definitions in contracts, with a simple intake sketch following the list:

  • Autonomy and Decision-Making: Focus on systems that involve some level of autonomous decision-making or data-driven recommendations. This autonomy often leads to unique outputs rather than predictable results. Routine automation processes that follow pre-programmed rules without learning or evolving should be excluded to avoid unnecessary compliance burdens.
  • Dynamic Adaptation: Include tools that can modify their behavior based on new data inputs, such as machine learning models or adaptive systems. Often this adaptation (the development of neural networks or learned logic) happens behind the scenes and is not transparent to either the developer or the user. This opacity highlights the specific types of AI that require closer scrutiny and tailored risk management.
  • Data Processing and Output: The definition should include systems that process data to make predictions or recommendations. True machine learning requires large amounts of data to function correctly, and the degree of human oversight necessary to organize that data may impact the risks associated with the tool. As such, clarity is key in distinguishing between basic data analytics tools and sophisticated AI that influences business decisions.
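
As noted above, here is a simple intake sketch that applies these three elements when scoping a tool under a contract’s AI definition. It is purely illustrative; the criteria names and logic are assumptions, and any real determination turns on the negotiated definition itself.

```python
# Illustrative sketch only: applying the three drafting elements above
# (autonomy, dynamic adaptation, data-driven output) as an intake checklist.
# The criteria and logic are hypothetical, not a legal test.
def is_in_scope_ai(autonomous_decision_making: bool,
                   adapts_to_new_data: bool,
                   data_driven_outputs: bool) -> bool:
    """Treat a tool as 'AI' for contract purposes only when it shows the
    characteristics the definition is meant to capture."""
    return autonomous_decision_making and adapts_to_new_data and data_driven_outputs


# A static report generator: rule-based, no learning -> out of scope.
print(is_in_scope_ai(autonomous_decision_making=False,
                     adapts_to_new_data=False,
                     data_driven_outputs=True))   # False

# A machine learning recommendation engine -> in scope.
print(is_in_scope_ai(autonomous_decision_making=True,
                     adapts_to_new_data=True,
                     data_driven_outputs=True))   # True
```

In a real contract each criterion would of course be a negotiated definition rather than a boolean flag; the sketch simply shows how the three elements combine to narrow scope.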

Proposing a Balanced AI Definition for Contracts
Based on these common threads, businesses should aim for a flexible, context-specific definition of AI in their contracts. Here’s a proposed balanced definition that can be adapted for various contractual contexts:

  • For Internal Purposes: “AI refers to any machine learning tool that is designed to use predictive analytics (e.g., artificial neural networks) to analyze large data sets and generate unique outputs (e.g., text, images, speech) based on automated decision-making.”
  • For External Purposes: “AI refers to any system or tool that autonomously processes data or generates predictions, recommendations, decisions, or expressive material, with minimal human intervention, and where the system’s behavior may evolve based on new inputs.”

This approach ensures that AI definitions are not overly restrictive internally but remain broad enough externally to capture third-party tools that may introduce risk.

Conclusion
As AI technologies continue to reshape the landscape of business-to-business services and outsourcing, getting the definition of AI right is critical. A well-crafted AI definition can help mitigate risks, clarify responsibilities and prevent unnecessary burdens from being imposed on systems that don’t pose significant risks. By adopting a balanced, context-specific approach, businesses can ensure that they’re protected without stifling innovation or over-complicating their operations.


RELATED ARTICLES

The Legal Risks of AI Systems in Technology Services

Earning Your Trust: The Need for “Explainability” in AI Systems
