Articles Posted in Artificial Intelligence

Posted

Providers have recently moved toward enabling AI agents to maintain persistent context and memory across interactions rather than treating each request as an isolated event. This shift makes it easier to design enterprise AI systems that remember the data and materials flowing into and out of the tool.

Continue reading

Posted

Any time a new technology emerges that allows an AI model to autonomously “reach out and interact with the world,” legal and operations teams take notice. And rightly so: the legal and operational implications of any AI autonomy deserve careful consideration. MCP connectors are powerful precisely because they reduce friction between AI models and databases, and between each prompt and the resulting action, fueling a proliferation of agentic AI. But reduced friction cuts both ways. The same architecture that makes MCP efficient also makes it a meaningful source of risk requiring appropriate governance.

Below, we examine the key legal and operational risks an organization should consider before and during any MCP connector deployment in its AI portfolio. It is worth noting that the risks inherent in generative AI use more broadly persist when an MCP connector serves as a conduit to an LLM, but the analysis below focuses on the risks specific to the new MCP connection infrastructure.

Continue reading

Posted

We all remember the first time we beheld the majestic power of generative AI. It plans vacations! It drafts my emails! It writes my essays! … Then you accidentally include “Would you like me to soften the breakup message I drafted for you to be less confrontational?” in the text you send to your now ex- and highly offended partner, and you quickly realize the glaring limitation of a large language model (LLM) as a productivity tool. The model could give you the words, but it couldn’t act on them to fix your problems. And so agents came along, which we thought would fix the inefficiency of copying and pasting a text response. But technically, these tools were hard to scale because every connection was custom-built, one at a time. Want Claude to talk to Slack? Build a custom bridge. Want ChatGPT to talk to Google Drive? Build another custom bridge. In reality, these tools weren’t scaling in the way we thought would drive efficiency. Your dreams of building an autonomous breakup robot were just not coming to fruition.

That is, until Anthropic came up with a solution. Enter the Model Context Protocol (MCP), a standardized protocol that allows LLMs to be integrated with existing data sources and applications.
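The scaling problem a standard like MCP addresses can be made concrete: without a shared protocol, every model-tool pair needs its own custom bridge, whereas a common interface lets each model and each tool implement the protocol once. The following Python sketch illustrates the idea only; all of the class and function names are hypothetical and are not taken from the actual MCP SDK.

```python
# Hypothetical illustration of why a shared protocol scales better than
# per-pair custom bridges. These classes are NOT the real MCP SDK.

class Tool:
    """Anything an LLM might act through (a Slack workspace, a drive, ...)."""
    def __init__(self, name: str):
        self.name = name

    def handle(self, request: str) -> str:
        # A real tool would perform an action; here we just echo the request.
        return f"{self.name} handled: {request}"


class ProtocolClient:
    """One client implementation that can talk to ANY compliant tool,
    replacing a separate custom bridge for each model-tool pair."""
    def __init__(self):
        self.tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def call(self, tool_name: str, request: str) -> str:
        return self.tools[tool_name].handle(request)


def integrations_needed(n_models: int, n_tools: int, shared_protocol: bool) -> int:
    """With N models and M tools, custom bridges require N * M integrations;
    a shared protocol requires only N + M implementations."""
    return n_models + n_tools if shared_protocol else n_models * n_tools
```

Under these assumptions, five models and twenty tools would take 100 custom bridges but only 25 protocol implementations, which is the efficiency argument for standardization in miniature.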

Continue reading

Posted

The European Commission has published its regulatory proposal for the EU Digital Omnibus, a package of amendments seeking to streamline EU rules on data protection, artificial intelligence and digital regulation in an effort to improve EU competitiveness. For more information on the background to the Digital Omnibus, see our earlier briefing here. The Digital Omnibus is split into two regulations, one targeting the AI Act and another targeting other EU digital regulations.

Continue reading

Posted

“Let a thousand flowers bloom” used to be Johnson & Johnson’s approach to generative AI innovation. In short order, nearly 900 projects sprouted across the company.

But a subsequent internal review revealed that only 10–15% of those projects produced 80% of the value. In response, J&J pivoted: it narrowed its focus to high-impact use cases and scrapped the rest, keeping only efforts tightly aligned with business strategy, execution quality and adoption.

Continue reading

Posted

The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

Continue reading

Posted

By now, we all know what AI is. Some of us use ChatGPT as our search engine, confidant, secretary, travel agent, and much more. Others, at least, are acutely aware that AI exists, because everyone else is talking about it, possibly making money from it, or losing their jobs to it.

Continue reading

Posted

As we covered previously, President Trump has made clear that the U.S. is focused on increasing investments into building, scaling and speeding the development of AI infrastructure and data centers in the U.S., and Big Tech is responding in kind.

Continue reading

Posted

The AI Action Summit brought together a wide-ranging assembly of influential figures to discuss the future of artificial intelligence (AI) governance, risk mitigation and international cooperation. The attendees included government leaders and executives from multinational and emerging companies. The event was held on February 10 – 12, 2025, in Paris.

Continue reading

Posted

The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the Act), came into effect on February 2, 2025. From this date, AI practices that present an unacceptable level of risk are prohibited, and organizations are required to ensure an appropriate level of AI literacy among staff. For a comprehensive overview of the Act, see our earlier client alert here.

Continue reading