EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays
The EU AI Act (AI Act), whose first provisions took effect in February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on a range of actors, including providers, deployers, importers and manufacturers, and requires organizations to ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.
As of mid-2025, the implementation landscape is still evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the developing General-Purpose AI Code of Practice and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions.
AI Literacy Requirements
Effective February 2, 2025, Article 4 of the AI Act requires that providers (i.e., entities that develop AI systems for the EU market under their own name) and deployers of AI systems (i.e., those using AI systems under their authority) ensure a “sufficient level of AI literacy” among personnel. This requirement applies to all AI systems, not only high-risk AI systems.
On May 7, 2025, the European Commission published detailed FAQs clarifying the scope of this obligation. According to the guidance, AI literacy encompasses not only a general understanding of AI capabilities and limitations, but also the ability to assess legal and ethical implications, interpret outputs critically and apply appropriate oversight. Of particular relevance to businesses of all types, the guidance provides that organizations using generative AI for business functions (e.g., marketing copy, translations) must ensure that users are trained on associated risks, such as hallucinations.
The obligation extends to all personnel interacting with AI systems, not just employees, including contractors and service providers. While the AI Act does not prescribe a specific curriculum, the European Commission suggests that AI literacy initiatives should reflect the organization’s role (e.g., provider or deployer), the nature and risk profile of the AI systems involved, and the technical competencies of staff. Merely relying on user instructions or passive documentation will generally not be considered sufficient; bespoke policies, procedures and training may be required.
Training should be tailored to specific roles and responsibilities and integrated into broader risk management and compliance systems. The guidance further emphasizes that the obligation applies, and will be enforced, proportionately: organizations deploying high-risk systems are expected to implement more robust literacy programs.
GPAI Models and Code of Practice
The AI Act defines a GPAI model in terms of “significant generality” in its capabilities and competence in “performing a wide range of distinct tasks regardless of the way the model is placed on the market”—capturing large language models such as GPT-4, Gemini 2.5 Pro and DeepSeek-VL. On June 12, 2025, the European Commission published a set of helpful FAQs on what constitutes a GPAI model and the AI Act’s obligations as they relate to such models.
Article 56 of the AI Act provides for the publication of a General-Purpose AI Code of Practice (the GPAI Code) by the European AI Office, including general guidance for providers of GPAI models (and further guidance for models that present a “systemic risk”) on compliance with the AI Act’s obligations. The GPAI Code, while voluntary, will eventually form the basis on which the European AI Office assesses compliance with the AI Act. It covers transparency and copyright-related rules, which apply to all providers of GPAI models (with limited exemptions for models released under a free and open-source license), as well as specific technical and governance requirements for providers of GPAI models with “systemic risk.”
We previously commented on key elements of the first draft of the Code in our earlier blog post. The European Commission has since published second and third drafts of the GPAI Code. The third draft, published on March 11, 2025, notably removed the key performance indicators introduced as a benchmark in the second draft, among various streamlining and reorganization changes, and emphasizes the need for the European AI Office to review and update the GPAI Code over time as technology advances. This draft is expected to be the last on which feedback can be submitted and will form the basis of the final GPAI Code. However, finalization of the GPAI Code has slipped from the initial deadline of May 2, 2025, and is now expected in August 2025, raising industry concerns about regulatory uncertainty and uneven compliance preparation.
Potential Delays in Implementation
While the first provisions of the AI Act took effect in February 2025, other obligations, such as those placed on providers of GPAI models, do not apply until August 2, 2025, under the current timelines, with further implementation arriving in phases through summer 2027. The recent delay of the GPAI Code until August has led to speculation that certain key provisions of the AI Act might also be delayed. It was reported in May that the European Commission was considering postponing enforcement of the GPAI obligations under the AI Act to allow it to “simplify” some of the rules. This would track with the Commission’s other recent simplification efforts, undertaken amid calls from businesses to reduce the regulatory burden of doing business in the EU. While no delay has yet been confirmed publicly by official EU sources, various influential figures from Member States, including the Swedish Prime Minister, have voiced support for delaying implementation, in some cases by up to two years.
Key Takeaways
Organizations developing or deploying AI systems within the EU must navigate these evolving requirements carefully. The potential delays in enforcement provide a window to strengthen compliance strategies but also introduce uncertainty. Ensuring AI literacy among staff is now a legal obligation, necessitating the development of tailored training programs. For providers of GPAI models, understanding and preparing for forthcoming obligations is critical, even as final guidelines remain pending.
In-scope organizations should:
- Enhance AI Literacy: Develop and implement training programs to meet the AI literacy requirements outlined in Article 4.
- Monitor Regulatory Updates: Stay informed about changes in enforcement timelines and the finalization of the GPAI Code of Practice.
- Prepare for GPAI Obligations: Even in the absence of finalized guidelines, begin assessing current practices against the anticipated requirements for GPAI models.
For assistance with any of these obligations, or for advice generally on the EU AI Act or AI preparedness, please reach out to your usual Pillsbury contact.
The authors would like to thank trainee solicitor Samson Verebes for his contributions to this blog.