After two failed trilogues, Parliament and Council finally landed a compromise that pushes the high-risk compliance deadline to December 2027, lightens paperwork for smaller firms, and writes a long-promised ban on non-consensual intimate imagery into Europe’s flagship AI law.
The European Commission confirmed on Wednesday that negotiators from the Parliament and the Council had reached political agreement on the so-called AI Omnibus, the package of amendments designed to soften the application of the bloc’s flagship Artificial Intelligence Act and bolt on a ban on AI-generated non-consensual intimate imagery.
It took three rounds to get there. The 28 April session collapsed after roughly twelve hours of haggling over how AI built into regulated products should be assessed for conformity; a Wednesday session, scheduled at short notice ahead of a 13 May fallback date, closed the gap.
Executive vice-president for tech sovereignty Henna Virkkunen, who pushed the simplification drive through the College of Commissioners last November, said the deal would let companies “focus on building, not on paperwork”, framing it as proof that Europe can keep its rules-based approach while making them workable for industry.
Deadlines move, paperwork lightens
The headline change is the timeline. Obligations on standalone high-risk AI systems listed in Annex III, covering biometrics, education, employment, essential services, law enforcement, justice and border management, will now apply from 2 December 2027 rather than 2 August 2026. Rules for AI embedded in regulated products under Annex I take effect on 2 August 2028.
For companies sitting on partly built compliance programmes, that buys roughly sixteen extra months. Brussels insists the postponement is a function of unfinished standards work, not a retreat: harmonised standards from CEN-CENELEC and a fuller library of guidance documents are the precondition for switching the obligations on.
Smaller firms get more concrete relief. The agreement extends a set of simplifications already available to SMEs to small mid-cap companies, including templated technical documentation, lower fees and easier access to regulatory sandboxes. The intent, repeated throughout the Commission’s press release, is to scale obligations to organisational size rather than apply a single compliance model to every provider in the value chain.
A ban on nudification, written into the Act
The most politically charged element is the new prohibition on AI systems that generate child sexual abuse material or that produce non-consensual intimate images of identifiable people. Lawmakers had been pushing for it since the late-2025 Grok nudification scandal, and Parliament made it a red line for the trilogue.
The text now bans the placing on the market and use of AI tools whose primary purpose is to undress people in images or to depict identifiable individuals in sexually explicit scenarios without consent. Companies have until 2 December 2026 to bring existing products into line.
The prohibition does not apply where developers have implemented effective safety measures to prevent generation and misuse, a carve-out negotiated to spare general-purpose models that already filter such outputs.
TNW reported on the political agreement on intimate deepfakes when Parliament locked it into its mandate in late March; the trilogue text largely tracks that position, though enforcement now sits squarely with national market-surveillance authorities and the AI Office rather than with sectoral regulators.
Critics will note that the package leaves the AI Act’s core architecture intact. The risk-based pyramid stays. Foundation-model rules, in force since August 2025, are untouched. The Code of Practice for general-purpose AI providers continues to apply on a voluntary basis. Watermarking obligations on AI-generated content slip from February to December 2026 but remain mandatory.
Civil-society groups, more than forty of which signed a letter against the Omnibus in April, have argued the simplification narrative obscures real cuts in fundamental-rights protection, particularly around biometric identification and AI in schools. Their concerns survive the deal: the trilogue did not reopen the substantive obligations, only their timing and paperwork.
Industry, by contrast, has read the package as part of a broader competitiveness drive that includes the GDPR simplification and the Data Act review. The agreement bears that out: every concession in the AI Omnibus is procedural rather than substantive.
The political agreement still needs formal endorsement by the Parliament’s plenary and by ministers in the Council, expected before the summer recess. Without that, the original 2 August 2026 high-risk deadline applies, a scenario the Commission has spent six months trying to avoid.
National authorities, meanwhile, get a parallel job: the simplified documentation forms, sandbox templates and SMC guidance need to be in place well before the new deadlines, or the relief on paper will not translate into relief in practice.