The EU AI Act: an in-house lawyer’s guide
Understand the EU AI Act scope, applicability and implications for in-house legal teams, with practical steps to manage AI risk and vendor accountability.
April 20, 2026
- The EU AI Act applies generally from August 2026. It has extraterritorial reach, covering UK and US businesses with EU-facing AI use.
- It regulates specific AI use cases based on risk, not AI as a whole, meaning most everyday tools fall outside the high-risk categories. But new transparency obligations are proposed to apply from November 2026 – directly relevant to teams using generative AI tools.
- Even when using third-party AI, accountability doesn’t transfer – you’re still responsible for oversight and outcomes.
- Scrutiny is increasing now, driven by regulators and internal stakeholders, not just the EU AI Act itself.
- The biggest risk isn’t non-compliance, but lack of visibility into where AI is used and how decisions are made.
- Start with AI visibility, prioritizing high-impact use cases, reviewing vendor contracts and establishing baseline governance.
On August 2, 2026 the EU AI Act becomes generally applicable for most businesses. But what’s included in the Act, does it apply to you, and when do in-house legal teams need to be concerned about it?
Those questions are exactly what we’re here to answer. The Act has broad reach, including beyond the European Union, but in practice its scope is more targeted than the headlines suggest. The real challenge for teams isn’t just interpreting the law, but understanding where the organization might actually be exposed.
What is the EU AI Act and when does it apply?
The EU AI Act is the first comprehensive legal framework governing artificial intelligence. Its aim is to make sure that AI systems are safe, transparent and aligned with fundamental rights, especially in high-risk use cases like employment, credit decisions and access to essential services.
Where and when does it apply?
Like GDPR, the Act has extraterritorial reach. It applies both to organizations placing AI systems on the EU market and to organizations using AI systems whose outputs are used in the EU – no matter where the organization behind the system is based or headquartered.
That means both US and UK-based companies could still have obligations under the Act, especially where AI is used in EU-facing products or services and contracts involving EU customers, suppliers or employees.
As for timing, the Act is being phased in, and there are still moving parts – for example, proposed amendments currently under negotiation may push some high-risk AI obligations back to December 2027. For now, 2 August 2026 is the key date to work towards.
What does the Act actually regulate?
Despite its name, the Act doesn’t regulate “AI” as a whole, but specific use cases based on risk. So not all AI systems are in scope, and not all organizations will have the same obligations. We'll go into this more below.
Why does this matter for in-house legal teams?
Most in-house legal teams aren’t building AI systems, but they are buying them, implementing them and relying on their outputs, especially across legal, procurement and contract workflows. For example, nearly half of corporate legal departments now have access to generative AI tools, according to a 2026 report by Thomson Reuters.
Across a lot of organizations, AI is increasingly used in:
- Contract review and clause extraction
- Risk scoring and summarization
- Supplier evaluation
- Drafting and negotiation support
A lot of these capabilities sit inside contract lifecycle management (CLM) platforms or similar tools, often powered by general-purpose AI models.
On top of that, when questions start to surface around risk, accountability or compliance, they tend to land with legal. In this context, legal becomes the last line of defense, not just another user of the tools.
What’s really changing in the broader regulatory landscape?
While the headlines may make the EU AI Act seem like a broad, sweeping regulation, the strictest obligations only apply to a relatively narrow set of use cases. But at the same time, scrutiny around AI is expanding rapidly, far beyond those categories – even if it’s not showing up in legislation yet.
The Act categorizes AI systems by risk, with the heaviest obligations focused on “high-risk” use cases. In practice:
- Internal tools, like contract summarization, search or drafting, typically fall into the low or limited-risk categories
- Only specific use cases – like employment decisions or access to essential services – trigger the full compliance requirements
It’s important to note that "low risk" doesn't mean "no obligations." AI literacy requirements have been in force since February 2025, and proposed amendments would introduce a November 2026 deadline for labeling AI-generated content (images, audio, video and text) regardless of risk category. If your team is using generative AI to produce anything client-facing, that deadline is worth having on your radar.
This means most organizations won’t face the same level of regulation across all AI use. But even where formal obligations are limited, expectations are rising quickly. Here are a few things to consider:
1. Regulators are already taking action on AI-related risks
Recent enforcement shows a clear pattern: regulators are targeting the foundations of AI systems – data, transparency and decision-making.
- Meta received a €1.2 billion GDPR fine linked to data practices underpinning AI-driven services
- Clearview AI has faced multiple EU fines for scraping biometric data to train its models without a legal basis
While these are dramatic examples of enforcement and sit under existing frameworks like GDPR rather than the AI Act, they highlight what regulators are already focusing on:
- Where data comes from
- How systems are trained
- Whether organizations can explain what their AI is doing
2. Decision-making is under the microscope
Beyond data, regulators and courts are increasingly focused on how AI-driven decisions affect individuals. For example, cases involving Uber have challenged automated decision-making around driver management, with courts requiring greater transparency and more human oversight. In a series of rulings by the Amsterdam Court of Appeal, drivers argued that decisions such as dismissals were effectively fully automated, and the so-called human review was found to be purely symbolic.
This goes directly to meaningful human oversight, one of the core concepts in the EU AI Act.
For in-house teams, your AI tools may already be shaping decisions around contract approvals, supplier evaluation and risk scoring. While those aren’t formally “high-risk” systems, the expectation is shifting: if AI is influencing your decisions, you need to be able to explain and challenge it.
3. Accountability doesn’t stop with the vendor
A common misconception is that AI risk can be outsourced, but in practice, it’s the opposite.
Even where AI is embedded in third-party tools:
- You still need to understand how it’s used
- You still need to assess risk
- You’re still expected to demonstrate oversight
This is especially relevant in CLM and procurement workflows, where AI outputs may influence contractual decisions, and responsibility can’t just be pushed back to the provider. In other words, you might not be building the system, but you are still accountable for how you use it.
4. Internal scrutiny is accelerating fastest of all
For most legal teams, the biggest pressures won’t be coming from regulators just yet, but from inside the business.
You’re probably already being asked where you’re using AI, where you’re exposed and whether you could defend it when challenged. Those questions are tied directly to board-level risk decisions, procurement decisions and customer and partner expectations.
Unlike regulatory timelines, this scrutiny is already here, and won’t be going anywhere any time soon.
What do in-house teams need to do now?
The good news is that, given the EU AI Act’s narrow scope, you probably don’t need to do anything right this minute. But you may need to start looking into AI governance frameworks – so where do you start?
- Map where AI is used already. Start with visibility and ask what tools are in use across the business and where AI is embedded, especially within vendor platforms.
- Focus on higher-impact use cases. Prioritize areas where AI influences decisions, affects contractual outcomes and interacts with external stakeholders.
- Review vendor contracts. Look for gaps around AI usage and disclosure, data rights and training, and auditability and oversight. This is especially important for legacy contracts.
- Establish a baseline AI governance approach. This doesn’t need to be complex. Early steps might include internal guidance on AI use, defined ownership across legal, IT and procurement and basic documentation of systems and risks.
Keep up to date with the Act’s developments by following Summize on LinkedIn.
Discover even more!
Explore more about contracting and CLM in our ultimate contract guides