AI mandates: what in-house legal teams need to put in place

A practical guide for in-house legal teams on creating AI mandates to manage risk, enable adoption and encourage responsible AI use across your business.

Published: May 11, 2026
Updated: May 11, 2026

Only got a minute? Here are the key takeaways:
  • AI adoption is already happening across the business, but without formal oversight, creating inconsistent practices and increased legal and data risk.
  • An AI mandate goes beyond policy, providing clear, enforceable rules on how AI tools can be used in day-to-day work.
  • Key focuses for legal teams include defining guardrails around data usage, AI-generated outputs, and approved tools to reduce risk and ambiguity.
  • To be effective, AI mandates need to be embedded into workflows, supported by cross-functional ownership and reinforced through ongoing communication.

Departments across the business are adopting AI at pace – in fact, 80% of organizations are now using it in some form, including legal teams.

But quick, informal adoption can have negative results – inconsistent practices, unclear accountability and growing risks around data, accuracy and compliance. Despite this, only around a third of organizations adopting AI have established policies or ethics guidelines in place to control those risks.

With regulatory scrutiny increasing globally through legislation like the EU AI Act, and expectations around responsible AI use becoming more concrete, legal teams are being pulled into a more central role in shaping how AI is deployed – often through AI mandates. But how do you get started with creating one? Read on to find out.

What is an AI mandate, and how is it different from an AI policy?

An AI policy typically sets out principles – high-level guidance on responsible use, ethics and risk. An AI mandate goes a step further, translating those principles into clear rules, guardrails and operational expectations. It answers practical questions like:

  • Which AI tools are approved for company use?
  • What data can be used in AI tools?
  • What level of human oversight is required and for what tasks?

Why do you need an AI mandate?

There are three main pressures converging here:

  1. AI use is already happening, but without oversight. As mentioned, employees are adopting tools independently to save time and improve output. Without clear guidance, this creates a patchwork of approaches and hidden risks.
  2. Regulatory expectations are increasing. Across jurisdictions, there’s growing focus on transparency, accountability and risk management in AI use. For organizations operating internationally or using AI in high-risk decision-making areas, this quickly becomes complex.
  3. Data governance is under strain. AI tools raise difficult questions about what data is being shared, how it’s used and whether it’s being retained or learned from. For legal teams, this is a significant source of risk.

“From a legal perspective, the risk isn’t that AI is being used – it’s that it’s being used inconsistently, often without visibility into what tools are in play, what data is being shared, or who is accountable when something goes wrong. That’s where exposure quietly builds.”
– Lexi Lutz, General Counsel

Where are in-house teams making the biggest mistakes?

In-house teams are often brought in once AI usage is already widespread. At that point, the instinct can be to react quickly, but haste leads to a few common mistakes:

  • Treating AI as purely an IT issue – legal, security and compliance need to be closely involved from the outset.
  • Defaulting to blanket bans – outright restrictions are difficult to enforce and often push usage underground, causing more issues in the long run.
  • Creating policies without enforcement mechanisms – if there’s no clarity on how rules apply in practice, they won’t be followed.
  • Overlooking data-specific risks – not all data carries the same level of sensitivity, but many organizations don’t define the difference.

And vitally, this isn’t just a business issue – it can exist within legal teams too. You might be using AI securely in your own workflows, but what about the rest of the legal function? Without shared standards and visibility, usage can quickly become inconsistent, even among colleagues in the same team.

These challenges all point to the same underlying issue: a lack of practical structure.

How do you start building an AI mandate?

Your goal shouldn’t be to create a perfect framework immediately, but to establish clear, usable guardrails that can evolve over time. Here are a few core areas to focus on:

Define what “acceptable use” actually means

Be explicit about:

  • Which tools are approved (and which aren’t)
  • What types of use cases are encouraged, restricted or prohibited

Being clear here reduces ambiguity across the business.

Set clear rules around data

This is often the most critical area. Start by asking whether your data can be used for training – and whether it should be. Crucially, not all “training” is the same; there’s an important distinction between:

  • Internal or private model improvement – where data is used to fine-tune or customize a model for your organization's own use, without leaving your environment or benefiting other users.
  • External or third-party model training – where data may be kept by the provider and used to improve broader models, potentially exposing sensitive information beyond your organization.

For legal teams, this distinction is often one of the most significant from a risk and confidentiality perspective – and one that isn’t always understood across the business.

From there, your AI mandate should define:

  • What data is off-limits (e.g. confidential, personal, commercially sensitive)
  • Whether different categories of data require different rules
  • How data can be inputted into AI tools safely

“The question isn’t just ‘can we use AI?’. It’s ‘what happens to the data once we do?’ That’s where a lot of organizations still don’t have clear answers.”
– Richard Somerfield, Chief Technology Officer, Summize

Put guardrails around AI-generated outputs

AI outputs aren’t always accurate, and treating them like they are introduces risk. An AI mandate should set expectations around:

  • Validation and review processes
  • When human oversight is needed on outputs
  • Accountability for final outputs

This is especially important for legal and customer-facing work.

Align with regulatory obligations across jurisdictions

For organizations operating across borders, AI governance isn’t one-size-fits-all. Legal teams should consider:

  • Which regulations apply in each region
  • Whether different standards need to be applied across different markets
  • How to maintain consistency while meeting local requirements

This doesn’t need to be overly complicated, but it does need to be considered early.

Decide on governance around the tools being used

One area many organizations overlook is whether employees should be using personal AI tools at all. Your AI mandate needs to clarify:

  • Whether only company-approved tools can be used
  • How new tools are assessed and approved
  • What controls are in place for access and usage

Without this, enforcement can become difficult.

How do you make your AI mandate stick?

Even the most well-considered AI mandate won’t be effective if it isn’t adopted in the first place. Success with an AI mandate isn’t just about defining rules, but about embedding them into how people actually work. That means:

  • Clear ownership across legal, IT and compliance. Without shared ownership, gaps will emerge quickly, whether that’s in tool approval, data handling or enforcement.
  • Communication that goes beyond a one-off rollout. Teams need ongoing, practical guidance on what’s expected of them and how it applies to their day-to-day work.
  • Integration into existing workflows and tools. The easier it is to follow the mandate, the more likely people are to stick to it. That could mean embedding guidance into contract creation and review processes (particularly in self-service workflows), procurement workflows or the tools employees are already using.
  • A balance between control and usability. If guardrails are too restrictive, employees will find workarounds. If they’re too loose, risk increases. The most effective mandates find a good balance between the two and allow safe use, rather than blocking it altogether.
  • Regular reviews and updates. AI regulation and tooling are evolving faster than most other governance areas, so a mandate that was accurate twelve months ago may already be out of date. Build in a defined review cycle – at minimum annually – and revisit it whenever there’s a significant regulatory development or a new tool is introduced.

“Use your leadership team to champion your AI mandate. The organizations getting it right aren’t the ones with the longest policies, but the ones that make it easier for their teams to do the right thing by default.”
– Tom Dunlop, CEO & Co-Founder

In other words, an AI mandate shouldn’t be a static, one-and-done document. It needs to be visible, usable and reinforced over time. For legal teams, that often means shifting from a purely advisory role to a more embedded, operational one that works closely with the business to make responsible AI use the default rather than the exception.

If you’re starting to formalize your approach, our full guide to AI and CLM explores how legal teams can apply these principles in practice – bridging the gap between policy and day-to-day contract workflows.

About the author

Richard Somerfield

Chief Technology Officer

Richard, Chief Technology Officer at Summize, drives the company’s technology vision with a deep focus on enhancing the customer experience. With over 25 years of experience in technology and product development, including stints at tech businesses in Silicon Valley, Richard has been instrumental in shaping Summize’s Contract Lifecycle Management (CLM) solution. His expertise across both commercial and open-source products allows him to blend innovation with practicality, and by collaborating closely with Summize’s clients and sales teams, he ensures that the CLM platform continually evolves to meet market demands and lead the industry.

