AI for legal teams: when ChatGPT works (and when it doesn’t)

Explore the advancement of AI in contract reviews and when to bring in purpose-built legal AI

Published: November 25, 2025

Updated: November 25, 2025

Generative AI (GenAI) has moved from novelty to necessity in the legal industry. In-house legal teams and law firms are increasingly adopting AI tools from ChatGPT to more tailored legal tech platforms to streamline workflows, reduce costs and improve efficiency.

In fact, our own research shows that 89% of legal professionals are using AI tools, and 49% regularly use more than one. And a recent survey by the Association of Corporate Counsel (ACC) and Everlaw Inc shows that the use of GenAI in in-house legal departments has more than doubled in the past year.

But as more teams adopt GenAI, the risk grows alongside the enthusiasm. OpenAI (the company behind ChatGPT and other models like DALL-E and Sora) recently clarified that ChatGPT isn’t a substitute for legal advice, a move prompted by liability concerns and high-profile courtroom blunders. So where does that leave legal teams? We’re here to explore the critical areas to watch out for, and the practical use cases that still make sense.

Why shouldn’t you trust general-purpose AI?

GenAI is powerful, but it’s not infallible – and in law, mistakes can be both costly and career-ending.

Hallucinations happen

For legal professionals, precision is everything. But GenAI can produce hallucinations – giving you outputs that look authoritative but are completely fabricated. In November 2025, it was reported that a family lawyer from Maryland, US was sanctioned after filing a brief that included fake citations generated by ChatGPT. The judge called it “unquestionably improper” and ordered remedial ethics training.  

Compliance concerns

AI might be able to draft clauses and summarize contracts, but it can’t replace licensed lawyers.  

In 2024, the American Bar Association (ABA) issued Formal Opinion 512, stating that lawyers have to “reasonably supervise” AI tools and verify all outputs.

In other words, lawyers can’t delegate ethical responsibility to a chatbot – doing so risks breaching your professional duties.

Data privacy risks

Any in-house legal department knows that confidentiality is a regulatory and reputational must-have. When using public AI chatbots, every prompt you enter could potentially be stored for model improvement, and OpenAI themselves have confirmed that user inputs may be logged. That means sensitive contract terms, negotiation strategies or compliance details could end up outside your secure environment.

ChatGPT use cases that still make sense

It’s clear that general-purpose AI chatbots aren’t built for compliance-critical legal work – but they can still add value when used carefully. These platforms are great for low-risk, productivity-focused tasks like…

  • Idea generation and brainstorming – generate ideas for client emails, internal policy drafts or training presentation outlines.
  • Summarizing non-sensitive documents – quickly condense lengthy documents that don’t contain confidential information, freeing up your time for other tasks.
  • Drafting first-pass content – draft simple client letters, standard contracts or internal communications, and then review and refine for accuracy.
  • Legal research support – pull together legal research, summarize key court rulings and identify relevant precedents (just remember to verify everything against trusted sources!)

General-purpose AI chatbot dos and don’ts for legal teams

Do

  • Use general-purpose AI chatbots for low-risk, non-sensitive tasks
  • Ask AI to cite every source when using it for research
  • Pair AI outputs with trusted legal databases or other resources before acting on them
  • Implement internal review policies for all AI-generated content
  • Document your review process to stay audit-ready
  • Explore enterprise AI solutions with encryption, data residency and zero-retention policies for sensitive work

Don't

  • Rely on AI-generated citations without verification
  • Treat AI as a decision-maker – it’s a productivity tool, not a legal authority
  • Copy and paste full contracts or proprietary data into public AI chatbots
  • Share confidential or client-identifiable information in prompts
  • Assume public AI platforms guarantee attorney-client privilege – they don’t

It’s also vital for your company to have an AI mandate in place – a policy that outlines what can and can’t be done via AI tools and how to use them safely and responsibly. Find out more about AI mandates from our CEO Tom Dunlop in his guest blog for Forbes.

Why are purpose-built legal AI platforms the safer choice?

General-purpose AI chatbots are designed to answer anything for anyone. But legal teams need tools that can handle the nuance of contracts, regulation and organizational risk. That’s why many legal departments are turning to purpose-built legal AI platforms (like Summize!) – solutions built specifically for drafting, reviewing and managing legal work, guided by your legal standards and housed within secure and compliant environments.

Here’s why these systems are the safer choice:

Built-in legal expertise and compliance-first workflows

Purpose-built legal AI platforms don’t just generate text – they embed legal knowledge and compliance into every step. Unlike general-purpose AI chatbots, these systems are trained on legal-specific datasets, clause libraries and contract playbooks, while also incorporating structured workflows that uphold your professional duties. That means:

• Clause suggestions and automated redlines that follow your legal standards
• Fewer hallucinations and more consistent, precise outputs that drastically reduce review time
• Transparent audit trails and version control for accountability
• Configurable rules to make sure your work aligns with regulatory and ethical obligations

Enterprise-grade security and data controls

Unlike public chatbots, legal AI platforms are built with confidentiality as a core requirement. Such features include:

• Zero-data retention by default
• Encrypted environments with strict access controls
• Data residency options to meet regional regulatory requirements

Integration with tools legal teams actually use

General chatbots sit outside your systems, adding yet another layer to an already complex tech stack. Legal AI platforms, however, plug directly into the tools you already use, like Teams, Outlook, Word, Slack, Jira and many more. What does this mean for your business?

• Faster drafting and review from familiar environments
• Automatic population of key fields and metadata
• Centralized knowledge management across teams

Using AI in your legal team – without the risk

We all know GenAI is reshaping the legal industry, but not all AI tools are created equal. General-purpose chatbots can be incredibly helpful for low-risk, day-to-day tasks, but they simply aren’t built for the accuracy, confidentiality and compliance that legal work demands. As the stakes rise and regulations evolve, legal teams need solutions that are designed specifically for their environment, protect sensitive data, reduce risk and support the high standards lawyers are held to.

Purpose-built AI platforms like Summize offer a safer, more reliable path forward. They combine the efficiency of automation with the structure, controls and security your team needs, meaning less time spent on repetitive work, fewer errors to correct and more confidence in the outputs you rely on.

AI isn’t replacing lawyers, but the right AI can strengthen your team, streamline your workflows and help you work with greater speed and certainty. The question now isn’t whether to use AI, but which tools are right for you.

Find out more about how AI could change your contracting game in our full AI and CLM guide, or check out Summize’s AI-powered contract lifecycle management solution for a closer look at implementing legal AI into your everyday workflows.

About the author

Richard Somerfield

Chief Technology Officer

Richard, Chief Technology Officer at Summize, drives the company’s tech vision with a deep focus on enhancing the customer experience. With over 25 years of experience in technology and product development, including stints at tech businesses in Silicon Valley, Richard has been instrumental in shaping Summize’s Contract Lifecycle Management (CLM) solution. His expertise in both professional and open-source products allows him to blend innovation with practicality. By collaborating closely with Summize’s clients and sales teams, Richard ensures that Summize’s CLM platform continually evolves to meet market demands and lead the industry.
