AI Systems for Missouri Lawyers: How They Work, What They Risk, and How to Use Them Responsibly

Safety, Security, and Ethics in Missouri Legal Practice

Missouri Injury & Insurance Law  |  missouriinjuryandinsurancelaw.com

Introduction

After thirty years of civil litigation in Missouri, I have watched the practice of law absorb one wave of technology after another. Word processing replaced the typewriter. Electronic filing replaced the courthouse run. Lexis and Westlaw replaced the digest. Each time, the profession adapted, the ethics rules caught up, and those who used the new tools well gained a real advantage for their clients. Artificial intelligence — specifically large language models — is the next wave, and it is moving faster than any of the others.

In 2023 it was reported that only 10% of lawyers believed generative AI tools would have a transformative effect on the practice of law. E. Rachel, Shock Survey Reveals Most Lawyers Shunning Game Changing AI Technology, JD J., Mar. 27, 2023. In the three years since, however, the pace of adoption and use has increased exponentially.

This article is written for the practicing Missouri attorney who wants a plain-English explanation of what these tools actually are, how they differ from each other, what safety and security risks they carry, and what our professional responsibility rules require. I am not writing this as a technology enthusiast. I am writing it as a civil trial lawyer who is navigating the current wave of technology.

There is a great deal of confusion in the bar about AI — confusion about its capabilities and limitations. Headlines make AI sound as if it will replace lawyers. ChatGPT reportedly passed the bar exam, outperforming 90% of test takers. ABA J., Mar. 15, 2023. However, the ability to pass a standardized test, while impressive, is not the true function of an attorney. AI is not intelligence. It is a computer program wrapped in marketing and myth. My goal here is to cut through that noise and give you what you actually need to know to make informed decisions for your practice.

I.  What Large Language Models Actually Are — In Plain English

A.  Prediction Engines, Not Intelligence

Despite the marketing terminology, large language models are not artificially intelligent in any meaningful sense of the word. They are sophisticated text prediction engines trained on vast amounts of text — books, articles, websites, legal documents, code — to predict the most statistically likely next word, sentence, or paragraph in response to a given input. Think of them as extraordinarily advanced autocomplete systems.

Here is how the underlying process works: during training, the system processes billions of text examples and builds mathematical relationships between words, phrases, and concepts. When you input a prompt, the system uses those statistical patterns to generate text that is statistically likely to follow your input based on its training. It has no understanding of meaning, truth, or accuracy. It only knows statistical likelihood.

This is a critical distinction. When a large language model tells you that a Missouri case stands for a particular proposition, it is not retrieving that case from a Westlaw or Lexis database. It is generating text that its training data suggests would appropriately follow your query. Depending on the model and the prompt, that process may or may not include an internet search; if a search is involved, the system may pull your case cite from sources that may or may not be accurate.

If statistical patterns in its training data happen to produce text that looks like a real case citation but corresponds to no actual decision, the system will produce it with equal fluency and apparent confidence. This is what the profession calls a “hallucination” — though that term is somewhat misleading.

Practice Tip: The term ‘hallucination’ suggests the system is malfunctioning. It is not. It is doing exactly what it was designed to do — predict likely text — and sometimes likely text is false text. Understanding this is the foundation of responsible AI use. The failure mode is predictable, not random. Because LLMs are “train[ed] on enormous volumes of text,” which include caselaw, an LLM “may . . . distort information from an actual case” that “looks correct.” Francis & Jarral, supra, at 5-6; see also Hayes, 763 F. Supp. 3d at 1065 (“fictitious case citations created by generative AI tools” can “look[] like a real case with a case name”). Montes v. Suns Legacy Partners LLC, 2026 U.S. Dist. LEXIS 68888 (D. Ariz. 2026).

B.  Deterministic vs. Stochastic Outputs

Large language models can produce outputs in two fundamentally different ways. In deterministic mode, the system selects the single most statistically likely next word at each step, producing consistent and repeatable results. In stochastic mode, the system randomly samples from among several statistically likely options, producing varied outputs that are more natural in tone but less predictable across multiple runs.

All commercial large language models are inherently stochastic at their core, but prompt engineering can push them toward more consistent behavior. Instructions like “provide a systematic analysis with numbered conclusions” or “list the specific elements in order” encourage more structured outputs. You cannot eliminate the underlying variability entirely. The same prompt can produce meaningfully different results across multiple runs, which is one reason independent verification is not optional — it is essential.
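For readers who want to see the deterministic-versus-stochastic distinction concretely, here is a toy sketch in Python. The word list and probabilities are invented for illustration; real models sample from distributions over tens of thousands of tokens, and no commercial system is this simple.

```python
import random

# Invented next-word distribution a model might assign after a phrase like
# "The insurer denied the ..." (words and probabilities are hypothetical).
next_word_probs = {"claim": 0.55, "coverage": 0.25, "motion": 0.15, "appeal": 0.05}

def pick_deterministic(probs):
    """Greedy decoding: always take the single most likely next word."""
    return max(probs, key=probs.get)

def pick_stochastic(probs):
    """Sampling: randomly choose among likely words, weighted by probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(pick_deterministic(next_word_probs))  # always "claim"
print(pick_stochastic(next_word_probs))     # usually "claim", sometimes another word
```

Run the stochastic function several times and the answer changes; run the deterministic one and it never does. That variability is why the same prompt can yield different drafts on different days.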

C.  What These Systems Cannot Do

This point is worth stating plainly because the marketing materials rarely do. Large language models have no capacity for legal judgment, strategic reasoning, or ethical analysis. They can identify patterns, summarize information, generate text that appears sophisticated, and organize material in useful ways. They cannot:

• Determine whether a legal argument is actually sound or merely sounds sound.
• Assess the strategic value of a coverage position or litigation approach.
• Evaluate the credibility or authority of a source.
• Apply the specific facts of your case to the specific requirements of Missouri law with any guarantee of accuracy.
• Exercise the professional judgment that Rule 4-1.1 requires of you.

These tools are powerful assistants for the lawyer who understands their limits. They are dangerous substitutes for the lawyer who does not.

D.  The Training Data Cutoff Problem

Every large language model has a training data cutoff date — a point in time after which it has no information about the world. For a general-purpose system like Claude, ChatGPT, or Gemini, that cutoff means the system has no knowledge of cases decided after that date, no awareness of statutory amendments, and no access to recent developments in any area of law. For Missouri civil litigation, this is not a theoretical problem — it is a practical one.

Database-integrated legal platforms like Lexis+ Protégé and Westlaw’s CoCounsel address this by connecting the language model to a current legal database, so that case citations are drawn from actual legal research rather than statistical prediction. This is the fundamental architectural difference between a general-purpose AI tool and a purpose-built legal research platform, and it matters enormously for any task involving current case law.

II.  Understanding AI System Categories

Not all AI systems are equal. They differ in their architecture, their data handling, their security posture, and their fitness for legal use. The three questions that matter most for any platform are: (1) What can it actually do and what are its limitations? (2) How current is its underlying data? (3) How does it record, store, and use information that you enter? Everything else is secondary.

A.  Consumer Large Language Models

The general-purpose consumer tools — free versions of Claude, ChatGPT, and Gemini — are the platforms that present the greatest risk in legal practice and the least protection under the professional responsibility rules. On the free tier of these platforms, your inputs may be used to train future versions of the model, conversation data is stored on the vendor’s servers, and you have no contract governing how your data is handled. Entering client-identifying information into a free consumer LLM is not something any prudent attorney should do under any circumstances.

That said, these tools have valuable uses in practice when information input is tightly controlled. Language analysis using anonymized text, document structure review, issue-spotting exercises with hypothetical facts, informal writing, and first-pass “research” on general principles are all appropriate uses. The key is complete anonymization and a clear understanding of what you are asking the system to do.

Practice Tip: Consumer LLMs are appropriate for anonymized, first-pass work. They are not appropriate for authoritative legal research, drafting from real case facts, or any task where confidential information might be entered. Treat the line between those categories as a hard rule, not a judgment call. “[T]he Rule 11(b) violation was not necessarily counsel’s use of generative AI to conduct legal research. The violation was (1) failing to understand the technology so that he could use it in an informed way; and (2) failing to check the research it generated for accuracy. As attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged.” Lexos Media IP, LLC v. Overstock.com Inc., 2026 U.S. Dist. LEXIS 20944 (D. Kan. 2026).

B.  Enterprise General-Purpose Platforms

The enterprise versions of these same platforms — Claude Teams, ChatGPT Enterprise, Microsoft Copilot for legal practice, and Google Gemini for Workspace — operate under materially different terms than their consumer counterparts. The key distinction is that enterprise versions typically offer a commitment that conversation data will not be used to train the underlying model, provide a data processing addendum (DPA) that creates contractual obligations around how your data is handled, and offer administrator controls that allow the firm to manage access and data retention.

Claude Teams, for example, provides a no-training commitment, SOC 2 Type II certification (a third-party security audit), and data processing terms that can satisfy the “reasonable efforts” standard under Rules 1.6 and 5.3 when combined with a signed DPA. Microsoft Copilot for Microsoft 365 integrates directly into the software most law firms already use and operates within Microsoft’s enterprise compliance framework. ChatGPT Enterprise provides similar protections to Claude Teams. Google Gemini for Workspace operates within Google’s enterprise data processing framework.

These tools are general-purpose platforms deployed in a legal environment. They lack the legal database integration of purpose-built legal AI, which means case citations must always be independently verified. Their strength is in drafting, analysis, issue-spotting, and document organization. Legal judgment and independent verification remain the responsibility of the lawyer.

C.  Database-Integrated Legal AI Platforms

These systems combine large language model capabilities with comprehensive, current legal databases. The practical result is that legal research queries can be answered by reference to actual case law, not statistical prediction — which eliminates the hallucination problem for the research function, though not necessarily for drafting or analysis.

Lexis+ Protégé integrates directly with the LexisNexis database, providing access to opinions through current dates and operating in a closed-loop architecture that does not use attorney inputs for training. Westlaw’s CoCounsel is built on the Thomson Reuters Westlaw database and provides access to current judicial opinions, statutes, and secondary sources with built-in legal research workflows. Bloomberg Law’s AI integration combines legal database access with regulatory filings and corporate information.

D.  Enterprise Legal AI Platforms

The enterprise legal platforms are built for larger firms that want AI deeply integrated into practice-specific workflows with firm-customizable models trained on the firm’s own document history.

Harvey is the most prominent of these — an enterprise platform that allows firm-specific training on proprietary documents, integration with firm databases, and customized workflows for particular practice areas. It has received substantial investment from law firm partners and is designed for the BigLaw environment, though it is now available to smaller firms as well. LegalSifter provides contract and policy-specific analysis with customizable clause libraries. Kira Systems focuses on document analysis with machine learning for policy form comparison and document review.

For most Missouri small and mid-sized firms, these platforms represent more infrastructure than the practice requires. The cost and complexity of enterprise legal AI deployment is better matched to volume-intensive document review or large transactional practices. The combination of an enterprise general-purpose platform and a database-integrated legal platform is a more practical and cost-effective approach for civil litigation practices of this size.

E.  Example Workflow with a Two-Platform System: A Legal Research or Database-Integrated Legal AI Platform Plus a General-Purpose Enterprise Platform

An example deployment might pair Microsoft Copilot across the firm’s enterprise systems, so it is available in Microsoft applications such as Word, Outlook, Excel, and PowerPoint, with Lexis as the research database, or, better still, Lexis+ Protégé as a database-integrated LLM. A workflow division might look like this:

Example— Drafting a Motion 
Step 1  [ LEXIS and/or PROTÉGÉ ]  Research the controlling standard and the key cases in Lexis or Protégé first. Before Copilot touches the draft, you need verified authority. For a motion: pull the applicable Rule, current Missouri cases on the standard of review, and cases on the specific substantive issue. Verify and Shepardize before proceeding.
Step 2  [ MANUAL ]  Prepare a written instruction memo for Copilot. Write a brief memo in Word (or dictate it) that includes: the court, the relief sought, the verified legal standard with correct citations, the key facts from the record with source references, and the argument structure you want. The better your instruction memo, the better Copilot’s first draft. Alternatively, use a form motion as the starting point, or, if you have drafted previous motions on the same topic, supply one as a source file or in the prompt.
Step 3  [ COPILOT ]  Use Copilot in Word to draft from your instruction memo. Prompt example: ‘Using the legal standard and facts in this document, draft a motion for ______ structured as: (1) Introduction; (2) Legal Standard; (3) Applicable Facts (with or without record citations); (4) Argument [structured as identified]; (5) Conclusion with specific relief requested.’ If you use Protégé, you will already have a first draft; if not, Copilot drafts from your instructions and the authority supplied by your Lexis research — it does not generate the legal authority.
Step 4  [ MANUAL ]  Revise the draft for accuracy, voice, and completeness. Verify every record citation. Confirm every legal citation against your research. Correct any inaccuracies in the characterization of the facts or law. Add strategic arguments that require professional judgment. This review is substantive, not cosmetic.
Step 5  [ VERIFY ]  Final citation check: every case in the motion must be independently verified as good law. Run every citation through Lexis’s citation-checking tool one final time before filing. This is the step that prevents sanctions.
Step 6  [ MANUAL ]  Final review and filing. Read the complete motion from beginning to end. Confirm the caption, the court, the signature block, and the certificate of service.
HARD STOPS AND WATCH-OUTS
• Never let Copilot generate the legal citations in a motion. Citations come from your Lexis or Lexis+ Protégé research and your own verification, not from Copilot’s training data.
• Check whether the court where you are filing has adopted any local rules requiring disclosure of AI use in filings. Several federal courts have done so.
• Copilot in Word does not have access to your case file unless the documents are in your Microsoft 365 environment (SharePoint or OneDrive). Do not expect it to know facts it has not been given.
Practice Tip: The instruction memo in Step 2 is where you do the lawyer work. If you put careful thought into that memo — the right legal standard, the right facts, the right argument structure — Copilot will produce a strong first draft. If you give it a vague prompt, it will produce a generic draft that requires more work to fix than it saved.

These two-platform arrangements are where I think most small and medium firms will end up, as some of the BigLaw-focused applications are cost prohibitive. In addition, many case management systems, such as MyCase, Clio, and Smokeball, are adding AI functions either in their own products or through integrations with third parties. Some of these will be back-office functions; others will deploy specific tools such as demand letter writing, medical record acquisition and summarization, billing, case summaries, and client communication handlers. While these are not lawyer work product functions, the professional rules still apply, and oversight and management of these functions is critical.

To learn more about specific use cases, see this blog for:

Artificial Intelligence and Insurance Policy Interpretation: What the Eleventh Circuit Opened, What the Scholars Are Fighting About, and What Missouri Practitioners Need to Know

Using AI to Map the Anatomy of an Insurance Policy: Coverage Grants, Exclusions, Conditions, Definitions, and the Structural Logic Insurers Don’t Want You to See

And upcoming posts in the series, including:

Using AI to Parse the Grammar and Syntax of Insurance Policy Language

III.  Closed-Loop vs. Open-Loop: The Distinction That Matters Most

The most important architectural distinction for professional responsibility purposes is not which AI platform you use — it is whether the platform is closed-loop or open-loop with respect to your data.

A closed-loop system processes your input, generates a response, and does not retain that input for use in training future model versions or for access by third parties outside the contractual framework. Legal-specific platforms like Lexis+ Protégé are designed as closed-loop systems. Enterprise versions of general-purpose platforms, when properly contracted and configured, operate as effectively closed-loop systems.

An open-loop system stores your inputs, may use them to train future model versions, and provides no contractual framework governing how your data is handled. Free consumer tiers of general-purpose AI tools are open-loop systems. Entering client information into an open-loop system is not a close call under Rule 4-1.6 — it is an unauthorized disclosure.

The frustrating reality is that this distinction is not always clear from marketing materials, and vendor representations about data handling change. Always read the actual privacy policy and terms of service before inputting any law practice information into any AI system. For any system you intend to use for client work, obtain the vendor’s data processing addendum before you begin.

Practice Tip: Before you enter any information about a client matter into any AI tool, ask yourself one question: Do I have a signed contract with this vendor that commits them, in writing, not to use my firm’s data to train their model? If the answer is no, treat the platform as open-loop and act accordingly.

 

IV.  What AI Can and Cannot Do: Matching the Tool to the Task

A.  Pattern Recognition and Document Analysis

All large language models excel at identifying patterns in text. This makes them genuinely useful for document analysis tasks where the goal is to compare, organize, or identify structural features across large volumes of text. In insurance coverage practice, this capability is well-suited to comparing policy forms across carriers to identify carrier-specific modifications to standard ISO exclusions, analyzing coverage letters in large claims files for consistency, organizing document productions by issue, and identifying changes in policy language over time.

These are document analysis tasks, not legal research tasks. The AI is functioning more like an extremely fast and thorough document clerk than like a lawyer. That is appropriate and valuable — provided the attorney reviews the output and applies legal judgment to it.

B.  First Pass Only

General-purpose AI tools, including enterprise versions, are not reliable legal research platforms. They can be useful for initial orientation — identifying the general framework of a question, generating a list of potentially relevant issues, summarizing general principles. They should never be the last word on any legal question that matters.

Every piece of AI-generated legal research requires independent verification through a current legal database before it enters any work product. This is not a suggestion. It is required under Rule 4-1.1’s technology competence standard and under Missouri Informal Opinion 2024-11, which specifically addresses the candor-to-the-tribunal risks of AI-generated errors. The Missouri Court of Appeals, Eastern District, has already addressed the consequences of filing briefs citing cases that do not exist. Federal courts have sanctioned attorneys under Rule 11 for the same reason. This is not a hypothetical risk.

C.  Drafting and Analysis

This is where enterprise AI tools provide the greatest practical value for civil litigation. Drafting motions, demand letters, coverage analysis memoranda, pleadings, client correspondence, deposition outlines, and internal strategy memos are tasks where a capable AI assistant can produce a strong first draft in a fraction of the time traditional drafting requires — provided the lawyer inputs sufficient context and then reviews, revises, and owns the product. In addition, you can supply your own documents, memos, briefs, and research, and direct the AI on how to use those inputs and on the scope of its task in generating new documents.

One of AI’s problems is just how easy it is to use and how authoritative and persuasive its outputs appear. AI-generated drafts require substantive attorney review, not proofreading. The system can produce text that is fluent, well-organized, and internally consistent but legally wrong for your specific facts and jurisdiction. Every AI-generated draft that leaves the firm as attorney work product requires the same professional judgment you would apply to a draft prepared by a capable associate — which is a high standard.

V.  Anonymization: What It Means and When It Is Required

Anonymization is the process of removing from an AI input all information that could identify a client, a matter, or a party. Complete anonymization means removing client names, case names, party names, policy numbers, claim numbers, Social Security numbers, dates of birth, addresses, medical record numbers, insurance member numbers, account numbers, and any combination of facts specific enough that a reasonable person could identify the individuals involved.

The required level of anonymization depends entirely on the platform you are using. Database-integrated legal platforms with appropriate DPAs may not require anonymization. Enterprise general-purpose platforms with signed DPAs permit the use of client-specific information in drafting and analysis tasks — subject to the conditions discussed in the professional responsibility section below. Consumer platforms on the free tier require complete anonymization without exception.

A practical framework for most Missouri civil litigation practices: anonymize everything before entering it into any general-purpose AI tool. This protects against data breaches, inadvertent disclosure, and platform policy changes that you cannot predict. It also eliminates the need to make judgment calls about what level of identification is too much — a call that is easy to get wrong. The cost of anonymizing is low. The cost of an unauthorized disclosure is not.
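To make the “anonymize everything” rule concrete, here is a minimal Python sketch of a pattern-based first pass. The patterns and the sample text are hypothetical, and a script like this is a starting point only: no pattern list catches names, narrative facts, or identifying combinations of details, so attorney review of the redacted text remains essential.

```python
import re

# Illustrative redaction patterns only; a real redaction workflow needs
# far broader coverage and human review of every output.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # Social Security numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),             # dates such as birth dates
    (re.compile(r"\bPolicy\s*(?:No\.?|#)\s*[\w-]+", re.I), "[POLICY NO.]"),
    (re.compile(r"\bClaim\s*(?:No\.?|#)\s*[\w-]+", re.I), "[CLAIM NO.]"),
]

def first_pass_redact(text: str) -> str:
    """Apply pattern-based redactions as a first pass before attorney review."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

# Hypothetical sample: note that the name "John Doe" passes through
# untouched, which is exactly why pattern matching alone is not enough.
sample = "Insured John Doe, SSN 123-45-6789, Policy No. MO-44-1021, DOB 04/12/1965."
print(first_pass_redact(sample))
```

The script scrubs the structured identifiers but leaves the client’s name in place, which illustrates the point in the text: the low-cost automated pass reduces risk, but the judgment call about what still identifies the client belongs to the lawyer.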

At this point I could simply say: never use a non-enterprise AI system for anything in your practice. That is not bad advice; it may be the safest advice. However, you have to use a tool to understand its limitations and uses. If a lawyer or firm is willing to jump into full deployment based on a trial of an enterprise system or legal-specific AI, that is certainly a valid approach.

My brain doesn’t work that way. I need to explore a tool and see what it will and won’t do, what is reliable and what is not. My own journey started with using consumer AI to plan a trip. I quickly learned just how useful this new wave of technology is and was lulled into a sense of safety. That sense of safety was broken when I went to confirm parts of the itinerary it produced: certain train lines the AI recommended were closed or no longer existed. That began my journey of personal use, which in turn led to my limited use of AI for work.

My first steps with AI at work were to take several continuing legal education courses on the technology to learn how it works and what the ethics and professional standards require. I also learned how to construct prompts for different tasks and how to use AI with my own research and documents. Along the way I have found AI useful in article and blog writing, in constructing PowerPoints, in mapping insurance policies, and in creating timelines and outlines. It can also do a good job of drafting petitions, motions, discovery, and outlines when properly prompted. The ability to input text files or prompts that give viewpoint and style instructions maintains your own voice. Using your own database of documents and research produces outputs that are more reliable and more consistent with your previous work product. I consider, and so should you, all AI output to be first-pass work. Don’t be lulled into a sense of safety and find yourself without a train.

VI.  Professional Responsibility: What Missouri Lawyers Need to Know

Missouri courts, like courts around the country, are losing patience with careless use of AI:

“In the similar context of citing to fictitious cases, our sister court has recently warned litigants about the unfettered use of AI without an independent review. Kruse, 692 S.W.3d at 52; Stevens v. BJC Health Sys., 710 S.W.3d 602, 604 n.1 (Mo. App. E.D. 2025) (“Accordingly, in light of artificial intelligence’s increasing prevalence, we warn litigants that using artificial intelligence to draft a legal document may lead to sanctions if the user fails to perform a critical review of the end-product to ensure that fictitious legal authorities or citations do not appear in filings with this Court or any other court.”). Federal courts have done the same. See Willis v. U.S. Bank Nat’l Ass’n, 783 F. Supp. 3d 959, 961 (N.D. Tex. 2025) (internal quotation marks and citation omitted) (“And because artificial intelligence synthesizes many sources with varying degrees of trustworthiness, reliance on artificial intelligence without independent verification renders litigants – attorneys and pro se parties alike – unable to represent to the Court that the information in their filings is truthful.” (contained within the trial court’s “Standing Order Regarding Use of Artificial Intelligence”)). Chastain v. City of Kansas City, 728 S.W.3d 513, 525 (Mo. App. 2025).

The professional responsibility analysis for AI use by Missouri attorneys has two layers: (1) what the ABA has said at the national level and what Missouri has said specifically, and (2) what the courts are saying. Both matter. The ABA and Missouri rules, comments, and informal opinions give you all you need to know to use AI safely. The court decisions let you know that misuse will be punished.

Missouri’s informal opinion preceded the ABA’s formal guidance, and together they provide a reasonably complete framework — though the law in this area continues to develop.

A.  Missouri Informal Opinion 2024-11 — The Starting Point for Missouri Lawyers

On April 25, 2024, the Office of Legal Ethics Counsel and Advisory Committee of the Supreme Court of Missouri issued Informal Opinion 2024-11 addressing the ethical use of generative AI by Missouri lawyers. This is the most directly applicable guidance for Missouri practitioners and should be your first stop before ABA opinions.

The opinion was issued in response to a lawyer asking for guidance on developing a firm AI use policy. The Committee addressed Missouri Rules 4-1.1 (competence), 4-1.6 (confidentiality), 4-3.3 (candor to the tribunal), 4-5.1 (supervisory responsibilities), 4-5.3 (nonlawyer supervision), and 4-5.4 (professional independence).

On competence, the Committee stated that lawyers must understand the types of generative AI being used and the risks and benefits of those tools. On confidentiality, it required lawyers to carefully assess AI platforms and services to ensure client confidentiality is maintained — including reviewing the terms and conditions to understand the security of information input, how the platform uses that information, and what data sources the platform uses to generate responses. On candor to the tribunal, it specifically noted that AI tools are not always accurate and that the accuracy obligation under Rule 4-3.3 applies fully to AI-generated content. On supervision, it required that firms develop a written ethical framework for AI use with appropriate training for both lawyers and nonlawyers.

Practice Tip: Missouri Informal Opinion 2024-11 is available free at the Missouri Legal Ethics website. Read the full opinion. It is short, practical, and directly addresses the issues you face. The link appears in the Further Reading section at the end of this article.

B.  ABA Formal Opinion 512 — The National Framework

On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, Generative Artificial Intelligence Tools — the first formal ABA opinion directly addressing AI in legal practice. The Missouri Bar’s publication noted that the ABA opinion and Missouri Informal Opinion 2024-11 are “very complementary and very consistent with one another” and that Missouri lawyers should read both.

Opinion 512 addresses six ethical obligations: competence, confidentiality, communication with clients, candor toward the tribunal, supervisory responsibilities, and fees. Several of its conclusions deserve specific attention from Missouri practitioners.

On client disclosure and informed consent, Opinion 512 takes a more demanding position than many practitioners expect. The opinion recommends that lawyers secure clients’ informed consent before using client confidences in AI tools, and it specifically states that boilerplate consent included in engagement letters will not be adequate in all cases. This is a more demanding standard than simply adding an AI disclosure clause to your standard retainer — the opinion contemplates that disclosure must be specific enough to allow the client to make an informed decision. Whether Missouri will adopt this interpretation remains to be seen, but it represents the direction the profession is moving.

On fees, the opinion notes that lawyers may not charge clients for time spent learning a technology to be used for client matters generally. If a client specifically requests the use of a particular AI tool, the lawyer may charge for learning that tool. If AI reduces the time required for a task, the fee for that task should reflect the actual time and effort involved.

C.  Rule 4-1.6 — Confidentiality (The Central Rule)

Missouri Rule 4-1.6(a)

A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by Rule 4-1.6(b).

Missouri Rule 4-1.6(c)

A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.

Missouri Rule 4-1.6, Comment [18]

Paragraph (c) requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client. Factors to be considered in determining the reasonableness of the lawyer’s efforts include, but are not limited to, the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients.

The multi-factor “reasonable efforts” framework in Comment [18] is the operative legal standard for evaluating AI platform decisions. It is a practical, cost-benefit analysis, not an absolute prohibition on cloud-based or AI-assisted tools. ABA Formal Opinion 477R (2017) applied this same framework to cloud computing and held that lawyers may use cloud services for client data provided they make reasonable efforts to protect confidentiality. That analysis applies directly to AI platforms. A signed DPA, SOC 2 certification, a no-training commitment, and appropriate firm-level policies are the components of a reasonable-efforts showing for enterprise AI use.

D.  Rule 4-1.1 — Competence and Technology

Missouri Rule 4-1.1, Comment [8]

To maintain the requisite knowledge and skills, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

Comment [8] was added in 2012 and establishes that technology competence is a component of the duty of competence. For AI tools, this means two things. First, you must understand what the tools you are using can and cannot do well enough to evaluate their outputs — not merely use them. Second, as the tools evolve, your understanding must evolve with them. This is not a one-time exercise. Missouri Informal Opinion 2024-11 reinforces this, requiring lawyers to understand the risks and benefits of any AI platform before deployment.

E.  Rule 4-3.3 — Candor to the Tribunal (The Sanction Risk)

Missouri Rule 4-3.3(a)

A lawyer shall not knowingly: (1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer.

This rule is where the consequences of AI misuse become concrete and career-threatening. The number of attorneys sanctioned by courts for submitting filings containing AI-generated citations to cases that do not exist has grown steadily. Federal judges across the country, including in the Western District of Missouri, have issued show-cause orders, Rule 11 sanctions, and referrals to state bars for exactly this conduct. The Missouri Court of Appeals, Eastern District, has already addressed the citation of fabricated cases as a violation of candor obligations.

Missouri Informal Opinion 2024-11 specifically addresses this risk, noting that “at this point, generative AI tools are not always accurate, thereby requiring careful attention to competence and supervision” to avoid false statements to a tribunal. The obligation is not merely to verify that a case exists — it is to verify that it stands for the proposition for which you cite it, that the quotation is accurate, and that the case has not been reversed or distinguished on the point you need. This problem is not limited to solo practitioners and small firms relying on consumer AI models. Large firms with sophisticated IT departments, enterprise legal-specific AI tools, and full legal databases have encountered the same problem. The reason, of course, is not the technology.

“Men at some time are masters of their fates:
The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.”

— William Shakespeare, Julius Caesar, Act 1, Scene 2

Cassius’s line rings true today if we replace fate with AI. We lawyers are the problem. We are so busy, so pressed for time, and so results-oriented that we can be lulled by the seemingly easy, time-saving output of AI. As game-changing as AI can be, it is not a person, it has no judgment, and we cannot allow predictive outputs to be our final work product.

Every citation in every document that will be relied upon or that goes to any court must be independently verified through a current legal database, regardless of where the draft originated. This is not unique to AI — it is the standard of care. AI simply makes the failure mode more likely and more easily concealed from a casual reviewer.

Practice Tip: If a case in an AI-generated draft looks perfect for your argument but you cannot find it on Lexis, it does not exist. Do not cite it. Do not investigate further. Strike it and move on. The worst-case outcome of citing a fabricated case is sanctions, a bar referral, and a client who no longer has a lawyer.

F.  Rules 4-5.1 and 4-5.3 — Firm Governance and Vendor Supervision

Missouri Rule 4-5.1(a)

A partner in a law firm, and a lawyer who individually or together with other lawyers possesses comparable managerial authority in a law firm, shall make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct.

Missouri Rule 4-5.3 / Comment [3]

A lawyer may use nonlawyers outside the firm to assist the lawyer in rendering legal services to the client. When using such assistance, a lawyer must make reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations. The extent of this obligation will depend upon the circumstances, including the education, experience and reputation of the nonlawyer; the nature of the services involved; the terms of any arrangements concerning the protection of client information; and the legal and ethical environments of the jurisdictions in which the services will be performed.

Missouri Informal Opinion 2024-11 explicitly applies the nonlawyer supervision framework of Rule 4-5.3 to AI tools, treating the AI vendor as an outside service provider whose conduct the firm has an obligation to supervise and whose data handling terms must be evaluated before use. Rules 4-5.1 and 4-5.3 together require: a written firm AI use policy, training for all attorneys and staff who use AI tools, a process for evaluating vendor data handling before deployment, and ongoing monitoring as vendor terms change.

These are governance obligations that the AI product itself cannot satisfy. No matter how strong the vendor’s security posture, the firm must independently implement the policy framework and the training. If a staff member uses a consumer AI tool in violation of firm policy and discloses client information, the supervising attorney may be responsible under Rule 4-5.3(c) if the firm failed to take reasonable preventive measures.

G.  Rule 4-1.4 — Client Communication and Disclosure

The question of whether Missouri lawyers must disclose to clients that AI tools are being used in their representation is unsettled. Missouri Informal Opinion 2024-11 does not mandate disclosure in all cases. ABA Formal Opinion 512 takes a more demanding approach, recommending informed consent before inputting client confidences into AI tools and suggesting that boilerplate engagement letter language may not be sufficient.

The conservative and practical approach is to include meaningful AI disclosure language in engagement letters — language that identifies the types of tools used and explains that all AI-generated work product is reviewed by a licensed attorney before it is used. This protects the client, demonstrates good faith, and insulates the firm from the argument that a client was kept in the dark about how their information was handled. As state bar guidance continues to develop in this area, expect the disclosure obligation to become more specific.

VII.  A Practical Framework for Missouri Civil Litigation Practices

Drawing together the platform analysis and the professional responsibility requirements, the following framework is what I believe Missouri civil litigators should implement before using AI tools for client work.

Step One: Adopt a written firm AI use policy before anyone uses AI for client matters. The policy must identify which platforms are approved for which tasks, what information may and may not be entered into each platform, the required verification steps for AI-generated research and drafts, and who is responsible for monitoring vendor term changes. Missouri Informal Opinion 2024-11 requires this.

Step Two: Execute a data processing addendum with any enterprise AI vendor before entering client information. For Claude Teams, ChatGPT Enterprise, and similar platforms, a DPA is available and must be signed. Without a DPA, the platform should be treated as a consumer tool regardless of its marketing.

Step Three: Add meaningful AI disclosure to engagement letters and retainer agreements. The disclosure should be specific enough that a client understands that AI tools are used and that an attorney reviews all AI-generated work product. As the ethics guidance continues to evolve, revisit this language annually.

Step Four: Verify every case citation through a current legal database before it enters any work product that leaves the firm. This is the single most important operational safeguard. It prevents the most serious and most likely compliance failure.

Step Five: Train everyone in the firm who uses AI tools. The obligation under Rules 4-5.1 and 4-5.3 runs not just to attorneys but to all staff. A paralegal or legal assistant who enters client information into a consumer AI tool without authorization creates a potential Rule 4-1.6 problem, and the firm is responsible.

Practice Tip: None of these steps are burdensome if implemented from the start. The firms that will have problems are the ones that let staff and attorneys begin using AI tools organically — without a policy, without training, and without DPAs — and then try to retrofit compliance after something goes wrong.

VIII.  Conclusion

Artificial intelligence tools have moved well past the experimental stage. They are practical, available, and genuinely useful for civil litigation practice when selected and deployed thoughtfully. The lawyers who adopt and deploy this technology in compliance with professional obligations will be more effective for their clients. In my opinion every lawyer should learn to use these tools well — not as replacements for legal judgment, but as capable assistants that can do the time-consuming and mechanical work faster and more thoroughly than was previously possible.

The professional responsibility framework is workable. Missouri Informal Opinion 2024-11 and ABA Formal Opinion 512 together provide a clear road map: understand your tools, protect client confidences, verify AI outputs, govern your firm’s AI use with written policies and training, and disclose AI use to clients in a meaningful way. None of these requirements preclude AI use. They require that AI use be thoughtful and supervised.

I am convinced that these tools, properly used, make lawyers better — more thorough, more organized, and able to focus their professional judgment on the tasks that actually require it. But they require the same professional discipline that every other tool in the practice requires. The AI does not replace your judgment. It depends on it.

LLM platforms are sophisticated tools that enhance but never replace professional legal judgment. Verification, analysis, and strategic application remain the practitioner’s responsibility in every client matter.

Further Reading and Resources

The following sources are recommended for Missouri practitioners researching AI use and ethics. 

Missouri-Specific Resources

Missouri Informal Opinion 2024-11 (April 25, 2024) — The Office of Legal Ethics Counsel and Advisory Committee of the Supreme Court of Missouri. The essential starting point for Missouri lawyers developing AI use policies. Addresses competence, confidentiality, candor, and supervision under Missouri’s specific rules.

Missouri Bar Ethics Hotline — Direct consultation for confidential, informal guidance on specific AI implementation questions in your practice. Available to Missouri Bar members at (573) 635-4128.

Generative AI Ethics in Missouri and Beyond (Baker Sterchi, 2025) — A practical comparative analysis of how Missouri, Kansas, and Illinois have addressed AI ethics obligations. Good for understanding where Missouri sits in the national landscape.

ABA Ethics Opinions and Official Guidance

ABA Formal Opinion 512 — Generative Artificial Intelligence Tools (July 29, 2024) — The ABA’s first formal ethics opinion on AI in legal practice. Covers competence, confidentiality, communication, candor to the tribunal, supervisory responsibilities, and fees. Read this in full.

ABA Ethics Opinion on Generative AI Offers Useful Framework (ABA Business Law Today, Oct. 2024) — A clear, organized synopsis of Opinion 512 by Keith R. Fisher, covering all six ethical obligations addressed. Useful for understanding the full scope of the opinion without reading the full PDF first.

ABA Formal Opinion 477R — Securing Communication of Protected Client Information (2017) — The foundational ABA opinion applying the “reasonable efforts” standard to cloud-based services. Directly applicable to enterprise AI platform decisions. Available from the ABA’s ethics opinions page.

Current Ethics Opinions and Reports Related to Generative AI — NYC Bar Comprehensive Survey (2025) — A thorough 50-state survey of all current ethics opinions and bar association reports on AI, compiled by the New York City Bar Association. Invaluable for tracking where national guidance is moving.

AI and Attorney Ethics Rules: 50-State Survey (Justia, 2025) — Continuously updated survey of each state’s current ethics guidance on AI. Quick reference for whether a state has issued opinions and what their key requirements are.

Academic and Technical Resources on How LLMs Work

Large Language Models in Legal Systems: A Survey (Nature/Humanities and Social Sciences Communications, 2025) — Peer-reviewed academic survey of how LLMs function in legal contexts, with coverage of opportunities, challenges, bias, and ethical risks. Accessible to non-technical readers and well-suited as a reference for understanding the technology.

A View of How Language Models Will Transform Law (arXiv, 2024) — An academic paper examining the structural transformation of the legal profession from AI adoption — including what tasks will remain lawyer-dependent and why. Useful perspective on the longer-term picture.

Understanding and Utilizing Legal Large Language Models (Clio, 2025) — Practitioner-focused overview of what legal LLMs are, how they work, and how law firms are integrating them into practice. Readable and current.
