I will always remember the first time a senior lawyer asked me to review a contract drafted by ChatGPT. My hands trembled as I opened the document, not because I was afraid AI would replace me, but because of the weight of responsibility. Who is responsible when this thing goes wrong? The partner had no idea. Neither did I.
That was six months ago. That question still keeps me up at night, along with dozens more that did not exist when I started my paralegal career five years ago.
The legal world is evolving faster than many lawyers can keep up with. Courts are ruling on AI-generated evidence. Law firms are betting millions on tools expected to cut research time in half. Regulators are rushing to write rules for technology they barely understand.
This is what is really going on in 2026, and what it entails for your practice.
The Big Picture: Why This Year Matters
According to an American Bar Association report, 73 percent of law firms now use some form of AI, up from 38 percent just two years ago. Adoption numbers don't tell the whole story, though.
The real question is how courts, bar associations, and clients are responding to this shift. And it is raising new legal questions faster than anyone can answer them.
Liability Questions Nobody Can Answer Yet
Who is accountable when AI drafts a brief containing a hallucinated citation? The lawyer who didn't catch it? The firm that chose the tool? The company that built the AI?
Courts are beginning to answer these questions, and the answers are not uniform. A California district court recently sanctioned a lawyer who submitted AI-generated citations without checking them. The attorney argued that the AI tool had been marketed as trusted legal research software. The court wasn't sympathetic.
Meanwhile, a Texas firm successfully defended its use of AI as reasonable under the circumstances, even though the AI made minor errors. The difference? The firm documented its verification procedure and disclosed its AI use to the court.
The lesson: document everything. Show your work. Verify twice.
What Bar Associations Are Actually Doing
State bars are moving at different speeds. Some are ahead of the curve. Others are stuck in committee.
Florida's bar now requires lawyers to understand the AI tools they rely on and to supervise AI output the way they would supervise junior associates. That's a clear standard.
New York took a different approach. Its guidance focuses on data security and client confidentiality when using cloud-based AI tools. The concern is what happens to client information once it is fed into an AI system.
California's bar association published a 47-page opinion on AI use. The short version: lawyers must be competent with new technology, preserve client confidentiality, and avoid delegating decisions that require professional judgment to AI.
These aren't suggestions. They are ethical standards, and violating them can lead to discipline.
The Discovery Nightmare
Here's what happened to a colleague last month: opposing counsel produced 50,000 pages of discovery that AI had supposedly reviewed for privilege. The AI missed 127 privileged documents.
Was privilege waived? The judge said yes, by the producing party. The AI got no excuse. Neither did the lawyer who trusted it without spot-checking.
Discovery is where AI is most likely to save time. It is also where the risks are greatest. A single missed document can sink a case, or waive attorney-client privilege over an entire matter.
Courts are starting to require lawyers to describe their AI-assisted review processes in detail. Some judges want error rates. Others ask about training data and false-positive rates.
Most lawyers can't answer those questions about the tools they use.
The Billing Problem For Lawyers
Research that once took hours can now be done in minutes with AI. That's great for efficiency. It's terrible for billable hours.
Firms are struggling to price this work. Bill for the time the AI took? Bill for the value delivered? Move to flat fees?
Clients are asking direct questions. Why should they pay $500 an hour for work an AI completed in 15 minutes? Fair question. No easy answer.
Some firms are switching to value-based billing. Others are eating the cost difference and hoping volume makes up for it. Some are being upfront with clients about their AI use and negotiating new fee arrangements.
The firms that haven't figured this out yet are losing clients to the ones that have.
What Judges Actually Think
Judges are paying attention. They are reading about AI hallucinations. They are learning what these tools can and cannot do. And they are not impressed when lawyers blame the technology for their errors.
Some judges now ask about AI use at case management conferences. They want to know whether you used AI for research, drafting, or case strategy, and how you verified the output.
Some courts are drafting local rules on AI disclosure. Others are handling it case by case. The direction is clear: don't be secretive about your AI use.
Privacy Concerns Nobody Talks About
Where does client information go when you feed it into an AI tool? Who can access it? How long is it stored? What's it used for?
Most lawyers can't answer those questions about the tools they use every day.
The problem is worse with free AI tools. Their terms of service often allow them to train on your inputs, which means your client's personal data could end up in the AI's knowledge base and potentially surface for other users.
The State Bar of California warned about this explicitly. Client information is confidential, and feeding it into free AI tools may breach confidentiality obligations unless you obtain informed consent.
Even paid enterprise tools raise questions. Where are the servers? Who has access? What happens if the company is hacked or goes bankrupt?
These are not theoretical concerns. Several legal tech companies have suffered data breaches in the past year.
The Competence Requirement
Here's the part that keeps me up at night: Rule 1.1 of the rules of professional conduct requires lawyers to maintain competence in technology reasonably necessary to practice.
That standard used to mean knowing how to use email and Word. It may now mean understanding how large language models work, how training data introduces bias, and how to recognize when AI is hallucinating.
How many lawyers can explain the difference between supervised and unsupervised learning? Or what a confidence score is? Or why AI might be more accurate on some tasks than others?
Bar associations are starting to offer training on AI competence. Smart lawyers are taking it. The rest are hoping they won't need it.
They’re wrong.
What To Do Right Now
Don't wait for clarity. It isn't coming. Technology moves faster than the law.
Start documenting your AI use. Record which tools you use, why, and how you verify outputs. You'll need that paper trail if something goes wrong later.
Read the terms of service for your AI tools. Yes, all of them. Know where your data is stored and who can access it. If you can't get clear answers, switch to another tool.
Verify everything. The sanctioned attorney accepted AI-generated citations as genuine. They weren't. Sanctions cost far more time and money than verification.
Get client consent before running confidential information through AI. Put it in your engagement letter. Explain the risks and the benefits. Let clients decide.
Review your malpractice insurance. Ask your carrier about AI coverage. Most policies are silent on AI-related claims. Get the answer in writing.
Train your staff. Everyone who uses AI tools needs to understand the risks. Your paralegal's mistake is your ethical failure.
The Stuff Nobody’s Saying Out Loud
Here's what I've learned in four years as a paralegal and one as a lawyer: the people most certain they understand AI are usually the ones who understand it least.
The smartest lawyers I know are asking questions. They're running tests. They're being cautious. They aren't betting their clients' cases on something they don't understand.
AI will change legal practice. It already has. But it won't replace lawyers who think critically, verify their work, and stay client-focused.
The lawyers who get into trouble are the ones who treat AI as magic: the ones who think it can never be wrong and don't bother asking questions until things go badly.
Don’t be that lawyer.
Sources and Further Reading
For specific guidance on using AI in your jurisdiction, consult your state bar's ethics opinions. Most major state bars issued guidance in 2025 or early 2026.
The ABA Model Rules do not yet address AI explicitly, although Comment 8 to Rule 1.1 covers technology competence.
Legal research services such as Bloomberg Law track AI-related court cases and regulatory proceedings in real time. The subscription fee is worth it if you use AI tools regularly.
I have been in law for five years: four as a paralegal watching partners make technology decisions, and one as a lawyer making those decisions myself. I've watched firms spend money on AI services they didn't need and pass over ones they should have adopted. Success and failure usually come down to the questions you ask before committing.
Connect with me on LinkedIn for more on legal drafting and other legal topics!