If you’ve been following the hype, it sounds like AI is about to revolutionize everything from grocery shopping to courtroom litigation. For us family law attorneys, it already has — at least in small but significant ways. AI tools now help manage the mountains of paperwork, scheduling nightmares, and data-heavy discovery that come with divorce cases.
But with great tech comes great responsibility.
While AI might be the shiny new assistant in the law office, ethics are the guardrails keeping us from turning legal practice into an unsupervised science experiment. In family law, where privacy, accuracy, and human judgment are everything, these guardrails matter.
Let’s talk about why.
Competence: Yes, Lawyers Must Understand Their AI Tools
California’s Rules of Professional Conduct (Rule 1.1) impose on attorneys an ethical duty to remain competent in the technology they use, a duty echoed by the American Bar Association’s Formal Opinion 512 (2024).
That doesn’t mean we all have to become AI engineers. But it does mean:
- We need to understand how AI tools work, especially their limits.
- We must assess the risks and benefits of using AI in client matters.
- We’re responsible for supervising AI output, the same way we would supervise a paralegal or junior attorney.
Put simply: AI can draft your discovery requests faster than any human, but I’m the one who has to make sure they’re correct, complete, and legally sound before they go out the door.
Confidentiality: Safeguarding Sensitive Divorce Data
Family law involves deeply personal information: finances, child custody disputes, medical histories, allegations of abuse. When AI tools are involved in processing this data, confidentiality concerns are front and center.
According to both ABA guidance and state bar recommendations, attorneys must:
- Vet AI tools and cloud services for strong data security protections.
- Understand where and how client data is stored and processed.
- Avoid using AI platforms that share data for training large language models without client consent.
For example, tools like LawToolBox process data securely within a law firm’s private Microsoft 365 environment — a safer choice than free or public AI platforms with unclear data policies.
This matters because mishandling client data isn’t just embarrassing — it’s a potential ethics violation and malpractice risk.
Accuracy and the Hallucination Problem: Lawyers Are Still the Gatekeepers
One of the most famous AI blunders happened in 2023, when lawyers submitted a court brief filled with fake case citations generated by ChatGPT. The judge was not amused. (Mata v. Avianca, S.D.N.Y. 2023.)
This is called “AI hallucination” — when AI confidently fabricates information that looks real but isn’t.
For family law attorneys, this is a huge ethical landmine. Imagine AI hallucinating a case precedent about child custody or spousal support. If an attorney fails to verify that information, they could mislead the court, violate duties of candor (Rule 3.3), and face sanctions.
That’s why ethical use of AI means:
- Double-checking every citation.
- Fact-checking AI-generated summaries.
- Never filing anything AI drafted without personal attorney review.
AI can assist, but it cannot replace human legal judgment. Period.
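That verification step can even be partly mechanized. Here’s a minimal Python sketch — illustrative only, with a deliberately simplistic citation pattern and invented draft text — that pulls reporter-style citations out of an AI-drafted brief so an attorney can check each one, by hand, against Westlaw, Lexis, or the court’s own records:

```python
import re

# Rough pattern for reporter citations like "573 F. Supp. 3d 100"
# or "22 Cal. App. 5th 341". Real citation grammar is far messier;
# this is a triage tool, not a validator.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                       # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:\s?Supp\.)?\s?(?:2d|3d)?|"
    r"Cal\.(?:\s?App\.)?\s?(?:2d|3d|4th|5th)?)"
    r"\s+\d{1,4}\b"                       # first-page number
)

def citation_checklist(draft: str) -> list[str]:
    """Return the unique reporter citations found in a draft.

    Every item still needs human verification: does the case exist,
    does it say what the draft claims, and is it still good law?
    """
    return sorted(set(CITATION_RE.findall(draft)))

if __name__ == "__main__":
    draft = "See 573 F. Supp. 3d 100 and 22 Cal. App. 5th 341 ..."
    for cite in citation_checklist(draft):
        print("VERIFY BY HAND:", cite)
```

Even a crude filter like this turns “did I check every citation?” into a written checklist — but the checking itself stays with the lawyer.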
Bias and Fairness: Not All Data Is Created Equal
AI tools learn from historical data. But what if that data reflects biased outcomes?
For example, if a predictive analytics platform is trained on family law cases where mothers overwhelmingly received primary custody, its outputs might lean toward assuming that trend continues — regardless of your specific facts.
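To make that skew concrete, here’s a toy Python illustration — the 80/20 split is invented, not real custody data — showing that a naive model predicting from historical base rates alone will echo the majority outcome no matter what the current case looks like:

```python
from collections import Counter

# Hypothetical historical outcomes a predictive tool might be trained on.
# The 80/20 split is invented purely to illustrate the point.
historical_outcomes = ["mother"] * 80 + ["father"] * 20

def naive_prediction(history: list[str]) -> str:
    """Predict whatever outcome dominates the training data.

    Note what's missing: nothing about *this* case's facts
    (caregiving history, work schedules, the child's needs)
    enters the prediction at all.
    """
    return Counter(history).most_common(1)[0][0]

print(naive_prediction(historical_outcomes))  # -> "mother", every time
```

Real analytics platforms are far more sophisticated than this, but the lesson scales: if the training data encodes a historical trend, the output will tilt that way unless someone with judgment pushes back.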
The ethical lawyer’s role is to:
- Recognize and correct for inherent biases in AI recommendations.
- Ensure AI outputs are used as informative tools, not as gospel.
- Advocate for outcomes based on the client’s unique situation, not outdated trends.
The ABA and bar associations have raised serious concerns about bias in AI systems, urging lawyers to be vigilant about how these tools might perpetuate inequities if left unchecked.
Transparency: Telling Clients When AI Is Involved
Clients deserve to know when technology is being used in their case. While AI tools can help streamline tasks and lower costs, attorneys should be upfront about the role those tools play.
The ethical duty of communication (Rule 1.4) includes:
- Informing clients when AI tools are being used to assist with their case.
- Clarifying that all final work product is still supervised and approved by the attorney.
- Explaining the benefits (efficiency, lower cost) and limits (AI isn’t giving you legal advice).
Transparency builds trust — especially when people are wary of technology handling their personal divorce matters.
Ethics Are the Foundation, Not an Afterthought
At the end of the day, using AI in divorce law isn’t unethical. Using it irresponsibly is.
California family law attorneys must approach AI the same way we approach any new technology:
- With professional skepticism.
- With clear ethical oversight.
- With a commitment to client protection above all.
AI can help me process 1,000 pages of financial records faster. It can remind me of obscure filing deadlines. It can even draft a first version of a spousal support proposal. But it’s still my legal brain — my ethical obligation — that ensures those tools serve my clients well.
The machines are not taking over.
They’re just making the paperwork less painful.
