Not Quite SkyNet: Three Reasons Why ChatGPT Is No Replacement for a Good Attorney

By Scott E. Rahn and Meagan A. Paisley

Anxiety over the advancement of Artificial Intelligence (AI) technology has been on the rise over the last several years. The growing fears have been exacerbated by recent warnings about AI from tech industry leaders such as Elon Musk, CEO of Tesla and SpaceX; Steve Wozniak, co-founder of Apple Inc.; Sam Altman, CEO of OpenAI; and Geoffrey Hinton, known as the "Godfather of AI." Nevertheless, AI continues to advance, and its use is spreading across industries. From quick customer service on your cell phone provider's website to finance, banking, education, and transportation, AI crunches numbers, spots patterns, and analyzes hard data faster than any human ever could.

As AI moves into industries that involve not only numbers and patterns but artful expression, the fear of replacement only grows deeper. Anxieties about the role of AI, and about the possibility of Terminator becoming reality, consume newsstands and the front pages of news sites, with no "John Connor" to save us from the seemingly impending doom. However, as advanced as ChatGPT may seem, it is not quite SkyNet yet.

Here are three reasons why ChatGPT is not (yet) a reliable substitute for an attorney:

1. ChatGPT Is Not Authorized to Practice Law

ChatGPT is a form of AI that is trained on data samples, learns patterns from that data, and uses those patterns to make predictions or to generate new data or content.[1] While this tool is proving useful in automating certain aspects of customer service, language translation, and report generation,[2] ChatGPT still has limitations that keep it from (1) being suitable to take over certain jobs, and (2) taking over the world.
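To make the "learn patterns, then generate" idea concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works internally (ChatGPT uses a large neural network, not word counts); it only illustrates the same basic loop of learning which word tends to follow which from sample data, then generating new text from those learned patterns. The toy corpus and all names here are our own invention.

```python
import random
from collections import defaultdict

# Toy "training data": the model will learn which word follows which.
corpus = ("the court held that the motion was denied "
          "and the court found that the appeal was dismissed").split()

# "Training": record every observed next word for each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate new text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no pattern learned for this word; stop generating
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the court found that the motion was denied and"
```

Notice that the output is a fluent-sounding recombination of patterns in the training data, not understanding. Scaled up by many orders of magnitude, that is also why ChatGPT can sound confident while being wrong, as discussed below.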

While ChatGPT is starting to make its way into the legal profession, its current limitations keep it from truly being able to replace an attorney. According to NPR, Joshua Browder, founder of DoNotPay, developed an AI-based service using ChatGPT to help people contest traffic tickets and generate legal arguments for use at court hearings.[3] The first hearing using the AI service was set to take place in California on February 22, 2023. However, after "threats" from multiple state bar associations and even one district attorney's office alleging that use of DoNotPay's AI service would constitute the "unauthorized practice of law," DoNotPay cancelled the service's use at the hearing.[4]

While this might seem like the work of lawyers and bar associations fearing that their careers are on the chopping block, about to be replaced by automation, the reality is that AI runs afoul of the "unauthorized practice of law." One of the reasons state bar associations exist is to protect consumers. State bars use exams and other requirements as checks on people wanting to practice law, ensuring they meet a baseline competency in the law and in ethics before they may provide legal advice. Only after a person demonstrates that minimum competency may they be admitted to the bar and allowed to practice law.

No AI has been admitted to the bar, and therefore no AI can practice law. For this reason, AI is not ready to be your lawyer.

2. ChatGPT Can (and Does) Lie (Convincingly)

According to ChatGPT developer OpenAI, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."[5] Codecademy likewise cites ChatGPT's tendency to create "false information" and its ability to create "deceptive content" as risks and limitations of the tool.[6] Even worse, we've seen instances where ChatGPT will not only give incorrect information but then double down, confirming the lie to be true.

A professor at Texas A&M University-Commerce used ChatGPT in an attempt to ensure that his students had not plagiarized their end-of-year papers.[7] He asked ChatGPT whether it had written the student papers.[8] ChatGPT told the professor that, yes, it had in fact written every single paper his students had turned in.[9] The problem is that this was not true, and all but one student had their names cleared of the accusation.[10]

This subterfuge has already been seen in the legal field. When asked to generate a list of legal scholars who had been accused of sexual harassment, ChatGPT fabricated a list and even invented a story that one individual on the list had been accused of touching a student inappropriately during a trip.[11] The chatbot even cited an article purportedly from The Washington Post; the article did not exist.[12]

If you're looking for information on your attorney, opposing counsel, or a judge, or if you're investigating claims or looking for potential evidence against an opposing party, ChatGPT can, and likely will, make up information. Where a technology can lie and then affirm the veracity of its own mistruths, you have to independently verify everything it says, as the sketch below illustrates.
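What "independently verify" means in practice reduces to a simple rule: nothing an AI supplies gets relied upon until it is confirmed against an authoritative source. Here is a minimal sketch of that rule, assuming a stand-in "verified source" (the case names, the VERIFIED_REPORTER set, and is_real_citation are all hypothetical; in real practice the check would be Westlaw, LexisNexis, or the court's own records, performed by a human):

```python
# Stand-in for an authoritative source such as Westlaw, LexisNexis, or a
# court docket. Every citation below is invented for illustration.
VERIFIED_REPORTER = {
    "Example v. Sample, 123 F.3d 456 (9th Cir. 1997)",
}

def is_real_citation(citation: str) -> bool:
    """True only if the citation is confirmed by an independent, trusted source.
    Asking the chatbot itself 'is this real?' does not count as verification."""
    return citation in VERIFIED_REPORTER

ai_supplied = [
    "Example v. Sample, 123 F.3d 456 (9th Cir. 1997)",  # real (in this sketch)
    "Made-Up v. Fabricated, 999 U.S. 999 (2020)",       # hallucinated
]

for cite in ai_supplied:
    status = "verified" if is_real_citation(cite) else "DO NOT CITE -- unverified"
    print(f"{cite}: {status}")
```

The design point is the one the next example makes painfully real: the verification step must live outside the model, because the model will happily vouch for its own inventions.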

Perhaps the most well-known instance of AI providing false information comes from New York, where two attorneys and a law firm are facing orders to show cause because they used ChatGPT to conduct legal research, which they cited in their briefs. The problem was that the cases did not exist.[13] To make matters worse, the lawyers doubled down and simply asked ChatGPT whether the citations were real; ChatGPT lied, saying "yes," and claimed that the cases could be found on LexisNexis and Westlaw.[14] The attorneys failed to independently verify ChatGPT's findings, which left them and the law firm in hot water.

A resource that provides false information, and can leave a person in more trouble than they started with, is hardly a resource at all. ChatGPT's ability to make up case citations makes it a dangerous weapon against the very person who wields it. One would be better served by hiring an attorney who will check any and all citations.

3. Security Risks and Attorney-Client Privilege When Using ChatGPT

Attorneys owe a duty of confidentiality to their clients, yet using ChatGPT could violate that duty. According to Codecademy, "ChatGPT is collecting data on the ways it is being used, including information being put into prompts. This is important to consider when entering sensitive information."[15] The magic of ChatGPT is that it learns from the data it collects. What that means is that whatever information is put into ChatGPT can no longer be treated as confidential.
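If a firm does experiment with such tools, the minimum discipline is to keep client-identifying details out of prompts altogether. Here is an illustrative sketch of a crude redaction pass run before any text leaves the firm (the patterns and names are our own, and a few regexes are a floor, not a substitute for human confidentiality review):

```python
import re

# Obvious identifier shapes: email addresses, US phone numbers, SSN-like strings.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable identifiers with labeled placeholders before the
    prompt is sent anywhere outside the firm. Names, facts, and case details
    still slip past regexes -- a human must review what remains."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Re: Roe trust dispute -- client jane.roe@example.com, 555-123-4567"))
# -> "Re: Roe trust dispute -- client [REDACTED EMAIL], [REDACTED PHONE]"
```

Note that "Roe trust dispute" survives the pass; automated scrubbing cannot tell which facts are privileged, which is exactly why the confidentiality risk cannot simply be engineered away.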

Even if confidentiality is not an issue, a data breach might be. According to Security Intelligence, OpenAI confirmed a data breach that allowed some users to see other users' first and last names, email addresses, payment addresses, and some credit card information.[16] Some companies are beginning to ban the use of ChatGPT, citing security risks to their data.[17]

Unlike AI, attorneys and law firms take every precaution they can to avoid security breaches and to protect you.

While ChatGPT isn't ready to replace lawyers, it can still be a useful research tool, as long as you "trust but verify."

I did reach out to the OpenAI chatbot to request comment. Here is the conversation.

Me:

Hello, I’m writing an article about why AI is not suitable for use in the legal field. Would you like to comment?

OpenAI:

Certainly! I’d be happy to provide some insights on why AI may not be suitable for use in the legal field. While AI has shown great potential in various domains, including healthcare, finance, and customer service, there are several challenges and considerations specific to the legal field that warrant discussion. Here are a few points to consider:

Interpretation of Legal Texts: Legal documents, such as contracts, statutes, and case law, often involve complex language and require nuanced interpretation. AI systems typically rely on pattern recognition and statistical analysis, which may not capture the subtle and context-specific meanings of legal texts accurately. Legal interpretation requires a deep understanding of legal principles, historical context, and precedent, which can be challenging for AI systems to emulate.

Lack of Human Judgment and Ethics: Legal cases often involve ethical considerations and require human judgment. The application of legal principles often involves balancing competing interests and considering broader societal implications. AI lacks the ability to comprehend moral reasoning, empathy, and other intangible aspects that human legal professionals possess. Relying solely on AI systems could lead to decisions that fail to account for the nuances and complexities of legal dilemmas.

Bias and Fairness: AI systems learn from data, and if the training data contains biases, it can perpetuate and amplify those biases. In the legal field, fairness and equal treatment under the law are fundamental principles. However, biased training data or flawed algorithms can inadvertently introduce or reinforce biases, leading to unjust outcomes. Ensuring fairness and mitigating bias in AI systems is a complex and ongoing challenge.

Lack of Accountability and Transparency: AI algorithms, particularly deep learning models, can be highly complex and opaque. This lack of transparency can hinder legal professionals’ ability to understand how AI systems arrive at their decisions, making it difficult to assess the reliability and validity of the outcomes. In legal proceedings, transparency and accountability are crucial, and the black-box nature of AI systems can raise concerns about due process and the right to challenge decisions.

Ethical and Legal Responsibility: Legal professionals are bound by ethical codes and legal obligations to uphold the principles of the legal system. When AI systems are involved in legal decision-making, questions arise regarding the allocation of responsibility and accountability. If an AI system makes an incorrect or unjust decision, who should be held responsible? This issue of legal and ethical responsibility is yet to be fully resolved in the context of AI in the legal field.

It is worth noting that AI can still play a supportive role in legal practice, such as automating routine tasks, legal research, or document analysis. However, the ultimate decision-making and judgment should remain in the hands of human legal professionals who can consider the broader legal and ethical implications.

In conclusion, while AI has the potential to enhance certain aspects of legal practice, the risks of relying fully on AI make it unreliable for legal work. Without human oversight, judgment, ethics, fairness, and accountability, AI will be unable to replace us.


[1] Codecademy. Introduction to ChatGPT. https://www.codecademy.com/learn/intro-to-chatgpt/modules/introduction-to-chat-gpt/cheatsheet. Accessed: June 7, 2023.

[2] Leighton, Nick. 6 Ways Business Leaders Should Integrate ChatGPT. Forbes. February 22, 2023. https://www.forbes.com/sites/forbescoachescouncil/2023/02/22/6-ways-business-leaders-should-integrate-chatgpt/?sh=4ba3e2db6c61. Accessed: June 7, 2023.

[3] Allyn, Bobby. A robot was scheduled to argue in court, then came the jail threats. NPR. January 25, 2023. https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats. Accessed: May 30, 2023.

[4] Id.

[5] OpenAI. Introducing ChatGPT. November 30, 2022. https://openai.com/blog/chatgpt. Accessed: June 7, 2023.

[6] Codecademy. Risks and Limitations of ChatGPT. https://www.codecademy.com/learn/intro-to-chatgpt/modules/risks-and-limitations-of-chat-gpt/cheatsheet. Accessed: June 7, 2023.

[7] Verma, Pranshu. A professor accused his class of using ChatGPT, putting diplomas in jeopardy. The Washington Post. May 18, 2023. https://www.washingtonpost.com/technology/2023/05/18/texas-professor-threatened-fail-class-chatgpt-cheating/. Accessed: May 30, 2023.

[8] Id.

[9] Id.

[10] Id.; the one remaining student admitted to using ChatGPT in the course.

[11] Verma, Pranshu and Will Oremus. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused. The Washington Post. April 5, 2023. https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/. Accessed: May 31, 2023.

[12] Id.

[13] Armstrong, Kathryn. ChatGPT: US lawyer admits using AI for case research. BBC News. May 27, 2023. https://www.bbc.com/news/world-us-canada-65735769. Accessed: May 30, 2023.

[14] Id.

[15] Codecademy. Risks and Limitations of ChatGPT. https://www.codecademy.com/learn/intro-to-chatgpt/modules/risks-and-limitations-of-chat-gpt/cheatsheet. Accessed: June 7, 2023.

[16] Poremba, Sue. ChatGPT Confirms Data Breach, Raising Security Concerns. Security Intelligence. May 2, 2023. https://securityintelligence.com/articles/chatgpt-confirms-data-breach/. Accessed: June 9, 2023.

[17] Tilley, Aaron and Miles Kruppa. Apple Restricts Employee use of ChatGPT, Joining Other Companies Wary of Leaks. The Wall Street Journal. May 18, 2023. https://www.wsj.com/articles/apple-restricts-use-of-chatgpt-joining-other-companies-wary-of-leaks-d44d7d34. Accessed: June 9, 2023.

RMO LLP provides personal and efficient inheritance dispute services to individual and institutional clients. The firm’s attorneys focus on probate litigation involving contested trust, estate, probate, and conservatorship matters. Serving California and Texas, with offices in Los Angeles, Pasadena, Orange County, San Diego, Fresno, the Bay Area, Dallas, and Houston. For more information, please visit https://rmolawyers.com/.

About the Author

Scott Rahn, Founding Partner

Scott Rahn resolves contests, disputes and litigation related to trusts, estates and conservatorships, creating a welcome peace of mind for clients. He represents heirs, beneficiaries, trustees and executors. He utilizes his experience to develop and implement strategies that swiftly and efficiently address the financial issues, fiduciary duties and emotional complexities underlying trust contests, estate conflicts and probate litigation.