Will ChatGPT Drive Lawyers Out of Business?

Introduction

ChatGPT, launched on November 30, 2022 by San Francisco-based OpenAI, has seen incredible growth since then; it attracted over one million users within five days of release and continues to grow at a rapid pace.

It is an innovative tool that blurs the distinction between machine and human creativity, providing precise, detailed, and articulate answers across a wide range of knowledge domains.

If you are concerned about your job in light of ChatGPT's arrival, fear not! While AI and automation may change the legal profession, they are unlikely to completely replace human lawyers anytime soon. Lawyers will still play an important role in providing legal advice and representation to clients, while AI language models can assist them in their work. We discuss below how ChatGPT can complement your practice.


Potential Uses of ChatGPT

Lawyers may use ChatGPT in a variety of ways, such as:

  1. Early Stage Research: Lawyers may use ChatGPT to conduct legal research and obtain answers to simple legal questions. ChatGPT has been trained on vast amounts of data and can assist lawyers in finding relevant cases, laws, and regulations.
  2. Drafting: Lawyers may use ChatGPT to assist in drafting simple legal documents such as contracts, pleadings, and motions. ChatGPT is quite good at providing suggestions and language for small paragraphs that can be incorporated into full legal documents.
  3. Communication: ChatGPT can help lawyers to quickly generate responses to emails or messages.
  4. Analysis: Lawyers may use ChatGPT to analyze and understand the basic landscape of unfamiliar legal topics, which can be especially useful in complex legal cases.

One especially practical use is drafting client letters and email responses. As a language model trained on vast amounts of data, ChatGPT can generate text that sounds natural and coherent. However, it’s important to keep in mind that ChatGPT is a machine-learning model and may not always provide accurate or appropriate responses.

Lawyers can use ChatGPT to generate a first draft of a client letter or email response, but they should carefully review and edit the text to ensure that it accurately reflects the advice and guidance they want to provide to their clients. Lawyers should also be cautious about sharing confidential information with ChatGPT, as it’s important to protect client confidentiality at all times.
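As a rough sketch of what that first-draft step might look like programmatically (assuming access to the openai Python package and an API key in the environment; the model name and prompt wording below are placeholders, not a recommended setup):

```python
# Minimal sketch: asking a chat model for a first draft of a client update letter.
# Assumes the `openai` Python package (v1-style client) and an OPENAI_API_KEY
# environment variable. Model name and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a short, professional letter updating a client on the status of "
    "their contract dispute. Do not cite specific cases. Use placeholders such "
    "as [CLIENT NAME] and [MATTER NUMBER] instead of real identifying details."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,        # lower temperature for more conservative drafting
)

draft = response.choices[0].message.content
print(draft)  # the attorney reviews and edits this draft before any use
```

Note that the prompt deliberately avoids real client details and asks for placeholders instead; the output is only a starting point for attorney review.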

Overall, ChatGPT can be a useful tool for lawyers to streamline their communication with clients, but it’s important to use it responsibly and in conjunction with other legal research and analysis tools.

It’s important to note that while ChatGPT can be a useful tool for lawyers, it should not be used as a substitute for legal advice from a qualified attorney. ChatGPT is a machine learning model and can’t provide legal advice or legal representation.

Limitations of ChatGPT in the Legal Field

While ChatGPT and similar AI technologies have shown impressive capabilities, they also have limitations when it comes to their application in the legal field. Understanding these limitations is crucial for legal professionals who are considering incorporating AI tools into their practice. Here are some key limitations of ChatGPT in the legal context:

1. Uncertainty and Confidence

AI models present their output in fluent, assertive prose regardless of how reliable the underlying answer actually is. ChatGPT may therefore generate responses that appear confident but are in fact incorrect or speculative. Legal professionals should be mindful of this uncertainty, critically evaluate AI-generated content, and seek additional verification when necessary.

2. Ethical and Professional Responsibility

Legal professionals have ethical and professional obligations to their clients. Relying solely on AI-generated content without human review may not meet these obligations. Legal professionals must exercise their judgment, apply legal reasoning, and ensure that the advice they provide is accurate, comprehensive, and aligned with the best interests of their clients.

3. Confidentiality Issues in ChatGPT Usage

Confidentiality is a critical aspect of the attorney-client relationship, and legal communications must be kept confidential. Keep in mind, however, that as an AI language model, ChatGPT is not a licensed attorney and is not part of the attorney-client relationship.

When an individual poses a question to ChatGPT, the information provided is typically not considered confidential, as there is no attorney-client relationship established between the individual and ChatGPT. However, it’s important to be cautious about sharing sensitive information with any third-party service, including ChatGPT.

It’s also worth noting that some companies and organizations that provide language models and other AI tools may have their own privacy policies and terms of service that govern how they handle user data. It’s important to review these policies carefully to understand how the data provided to ChatGPT or other AI tools may be used or shared.

For example, OpenAI’s FAQ states the following:

Can you delete specific prompts?

No, we are not able to delete specific prompts from your history.  Please don’t share any sensitive information in your conversations.

OpenAI’s Privacy Policy also states:

Communication Information: If you communicate with us, we may collect your name, contact information, and the contents of any messages you send (“Communication Information”).

As such, attorneys must take care not to disclose confidential information to publicly accessible large language models like ChatGPT.
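One practical safeguard, sketched below purely as an illustration, is to strip obvious identifiers from any text before it is sent to a third-party model. The patterns shown are hypothetical and deliberately simple; they will miss many identifiers, and automated redaction is no substitute for attorney judgment about what should be shared at all.

```python
import re

# Illustrative-only redaction pass: replace obvious identifiers with placeholders
# before sending text to a third-party model. Real matters need far more care;
# simple patterns like these will miss names, addresses, and context clues.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                  # US-style SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,5}\b"), "[CASE NUMBER]"),   # hypothetical docket format
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Client Jane Doe (jane.doe@example.com, SSN 123-45-6789) in case 1:23-cv-04567."
    print(redact(sample))
    # -> "Client Jane Doe ([EMAIL], SSN [SSN]) in case [CASE NUMBER]."
    # The client's name still leaks through, which is exactly why automated
    # redaction cannot replace attorney judgment.
```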


4. Lack of Legal Expertise

ChatGPT is a language model trained on a diverse range of texts, but it does not possess specialized legal expertise or domain-specific knowledge. It may struggle to understand intricate legal concepts, interpret complex statutes, or provide accurate legal advice in nuanced situations. Legal professionals should exercise caution and consider AI-generated content as a starting point for further analysis and validation.

5. Incomplete or Outdated Information

AI models like ChatGPT rely on the data they are trained on, and they may not have access to the most up-to-date legal information. Laws, regulations, and legal precedents are constantly evolving, and it is essential to consult current legal sources to ensure accuracy and compliance. Legal professionals must be aware that AI tools may not reflect the most recent legal developments.

6. Limited Contextual Understanding

ChatGPT lacks contextual understanding beyond the text it is provided. It may struggle to grasp the broader context of a legal issue, including specific circumstances, case-specific facts, or the intent of the parties involved. This limitation can affect the accuracy and relevance of AI-generated responses in legal scenarios that require a comprehensive understanding of the context.

7. Accuracy Is a Concern

One of the most concerning aspects of ChatGPT’s accuracy is its tendency to produce “hallucinations”: answers that sound correct but are actually wrong. This happens because the model generates text by predicting statistically likely word sequences, not by verifying facts against a reliable source.

Legal professionals who rely on the system for their work therefore face a reliability problem: its answers are not consistently trustworthy, and flawed answers should not be relied upon by attorneys without independent verification.

Another issue is bias in its output. For instance, it tends to produce the response most likely to please the questioner rather than the most accurate one, and it frequently echoes its earlier responses in a conversation when crafting new ones.

Therefore, it is essential to carefully evaluate its output for accuracy and suitability before using it as a resource. If a lawyer relies on an AI tool to complete their work, they should ensure that the tool has been trained to produce reliable and useful outcomes.

ChatGPT is built upon OpenAI’s GPT-3.5 large language models, which use deep learning to predict language structure and generate text from prompts that mimics human speech and writing patterns. Compared with earlier large language models, which were less user-friendly and required carefully structured prompts, ChatGPT provides a far more intuitive experience.
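To make that mechanism concrete, the toy example below mimics how a language model chooses each next word by sampling from a probability distribution over candidates. The vocabulary and scores are invented for illustration; the point is only that nothing in the process checks facts, which is why fluent output can still be wrong.

```python
import math
import random

# Toy illustration of next-token prediction: the "model" here is just a made-up
# table of scores, but the sampling logic mirrors how a real language model
# picks each word by probability rather than by checking facts.
fake_logits = {"granted": 2.1, "denied": 1.9, "dismissed": 0.7, "appealed": 0.3}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(scores):
    probs = softmax(scores)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0], probs

next_token, probs = sample_next_token(fake_logits)
print("P(next token):", {t: round(p, 2) for t, p in probs.items()})
print("Prompt: 'The motion was' ->", next_token)
# Both "granted" and "denied" are plausible continuations, and nothing in this
# process consults the actual case record, which is how confident-sounding
# hallucinations arise.
```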


Additionally, ChatGPT cannot perform as many tasks as a human, because it lacks specific knowledge in many domains. This is an important limitation of large language models generally, since it restricts the range of tasks they can handle reliably.

Though these obstacles may seem significant, they are not insurmountable. Future versions of large language models will likely improve in these areas; however, this will take time, as the systems are not currently designed for such tasks and must rely on external tools for assistance.

8. We Don’t Know How It Reached a Conclusion

The internet and the technologies that emerged from it can feel like a giant engine for deceit, scaling up access to speech and amplifying lies. Deep-learning AI compounds this by hiding the inner workings of chatbots, so that nobody, not even their creators, can fully explain what they do.

For instance, the machine-learning models behind ChatGPT remain poorly understood even by experts. They consist of many layers of interconnected parameters, and the path from a given prompt to a particular output runs through computations that no human can readily trace.

One of the limitations of ChatGPT and other machine learning models is that they can sometimes be difficult to interpret. This is because these models work by analyzing vast amounts of data to identify patterns and make predictions. It can be challenging to trace how exactly they arrived at a particular recommendation.

However, there are techniques that can be used to increase the interpretability of machine learning models like ChatGPT. For example, some researchers have developed methods for visualizing the internal workings of neural networks like ChatGPT, which can help to shed light on how the model is processing information and making decisions. Other techniques, such as feature importance analysis, can be used to identify which aspects of the input data are most important in influencing the model’s output.
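As a toy illustration of the feature-importance idea, the sketch below removes each word from an input in turn and measures how much a stand-in “model” score changes. The scoring function is invented for the example; real attribution methods for large neural networks are far more sophisticated, but the perturb-and-observe principle is the same.

```python
# Toy occlusion-style feature importance: drop each word in turn and see how much
# a stand-in "model" score changes. The scoring function is invented purely to
# illustrate the idea; real attribution methods for neural networks are far more
# involved, but the principle (perturb the input, watch the output) is the same.
KEYWORD_WEIGHTS = {"breach": 2.0, "contract": 1.5, "damages": 1.0}

def toy_model_score(words):
    return sum(KEYWORD_WEIGHTS.get(w.lower(), 0.0) for w in words)

def occlusion_importance(sentence):
    words = sentence.split()
    baseline = toy_model_score(words)
    importance = {}
    for i, word in enumerate(words):
        without = words[:i] + words[i + 1:]
        importance[word] = baseline - toy_model_score(without)
    return importance

scores = occlusion_importance("The breach of contract caused significant damages")
for word, delta in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word:12s} importance = {delta:.1f}")
# "breach", "contract", and "damages" dominate the score, showing which inputs
# drive the (toy) model's output.
```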

It’s also worth noting that while interpretability is an important goal in machine learning, it’s not always the only or even the most important consideration. In some cases, the accuracy or performance of the model may be more important than its interpretability. However, in domains such as law, where transparency and explainability are crucial, efforts are being made to develop machine learning models that are both accurate and interpretable.

9. ChatGPT Can’t Argue on a Particular Side

ChatGPT, as a machine learning model, cannot be directed to take a position the way a lawyer can be trained to argue either side of a case. It is designed to generate text based on the input it receives and the patterns it has learned from its training data, and it does not have the discretion or critical-thinking ability of a human lawyer.

For example, I tested ChatGPT by asking it to draft a petition to increase rent from the landlord’s side. Despite repeated attempts, ChatGPT took the tenant’s side, which suggests that the text ChatGPT was trained on is more consumer-oriented than landlord-oriented.


That being said, ChatGPT can be useful in generating arguments and identifying potential counterarguments for both sides of a legal case. By inputting information about the case, such as relevant laws and regulations, ChatGPT can generate text that outlines different arguments and positions that both the plaintiff and the defendant might take. This can help lawyers to better understand the strengths and weaknesses of each side’s position, and to develop a more nuanced and well-informed legal strategy.
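A hedged sketch of how such a both-sides prompt might be structured follows; it reuses the same assumed openai client setup as the earlier drafting example, and the issue described is hypothetical.

```python
# Sketch of prompting for both sides of an issue explicitly, so the model is not
# left to pick a side on its own. Same assumptions as the earlier example:
# the `openai` package (v1-style client) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

issue = "whether a landlord may raise rent mid-lease under a hypothetical statute"

prompt = (
    f"For the issue of {issue}, list three arguments the landlord might raise, "
    "three arguments the tenant might raise, and the main counterargument to each. "
    "Present both sides neutrally; do not recommend a position."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# The output is a starting point for attorney analysis, not legal advice.
```

Asking for both sides in the prompt itself, rather than hoping the model stays neutral, tends to surface counterarguments the lawyer can then test against the actual law.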

It’s important to keep in mind, however, that ChatGPT is a machine-learning model and can’t provide legal advice or representation. Ultimately, the arguments and positions that lawyers choose to take in a legal case are based on their own judgment, experience, and knowledge of the law. While ChatGPT can be a helpful tool in the legal process, it should be used in conjunction with other research and analysis methods, and the final decisions and arguments should be made by trained legal professionals.

10. Lack of Emotional Intelligence and Interpersonal Skills

Legal practice often involves emotional support, empathy, and interpersonal skills when interacting with clients, opposing parties, or in court proceedings. ChatGPT lacks the ability to understand and respond to emotional cues, which can limit its effectiveness in situations that require human empathy, negotiation, or strategic decision-making.

Conclusion

As an AI language model, ChatGPT can provide information and generate text in response to legal questions. However, it is important to note that ChatGPT is not a licensed attorney and cannot provide legal advice or engage in the unauthorized practice of law.

The practice of law involves providing legal advice and representation to clients, which requires a license to practice law. In most jurisdictions, only licensed attorneys are authorized to engage in the practice of law, and non-attorneys who provide legal advice or representation may be engaging in the unauthorized practice of law.

While ChatGPT can provide information and generate text based on legal knowledge and data, it is not capable of providing legal advice or representation. It is important for individuals to seek advice and representation from licensed attorneys for any legal matters that they may be dealing with.

In short, ChatGPT cannot engage in the unauthorized practice of law or provide legal advice or representation. Last but not least, I asked ChatGPT the following question:

Good thing that ChatGPT knows who’s the boss!