As technology advances in the legal industry, legal professionals are increasingly exploring large language models (LLMs) such as ChatGPT and Google’s Bard to enhance their practices. The potential of LLMs to revolutionize the legal profession is undeniable, but integrating these models into the practice of law raises ethical considerations.
Background on LLMs
The rapid pace of technological progress in the area of LLMs is driving the popularity and diverse applications of ChatGPT and other platforms. As a result, legal professionals are increasingly interested in understanding how LLMs operate and their potential to optimize legal practice. However, the integration of this novel tool into the practice of law raises ethical considerations that must be addressed to ensure the responsible use of the technology.
But first, here is a quick, simple primer on how ChatGPT works, including an explanation of the terms associated with the platform and examples of prompts geared toward legal work: Why ChatGPT Matters for the Future of Legal Services.
Confidentiality and ChatGPT

Much of the discussion on attorney-client confidentiality centers on shielding sensitive information from unintended recipients, e.g., cloud-based cybersecurity or email encryption. A prudent attorney must consider how clients’ information is being received, transmitted, stored, and even destroyed.
It’s not unusual for an attorney to utilize legal research tools such as Westlaw or Fastcase by inputting their clients’ legal issues or even specific facts at issue. But what about when legal professionals share those issues and facts with ChatGPT? The comments to Rule 1.6 (Confidentiality of Information) offer some reminders:
- “Paragraph (e) requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client or who are subject to the lawyer’s supervision. See Rules 1.1, 5.1 and 5.3. The unauthorized access to, or the inadvertent or unauthorized disclosure of, information relating to the representation of a client does not constitute a violation of paragraph (e) if the lawyer has made reasonable efforts to prevent the access or disclosure…”
- “When transmitting a communication that includes information relating to the representation of a client, the lawyer must take reasonable precautions to prevent the information from coming into the hands of unintended recipients. This duty, however, does not require that the lawyer use special security measures if the method of communication affords a reasonable expectation of privacy…”
OpenAI’s Terms of Use also address how the content you submit is handled:

- “3(c) Use of Content to Improve Services. We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services. We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services. You can read more here about how Non-API Content may be used to improve model performance. If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form. Please note that in some cases this may limit the ability of our Services to better address your specific use case.”
The ChatGPT General FAQ page further emphasizes, “Please don’t share any sensitive information in your conversations.”
- “5(b) Security. You must implement reasonable and appropriate measures designed to help secure your access to and use of the Services.”
Just as information transmitted to the cloud must be secured, information shared with an LLM platform must be properly safeguarded. You must “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Rule 1.6(e).
Therefore, attorneys and legal professionals may use ChatGPT for general legal research, writing, and brainstorming, but should avoid providing specific details of a client’s case or disclosing personal or confidential information.
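One way a firm might operationalize this “reasonable efforts” standard in practice is to scrub identifying details from a draft prompt before it ever leaves the office. The sketch below is purely illustrative (it is not from the article, and it is no substitute for a firm’s own policy or review): it masks email addresses, US-style phone numbers, and any client-specific terms the drafter supplies.

```python
import re

def redact_prompt(text, client_terms):
    """Replace client-identifying details with neutral placeholders
    before a draft prompt is shared with an external LLM service."""
    # Mask email addresses and US-style phone numbers.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    # Mask case-specific terms the drafter flags (names, matter numbers, addresses).
    for term in client_terms:
        text = re.sub(re.escape(term), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

prompt = ("My client Jane Roe (jane.roe@example.com, 312-555-0142) "
          "wants her security deposit back from her Chicago landlord.")
print(redact_prompt(prompt, ["Jane Roe"]))
# → My client [CLIENT] ([EMAIL], [PHONE]) wants her security deposit
#   back from her Chicago landlord.
```

Note that the redacted prompt still conveys the general legal question (a security deposit dispute in Chicago) without revealing who the client is — which is exactly the line the confidentiality rules ask attorneys to draw.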
Supervising people and AI
Rules 5.1 and 5.3 state that attorneys have a duty to supervise lawyers and non-lawyers working with them. Attorneys should ensure that those in their organizations who use ChatGPT, lawyers and non-lawyers alike, are properly trained and understand the ethical considerations surrounding its use. As in my earlier discussion of attorneys’ use of chatbots, this duty extends both to others’ use of ChatGPT and to ChatGPT itself.
The responses generated by ChatGPT can be imperfect and even problematic. LLMs such as ChatGPT are trained on very large amounts of text data with a fixed cutoff date, so they may not always provide the most up-to-date or relevant information on a given legal topic, even when the prompt directs them to focus on a particular context. Review and refine ChatGPT responses to ensure that they accurately reflect the unique circumstances of the client’s case and provide comprehensive responses to legal issues.
For example, while ChatGPT can quickly generate a step-by-step guide for a simple legal problem such as returning a security deposit, jurisdictional nuances such as local ordinances or court document requirements are more error-prone. But you cannot blame the bot, as ChatGPT can only generate text based on patterns it has learned from the data on which it was trained. And when it hasn’t been trained on the data you need, it’s pretty darn good at fabricating responses instead.
Attorneys know to closely examine the subsequent treatment of a case (i.e., Shepardize) to confirm it remains good authority before relying on it. Likewise, attorneys should supervise and review any output generated by ChatGPT (e.g., see ChatGPT hallucinations example) and train their legal professionals to verify outputs before using them.
Plagiarism and ChatGPT detection
Regulation of artificial intelligence use will be a growing issue for industries and governments, including the legal profession. At the February 2023 ABA Midyear Meeting, the ABA adopted a resolution that takes a first step toward emphasizing fundamental concepts such as accountability, transparency, and traceability in ensuring the trustworthiness of AI systems.
Regardless, attorneys remain ultimately responsible for their work product and advice to their clients. Even when the content is accurate, attorneys want assurance that it is original. While ChatGPT is designed to generate original responses to prompts, its outputs are produced by analyzing vast amounts of existing text and can reproduce strings of content from other works. So, if you are directly using portions of a response as your work product, it would be prudent to check the information with a plagiarism checker and be transparent about your use of LLMs. Three free tools include Grammarly, Chegg, and Quetext.
Likewise, there have been rapid advancements in tools used to detect the use of ChatGPT in content, including from the maker of ChatGPT itself, OpenAI. Here are three free tools to help distinguish between AI-written and human-written text: OpenAI’s AI Text Classifier, Writefull, and Sapling.
I tested these detection tools on a blog post known to have been authored by ChatGPT. While each tool reached the correct conclusion, the reported confidence varied widely across the products, from “99.9% fake” to about 75% ChatGPT-generated. Here are the results.
How should legal professionals use ChatGPT?
Legal professionals can use ChatGPT as a powerful tool to improve their efficiency and productivity. Still, as with any novel tool or process, its use must be subject to ethical considerations, particularly regarding confidentiality and supervision.
As ChatGPT and similar applications provide new ways to enhance the practice of law and the delivery of legal services, don’t fear the replacement of lawyers by robots. Instead, I encourage attorneys to embrace their technology competency requirement to understand “the benefits and risks associated with relevant technology.”
The post The Rise of ChatGPT: Ethical Considerations for Legal Professionals appeared first on 2Civility.