ChatGPT: What’s the fuss?
Demystifying Generative AI in Legal
What’s all the buzz around ChatGPT? This new chatbot, powered by an artificial intelligence (AI)-based language model called GPT-3, has made waves in our industry lately. Users can ask questions about anything, and the bot engages in conversational dialogue with organized, succinct responses that are often indistinguishable from human-generated output.
GPT-3, developed by OpenAI, is part of a family of AI models known as Large Language Models (LLMs). This area of research, known as generative AI, is moving quickly, with the recent announcement of GPT-4 and similar LLMs such as Google’s LaMDA also making the news. How well the ChatGPT bot understands language is remarkable and seems superior to any other LLM currently available to the public, although it is important to understand that it is a specialized model, fine-tuned on a far smaller dataset than GPT-3 itself. Users of ChatGPT can provide feedback on responses, which is used to further tune the application. General ChatGPT usage is currently free during this initial phase, with a paid subscription tier available.
People are already using ChatGPT to answer everyday questions and even for some research purposes. To test its capabilities, users have asked ChatGPT to take standardized tests. While it did not pass them all, it achieved passing scores on some, including the Uniform Bar Exam (UBE) and the Medical College Admission Test (MCAT).
As with any emerging technology, it is crucial to weigh the benefits against the risks before putting it to business use. From a legal standpoint, that means understanding the unique collection obstacles and review benefits this technology presents for eDiscovery purposes; other use cases for lawyers and their clients; and how ethics may factor into any LLM-based endeavor.
Putting ChatGPT to Work
With new technology trends, concerns often surface about whether certain tools will replace humans – especially in the legal industry. The sentiment applies here, as there have already been questions about whether GPT-3 and other similar AI language models could take over the human function for jobs that still require an element of oversight. In the eDiscovery space, could LLMs review and flag responsive documents without needing human oversight? Or does this seem way too far-fetched? Before answering that question, check out what quickly came back when we asked ChatGPT to write a blog about ChatGPT and eDiscovery:
ChatGPT, a cutting-edge language model developed by OpenAI, has quickly gained recognition for its ability to process and understand natural language. As a result, it has become an invaluable tool for a variety of industries, including the legal sector.
In the field of eDiscovery, the use of ChatGPT has revolutionized the way that attorneys approach the discovery process. With its advanced language processing capabilities, ChatGPT is able to sift through massive amounts of data, including emails, text messages, and social media posts, in a matter of seconds. This allows attorneys to quickly identify relevant information, saving both time and money.
This is just a third of what the chatbot actually came back with. It also included information about the key advantages of using ChatGPT in eDiscovery and final thoughts on how use cases for language models will increase in the future. At first glance, it is easy to see why some analysts believe this tool could replace humans in higher-level roles. However, it is important to recognize that although language models such as GPT-3 may prove beneficial, limitations exist.
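For the curious, the same kind of request can be issued programmatically rather than through the chat interface. The sketch below uses OpenAI’s Python client; the model name and prompt are illustrative assumptions, not recommendations, and any output would still need human review.

    # Minimal sketch: asking a ChatGPT-style model to draft a blog post
    # via OpenAI's Python client. Model name and prompt are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": "Write a short blog post about ChatGPT and eDiscovery.",
        }],
    )

    print(response.choices[0].message.content)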
Understanding the Limitations
While the ChatGPT bot and AI models like GPT-3 are innovative, there are still risks and limitations to account for before using them for business purposes. Take the example above: while it came back with a lot of helpful information and even formatted the text like a blog, what did ChatGPT miss? It did not provide any information about the tool’s training history, limitations, risks, or ethical considerations. These are all things that lawyers and their organizations must consider before adopting new technology, so they can make an informed decision and adequately represent their clients.
Here are five key limitations to consider as advanced language models continue to emerge and evolve. Weighing them will help organizations balance the benefits and risks and make educated assessments about appropriate use cases.
- Lawyers will still need to make some relevance and privilege determinations when using LLMs for litigation or investigatory review functions. There is currently no strong evidence that this technology can perform these human functions appropriately. As this type of model evolves, it could instead prove well-suited for first-pass review (similar to technology-assisted review, or TAR), with the goal of reducing costs and optimizing legal workflows; a sketch of what that might look like appears after this list.
- Models like GPT-3 will need to be trained on an organization’s own document sets to be useful for a particular investigation or case (the second sketch after this list illustrates the mechanics). This will require a cost-benefit analysis and a comparison to tools already deployed, as significant training will likely be needed for the model to be useful in this scenario.
- The chatbot will sometimes answer inquiries incorrectly. This could be detrimental when the technology is used for document review, research, settlement evaluation, motion drafting, or contract drafting. That does not mean advanced language models will never be appropriate in such situations; decision makers need to weigh the risks and benefits for each use case, which will become easier as more studies and statistics become available.
- Training data will inevitably become stale, which means that models like GPT-3 will need to be continuously trained and updated in order to generate quality responses.
- Lawyers always have to account for their ethical obligations when dealing with emerging technologies. Client confidentiality, security, and privacy are some of the considerations that surface with any technology use. Putting confidential client information into a public tool like ChatGPT can waive privilege and violate the duty of client confidentiality: information included in a prompt is not deleted and can be used for training purposes. Consider these factors before using such tools for document review, contracting, language translation, or any other use case that involves confidential information. Client consent is also crucial when using any new technology, and lawyers need to remain informed about the benefits and risks in order to provide competent representation.
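To make the first-pass review point above concrete, here is a simplified sketch of how an LLM might draft relevance calls that a human reviewer then confirms. It assumes an OpenAI-style chat API; the model name, the issue description, and the sample documents are all hypothetical.

    # Simplified sketch of LLM-assisted first-pass review. The model
    # name, issue description, and documents are hypothetical; a real
    # workflow would add sampling, validation, and human QC.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "You are assisting with a first-pass document review. "
        "Reply with exactly one word, RESPONSIVE or NOT_RESPONSIVE, "
        "for this issue: alleged overbilling on Project X.\n\n"
        "Document:\n{doc}"
    )

    def first_pass_label(doc: str) -> str:
        # The model's answer is only a draft; a human reviewer makes
        # the final relevance and privilege determinations.
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # hypothetical model choice
            messages=[{"role": "user", "content": PROMPT.format(doc=doc)}],
        )
        return reply.choices[0].message.content.strip()

    docs = [
        "Invoice attached; hours billed to Project X as discussed.",
        "Reminder: the office holiday party is on Friday.",
    ]
    for doc in docs:
        print(first_pass_label(doc), "|", doc[:50])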
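And for the training point, the mechanics of adapting a hosted model to an organization’s own examples look roughly like this. OpenAI’s fine-tuning endpoint is used for illustration; the file name and model are assumptions, and a real project would also involve data preparation, evaluation, and the cost-benefit analysis noted above.

    # Rough illustration of fine-tuning a hosted model on an
    # organization's own examples. File name and model are assumptions.
    from openai import OpenAI

    client = OpenAI()

    # training_examples.jsonl holds one {"messages": [...]} record per line
    upload = client.files.create(
        file=open("training_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    job = client.fine_tuning.jobs.create(
        training_file=upload.id,
        model="gpt-3.5-turbo",  # base model to adapt
    )
    print(job.id)  # poll until complete, then query the resulting model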
While language models are an exciting area that opens new avenues for innovation, fears of this technology replacing human expertise are unfounded. Too many risks and factors still require legal expertise and human judgment. In fact, even the creators of such models warn that their output should not be used for anything critical without human review and analysis.
Large Language Models could initially gain adoption for creating simple templates, contract management, administrative automation, and some document review, but their use for legal research or brief writing seems unlikely anytime soon. Tools like ChatGPT do not account for factors such as a judge’s preferences, unique processes, or client goals. In addition, unless these types of AI models are trained in a secure way, there is no guarantee that sensitive information will be kept confidential.
What should legal organizations do now to stay ahead of the curve? Proceed with caution. Monitor developments with ChatGPT and similar tools. Limit use cases until more is known. Create policies and training around the use of this technology. Advise corporate clients about the benefits and risks of using such tools for business purposes. And, above all, cultivate external partners who understand the technical aspects of emerging technologies and can be consulted as questions arise.
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.