Angle
Legal Bots Raise Liability and Ethics Concerns
- eDiscovery and Investigations
As artificial intelligence applications advance rapidly, eDiscovery technologies are perhaps the legal community's most visible example. AI technologies, including legal bots, are becoming more common in legal services, increasing the risk of error or harm. It is an open question who will bear responsibility for that harm, and to what extent our legal system is up to the task of providing a remedy.
AI technologies include “bots” (short for “robots”), software programs that perform automated, repetitive tasks. “Chatbots” are bots that simulate human conversation.
Sometimes AI applications, especially chatbots, behave in ways their programmers did not anticipate. For instance, a Chinese developer pulled popular messaging chatbots that unexpectedly appeared to criticize communism. Similarly, Facebook chatbots programmed to want certain items and to negotiate with each other for them began communicating in nonsensical English, which led some news organizations to report, incorrectly, that the chatbots had invented a new language.
There are no reports to date of legal bots behaving unexpectedly, but the possibility of future error or unexpected behavior exists. Current examples of legal bots include DoNotPay, which started out challenging parking tickets; Robot Lawyer LISA, which “negotiates” contracts between parties; and ROSS, which acts as an on-demand research associate.
Assessing Liability When AI Causes Injury
AI can’t be sued, and even if it could, AI possesses no assets. However, legal frameworks long applied to machines that make no decisions may also apply to AI. When a factory robot injures a worker, the employee’s injuries may be compensated under negligence or product liability principles. An examination of the facts and the mechanisms for assigning fault allows blame to be apportioned and potentially provides a remedy.
These existing legal concepts may suffice for determining liability and compensation. If an AI application programmed to make autonomous decisions along specific lines injures someone, negligence principles, such as the foreseeability of the injury and the contributory negligence of human actors, could be used to apportion culpability. Product liability claims may also be available in some instances, on the theory that a legal bot causing harm is implicit proof of some defect in the bot.
However, it is unclear to what extent the owner or user of the AI, who may have had no input into the programming that caused the harm but who ultimately chose to use the AI, will be liable. The question of user and owner liability grows more important as the technologies become more complex and end users gain greater ability to supply inputs.
Ethics Questions Arise with Increased Use of AI
In addition to questions of liability for harm caused in the legal setting, practitioners should consider the ethical implications of using AI. Many states have already enacted ethics rules requiring legal professionals to be competent and diligent in understanding new technologies, such as those used in eDiscovery. The same duty likely applies to other technologies, including legal bots.
Lawyers also have supervisory duties, which could include supervising the actions and output of legal bots.
Likewise, duties related to privilege and confidentiality may be implicated when using a legal bot involves inputting client confidences. The methods for securing information input into or produced by a legal bot should be carefully reviewed to ensure protection against data breaches.
Legal Bots Offer Opportunity and Risk
Legal bots and other technologies can automate legal workflows. As the capabilities of legal bots increase, legal professionals must keep pace to understand who bears responsibility when things go wrong and how best to protect themselves in such an event. They must also consider their responsibilities with respect to the ethical use of these technologies.
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.