The Importance of Using AI Effectively and Transparently
- eDiscovery and Investigations
Artificial intelligence (AI) is everywhere, and most people use it daily. From facial recognition software that unlocks mobile devices to recruitment technology that helps human resources departments vet candidates, this technology is versatile and complex. In the legal space, AI has emerged most frequently in litigation and regulatory contexts. Examples include solutions that streamline document review during investigations or discovery, and litigation analytics that inform decisions such as the probability that a motion will succeed before a specific judge. In all these scenarios, organizations and their legal counsel have important responsibilities to use AI effectively while remaining ethical, especially when facing challenges to any data judgment calls. Being transparent about the technology selection process, manual oversight, and the technical aspects of the AI solution used can help organizations maintain compliance.
AI from a Business Perspective
Integrating AI into standard practice can raise efficiency across many processes. For example, using predictive technology to find suitable candidates for hiring vacancies can significantly cut time-to-hire and fill open roles quickly. Organizations with high volumes of contracts can also realize time- and cost-saving benefits by using AI software not only to manage their contracts but also to take a deep dive into key issues affecting negotiations or compliance, such as the effects of a new regulation or the applicability of force majeure clauses. For litigators, using AI for early case assessment can inform strategy and guide key decisions, including when to settle or which documents to preserve to avoid future spoliation claims. These are just a few illustrations of how useful AI can be across every industry. The question then becomes: when is it worth investing in AI from a business standpoint?

To answer this question, organizations need to consider when AI will be most effective. Beneficial features to look for when investing in new technology are whether it will add business value, increase efficiency, enhance current processes, reduce costs, and manage risk. Some ways to gauge this are solution comparison, benchmarking, tracking performance metrics after deployment, or business transformation consulting. Remember that it may take some trial and error to find the optimal combination of people, process, and technology.
Even when organizations feel AI investments are economical and effective, ensuring practices remain ethical and meet all legal obligations is crucial. This means, at minimum, having a basic understanding of how the technology operates in order to explain its decisions. That can be a challenging feat because AI is so complex, but failing to consider technical aspects before and after investing will be problematic: explainable and transparent AI is an emerging trend in regulatory matters, and it will likely expand.
Transparency Obligations
AI is great for automating processes, tapping into business intelligence, and managing costs. However, this type of technology is not intuitive by design, which makes it difficult to explain. Take the example of AI recruitment software. If an organization faces a claim rooted in discrimination, it will need to be able to explain the technology behind the decision as part of its defense. While the manual training component helps, ambiguity exists around how the software processes this data and makes judgments to support the results. This is where the ethical quandary comes into play, as clients, consumers, regulators, or the courts may want access to the data and processes behind decisions. Additionally, the General Data Protection Regulation (GDPR) grants consumers a legal right to an explanation when an automated decision significantly affects them, as in the hiring example noted above. Similar and stricter obligations are also materializing, such as China's and the EU's proposed algorithmic regulations.

With these obligations trending globally before regulatory bodies and courts, it is critical to minimize risk by finding ways to make AI more explainable. While this will always be a very technical and often challenging battle, here are some best practices to consider:
- Incorporate transparency into research: When vetting an investment, look for any available information indicating issues or improvements with a given solution. Organizations should review public studies or testing data, talk to colleagues, consult with industry experts, or meet with counsel before making an investment to ensure their preferred AI systems promote transparency.
- Consult with counsel and provider partners: It is crucial to factor ethical AI usage into information governance and risk management initiatives. Counsel and providers helping with data governance or business transformation are beneficial resources. They can assess the legal and regulatory obligations that apply to AI automated decision making, imposed by court decisions, the GDPR, other privacy laws, consumer finance regulations, and more. Additionally, they can advise on best practices or create standardized templates for documenting AI model creation and training.
Also consider creating an AI Explainability statement, the first of which emerged in 2021. The purpose of this statement is to increase and support transparency. It should include the reasons for using AI, how the system functions, the logic behind its decision-making, its training components, and how the system is maintained. Organizations should update the statement as needed, and as more organizations publish such statements, best practices for what to include will evolve.
- Monitor key AI updates: Watch whether more organizations publish AI Explainability statements and what they include. Also pay attention to any amendments or delays affecting the proposed algorithmic regulations discussed above, as these will be instructive for other countries wishing to regulate this technology. While there has not been a flood of data protection decisions regarding transparent AI, there have been a few cases before GDPR enforcement agencies over the last two years. If this upward trend continues, more organizations will need to enhance their AI transparency practices. Getting ahead of the curve will spare this headache down the road and help ensure that operations relying upon automated technology remain ethical and defensible.
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.