Ten Use Cases for Portable AI Models
In recent years, the legal community has shifted noticeably from hesitating to use emerging technologies to embracing modern tools that optimize processes and improve cost management. Legal teams that were once set in their ways and skeptical of technology are now looking for automated tools that can improve efficiency at every level. Portable AI models are the newest tool on the scene, and they can help with far more than eDiscovery review. These tools reuse prior insights to jumpstart review, eliminating the need to create and train a new model every time a similar matter or question arises.
Portable AI models come in two types. The first is a pre-built model that can identify language in datasets on recurring topics such as privileged content or insulting behavior. The second is a bespoke model trained to pinpoint issues or answer questions unique to a particular organization. Generally, a service provider will maintain a library of pre-built models on common topics that clients can use right away, while custom models require working with a provider to train models tailored to the organization. Both types of portable AI models can continue to evolve and can be a key value generator for legal teams.
Use Cases
The use cases for portable AI models are growing, the technology is more powerful than ever, and its applications reach beyond the legal department. Awareness of just how many use cases exist is still limited, but that will change as adoption grows, supported by more education on practical applications and ROI potential.
Here are ten use cases where AI models can prove useful in litigation, investigations, and beyond:
Culling data for document review
- Privilege review: During the eDiscovery process, the privilege review phase can be cumbersome. Applying a pre-built model from a provider that is trained to target privilege language can be a huge aid in privilege identification and the redaction process. This will streamline eDiscovery review while maintaining confidentiality where appropriate.
- Sensitive data identification: A model targeting certain words or phrases that indicate misconduct has proven especially effective in employment litigation. For example, in a sexual harassment case teams can apply AI models on communication data to pinpoint sexually explicit themes, concepts, and language. This software can also detect comments on appearance, bullying, discrimination, harassment, and/or threatening behavior. This can help parties jumpstart review by identifying key actors and witnesses earlier.
- Litigation risk analysis: Teams can apply portable models before even reaching the eDiscovery phase as another way to perform early case assessment and make decisions about settlement. In the employment scenario discussed above, the ability to run a pre-built sensitive language model during the investigatory phase could save an organization the expense of moving forward with a case that is better suited for settlement or dismissal.
- Pinpointing valuable keywords: This is an illustration of how layering tech can yield more efficient results. The legal team can first use a pre-built model to determine optimal keywords. Then, they can use the keywords in conjunction with other tools to further cull the dataset and pull out what is necessary for manual review.
- Custodian identification: A challenging and time-consuming part of eDiscovery can be identifying where pertinent data resides. Although other technologies and information governance strategies can help with this, a portable AI model is another beneficial tool to explore. This application is especially helpful where organizations have built bespoke models already customized to account for unique internal workflows and data repositories.
Regulatory compliance functions
- Data elimination: Just as with litigation, both pre-built and custom AI models are useful during a regulatory investigation for culling cumbersome datasets. Many regulatory bodies impose stricter deadlines, making tools that expedite review necessary to remain compliant. This is also an effective way to cut costs, as investigative budgets are generally smaller.
- Internal investigations: Teams can deploy models that assign a sentiment score to prioritize evidence hotspots or detect fraudulent behavior that raises compliance risk. For example, a model geared towards kickbacks, insider trading, or related topics can help detect fraudulent patterns that are the subject of an internal investigation. By running a pre-built model on the data, teams can uncover which custodians are using words and phrases that indicate the fraudulent behavior so they can act quickly.
- DSAR compliance: Under the GDPR, data subjects can request access to see how organizations are using their personal data. Since a quick turnaround is required, an AI model already trained to identify sources of personal information (which can come in many forms) can help teams comply fully and on time.
- Monitoring internal behavior: This application is particularly beneficial in the financial services industry, where leadership can use a model to monitor employee behavior and ensure that employees are acting appropriately and not promising clients unattainable rates or assets.
Data breach response
- Post-breach analysis: Applying an AI model after a breach occurs can help narrow down who to notify and where sensitive data resides. Time is of the essence in these situations, so being able to quickly apply a tool like this will greatly aid in mitigation efforts.
Conclusion
Portable AI models are the new tech tools to watch. The use cases will only continue to expand and the technology to mature as more organizations learn how these models work and what benefits they can offer to legal and other departments. This is a tool that not only saves cost and time but also promotes efficiency and consistency. Now is the time to monitor new industry and court developments, evaluate investment opportunities with providers offering pre-built or bespoke models, and discuss potential use cases with leadership teams.
To learn more about portable AI models, consider reading our recent whitepaper on the topic.
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.