This article by Abhishek Mittal, vice president of data and operational excellence at Wolters Kluwer, was originally published in Legal Dive.
Artificial intelligence (AI) is changing workflows in every corner of the business — and legal departments are no exception.
The term artificial intelligence was coined by computer scientist John McCarthy in the mid-1950s.
Since then, AI has become one of the most promising technological innovations in the corporate world and beyond. Google CEO Sundar Pichai has even suggested that AI may be more impactful than the discovery of fire or the invention of electricity.
Much like a fire, though, AI doesn’t keep burning on its own. Someone must build and train it. That’s why, a decade ago, Harvard Business Review declared data scientist the “sexiest job of the 21st century.”
Having worked with many brilliant data scientists, I find the job title to be a bit of a misnomer. For a start, successful AI solutions require the right mix of design, data, and domain expertise, not data alone.
Data scientists on their own cannot build AI models, just as AI models on their own cannot handle all decision-making. That’s why I refer to data scientists as “decision scientists.” Even with the advent of AI, decision-making is still in human hands at the end of the day.
Let’s take a closer look at what that means in practice — and how corporate legal departments can get the most out of the technology.
AI as enabler
One of the biggest misperceptions about artificial intelligence is that it is going to replace people, which is simply not the case. Instead, legal professionals who use AI will replace legal professionals who do not.
Think of AI as an enabler, akin to the driver-assistance technology in smart cars. The car cannot drive itself, but it can help with specific tasks like backing up, parking, or changing lanes.
In the future, AI will be as ubiquitous as Microsoft Excel. But decision-making and review processes will still require validation by a human end user.
When my company was designing its AI-assisted legal invoice review, for instance, we first paired data scientists and domain experts to build the model. Then, we put the AI that they built to the test.
We gave one group of experts a set of invoices to review manually. We gave another group the same set of invoices but accompanied by AI scores provided by our newly built model. We did this repeatedly, so we could track the results over time.
The experts with AI were able to generate greater savings — sometimes saving four times more than the control group. The AI acted as a highlighter, allowing them to focus on items that demanded greater due diligence. But humans were still part of the review process.
The right talent
Once you understand that AI is not going to replace human talent, it becomes more obvious that you need the right people to get the most out of the technology.
In the beginning, we had more data scientists (as they're commonly called) than domain experts. But domain experts are the ones who know which processes and customer experiences are best suited to improvement by AI.
We’ve continued to grow our roster of domain experts, and they use AI more frequently than our data scientists.
Additionally, they’re the ones driving enhancements, as they have a better understanding of what problems need to be solved.
Not all companies can build their own AI models in-house, though. If you're looking to choose a partner, pick the one whose models have seen the most real-world use.
Ask potential partners how many customers are using their models and how long those models have been in production. Many companies will throw out all the right buzzwords.
But a tried-and-true model is the key to getting the most out of artificial intelligence.
Use cases
According to McKinsey, by the year 2025, data will be embedded in every decision, interaction, and process. But in the meantime, it’s important to prioritize use cases based on which problems are most suitable for AI.
To that end, ask yourself: What decision are we trying to improve? Are there a lot of transactions happening? Do we have sufficient data? Is there an opportunity to create a feedback loop?
Once again, the right mix of data scientists and domain experts is key to answering these questions.
In some cases, people use AI for quality assurance checks. Other times, it’s used for predictive insights. Regardless, it’s very important to analyze the “why” of your use case before you start building the model.
Our AI-assisted invoice review was an appealing use case because we had so much data on legal spend already. This gave us a huge head start when it came time to build our models.
The promise of artificial intelligence cannot be overstated; it will be commonplace in corporate legal departments in no time. And yet, AI is not a plug-and-play solution that’s going to make decisions for you.
Instead, it should enable your experts to make better decisions for themselves. AI isn’t meant to replace human beings; it’s meant to augment their knowledge. Plan accordingly.