Legal | August 26, 2024

Technology industry legal teams need trustworthy AI

By Jitendra Gupta, Head of AI & Data Science, Wolters Kluwer ELM Solutions

Many companies in the technology industry are intensely focused on artificial intelligence (AI), both in the products they create and in the tools they use internally to accomplish their goals. For those in engineering or programming roles, AI is often an area of expertise, which gives them an advantage when evaluating AI tools under consideration for their function.

For the staff of corporate legal departments at technology organizations, however, working in technology does not make them experts on AI. AI-driven legal solutions can help legal teams increase efficiency, improve decision-making, and demonstrate the value the department delivers to the company. But legal leaders need to be certain that they choose only the most trustworthy AI tools.

While Deloitte, NIST, and Gartner have all released guidance regarding AI trustworthiness, my years of experience successfully training and refining ELM Solutions’ AI model have allowed me to develop my own list of critical considerations for evaluating AI-driven solutions. Here are five main components of trustworthy AI.

Data

AI is only as trustworthy as the data it's trained on. Generally, the greater the volume and variety of the training data, the more trustworthy the AI will be. By ingesting clean, refined, and (ideally) structured data that reflects a wide swath of scenarios, a model can offer more accurate predictions and recommendations.
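To make this concrete, here is a minimal sketch, in Python with pandas, of the kind of basic profiling a team might run on a candidate training set before trusting it. The data and column names are invented for illustration.

import pandas as pd

# Hypothetical legal-matter training data; columns are illustrative only.
df = pd.DataFrame({
    "matter_type": ["litigation", "ip", "litigation", "employment", None],
    "invoice_amount": [1200.0, 540.0, 1200.0, 980.0, 310.0],
    "outcome": ["approved", "approved", "approved", "reduced", "approved"],
})

# Completeness: what fraction of each column is missing?
print(df.isna().mean())

# Duplicates inflate apparent volume without adding variety.
print("duplicate rows:", df.duplicated().sum())

# Variety: is the data dominated by one scenario, or does it
# reflect a wide swath of matter types?
print(df["matter_type"].value_counts(normalize=True))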

Volume and variety are also important to reduce bias. On one hand, AI often promises to reduce bias by minimizing the role of fallible, subconsciously prejudiced humans. On the other, bias can be inherent in certain sets of data, increasing the risk that the AI will produce biased results. Training AI with a high volume of varied data mitigates this risk, as does the use of a feedback loop to monitor for bias on an ongoing basis.
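As a rough illustration of such a feedback loop, the sketch below compares recommendation rates across groups and flags large gaps for human review. The field names and threshold are hypothetical.

import pandas as pd

# Hypothetical log of model recommendations; field names are illustrative.
log = pd.DataFrame({
    "vendor_region": ["us", "us", "eu", "eu", "apac", "apac"],
    "recommended_approve": [1, 1, 0, 1, 0, 0],
})

# Recommendation rate per group (a demographic-parity-style check).
rates = log.groupby("vendor_region")["recommended_approve"].mean()
print(rates)

# Flag for human review if the gap between groups exceeds a chosen threshold.
THRESHOLD = 0.2
if rates.max() - rates.min() > THRESHOLD:
    print("Potential bias detected; route to reviewers and retraining.")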

Models

Trusted AI is transparent AI. "Black-box" algorithms, AI models that offer no visibility into how a decision is made, are at the heart of distrust in AI. Interpretable models that clearly explain how the algorithm arrived at its output are essential to building trust in AI.

Users must be able to understand how the AI came to its conclusions. Then, they can assess those conclusions and feed input back into the system, helping make the AI even more accurate, intelligent, and trusted.
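Here is a minimal illustration of what interpretability can look like in practice: a simple logistic regression whose per-feature contributions to a prediction can be read directly, so a user can see why the model reached its conclusion. The features and data are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data; features are illustrative (e.g., invoice size, line items).
X = np.array([[0.2, 1.0], [0.9, 3.0], [0.1, 0.5], [0.8, 2.5]])
y = np.array([0, 1, 0, 1])  # 0 = approve, 1 = flag for review
feature_names = ["normalized_amount", "line_item_count"]

model = LogisticRegression().fit(X, y)

# For one new invoice, show each feature's contribution to the log-odds,
# which is exactly the kind of "why" a reviewer needs to see.
x_new = np.array([0.7, 2.0])
contributions = model.coef_[0] * x_new
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")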

Partners

Carefully vetting the partners you use for AI solutions means carefully vetting the models they build. In addition to ensuring your partner has a large, diverse data set on which to train AI, also analyze whether they have a solid business model to support continual development. There are countless startups touting game-changing AI, but the quality of their models is irrelevant if they go out of business a few months (or even a few years) down the road.

I suggest choosing partners with the most usage, as usage translates to trustworthiness in most instances. Ask potential partners how many customers are using their models, how long they have been building their models, how long those models have been in operation, and whether they have the skills and assets to maintain high-quality models for years to come.

Quality

AI models must be updated and calibrated continually to ensure trustworthiness. Feedback loops from internal users are the bread and butter of continual quality control, helping ensure that models remain up to date and accurate.
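A simple sketch of such a loop, with invented records and an illustrative threshold, might compare the model's recent predictions against user verdicts and queue recalibration when agreement drifts too low:

# Hypothetical feedback records: (model_prediction, user_verdict).
feedback = [
    ("approve", "approve"),
    ("flag", "approve"),
    ("approve", "approve"),
    ("flag", "flag"),
    ("approve", "flag"),
]

# Agreement with human reviewers over the recent window.
agreement = sum(pred == verdict for pred, verdict in feedback) / len(feedback)
print(f"recent agreement with reviewers: {agreement:.0%}")

# Illustrative threshold: if agreement drops, schedule recalibration.
BASELINE = 0.80
if agreement < BASELINE:
    print("Agreement below baseline; queue model for recalibration.")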

I also recommend having a third party audit any AI you use. At our company, we have an independent team audit our models regularly, assessing everything from data collection to model building. While quality controls are built into our models, it’s reassuring to have an external team apply its own framework for assessing reliability and accuracy.

Having the stamp of approval from an independent source also helps improve the confidence of customers and partners looking to use our AI. It has helped us increase usage, refine our model, and improve trustworthiness.

People

Perhaps the biggest misconception about AI is that it’s going to eliminate the need for humans in workflows altogether. This is a myth. Keeping a human in the loop is crucial to maintaining trustworthiness.

While a machine can sort through a large amount of data much more quickly than a human, any decision recommended by AI must still be vetted and approved by someone with the proper expertise. Indeed, it is their expertise and input that make AI work more efficiently and effectively. Essentially, people keep the system honest while using their expertise and experience to improve the overall quality of AI-generated results.
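As an illustrative sketch (the names and structure are invented), a human-in-the-loop workflow can be as simple as routing every AI recommendation through a reviewer and logging the verdict so it can improve the next training cycle:

from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    suggestion: str    # what the AI recommends
    confidence: float  # the model's own confidence score

def human_review(rec: Recommendation, reviewer_verdict: str) -> dict:
    """Every AI recommendation is vetted by a person; the verdict is
    logged so it can feed back into the next training cycle."""
    return {
        "item_id": rec.item_id,
        "ai_suggestion": rec.suggestion,
        "final_decision": reviewer_verdict,  # the human has the last word
        "agreed": rec.suggestion == reviewer_verdict,
    }

rec = Recommendation("invoice-042", "reduce", 0.87)
print(human_review(rec, reviewer_verdict="approve"))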

Trust in AI cannot be built overnight, but it can be cultivated and, ultimately, achieved. It must be established and fostered on an ongoing basis through diverse data sets, sufficient feedback loops, great partners, careful implementation, and knowledgeable human beings. Put all of those together, and you’ll have a trustworthy AI system that works for your team.

For more on the elements of enterprise legal management that can help technology sector legal departments excel, visit our portal for the technology industry.
