January 16, 2024

The importance of human-centered AI

This article was originally published by Architecture and Governance Magazine.

By Jitendra Gupta, Head of AI & Data Science, Wolters Kluwer ELM Solutions

“Set it and forget it” might be a great infomercial tagline, but it has never really applied to technology, and especially not to artificial intelligence (AI). AI must be adjusted, updated, and sometimes fixed so that it can continue to deliver high-quality recommendations.

This is called a feedback loop—the iterative process that AI uses to continuously learn and refine its recommendations. In the feedback loop, every output is collected and analyzed, allowing AI to “learn” from each decision, make improvements, and increase the accuracy of its output.
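To make the loop concrete, here is a minimal, hypothetical sketch in Python. The class and method names (record_prediction, add_feedback, retrain) are illustrative only and not tied to any particular product: outputs are collected, human reviewers confirm or correct them, and the reviewed examples feed the next round of training.

```python
# A minimal, hypothetical sketch of an AI feedback loop (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    model: object                      # any model object exposing fit()
    examples: list = field(default_factory=list)

    def record_prediction(self, features, prediction):
        # Collect every output so it can later be reviewed.
        self.examples.append({"features": features,
                              "prediction": prediction,
                              "correct_label": None})
        return prediction

    def add_feedback(self, index, correct_label):
        # A human reviewer confirms or corrects a recorded output.
        self.examples[index]["correct_label"] = correct_label

    def retrain(self):
        # Learn from every reviewed decision to improve future accuracy.
        reviewed = [e for e in self.examples if e["correct_label"] is not None]
        if reviewed:
            features = [e["features"] for e in reviewed]
            labels = [e["correct_label"] for e in reviewed]
            self.model.fit(features, labels)
```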

Although AI is designed to be automated and self-sufficient, people play an important role in the feedback loop. AI isn’t born; it’s created, ideally by internal teams with diverse backgrounds and perspectives. Likewise, customer input is critical to ensuring the AI continues to function well and meet people’s needs. So, a corporate lawyer using AI to analyze invoices or select law firms for certain matters should be empowered to provide their vendor with insights into what’s working, what isn’t, and what could be improved.

Call it “human-centered AI,” where humans don’t just create the technology, they have a hand in its ongoing maintenance and upkeep. Human-centered AI begins at the point of the technology’s creation, but it doesn’t stop there. It continues throughout the lifecycle of the AI, ensuring that the technology constantly delivers accurate, trustworthy, and reliable intelligence.

Laying the groundwork for trustworthy and reliable AI

Creating functional and reliable AI requires a combination of domain expertise, data science expertise, and design acumen.

  • Domain experts are particularly important when developing AI for the legal sector, as legal operations professionals, attorneys, and others bring highly valuable knowledge when training AI to deliver results for corporate legal departments (CLDs).
  • Data scientists cleanse, analyze, and glean insights from large amounts of data.
  • AI design strategists create systems, design prototypes, and assist in model building, all while focusing on delivering intelligence in a user-centric way.

It’s impossible for an AI model to work optimally without all these individuals working together. For instance, a model built just by data scientists might technically work, but it probably won’t be focused on the user or their business needs. Meanwhile, a model created by an AI designer may not have the breadth of insights it could have if a data scientist and domain expert were also involved.

It’s this diversity of human talent and perspectives that lays the initial groundwork for everything that organizations want in AI. This includes algorithms that are non-discriminatory and fair, secure and reliable, transparent and explainable, accurate and trustworthy, and that enhance human, societal, environmental, and commercial well-being. Working together, these teams create accurate AI solutions that are both useful and customer-friendly.

Five phases of human-centered AI maintenance

Even after an AI-powered software solution is introduced and begins learning on its own, human beings must stay in the loop to ensure it continues to perform effectively and provide high-quality output. This is where customers are added to the feedback loop, providing another perspective to augment the work being done by internal teams to optimize the AI.

The post-release timeframe can be broken down into five phases that, together, form the basis of an effective AI solution:

Enhancement. The vendor’s internal teams proactively monitor the performance of the AI in the real world. Meanwhile, customers provide feedback and recommendations on potential issues and opportunities for improvement, which are then assessed by the vendor.

Analysis. The vendor analyzes possible issues to identify the cause of an anomaly and assesses its impact on the AI’s performance, including machine learning metrics such as recall, the accuracy of the AI’s recommendations, and more.

Review. The results of the analysis are reviewed by the internal team and portfolio owner, who then decide on a rollout plan. Customer feedback is also reviewed and incorporated into the plan.

Implementation. Updates are made to the model on a regular cadence, typically every few months, to keep it current, and fixes are applied as needed in the interim.

Monitoring. After updates are made, the performance of the AI is closely monitored by data scientists and design strategists to ensure the changes are effective.
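For readers who like to see a process spelled out, the sketch below is a purely illustrative Python rendering of that five-phase cycle. The thresholds and the callables passed in (evaluate, review, retrain) are assumptions made for the example, not any vendor’s actual pipeline.

```python
# A hypothetical sketch of the post-release maintenance cycle described above.
# Targets and parameter names are illustrative assumptions, not a real product's API.

ACCURACY_TARGET = 0.90   # assumed quality bar, for illustration only
RECALL_TARGET = 0.85

def maintenance_cycle(evaluate, review, retrain, model, customer_feedback):
    """One pass through the five phases; the caller supplies the callables."""
    # Enhancement: monitor real-world performance and gather customer feedback.
    metrics = evaluate(model)
    issues = list(customer_feedback)

    # Analysis: flag anomalies by comparing metrics against the targets.
    if metrics["accuracy"] < ACCURACY_TARGET or metrics["recall"] < RECALL_TARGET:
        issues.append(f"metrics below target: {metrics}")

    # Review: the internal team decides whether an update should roll out.
    if issues and review(issues):
        # Implementation: update the model.
        model = retrain(model)

    # Monitoring: re-evaluate after the update; the cycle then repeats.
    return model, evaluate(model)
```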

The future of AI is being created by humans

Recently, concerns about AI replacing human experts have shifted to concerns about AI security, accuracy, and privacy. However, having human beings proactively engaged in the development and servicing of AI can help minimize those concerns.

People give AI its intelligence and ability to produce trustworthy recommendations, and their watchful eyes help keep AI secure and proprietary information safe. People are also instrumental in the upkeep of AI, ensuring that it continues to perform optimally and deliver intelligence that people can rely on.

In short, the present and future of AI aren’t being driven by machines. They’re being forged by human beings who provide the direction, insights, and maintenance the technology requires to flourish.
