
Keeping pace with ARTIFICIAL INTELLIGENCE: Third-party risk management

By: Kris Stewart and Jon Selleys

(As published in ABA Risk & Compliance magazine)

IT’S EVERYWHERE! What’s that? Artificial intelligence (AI) — in particular, generative AI. Algorithms are hardly noticeable anymore — they are so commonplace. But generative AI is still emerging in so many ways: summarizing social media comments, responding to Google searches, and now creating emojis out of pictures of our pets. But has it found its way yet into third-party solutions? Most likely!

Managing third-party risk is on the minds of risk and compliance professionals — and regulators. With the rapid adoption of advancements in artificial intelligence, particularly with generative AI, it’s more important than ever to know how third parties may be implementing AI in their solutions to ensure compliance programs remain effective.

Before addressing third-party management challenges, it is important to review key aspects of AI and some of its typical applications in banking.

Banks and other financial institutions (FIs) — and the third parties serving them — report significant investments and product offerings in the following areas that now often tap AI capabilities:

  • KYC, due diligence, and fraud detection: FIs are using AI to connect databases, authenticate identity, identify patterns of behavior, and quickly discover anomalies.
  • Customer service: FIs use AI to provide customer service chatbots and offer customers insights into their accounts.
  • Privacy and cybersecurity: FIs and businesses of all stripes are turning to AI to monitor critical systems for cyber threats and to ensure proper use of private data.
  • Compliance program management: FIs are using AI to monitor and analyze laws and regulatory changes, streamline the adoption of changes and new laws and regulations, and simplify the management of policies and controls.

Key concepts, components, and processes of AI solutions

With any AI application, compliance professionals must understand important concepts, components, and processes to be competent in discussing the adoption of these applications within an organization.

Models

AI is often thought of as a monolithic "thing" that can improve productivity and efficiency. In reality, AI isn’t a single "thing" but many different types of AI models with different functions that can be used or combined to create a tool or application that addresses areas of need within an organization.

Generative AI is an example of one type of model that has driven much of the recent excitement about AI. These models effectively create coherent text, summarize information, and draft documents. They are effective for applications like creating compliance reports, generating realistic synthetic data for testing, and summarizing legal and regulatory text. They can sometimes hallucinate — that is, make up information — because they don’t actually know things, but they produce coherent content from the input provided by the user and the model’s training.
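
One way compliance teams can guard against hallucination is to check generated output against its source material. Below is a minimal, illustrative Python sketch; `generate_summary` is a hypothetical stand-in for whatever generative model a third party might provide, and the token check is deliberately crude.

```python
# Minimal sketch: a grounding check that flags figures in a generated
# summary that never appear in the source text -- one crude way to catch
# hallucinated details. `generate_summary` is a hypothetical stand-in for
# a call to whatever generative model a third party provides.
def generate_summary(text: str) -> str:
    # Hypothetical model output; imagine this came from a generative AI API.
    return "Reports are due in 30 days. Late filings incur a $5000 fine."

source = "Institutions must file suspicious activity reports within 30 days."

summary = generate_summary(source)
for sentence in summary.split(". "):
    tokens = [t.strip("$.,").lower() for t in sentence.split()]
    # Flag numbers that do not appear anywhere in the source.
    unsupported = [t for t in tokens if t.isdigit() and t not in source]
    if unsupported:
        print(f"Review: '{sentence}' cites figures not in source: {unsupported}")
```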

Other common types of AI are machine learning models. These models work with large datasets and predict outcomes, identify patterns, create groups within data, and simplify large datasets. They could be helpful when looking for fraud, assessing risk, and identifying customer segments.
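
As an illustration of the pattern-finding role these models play, the following minimal sketch uses scikit-learn's IsolationForest to flag anomalous transactions. The feature names, synthetic data, and contamination rate are illustrative assumptions, not a production fraud model.

```python
# Minimal sketch: unsupervised anomaly detection for transaction monitoring.
# Feature names and synthetic data are illustrative, not a production model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" history: [amount, hour_of_day, txns_last_24h]
normal = rng.normal(loc=[80.0, 13.0, 3.0], scale=[40.0, 4.0, 1.5], size=(1000, 3))

# Train on historical activity assumed to be mostly legitimate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new transactions; -1 marks an anomaly worth analyst review.
new_txns = np.array([[75.0, 14.0, 2.0],     # typical
                     [9500.0, 3.0, 40.0]])  # unusual amount, time, velocity
print(model.predict(new_txns))  # likely: [ 1 -1 ]
```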

Natural Language Processing (NLP) models understand and process texts. Technically, some generative AI falls into the category of NLP. Still, where NLP models focus on understanding, interpreting, and processing text and language, generative AI models can create new content based on what an NLP model has produced. NLP is effective at extracting meaning, classifying text, and analyzing sentiment. This model could help review legal or regulatory text for key obligations.
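
For a sense of how such a model might flag obligations in regulatory text, here is a minimal sketch using a scikit-learn text-classification pipeline. The tiny labeled dataset and the "obligation"/"other" labels are illustrative assumptions.

```python
# Minimal sketch: classifying regulatory sentences as "obligation" vs "other".
# The tiny labeled dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The institution must file the report within 30 days.",
    "A bank shall retain records for five years.",
    "This section provides background on the rule.",
    "The agency published the notice last March.",
]
labels = ["obligation", "obligation", "other", "other"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["Institutions must verify customer identity."]))
```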

There are other model types, and new models are assuredly coming. The key takeaway is that AI models must be suited for the specific task or purpose they are intended to perform. In any application, multiple models are likely to be used together to support more complex tasks.

Data

Data can be a little confusing in the context of AI. There is the data used to train the model, and the data used in the implementation of a particular application. The data for training the model should be clean and pre-processed to avoid the creation of misleading patterns. It should be diverse enough to prevent bias and represent the problems the model intends to solve. On the other hand, implementation data is meant to test whether the model effectively does what it is designed to do. This data is representative of the data that would be used in the ongoing use of the model, and the output should be reviewed regularly to ensure that the model is working as expected.
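
The distinction might look like the following minimal sketch, which runs a quick quality and representativeness check on training data and then tests whether implementation (live) data resembles it. The column names and values are illustrative assumptions.

```python
# Minimal sketch: two distinct data checks, per the distinction above.
# Column names and values are illustrative assumptions.
import pandas as pd

# 1) Training data: is it clean and diverse enough?
train = pd.DataFrame({
    "income": [52_000, 61_000, None, 45_000, 88_000, 39_000],
    "region": ["NE", "NE", "SE", "SE", "MW", "MW"],
})
print("Missing values per column:\n", train.isna().sum())
print("Representation by region:\n", train["region"].value_counts(normalize=True))

# 2) Implementation data: does live input look like the training data?
live = pd.DataFrame({"income": [51_000, 450_000], "region": ["NE", "W"]})
unseen = set(live["region"]) - set(train["region"])
print("Regions never seen in training:", unseen)  # {'W'} -> review output closely
```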

Algorithms

Algorithms are the instructions that a model follows to accomplish a task. They range in complexity, depending on the needs of the task and the model(s) used. The complexity and the type of models used directly impact how "explainable" or transparent an algorithm’s results are. Some tasks may require complex algorithms and the use of multiple AI models to properly benefit from the adoption of AI solutions. If those tasks also carry higher regulatory risk requiring transparency in decision-making, additional analysis and transparency strategies may be needed before implementing an AI solution. It is important to understand the balance between an algorithm’s explainability and the expectations placed on it, ensuring that both are within the risk and compliance thresholds of the organization.

Potential risks and issues

Bias: The phenomenon of bias occurring, albeit inadvertently, when training AI models, is a significant risk — especially in the banking industry, where fairness and compliance are critical. AI models can learn and perpetuate biases in the training data, leading to unequal or discriminatory outcomes. Mitigating bias requires data auditing, diverse training datasets, and regular monitoring to ensure the model’s outputs align with regulatory requirements.
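
One common form of bias auditing is comparing selection (approval) rates across groups. The sketch below is a minimal illustration in the spirit of the "four-fifths" rule; the groups, decisions, and threshold are illustrative assumptions.

```python
# Minimal sketch: auditing model decisions for disparate selection rates.
# Group labels, decisions, and the 80% threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # "four-fifths rule" style comparison
print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")  # below ~0.80 warrants investigation
```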

Lack of transparency: AI’s "black box" problem is another important risk. Many advanced AI models, such as deep learning neural networks, operate in complex and opaque ways, making it difficult to understand how decisions are made. This lack of explainability can undermine trust among stakeholders, complicate compliance with regulations requiring clear reasoning (e.g., adverse action notices in lending), and negatively impact the ability to identify and correct errors or biases. Mitigating this risk requires prioritizing interpretable models where possible, using explainability tools, and ensuring that decision-making processes are transparent and defensible to regulators and customers.
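
Explainability tools vary widely; one simple, model-agnostic example is permutation importance, which measures how much each input drives a model's output. The sketch below uses scikit-learn on synthetic data; the feature names are illustrative assumptions.

```python
# Minimal sketch: measuring which inputs drive a model's decisions via
# permutation importance. Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))     # columns: [debt_ratio, tenure, zip_noise]
y = (X[:, 0] > 0.2).astype(int)   # outcome driven by the first feature only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["debt_ratio", "tenure", "zip_noise"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")  # debt_ratio should dominate; others near zero
```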

Over-reliance on AI: While AI tools excel at automating processes, creating content, and identifying patterns, they are not infallible and may fail to account for context or nuances that require trained human judgment. For example, an AI tool might misinterpret a regulatory requirement or flag legitimate transactions as fraudulent, causing operational disruptions or regulatory scrutiny. Without human oversight to validate and adjust AI outputs, errors can go unchecked, leading to compliance violations or customer dissatisfaction. Mitigating overreliance on AI requires a balanced approach that combines AI efficiency and human expertise.

Performance risks: Implementing an AI solution is not a one-time event where an organization can set it up and let it run. Its use requires ongoing management of performance risks to maintain reliability. False positives disrupt legitimate transactions, while false negatives allow fraud through, exposing the FI to financial and reputational harm. Models require regular retraining and updates to incorporate changes in data patterns, laws and regulations, and business needs. Further, continuous monitoring is needed to detect issues such as performance degradation or changes in data patterns (data drift) that can impact model accuracy.
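
As a concrete illustration of drift monitoring, the following minimal sketch compares a live feature's distribution to its training baseline with a two-sample Kolmogorov–Smirnov test. The distributions and alert threshold are illustrative assumptions.

```python
# Minimal sketch: detecting data drift by comparing a live feature's
# distribution to its training baseline with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_amounts = rng.lognormal(mean=4.0, sigma=0.5, size=5000)  # baseline
live_amounts = rng.lognormal(mean=4.6, sigma=0.5, size=5000)      # shifted pattern

stat, p_value = ks_2samp(training_amounts, live_amounts)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Possible drift (KS={stat:.3f}); schedule model review/retraining.")
```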

Emerging regulatory environment

So, following this overview of AI, it is important to consider how FIs can apply this knowledge to manage risk associated with the use of AI by third parties. As a starting point, it is important to understand the emerging regulatory environment focused on AI.

It is common for regulation to lag behind emerging technologies, and that’s true with AI. The state of the law around AI is complicated. In 2024, more than 120 AI-related bills were introduced in Congress, but by year’s end none had been enacted. In addition, all but 15 states had taken varying measures around AI regulation. Global activity around AI regulation is also occurring, with the European Union’s AI Act leading the way. While regulation is lagging, some themes are emerging that will provide guidance for third-party risk management programs.

The themes generally fall into governance, transparency, and individual rights. The focus is on industries that represent critical infrastructure, which, of course, includes banks.

  • AI governance includes requirements for robust assessment processes to understand the risks introduced by using AI, particularly the potential for adverse impacts on individual privacy and rights. In fact, some organizations have introduced the role of an AI governance officer.
  • There is a strong emphasis on transparency, particularly in customer-facing transactions, to ensure clarity about where AI is being used and how it influences decision-making processes. Transparency also involves proper documentation, including information provided by third parties to users and disclosures shared with customers.
  • Another key consideration is safeguarding individual rights. This includes offering customers alternatives to AI-driven decision-making, providing opt-out options, and establishing clear processes for appealing AI-generated decisions. Additionally, there is a focus on ensuring a duty of care to protect individuals from algorithmic discrimination, reinforcing fairness and ethical use of AI.

When evaluating the use of AI within FIs and by third parties, it is essential to consider the banking agencies’ model risk management guidance, which provides a foundational framework for managing the risks associated with models, including AI. Issued by the Federal Reserve (SR 11-7), the OCC (Bulletin 2011-12), and the FDIC (FIL-22-2017), this guidance emphasizes robust model development, validation, and governance to ensure reliability and compliance. As AI systems become increasingly complex, these principles remain highly relevant, particularly in ensuring transparency, mitigating bias, and maintaining accountability in AI-enabled solutions.

In addition, the European Union’s AI Act and the Colorado AI Act serve as examples of comprehensive regulation and are helpful in understanding the themes in emerging AI regulation, even if they are not laws with which your FI must comply. FIs may also find value in reviewing the AI Risk Management Framework that the National Institute of Standards and Technology (NIST) published in 2023, as it may provide insight into emerging themes in AI regulation.

One final source of excellent information is the U.S. Department of Justice. In September 2024, the DOJ published an update to its Evaluation of Corporate Compliance Programs (ECCP). This guidance is aimed at providing DOJ prosecutors with a framework for evaluating corporate compliance programs. The September 2024 update was largely focused on emerging technologies and the risks and controls needed to manage compliance when the use of AI is prevalent.

Staying informed about emerging laws and regulations will be helpful as third-party risk management programs evolve to reflect third-party use of AI. These themes help suggest key areas on which to focus due diligence processes and potential contract provisions when engaging with third parties utilizing AI in their solutions.

Assessing third-party AI capabilities

When evaluating a third party’s AI capabilities, the first step is to gather information about the FI’s current or potential use of AI. Understanding how AI is currently applied — or could be applied — helps ensure alignment with organizational goals. In addition, establishing a clear and agreed-upon internal definition of AI is crucial, since the term can have varied interpretations. Then, assembling a skilled and knowledgeable team to lead the third-party assessment process is vital.

With AI evolving rapidly, having the right expertise at the table to evaluate both the risks and benefits of AI-enabled solutions is fundamental to the success of the bank’s third-party risk management program.

Evaluating a third party’s AI capabilities involves probing numerous areas. From a technical perspective, it is critical to assess the third party’s adherence to good data governance and security practices. This should include evaluating the quality of the data used by the AI system and validating how the algorithms are working to assess potential biases that could impact decision-making. FIs should ask third parties about the data sources used to train the AI and their relevance to the products or services that the third party provides to the bank. Depending on the solution being proposed, consider whether a proof of concept is also needed as part of the evaluation.

Another critical element is transparency and explainability. Explainability refers to the ability to understand and interpret how an AI model makes its decisions, providing transparency into the underlying processes and algorithms. Third parties should provide detailed documentation on their solution(s). This should include information on the system’s decision-making processes, how their algorithms work, the data on which they rely, and how organizational data may be used in the system. Effective third-party risk management requires the ability to explain the AI uses in the systems and their impact on the FI’s operations and customers.

Generative AI models, in particular, pose unique challenges in achieving explainability due to their complexity. To address these challenges, consider requesting examples that demonstrate how the model arrives at its outputs, offering insight into the third party’s procedural controls and human oversight. This information is key in having confidence that the third party’s use of AI is in alignment with legal and regulatory requirements.

A number of key performance indicators (KPIs) may be useful in evaluating the performance of a third party’s AI systems and should be considered for inclusion in the third party’s documentation. Consider KPIs that help assess accuracy (how often AI predictions are correct), precision and recall (which help assess the relevance and completeness of AI predictions), as well as ethical and fairness metrics that help identify and measure bias in AI predictions and confirm that the system treats all user groups ethically.
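
To make these KPIs concrete, here is a minimal sketch computing them with scikit-learn on placeholder validation data. The labels, predictions, and customer groups are illustrative assumptions.

```python
# Minimal sketch: computing the KPIs named above from a validation sample.
# Labels, predictions, and groups are illustrative placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (e.g., confirmed fraud)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # third-party model output

print("Accuracy: ", accuracy_score(y_true, y_pred))   # overall correctness
print("Precision:", precision_score(y_true, y_pred))  # relevance of flags
print("Recall:   ", recall_score(y_true, y_pred))     # completeness of flags

# Fairness-style metric: recall by customer group (should be comparable).
groups = ["A", "A", "B", "B", "A", "B", "A", "B"]
for g in sorted(set(groups)):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    g_true = [y_true[i] for i in idx]
    g_pred = [y_pred[i] for i in idx]
    print(f"Recall for group {g}:", recall_score(g_true, g_pred))
```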

Relationship management: Contract and monitoring considerations

Due to some of the risks associated with AI-enabled solutions, some contract considerations are worthy of discussion. First, consider the very fluid state of the regulatory environment. When considering a third party’s governance practices, it will be important to validate that they are staying abreast of emerging legal and regulatory guidance. Specific laws, like the EU AI Act, should be addressed if applicable, and existing contractual provisions may need enhancement to ensure new laws are adequately considered. For jurisdictions where compliance with overarching AI laws like the EU AI Act or the Colorado AI Act is required, consider requiring third-party audits as part of the strategy.

The unique risks associated with the use of AI, in particular with generative AI, may necessitate a review of contracts to determine if additional due diligence requirements may be appropriate. Are the results of the third party’s risk assessments on their AI systems available to review? Do they have a well-documented incident response plan? How will future deployments of AI into existing third-party products and services or changes be managed?

Protecting data and ensuring that the third party is well-informed on AI-related data governance practices should be key considerations. It’s essential to review third-party data handling policies to verify compliance with data protection laws, including data minimization and anonymization. Third parties should conduct regular data protection impact assessments and provide reports such as audit results, risk assessment and mitigation reports, and data security and access control reports. If data security audits are not already a part of the relationship, these may be reasonable contractual requirements to add. Make sure the FI’s contract with the third party provides the access needed to complete initial and ongoing monitoring.

Covered earlier in this article is the need for transparency and explainability in AI solutions. As part of the contracting process, consider requiring the third party to create and provide appropriate documentation detailing how their algorithms work, including how decisions are made, along with tools and reports that facilitate explanations to customers. For example, a third party may be able to provide documentation on their algorithm, notification of updates to the model, decision-making frameworks, or results of bias and fairness testing.

For most AI-enabled solutions, be sure to understand what mechanisms are in place for quality oversight and control. Document what human review processes are in place and, where appropriate, what override capabilities exist. Requiring good documentation and access to ongoing monitoring reports is essential for managing these critical relationships.

This is an exciting time as AI revolutionizes the banking industry. As the technology evolves, engaging in roadmap discussions with third parties will help to focus the third party on the FI’s needs as the third party contemplates the deployment of new products and services and AI use cases. AI’s capabilities are expanding rapidly, so unmet challenges today may well be in scope tomorrow.

Moving ahead

FI risk and compliance professionals are increasingly expected to contribute to the adoption of AI solutions within their organizations. This involves understanding the laws and regulations related to AI, as well as its basic concepts and expanding capabilities. Like other technological advances, what is novel and anxiety-inducing today will become table stakes tomorrow. Staying current on the statutory and regulatory landscape and the latest capabilities and trends is essential. Third-party providers must also maintain this awareness. All the best practices for third-party risk management remain, but the scope of the relationship will grow as third-party partners bring new AI solutions to market and enhance existing offerings.
