Compliance | April 23, 2025

Navigating compliance in the age of AI: Insights from risk experts

For compliance officers, adopting transformative technology like artificial intelligence (“AI”) must be done responsibly and in a measured way, minimizing risk while maximizing return on investment (“ROI”).

Wolters Kluwer convened industry leaders from large and small banks at the 2024 CRA & Fair Lending Colloquium[1] to discuss important issues such as how they use technology to gain efficiencies and enhance transparency and equity in lending practices.

Much of the conversation focused on benefits and concerns around AI and its role in compliance[2], with practical recommendations on implementing the technology effectively and responsibly. Here are some of the main points shared by the panelists.

AI use cases for risk and compliance

A 2024 Moody’s survey[3] of 550 risk and compliance experts revealed that eight out of every ten respondents expected widespread adoption of AI by 2029, though only 9% were active AI users at that time. Still, those who were using AI reported noticeable improvements in efficiency, risk identification, and other key metrics.

The Moody’s report aligns with the perspective of our panelists, each of whom discussed the successes they’ve seen even with incremental use of AI. Our panel cited specific use cases where AI is optimal, including:

  • Identifying lending patterns and practices, including possible disparities that could create unequal financial opportunities for certain individuals.
  • Loan-to-cost indicator analyses that use AI to provide accurate assessments of loan risk based on the cost of the project or asset being financed.
  • Real-time fraud monitoring and prevention, including detecting fraudulent documents or identifying high-risk borrowers with a history of applying for and defaulting on loans.
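The loan-to-cost (LTC) indicator mentioned above comes down to simple arithmetic: the loan amount divided by the total cost of the project being financed. As a minimal sketch (the 0.80 risk threshold is a hypothetical value for illustration, not one cited by the panel):

```python
def loan_to_cost(loan_amount: float, project_cost: float) -> float:
    """Loan-to-cost (LTC) ratio: the share of project cost financed by the loan."""
    if project_cost <= 0:
        raise ValueError("project cost must be positive")
    return loan_amount / project_cost

def flag_high_ltc(loan_amount: float, project_cost: float,
                  threshold: float = 0.80) -> bool:
    """Flag loans whose LTC exceeds a risk threshold (0.80 is an assumed example)."""
    return loan_to_cost(loan_amount, project_cost) > threshold

print(loan_to_cost(800_000, 1_000_000))   # 0.8
print(flag_high_ltc(900_000, 1_000_000))  # True
```

In practice an AI-assisted model would weigh LTC alongside many other signals; the ratio itself is just one deterministic input.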

All these use cases involve automation, which the panelists identified as a primary reason risk and compliance managers use AI. AI can also automate compliance checks[4] for data privacy mandates like the General Data Protection Regulation (GDPR), uncover security gaps and non-compliance within supply chains, and more. The panelists agreed that even banks that have taken a conservative approach to AI acknowledge the value of automation in improving and simplifying these fundamental tasks. As one panelist said, “AI has made a lot of the manual processes that we rely on, like audits, more efficient. We're leveraging the technology to move these processes forward more aggressively.”
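To make the automation point concrete, here is a toy sketch of what an automated data-privacy compliance check might look like: scanning customer records for any that lack a recorded lawful basis for processing, which GDPR requires. The record schema and field names are hypothetical, for illustration only:

```python
# Hypothetical customer records; "consent_basis" stands in for the
# lawful basis for processing that GDPR requires to be documented.
records = [
    {"id": 1, "consent_basis": "contract"},
    {"id": 2, "consent_basis": None},
    {"id": 3, "consent_basis": "legitimate_interest"},
]

def missing_consent(records: list[dict]) -> list[int]:
    """Return the IDs of records with no documented lawful basis."""
    return [r["id"] for r in records if not r.get("consent_basis")]

print(missing_consent(records))  # [2]
```

A real compliance platform layers AI on top of rule-based scans like this one, but the automation benefit the panelists described starts with exactly this kind of systematic check.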

AI usage at large and small banks

There is a clear demarcation between AI adoption among large versus small banks. Today, large banks are actively using AI for real-time monitoring and streamlining existing processes for greater efficiency. Simultaneously, large banks are wary of the risks associated with AI and have prioritized implementing responsible AI controls. Robust training programs are common at larger banks to educate employees on the benefits and effectiveness of AI, and large banks are more likely to have the resources to build their own AI platforms and invest in advanced technologies.

Meanwhile, smaller banks tend to be more risk-averse about AI and have more limited budgets. As one panelist noted, many smaller institutions are therefore taking “a more creative and nuanced approach” to AI, applying the technology in very targeted ways for minimum cost and maximum ROI. Standard practices include automating manual procedures such as scanning data for indicators of potential fair lending risk or non-compliance.

| Feature | Large Banks | Small Banks |
| --- | --- | --- |
| AI Focus | Efficiency; real-time monitoring | Identifying lending patterns, community focus; efficiency |
| Decision-Making | AI assists, but humans make decisions | AI assists in a limited fashion; humans make decisions |
| Budget | Larger; can invest in advanced technologies | Limited; requires cost-effective strategies |
| Approach | Cautious; emphasis on guardrails | Creative, nuanced, community-focused |

Figure 1: Key differences between use of AI in large and small banks

Building a strong risk and compliance AI strategy

Moody’s respondents cited a lack of transparency into how AI models make decisions as one of their biggest AI-related challenges. Our panelists reflected that sentiment and were quick to flag the pressing need to build trustworthy AI platforms that illuminate the so-called AI “black box.”

In starting an AI strategy, transformation teams should define specific objectives, areas of application, and expected outcomes. The strategy should also identify key use cases—such as fraud detection, credit risk assessment, personalized customer service, process automation, or investment management. Banks should align these initiatives with broader business goals to reap tangible value from their AI investments.

The core elements of a sound risk and compliance AI strategy include:

  • Assigning roles and responsibilities. Banks should designate AI governance roles, delegating duties to AI ethics teams, compliance officers, data scientists, and risk managers who oversee deployment, monitoring, and transparency.
  • Frequent testing. Audits, bias detection analysis, and explainability testing should be performed routinely and consistently to root out biases, prevent discriminatory lending practices, and reinforce accountability.
  • Maintaining human oversight. Humans should review every decision before it impacts borrowers.
  • Documenting AI functions. Organizations must document how AI models function to ensure alignment with regulatory requirements and reporting, ethical lending standards, and corporate policies.
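The panel did not prescribe a specific bias test, but one widely used fair-lending screen is the adverse impact ratio with the "four-fifths rule": compare approval rates between a protected group and a control group, and flag the outcome for review if the ratio falls below 0.8. A minimal sketch, assuming raw approval counts as inputs:

```python
def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    """Ratio of the protected group's approval rate to the control group's."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

def four_fifths_flag(*counts: int) -> bool:
    """Flag for review when the adverse impact ratio falls below 0.8."""
    return adverse_impact_ratio(*counts) < 0.8

# Example: 30/100 protected-group approvals vs. 50/100 control-group approvals.
air = adverse_impact_ratio(30, 100, 50, 100)
print(round(air, 2))            # 0.6
print(four_fifths_flag(30, 100, 50, 100))  # True
```

A ratio below the threshold is a signal for human review, not proof of discrimination; this is exactly where the "frequent testing" and "human oversight" elements above intersect.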

The AI regulatory environment

The federal government has repealed many regulatory frameworks established during the Biden administration, including those related to transparency and bias. However, this should not change how financial institutions approach the management of AI transparency or ethics, since federal lending laws like the Equal Credit Opportunity Act (ECOA) and the Community Reinvestment Act[5] (CRA) require the use of unbiased data to understand lending practices and prevent discrimination.

Simultaneously, many states continue strengthening their laws around responsible AI use. For instance, Colorado, New York, and Connecticut are among the states that are either preparing to introduce or have already introduced legislation about transparency regarding AI’s role in lending operations. Financial organizations operating in these and other states must continue to adhere to local regulatory requirements.

As one panelist remarked, “Make sure you understand all the regulations out there, as well as industry best practices around responsible AI.” He noted that “the pendulum may swing from a regulatory standpoint”: while it is moving one way now, it will eventually swing back, and risk and compliance managers must prepare for any eventuality.

Conclusion: AI is digital transformation

When a risk and compliance officer talks about digital transformation, the subject of AI is never far from the conversation. The technology has upended the financial landscape to the point that other technology discussions have taken a backseat.

The Colloquium discussion demonstrated that financial institutions need to take a very deliberate approach when implementing AI. Financial institutions must establish clear AI governance, maintain human oversight, and ensure responsible and ethical deployment of the technology. They must also be cognizant of the evolving regulatory landscape and be ready to adapt.

Risk and compliance managers should find ways to responsibly make AI work for their organization. The technology offers a great deal of promise and tremendous upside, if it is implemented properly.

Join the discussion. Mark your calendars for the next CRA & Fair Lending Colloquium, November 16 – 19, 2025.

_______

[1] 2025 CRA & Fair Lending Colloquium | Wolters Kluwer

[2] The future of bank compliance: A conversation with Lexi, AI avatar | Wolters Kluwer

[3] Moody’s risk and compliance survey

[4] Digitizing Risk and Compliance: How AI Can Help Manage a Growing Challenge

[5] CRA as a platform for compliance in the boardroom | Wolters Kluwer
