Compliance | November 9, 2023

Conducting NIST audits using the NIST AI Risk Management Framework

Accessible artificial intelligence (AI) has become ubiquitous in our personal and professional lives. AI tools are used for content creation, education, customer service, healthcare, agriculture, and many other use cases. Of course, the advance of an emerging technology brings with it the abuse of that same technology. The concern is so high that the U.S. Department of Commerce recently announced that, through the National Institute of Standards and Technology (NIST), it “will establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” The AI Safety Institute will build on the existing body of knowledge within NIST, including the NIST AI Risk Management Framework (AI RMF). The AI RMF is intended to help organizations of all sizes and sectors design, develop, deploy, and use AI systems in a trustworthy and responsible manner. This article explores the basics of the NIST AI Risk Management Framework and how internal audit leaders can use it in their audit practice:

  1. Understanding the NIST AI Risk Management Framework
  2. NIST AI Risk Management Framework functions
  3. How to audit using the NIST AI Risk Management Framework
  4. Prioritize your NIST audit now

Understanding the NIST AI Risk Management Framework

The NIST AI Risk Management Framework is a general framework meant to be adapted to an organization’s specific requirements. Because the field of artificial intelligence is changing rapidly, the framework itself is still evolving, especially in light of the recent Executive Order and related announcements from the U.S. government.

NIST AI Risk Management Framework functions

The NIST AI Risk Management Framework and the NIST AI RMF Playbook are based on four core functions (a simple checklist sketch follows the list):

  • Govern: Establish governance structures and processes to build a culture of AI risk management across the organization.
  • Map: Identify and assess the risks associated with AI systems and the people who develop and use them within the organization.
  • Measure: Assess, analyze, and track the exposure from identified AI risks.
  • Manage: Implement and maintain risk management controls to mitigate identified risks.
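To make the four functions concrete for audit planning, here is a minimal sketch in Python that treats them as categories on an audit checklist. This is an illustration only, not part of the framework itself; the AuditItem fields and the example wording are hypothetical.

```python
from dataclasses import dataclass, field

# The four AI RMF core functions, used here as audit workpaper categories.
CORE_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class AuditItem:
    """A single audit question or control check tied to one core function."""
    function: str          # one of CORE_FUNCTIONS
    description: str       # what the auditor is evaluating
    evidence: list = field(default_factory=list)  # references to workpapers
    status: str = "open"   # open, in_progress, or complete

    def __post_init__(self):
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.function}")

# Example checklist items, one per function (wording is illustrative only).
checklist = [
    AuditItem("Govern", "AI risk management roles and policies are defined"),
    AuditItem("Map", "All AI systems and their contexts of use are inventoried"),
    AuditItem("Measure", "Identified AI risks are analyzed and tracked with metrics"),
    AuditItem("Manage", "Controls are in place to mitigate prioritized AI risks"),
]

for item in checklist:
    print(f"[{item.function}] {item.description} -> {item.status}")
```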

The NIST AI Risk Management Framework provides a straightforward method for identifying and assessing AI risks across an organization. Compliance with the AI RMF does not require strict adherence to a defined set of controls; instead, the broad nature of the framework allows organizations to adapt the guidance to fit their needs. According to the framework, the goal is not NIST compliance but “to help manage the many risks of AI and to promote trustworthy and responsible development and use of AI systems.” AI systems built with these core functions in mind are more likely to comply with the growing body of AI regulations and thereby earn the trust of stakeholders.

How to audit using the NIST AI Risk Management Framework

While the NIST AI Risk Management Framework is not prescriptive like FedRAMP and StateRAMP, we can perform a NIST audit using the NIST AI RMF as a guide. For example, the audit department could start by identifying the population of AI systems within the organization. Next, the team assesses the current state of AI risk management through interviews with key personnel, reviews of documentation, and observations of AI system development practices. The NIST audit fieldwork should focus on the risk management practices to govern, map, measure, and manage AI-associated risks. Finally, the team can test the design and operating effectiveness of the controls identified during fieldwork.
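As a rough sketch of how that fieldwork might be organized, the example below assumes a hypothetical inventory of AI systems, counts the controls identified for each core function, and flags systems with no controls available to test. The system names and control counts are placeholders, not real data.

```python
from collections import defaultdict

# Hypothetical inventory of AI systems identified during audit planning,
# mapped to the number of controls identified per AI RMF function.
ai_systems = {
    "customer-service-chatbot": {"Govern": 2, "Map": 3, "Measure": 1, "Manage": 2},
    "credit-scoring-model":     {"Govern": 1, "Map": 2, "Measure": 0, "Manage": 1},
}

def coverage_gaps(inventory):
    """Flag systems with no identified controls for a given AI RMF function."""
    gaps = defaultdict(list)
    for system, controls in inventory.items():
        for function in ("Govern", "Map", "Measure", "Manage"):
            if controls.get(function, 0) == 0:
                gaps[system].append(function)
    return dict(gaps)

# Fieldwork summary: which systems lack controls to test, and for which function.
for system, missing in coverage_gaps(ai_systems).items():
    print(f"{system}: no controls identified for {', '.join(missing)}")
```

A summary like this can help the team decide where design-and-effectiveness testing is possible and where the finding is simply that no controls exist yet.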

The NIST AI Risk Management Framework references required documentation throughout. Requiring clear, comprehensive documentation across the AI development lifecycle has been a point of contention for several years. AI algorithms and training processes notoriously lack documentation and are often called black boxes. Any NIST AI audit should therefore verify that documentation exists for all critical AI-driven processes, calculations, and models.
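One way an audit team might track this is with a simple documentation checklist per model, along the lines of the sketch below. The checklist fields are illustrative assumptions, not requirements drawn verbatim from the AI RMF.

```python
# Hypothetical documentation checklist for a single AI model; the field names
# are illustrative, not taken from the AI RMF itself.
REQUIRED_DOCS = (
    "intended_use",         # business purpose and approved use cases
    "training_data",        # sources, collection dates, known limitations
    "model_methodology",    # algorithm, key parameters, validation approach
    "performance_metrics",  # accuracy/fairness measures and review cadence
    "change_log",           # retraining and material model changes
)

def missing_documentation(model_docs: dict) -> list:
    """Return the checklist items with no documentation on file."""
    return [item for item in REQUIRED_DOCS if not model_docs.get(item)]

# Example: a model with only partial documentation.
docs_on_file = {"intended_use": "Fraud triage", "model_methodology": "Gradient boosting"}
print(missing_documentation(docs_on_file))
# -> ['training_data', 'performance_metrics', 'change_log']
```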

Prioritize your NIST audit now

The artificial intelligence technology explosion is underway, and the NIST AI Risk Management Framework is well suited to help audit teams understand the risk and control environment. If you have not yet added an AI audit to your plan, consider doing so now. NIST provides many resources to help organizations perform AI RMF audits, including the AI RMF Playbook and the AI RMF Assessment Guide.

