Generative artificial intelligence (GenAI) is permeating every aspect of our lives. From chatbots and code generators to content creation tools and predictive models, generative AI technologies are changing the way people work and how organizations operate. From a risk and control perspective, we need to be aware of the impact this technology is having on our organizations. GenAI introduces a new class of strategic, operational, compliance, and reputational risks that many enterprise risk management (ERM) and internal audit teams are only beginning to understand. Boards are growing increasingly concerned about being caught off guard by AI-related threats that could undermine the achievement of business objectives. ERM and internal audit teams must act now to integrate generative AI risks and mitigation into enterprise frameworks, enhance stakeholder understanding, and ensure that robust controls and assurance mechanisms are in place before unintended consequences materialize. In this article, we will explore the new types of risks introduced by generative AI, provide guidance on integrating AI risk management into the overall enterprise risk framework, and describe how ERM teams can collaborate with key stakeholders to establish an effective governance program for this new technology.
Breaking down generative AI risks and mitigation options
Understanding GenAI’s impact on risk management
By now, most of us have experienced both the power and unpredictability of generative AI. Systems like ChatGPT can produce original content, including text, images, code, and video, by learning patterns from the data they were trained on. Naturally, this technology has quickly made its way into the workplace. In some cases, GenAI was introduced thoughtfully, with a specific purpose in mind, or arrived through software updates that incorporated GenAI capabilities. As GenAI becomes more embedded in our workflows and in the software we use every day, we must be aware of the inherent risks it introduces. Let’s explore the major risk categories (strategic, operational, technology, compliance, and reputational) that we must address before adopting GenAI in the workplace.
Strategic risk
While generative AI brings immense value, it also carries significant risk. Strategically, organizations may become overly reliant on AI-generated outputs without fully understanding their limitations. Decisions influenced by flawed AI models or inaccurate outputs can misalign with long-term objectives, resulting in costly missteps. Further, the assumption that generative AI will automatically create efficiencies or new opportunities may lead to overinvestment in tools that lack sufficient governance or business alignment.
Operational risk
Generative AI tools can introduce hidden vulnerabilities. One of the most pressing concerns is data leakage. Employees may unintentionally share confidential or proprietary information with publicly available AI tools, which can retain that data and use it to train future models. Additionally, these models are prone to what experts call "hallucinations," where the AI generates content that appears plausible but is entirely incorrect. In highly regulated industries, such as the legal, financial, and medical sectors, this can lead to significant errors, cause harm to individuals, or result in compliance violations.
Technology risk
We also need to understand the risks associated with shadow AI. In some cases, generative AI is quietly introduced through updates to existing software without the organization’s knowledge. In other instances, employees may purchase personal GenAI accounts and use them for work. In either case, the pace of deployment can bypass traditional software vetting and change management practices, meaning these tools are integrated into workflows without adequate oversight or testing.
Compliance risk
Governments and regulatory bodies are enacting frameworks governing AI. The European Union’s AI Act, U.S. executive orders, and emerging guidelines from agencies such as the FTC and SEC are placing new obligations on organizations to ensure transparency, accountability, and fairness in their use of AI. At least in part, this is because many generative models are trained on datasets that may include copyrighted material, personally identifiable information (PII), or biased content, raising concerns around intellectual property rights, data protection, and discriminatory outcomes. Organizations using third-party AI platforms must also contend with heightened third-party risk, especially when vendors fail to disclose their training data or model architecture.
Reputational risk
Perhaps the most difficult category of AI risk to quantify is reputational. A single misuse of generative AI can quickly escalate into a public relations crisis, particularly if it involves customer-facing content, intellectual property, or a breach of trust through the disclosure of confidential information. Trust is hard to regain once lost. Inappropriate, biased, or misleading content generated by AI can damage customer loyalty, investor confidence, and employee morale. Internally, poor communication about AI policies and controls can breed fear, confusion, or resentment among staff, especially if employees view AI as a threat to their roles.
Risk management’s role in AI risk mitigation
Generative AI is already being used to augment existing workflows. As a result, teams will need to evaluate the use of AI within those workflows and ensure that risks and controls are designed and operating effectively. Rather than treating AI as a discrete or isolated issue, organizations should view it as a horizontal risk that cuts across every aspect of the business. The process starts with formally updating the risk register to include specific AI-related risks, such as generative content reliability, data bias, model misuse, third-party exposure, and regulatory non-compliance. These risks should be evaluated through cross-functional risk assessments that include input from IT, legal, compliance, HR, and business unit leaders.
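To make the register update concrete, here is a minimal sketch in Python of what AI-specific register entries could look like. Everything in it (the field names, the 1-to-5 scoring scale, and the sample scores and owners) is an illustrative assumption rather than a prescribed format; the five risks themselves are the ones named above.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One AI-related entry in the risk register (illustrative fields only)."""
    risk_id: str
    category: str            # e.g., "Operational", "Compliance"
    description: str
    likelihood: int          # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int              # 1 (minor) to 5 (severe) -- assumed scale
    owner: str               # accountable function or role
    mitigations: list[str] = field(default_factory=list)

    @property
    def inherent_score(self) -> int:
        # Simple likelihood-times-impact score; real programs may weight differently.
        return self.likelihood * self.impact

# The AI-related risks named above, expressed as register entries (sample values)
ai_risks = [
    RiskRegisterEntry("AI-001", "Operational", "Generative content reliability (hallucinations)",
                      4, 4, "Business units", ["Human review of outputs"]),
    RiskRegisterEntry("AI-002", "Compliance", "Data bias in model training or outputs",
                      3, 4, "Legal / Compliance", ["Bias testing before deployment"]),
    RiskRegisterEntry("AI-003", "Operational", "Model misuse outside approved use cases",
                      3, 3, "IT", ["Acceptable-use policy", "Access controls"]),
    RiskRegisterEntry("AI-004", "Strategic", "Third-party exposure via vendor AI platforms",
                      3, 4, "Procurement", ["Vendor due diligence"]),
    RiskRegisterEntry("AI-005", "Compliance", "Regulatory non-compliance (e.g., EU AI Act)",
                      2, 5, "Compliance", ["Regulatory watch", "AI impact assessments"]),
]

# Rank entries for the cross-functional review, highest inherent score first
for risk in sorted(ai_risks, key=lambda r: r.inherent_score, reverse=True):
    print(f"{risk.risk_id} [{risk.category}] score={risk.inherent_score}: {risk.description}")
```

Even if the real register lives in a GRC platform or spreadsheet, scoring and ranking AI entries this way gives the cross-functional assessment a shared starting point.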
Equally important is aligning AI risks with the organization’s strategic objectives and risk appetite. Boards should assess whether their existing appetite statements accurately reflect their tolerance for risks arising from automated decision-making, data usage, and untested technologies. In many cases, existing governance structures will need to be rethought or expanded to accommodate the speed and scale at which generative AI is being deployed. ERM should ensure that AI is explicitly considered when making strategic planning decisions.
During conversations about AI risk with senior management, ERM can help by simplifying AI risk into terms that are understandable to stakeholders who may not have technical backgrounds. This means moving beyond general warnings and providing concrete examples of how generative AI could fail, backfire, or violate laws. For instance, presenting case studies where AI tools inadvertently exposed sensitive data or created biased outcomes can be far more compelling than theoretical risk matrices. Risk teams should consider facilitating AI-specific risk workshops, creating management briefings for the board, and holding training sessions for business leaders to demystify AI technologies and build support for stronger governance.
Combined assurance for AI risk mitigation
Risk management teams can also collaborate with First-Line teams that are directly involved in deploying or interacting with AI tools. Often, these teams are eager to implement and use new AI tools touted as major time savers, but they might not be fully aware of the inherent risks. ERM should get involved early in the implementation process by participating in governance committees or model review boards that oversee AI adoption. This enables risk professionals to advise on suitable use cases, data input restrictions, and review procedures before AI tools are implemented.
First-Line teams also need guidance in creating an AI inventory: a comprehensive record of every AI model in use across the organization. The inventory should include each model’s purpose, source, data inputs, risk classification, and owner. Transparency into AI usage is critical not only for oversight but also for future audits, system patching, regulatory inquiries, and crisis response. ERM should help build policies that define who can use AI, for what purposes, and under what safeguards, ensuring alignment with data protection laws and ethical standards.
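As a sketch of what one inventory record could look like, the following Python snippet models an AI system using exactly the fields listed above. The record structure, the three-tier risk classification, and the sample systems are all assumptions made for illustration; a real inventory would more likely live in a GRC tool or asset catalog.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIModelRecord:
    """One AI system in the inventory; fields mirror those suggested above."""
    name: str                # system identifier
    purpose: str             # business use case the model supports
    source: str              # vendor, open-source project, or internal team
    data_inputs: list[str]   # categories of data fed to the model
    risk_class: RiskClass
    owner: str               # accountable person or function

inventory = [
    AIModelRecord(
        name="contract-summarizer",   # hypothetical system
        purpose="Summarize vendor contracts for legal review",
        source="Third-party SaaS vendor",
        data_inputs=["contract text (confidential)"],
        risk_class=RiskClass.HIGH,
        owner="Legal Operations",
    ),
    AIModelRecord(
        name="marketing-drafter",     # hypothetical system
        purpose="Draft first-pass marketing copy",
        source="Public GenAI service",
        data_inputs=["campaign briefs (internal)"],
        risk_class=RiskClass.MEDIUM,
        owner="Marketing",
    ),
]

# A simple oversight view: flag high-risk systems that touch confidential data
for record in inventory:
    if record.risk_class is RiskClass.HIGH and any(
        "confidential" in d for d in record.data_inputs
    ):
        print(f"REVIEW: {record.name} ({record.owner}) - {record.purpose}")
```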
Risk management should also coordinate with internal audit as the Third Line. While ERM assesses and advises on risks, internal audit provides independent assurance that controls are working as intended for AI risk mitigation. For generative AI, this requires a nuanced and evolving audit plan. ERM can support internal audit by sharing findings from AI risk assessments, helping to prioritize high-risk areas, and giving a perspective on audit criteria that reflect both technical and ethical considerations. An internal audit evaluation may include verifying whether proper approvals were obtained for AI adoption, whether data privacy policies are being effectively enforced, and whether there are mechanisms in place to detect inappropriate use. Internal audit can share its findings with ERM, creating a feedback loop for continuous improvement related to the organization’s AI risk management posture.
Establishing an effective control environment for AI risk management
Generative AI risk mitigation requires a combination of governance, technology, and culture, as well as people with the knowledge and skills to use and assess GenAI. At the governance level, organizations must establish clear accountability for AI-related decisions and outcomes. This often means forming AI oversight committees, defining escalation paths, and integrating AI risk into existing risk governance structures. Policies should be drafted to outline who is responsible for AI procurement, monitoring, and incident response.
From a technology perspective, protective measures must be embedded into both the development and deployment phases of AI use. This includes prohibiting the input of confidential data into public models, classifying data so that only approved sources are used for training or prompting, and setting up monitoring tools to detect anomalies in model behavior. Models should undergo rigorous testing for bias, hallucination, and accuracy, with results documented and reviewed regularly. In high-risk use cases, outputs should always be subject to human verification before being acted upon.
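As one illustration of embedding such protections at the point of use, the sketch below screens prompts bound for a public model for patterns that suggest confidential data. The specific patterns, function name, and blocking behavior are assumptions made for this example; in practice a control like this would sit alongside data loss prevention tooling and a data classification catalog rather than a handful of regular expressions.

```python
import re

# Hypothetical classification patterns; real programs would pull these from a
# data governance catalog rather than maintaining regexes by hand.
CONFIDENTIAL_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(prompt: str, model_is_public: bool) -> str:
    """Block prompts that appear to carry confidential data bound for a public model.

    A minimal pre-submission control sketch; it supplements, not replaces,
    DLP tooling and human review.
    """
    if model_is_public:
        hits = [name for name, pat in CONFIDENTIAL_PATTERNS.items() if pat.search(prompt)]
        if hits:
            raise ValueError(f"Prompt blocked: possible confidential data ({', '.join(hits)})")
    return prompt

# A prompt with no flagged patterns passes through unchanged
safe = screen_prompt("Summarize our public press release", model_is_public=True)
print("Prompt allowed:", safe)

# This call would raise, because the text matches the SSN pattern:
# screen_prompt("Customer SSN is 123-45-6789", model_is_public=True)
```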
Human oversight also plays a critical role. Employees need to be trained on the appropriate use of generative AI tools, the risks they pose, and the steps to take when something appears to be off. This training should be tailored to specific roles. What’s relevant for a software developer using AI-assisted coding will differ from what’s important for a marketing professional using an AI content generator. Creating a culture where AI use is transparent, intentional, and subject to scrutiny is essential for long-term resilience.
Alignment with established frameworks
Organizations seeking a more structured approach to AI risk management can adopt formal frameworks such as the NIST AI Risk Management Framework (AI RMF). This framework is built around four key functions: map, measure, manage, and govern. Mapping involves understanding the context and purpose of AI systems, including potential harms. Measuring focuses on assessing the reliability, accuracy, and fairness of the AI models. Managing is about prioritizing risks and implementing mitigation strategies. Governing refers to the enterprise-wide culture, leadership, and accountability structures needed to ensure AI is used responsibly. For ERM teams, this framework can serve as a blueprint for building or enhancing AI risk management programs that are both scalable and aligned with industry best practices.
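For ERM teams that want a quick self-assessment against the four functions, a simple checklist structure like the Python sketch below can help track coverage. The activity wording paraphrases the descriptions above, and the gap-check logic is an assumption of this example, not part of the NIST framework itself.

```python
# The four AI RMF functions, mapped to example activities drawn from the
# descriptions above. The activity wording is illustrative, not NIST's own.
AI_RMF = {
    "Map":     ["Document each system's context, purpose, and potential harms"],
    "Measure": ["Assess the reliability, accuracy, and fairness of AI models"],
    "Manage":  ["Prioritize risks and implement mitigation strategies"],
    "Govern":  ["Maintain culture, leadership, and accountability structures"],
}

def coverage_gaps(completed: dict[str, list[str]]) -> list[str]:
    """Return AI RMF functions with no completed activities -- a simple self-check."""
    return [fn for fn, acts in AI_RMF.items() if not completed.get(fn)]

# Example: a program that has mapped and measured, but not yet managed or governed
print(coverage_gaps({"Map": ["inventory complete"], "Measure": ["bias testing"]}))
# -> ['Manage', 'Govern']
```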
Generative AI is not a future risk or an emerging risk. Whether you realize it or not, GenAI is already in use within your organization. Boards are increasingly aware of the reputational, regulatory, and strategic implications of poor AI governance. ERM teams play a critical role in ensuring that organizations harness AI responsibly, transparently, and in alignment with their business objectives. By taking a proactive, collaborative, and informed approach, ERM teams can help organizations reap the benefits of generative AI while minimizing the risks that threaten to derail long-term success.