The rapid growth and adoption of generative AI (GenAI) technologies have sparked a race among organizations to strengthen internal audit coverage over the potential risks associated with these advancements. A recent survey conducted by Gartner reveals the urgency and complexity of this task, highlighting the need for effective oversight in this evolving landscape.
A Snapshot of the Survey Findings
Gartner’s survey involved 102 chief audit executives (CAEs) who were asked to rate the importance of providing assurance over 35 identified risks. The results highlighted six key risks poised for increased internal audit attention:
- Strategic Change Management
- Diversity, Equity, and Inclusion
- Organizational Culture
- AI-Enabled Cyberthreats
- AI Control Failures
- Unreliable Outputs from AI Models
These findings underscore a crucial point: half of the top six risks identified for expanded audit coverage are directly linked to AI. This trend suggests that as organizations increasingly integrate AI technologies, they must also develop robust frameworks to address the new and unique challenges these technologies present.
Navigating the Complexity of AI-Related Risks
“Many internal auditors are looking to expand their coverage in this area,” explains Thomas Teravainen, research specialist with the Gartner for Legal, Risk & Compliance Leaders practice. The spectrum of AI-related risks is vast, ranging from potential control failures and unreliable outputs to sophisticated cyberthreats.
To illustrate, consider a scenario where an organization leverages AI for customer service operations. If the AI model generates biased or inaccurate responses, it can lead to reputational damage, legal repercussions, and loss of customer trust. The challenge for internal auditors is not only to identify such risks but also to implement controls that mitigate them effectively.
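As a concrete (and deliberately simplified) illustration of such a control, consider the Python sketch below: a pre-send screen that holds a generated reply for human review if it trips basic policy checks. The function name, blocked-term list, and length limit are hypothetical stand-ins for an organization's actual policies, not a reference implementation.

```python
# Hypothetical pre-send guardrail for AI-generated customer-service replies.
# All names and rules here are illustrative assumptions, not a vendor API.

BLOCKED_TERMS = {"guaranteed outcome", "legal advice", "refund denied"}
MAX_REPLY_CHARS = 1200

def screen_reply(reply: str) -> tuple[bool, list[str]]:
    """Return (approved, reasons); any failed check routes the reply to a human."""
    reasons = []
    lowered = reply.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            reasons.append(f"contains blocked phrase: {term!r}")
    if len(reply) > MAX_REPLY_CHARS:
        reasons.append("reply exceeds length limit")
    if not reply.strip():
        reasons.append("empty reply")
    return (not reasons, reasons)

approved, reasons = screen_reply("We cannot offer legal advice on this claim.")
print(approved, reasons)  # False ["contains blocked phrase: 'legal advice'"]
```

From an audit perspective, the point is less the code itself than the evidence around it: that such checks exist, were tested, log their decisions, and route failures to a human reviewer.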
The Confidence Gap: A Significant Barrier
One of the most striking findings of the survey is internal auditors' lack of confidence in managing AI risks. Despite the clear importance of these risks, fewer than 11% of respondents said they were very confident in providing assurance over AI-related issues such as control failures and unreliable outputs.
This confidence gap poses a significant challenge. As Teravainen notes, “With such a broad array of potential risks coming from all over the business, it’s easy to understand why auditors aren’t confident about their ability to apply assurance.” The complexity of AI technologies, combined with their rapid evolution, means that internal auditors must constantly update their knowledge and skills to stay ahead of potential risks.
Key AI Risks in Focus
- AI-Enabled Cyberthreats: As AI technologies advance, so do the attacks built with them. Malicious actors are increasingly using AI to craft more sophisticated cyberattacks, making it imperative for internal auditors to understand these emerging threats and develop strategies to counter them.
- AI Control Failures: AI systems are only as good as the data and algorithms that power them. A failure in AI controls—whether due to poor data quality, biased algorithms, or inadequate oversight—can lead to disastrous outcomes. Internal auditors must ensure that AI models are rigorously tested and validated before deployment.
- Unreliable Outputs from AI Models: AI models can produce outputs that are biased, inaccurate, or outright nonsensical, often referred to as "hallucinations." These unreliable outputs can mislead decision-makers and damage an organization's credibility. Internal auditors need processes that continuously monitor and validate AI outputs; a minimal sketch of one such check follows this list.
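To make "monitor and validate AI outputs" tangible, here is a minimal, hypothetical Python sketch of one validation idea: flagging an answer whose vocabulary overlaps too little with the source material it was supposed to be grounded in. The 0.5 threshold and function names are assumptions for illustration; production systems rely on stronger techniques such as entailment models or citation verification.

```python
# Naive groundedness check: flag an AI answer whose vocabulary barely
# overlaps with the source text it should be based on. Illustrative only;
# the 0.5 threshold is an assumption, not an established standard.

import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def grounded_enough(answer: str, source: str, threshold: float = 0.5) -> bool:
    """True if at least `threshold` of the answer's tokens appear in the source."""
    ans, src = tokens(answer), tokens(source)
    if not ans:
        return False
    return len(ans & src) / len(ans) >= threshold

source = "The warranty covers repairs for twelve months from purchase."
good = "Repairs are covered under warranty for twelve months."
bad = "Your warranty includes free international shipping forever."

print(grounded_enough(good, source))  # True
print(grounded_enough(bad, source))   # False, likely hallucinated content
```

A check this crude would never stand alone, but it shows the shape of the control auditors should expect to find: an automated, documented gate between model output and downstream use.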
Meeting Stakeholder Expectations
With CEOs and CFOs increasingly viewing AI as the technology that will most significantly impact their organizations over the next three years, the pressure on CAEs to provide robust assurance over AI-related risks is mounting. Continued gaps in confidence among internal auditors could undermine their ability to meet these expectations, potentially leaving organizations vulnerable to unforeseen risks.
Moving Forward: Strategies for Effective AI Risk Management
To bridge the confidence gap and enhance internal audit coverage of AI-related risks, organizations can consider the following strategies:
- Invest in AI Training for Auditors: Equip internal auditors with the knowledge and skills needed to understand and evaluate AI technologies. This includes training on AI fundamentals, as well as specialized courses on AI risk management and compliance.
- Develop AI-Specific Audit Frameworks: Traditional audit frameworks may not be sufficient to address the unique challenges posed by AI. Developing AI-specific frameworks can help auditors assess AI risks more effectively and provide more targeted assurance.
- Implement Continuous Monitoring and Validation: AI models are dynamic and can drift over time. Continuous monitoring and validation are essential to ensure that AI systems remain reliable and compliant with organizational standards and regulatory requirements (see the drift-check sketch after this list).
- Foster Collaboration Across Departments: Internal auditors should work closely with data scientists, IT, and legal teams to gain a comprehensive understanding of how AI technologies are being used within the organization and the associated risks.
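As a sketch of what a scheduled monitoring control might compute, the hypothetical Python example below measures distribution shift in model output scores using a population stability index (PSI). The bin count, the stand-in data, and the 0.2 alert threshold are illustrative assumptions rather than a prescribed standard.

```python
# Sketch of a scheduled drift check: compare the distribution of recent
# model output scores against a baseline using the population stability
# index (PSI). Thresholds and window data are illustrative assumptions.

import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """PSI over equal-width bins; higher values mean larger distribution shift."""
    lo, hi = min(baseline + current), max(baseline + current)
    width = (hi - lo) / bins or 1.0

    def fractions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # small floor avoids log-of-zero for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline_scores = [0.1 * i for i in range(100)]       # stand-in history
current_scores = [0.1 * i + 3.0 for i in range(100)]  # shifted distribution

score = psi(baseline_scores, current_scores)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule-of-thumb alert level, assumed here
    print("ALERT: output distribution has drifted; trigger model review")
```

Auditors would typically look for evidence that checks like this run on a schedule, that alerts have a named owner, and that thresholds were justified and periodically revisited.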
Conclusion: A Call to Action for Internal Auditors
The rise of generative AI presents both opportunities and challenges for organizations. As these technologies become more integrated into business operations, the role of internal auditors in managing AI-related risks becomes increasingly critical. By expanding their coverage and developing the necessary expertise, internal auditors can help safeguard their organizations against the potential pitfalls of AI, ensuring that innovation and risk management go hand in hand.
In summary, while the path forward is complex, proactive steps taken today will enable organizations to harness the power of AI safely and responsibly, driving growth and innovation without compromising on risk management.