How to Establish GRC Practices for AI

June 12, 2025
AI demands a new GRC playbook—governance must address evolving risks across models, data, users, and infrastructure to ensure secure, ethical, and effective AI deployment.

Why AI Governance Demands a New Playbook

AI is not just another software tool—it’s a paradigm shift. While the term “artificial intelligence” has been around since the 1950s, it’s only recently become widely deployed across industries. With this rise comes the challenge: traditional governance, risk management, and compliance (GRC) models aren’t equipped to fully manage the unique risks and dynamics of AI.

So, how is AI different—and how do we build the right governance around it?

 

READ MORE: AI Governance—Tech Problems Without Tech Solutions

 

What Makes AI Governance Unique?

Unlike traditional software, which follows a set of predefined rules written by developers and behaves in predictable ways, AI operates differently. AI systems learn patterns from data and evolve, adapting as they encounter new inputs. This capacity for change makes them less transparent, harder to test, and often more difficult to predict. AI introduces a new set of risks that traditional governance frameworks are not always equipped to handle.

AI systems typically operate in two major phases. During the training phase, curated datasets are used to teach the model how to interpret and respond to information. Following this, in the inference phase, the trained model applies what it has learned to new, real-world inputs. The model then generates outputs—such as predictions or classifications—which are often accompanied by varying levels of confidence or uncertainty.

Each of these phases presents its own set of risks. These risks can arise from the model itself, the people who develop or manage it (such as data scientists and engineers), the quality and biases present in the training data, the nature of incoming data and the outputs it generates, the behavior of end users, and the reliability of the infrastructure that supports the system.

Effective AI governance must account for all of these interconnected elements. It is not enough to govern the model in isolation; a comprehensive approach requires oversight of the entire AI ecosystem, from data pipelines and development practices to deployment environments and human interaction.

Experience shows that it is best to assess risk and develop governance programs by examining the training and inference phases across each element:

  1. The model
  2. The people (developers, data scientists) who build and maintain the model
  3. The data used to train the model
  4. The data inputs to the model
  5. The output
  6. The users
  7. The infrastructure

READ MORE: Tips to Build an AI Governance Team

 

How to Incorporate AI into Existing GRC Practices

The way scientists and developers categorize models does not lend itself to developing governance programs, assessing use cases, or communicating with stakeholders, shareholders, and users. Scientists and developers use terms like Deep Learning, Fuzzy Logic, and Reactive Machines. The rest of us discuss categories like Machine Learning, Neural Networks, and Generative AI. Product vendors use product names. Unfortunately, a clean taxonomy does not exist, and the lack of common terminology hinders our ability to communicate reliably.

Standards bodies and industry groups are working to develop a common taxonomy with agreed-upon language. Until then, we can craft our governance programs around deployment models and modalities driving toward specific use cases.

Deployment models range from most to least controlled: on-premises, custom-built models trained on your proprietary data are fully under your control, while publicly available products with AI embedded in them are largely outside it. In between is a range driven by your sector and value-creation strategy.

Modality is a fancy word for the output type: text, code, images, voice, video, and the like.

As we craft our programs, policies, and controls, we need to look at the traditional factors of likelihood and impact through the lens of both the modality and the deployment model. AI embedded in an office productivity tool to check spelling, grammar, and punctuation presents a different risk than a custom-built application used for making medical diagnoses.
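
To make that concrete, here is a minimal sketch of scoring a use case by likelihood and impact through the deployment-model lens. The categories and weightings are illustrative assumptions, not a formal framework, and impact is driven here by the use case (modality would factor in similarly).

```python
# Illustrative only: hypothetical categories and weightings, not a formal
# risk model. The idea is to score each AI use case by viewing classic
# likelihood x impact through the deployment-model lens.

# More control over the deployment generally lowers the likelihood of an
# unmanaged failure; the ordering below is an assumption for illustration.
DEPLOYMENT_LIKELIHOOD = {
    "on_prem_custom": 1,      # built and hosted by us, proprietary data
    "hosted_fine_tuned": 2,   # third-party platform, tuned on our data
    "embedded_saas": 3,       # AI embedded in a vendor product
}

# Impact is driven largely by what the output is used for.
USE_CASE_IMPACT = {
    "spelling_and_grammar": 1,
    "code_generation": 3,
    "medical_diagnosis": 5,
}

def risk_score(deployment: str, use_case: str) -> int:
    """Classic likelihood x impact, viewed through the AI-specific lens."""
    return DEPLOYMENT_LIKELIHOOD[deployment] * USE_CASE_IMPACT[use_case]

if __name__ == "__main__":
    # AI embedded in an office tool checking grammar vs. a custom-built
    # diagnostic application: same math, very different results.
    print(risk_score("embedded_saas", "spelling_and_grammar"))   # 3
    print(risk_score("on_prem_custom", "medical_diagnosis"))     # 5
```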

 

DOWNLOAD NOW: Tips for Reducing AI Risk

 

Learn Basic Blocking and Tackling for AI

This is where traditional security practices, applied to AI, can have an immediate impact.

The guiding principles underlying Zero Trust are directly applicable to AI governance. While the Cloud Security Alliance (CSA) has pieces in the works that view AI through the lens of Zero Trust, they are not yet available. In the meantime, you can review Zero Trust Guiding Principles.

Key AI Governance Focus Areas:

  • Security Culture in AI Development: Data scientists often lack security training and may be overprivileged. Many models lack access controls, logging, or monitoring, so these protections must be implemented in the surrounding infrastructure.
  • Establish a Governance Group: AI governance requires a cross-functional team with board-level backing. Representation from legal, compliance, risk, and business units is crucial.
  • Update Corporate Policies: Create or revise AI-specific policies. Most urgently, update Acceptable Use Policies (AUPs) to address generative AI tools. Addenda are often easier than full rewrites.
  • Identity and Access Management (IAM): Extend IAM to cover data scientists and developers. Enforce least privilege, separation of duties, and access controls on training data and model code.
  • Protect Model Integrity: Since many models lack internal access controls, surround them with safeguards. Use digital signatures and hashes to detect tampering (a minimal sketch follows this list).
  • Secure Training Data: Know your data sources (provenance), protect confidentiality, and ensure integrity. Digital signatures help detect unauthorized changes. Over-collecting data can increase privacy and regulatory risks.
  • Awareness and Training: Train three groups: data scientists, developers, and users. Developers need to understand threats; users must recognize risks like hallucinations or deepfakes and know how to respond.
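
For the model-integrity and training-data items above, a minimal sketch of the hash-and-verify approach might look like the following. The file paths are hypothetical, and in practice the manifest itself should be digitally signed so it cannot be silently replaced.

```python
# Minimal sketch: detect tampering with model artifacts (or training data
# files) by hashing them and comparing against a manifest recorded at
# release time. Directory and file names are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(artifact_dir: str) -> dict:
    """Record a hash for every artifact (weights, tokenizer, configs)."""
    return {p.name: sha256_of(p) for p in Path(artifact_dir).glob("*") if p.is_file()}

def verify(artifact_dir: str, manifest: dict) -> list[str]:
    """Return the names of artifacts that are missing or have changed."""
    current = build_manifest(artifact_dir)
    return [name for name, digest in manifest.items() if current.get(name) != digest]

if __name__ == "__main__":
    manifest = build_manifest("model_release_v1")           # at release time
    Path("manifest.json").write_text(json.dumps(manifest))  # store (and sign) it
    tampered = verify("model_release_v1", manifest)         # at load/deploy time
    print("Tampered or missing artifacts:", tampered)
```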

AI Deep Fakes and Influence Operations

Adversarial use of AI, especially in social engineering attacks, is on the rise. Provide tailored training for relevant roles such as help desk personnel, systems administrators, executive support staff, and accounts payable.

CISA, the FBI, the NSA, and the UK's National Cyber Security Centre have excellent resources.

Logging and Monitoring

Logging and monitoring features and functions are likely not inherent to your model or data sets. They will need to be provided through the supporting infrastructure.

Logging and monitoring will need to be developed for event detection, traceability, and incident response. Your organization likely already has standards in this space; it is best to start there and view them through the lens of AI.
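
As a starting point, a minimal sketch of inference logging in the supporting infrastructure might look like this. The model interface and field names are assumptions, not a prescribed schema; the point is that traceability lives around the model, not inside it.

```python
# Minimal sketch using Python's standard logging module. The model call
# (predict) is a hypothetical interface.
import hashlib
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai.inference")

def logged_inference(model, prompt: str, model_version: str, user_id: str):
    request_id = str(uuid.uuid4())
    started = time.time()
    output = model.predict(prompt)  # hypothetical model interface
    log.info(json.dumps({
        "request_id": request_id,        # ties the event to an incident later
        "model_version": model_version,  # which model produced the output
        "user": user_id,                 # who asked, for traceability
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_ms": round((time.time() - started) * 1000),
        "output_chars": len(str(output)),  # avoid logging sensitive content verbatim
    }))
    return output
```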

 

DOWNLOAD NOW: AI Acceptable Use Policy Template

 

Role of AI Bill of Materials (BOM)

Much like a traditional Bill of Materials (BOM) tells us what is on a truck, or a Software BOM (SBOM) maps out the third-party software in a product, an AI BOM provides a detailed inventory of an AI product: the components within it, how it was trained, cautions, and instructions.

An AI BOM is like a list of ingredients and nutritional information on food products.

While the standard(s) for AI BOMs are evolving, it is clear they have two primary purposes. First, an AI BOM provides transparency so you can make informed, risk-based decisions. Second, when something goes wrong, and it will, the AI BOM provides insight so you can assess the impact, determine the next steps, contain, respond, and correct.
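
Because those standards are still settling, the fields below are only an illustration of what an AI BOM might capture to support informed, risk-based decisions and incident response, not a published schema.

```python
# Illustrative only: hypothetical fields an AI BOM might capture. Standards
# efforts (e.g., CycloneDX's machine-learning BOM work) are still evolving,
# so treat this as an assumption, not a spec.
ai_bom = {
    "model": {"name": "example-classifier", "version": "1.2.0", "license": "proprietary"},
    "base_components": [
        {"name": "open-source-foundation-model", "version": "7b", "license": "apache-2.0"},
    ],
    "training_data": [
        {"source": "internal-claims-2019-2023", "provenance": "first-party", "pii": True},
    ],
    "evaluation": {"accuracy": "see model card", "known_biases": ["underrepresents region X"]},
    "cautions": ["not for medical or legal decisions"],
    "instructions": ["keep a human in the loop for adverse decisions"],
}
```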

It is only a matter of time before we learn of a vulnerable component embedded in an AI product, as we saw with Log4j, that we need to assess before determining the next steps. An AI BOM provides transparency, reduces confusion, and accelerates the timeline. In a perfect world, an AI BOM will also provide instructions.

Anyone looking to use a product will benefit from an AI BOM. Anyone building a product will foster adoption and gain credibility by providing one. Given the global shift of liability from customers to product vendors and service providers, as evidenced by the National Cybersecurity Strategy and Security by Design and Default, it is only a matter of time before AI BOMs are demanded by customers, regulators, and legislatures.

AI Potential and Safety

AI has great potential. As with any disruptive technology, we are best served by incorporating safety, security, and privacy into our value-creation strategy. AI is different, and understanding how it is different is the key not only to creating value but also to mitigating risk.
