
Tips to Build an AI Governance Team
Many organizations today allow the use of artificial intelligence (AI), but few ensure that it is appropriately governed. This is a fundamental problem. To enable the business and support its mission and vision, AI must be used safely and responsibly, in alignment with frameworks such as the NIST AI Risk Management Framework.
Without AI governance, the privacy and security of a business's information are vulnerable, and there is little visibility into the data being used and shared with third parties. It becomes impossible to tell whether data is being fed into AI, or whether AI is being misused with the business's data. To bring order to this chaos, it is essential to develop and implement a formal AI governance program.
Why Your AI Program Needs Guardrails
A common tendency, especially within privacy and security functions, is to feel pressure to address everything, even matters outside our domain that business, legal, human resources, or other stakeholders would be better suited to handle. Reluctance to work with colleagues elsewhere in the business, outside the privacy and security functions, often stems from those colleagues not getting back to us when we have questions: a culture of sharing the necessary information among trusted peers may not exist, and specific processes for doing so may not have been developed.
But AI will soon be everywhere, embedded in everything and affecting everyone across the enterprise. AI is disrupting and transforming businesses, but only those that choose to proactively govern it will reap the benefits; ungoverned AI leads to unmanaged and unmitigated risks. For these reasons, it is essential to build an effective cross-disciplinary AI governance team.
Download: AI Vendor Questionnaire
Below are practical recommendations for ensuring that the AI governance team represents the essential functions across the enterprise:
Tips to Build a Cross-Disciplinary AI Governance Team
The AI governance team should be cross-disciplinary by design. Every major area of the business must be represented. Below are examples of the types of roles and responsibilities that might be included in an AI governance team:
- Privacy: Oversees the collection, use, disclosure, and retention of personal data, sensitive data, proprietary information, and intellectual property. Works closely with cybersecurity and legal teams to ensure compliance with privacy laws and data handling policies.
- IT: Manages infrastructure, cloud environments, and AI data platforms; ensures the reliability, scalability, and availability of systems that support AI services.
- Cybersecurity: Monitors and secures AI systems, including access controls, data ingress and egress, third-party APIs, and internet-facing services. Supports threat detection, incident response, and vulnerability management across AI environments.
- Legal and Compliance: Interprets laws, regulations, and contractual obligations related to AI systems. Ensures alignment with legal requirements and works with privacy and cybersecurity teams to draft and enforce AI policies (e.g., acceptable use, data sharing, AI model usage).
- Human Resources (HR): Implements and enforces AI-related policies (e.g., acceptable use), manages workforce impacts, and develops employee training and AI literacy programs.
- Finance: Evaluates returns on investment, budgets for AI-related tools and services, and assesses financial risks associated with AI adoption.
- Operations: Ensures that AI systems enhance operational efficiency, reliability, and resilience. Oversees change management related to AI integration in workflows and processes.
- Business Units: Represent functional and strategic business priorities. Identify and refine AI use cases aligned with organizational goals and operational needs.
- Executive Leadership: Provides strategic oversight, sets risk appetite, ensures alignment with enterprise priorities, allocates resources, and champions AI governance at the board level.
AI Governance Team Best Practices
Here are some ways in which the AI governance cross-disciplinary team can work together:
- Developing use cases for AI for use across the enterprise and within designated business functions
- Ensuring that AI analysis and output has minimal bias
- Monitoring AI models for drift
- Ensuring that a “human in the loop” is involved when appropriate, such as in high-risk use cases or when human values, judgment, or discernment are essential
- Selecting and vetting AI solutions and services
- Developing and implementing AI policies (e.g., acceptable use policies that address the responsible and safe use of AI)
- Developing AI literacy training for the workforce
- Adapting the workforce to use AI solutions and services
- Ensuring that the workforce uses AI effectively and efficiently, including agentic AI (i.e., autonomous task agents) and generative AI (e.g., large language models)
- Providing additional feedback to help strengthen AI governance for the business as a whole
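On the drift-monitoring item above: the team does not need to start from scratch. A common, simple drift signal is the population stability index (PSI), which compares a model's baseline score distribution with what the model is producing today. The sketch below is a minimal illustration (the function name, the synthetic data, and the ~0.2 alert threshold are illustrative assumptions, not part of any specific framework), assuming only NumPy is available:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a baseline (expected) and a live (actual)
    score distribution. A PSI above ~0.2 is a common rule of thumb
    for significant drift; thresholds should be tuned per model."""
    # Derive bin edges from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) / divide-by-zero
    eps = 1e-6
    exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
    act_pct = act_counts / max(act_counts.sum(), 1) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative check: compare scores captured at deployment with recent ones
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.10, 5_000)  # scores at deployment
current = rng.normal(0.6, 0.12, 5_000)   # scores observed now (shifted)
print(f"PSI = {population_stability_index(baseline, current):.3f}")
```

In practice the governance team would agree on what "baseline" means, how often the comparison runs, and who is alerted when the threshold is crossed; the metric itself is the easy part.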
It is essential for the AI governance team to develop a written charter that specifically sets forth its structure, roles, and responsibilities.
Download: AI Acceptable Use Policy Template
AI will help transform businesses. But businesses need to be able to use AI in the right way to further their mission, vision, and goals. By having diverse stakeholder representatives from major components of the organization, a cross-disciplinary AI governance team can ensure the business is ready for both the present and the future as AI technologies evolve.
Businesses that invest in robust AI governance—with diverse and representative stakeholders—will not only mitigate risk; they will also be better positioned to use AI safely, ethically, and responsibly.
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.