AI Governance Policy

Current Version: 1.1
Last Updated: 9/25/2025

This document outlines the mandatory procedure for the submission, review, and approval of all third-party AI platforms and tools used within the organization, whether by employees for productivity or embedded into client-facing products. Adherence to this process is critical to ensuring the security of company and client data, mitigating legal and financial risks, and maintaining regulatory compliance.

Acquisition and Approval

New AI Service Acquisition Request

All requests for the acquisition of new AI platforms or tools must be formally submitted to the cybersecurity & compliance department for initial assessment. The submission shall be made through the IT helpdesk via a formal ticket. The request must include:

  • A detailed description of the proposed new AI tool and its intended business purpose and benefits.
  • The name of the vendor or provider.
  • The contact information for the requesting individual and department head.
  • A preliminary assessment of the types of data the AI tool will access, process, or store, including any sensitive data or business-confidential information.

New AI Service Security Review

On receipt of a request, the cybersecurity & compliance department will initiate a mandatory vendor security review. This review is a prerequisite for the approval and acquisition of any new AI tool. The review process will include the following stages:

  • Initial vetting: The IT department will conduct a preliminary assessment of the vendor's security posture based on publicly available information and industry-standard security ratings.
  • Vendor questionnaire: The vendor will be required to complete a comprehensive security questionnaire provided by the IT department. This questionnaire will address topics including, but not limited to, data encryption protocols, access controls, incident response plans, and data retention and destruction policies.
  • Contract and policy review: The terms of service, privacy policy, explainability documentation, and any other relevant legal agreements or public-facing documents of the vendor and any of the vendor's service providers will be scrutinized by the IT department, in conjunction with the legal department, to identify any terms that may be inconsistent with the organization's intellectual property, security, and data privacy standards.
  • Use case review: The legal department will review the use case to determine whether it is a high-risk (e.g., employment decisions, processing of biometrics, client-facing uses) or limited-risk (e.g., non-client-facing chatbots) processing activity. Depending on the use case, additional risk mitigation steps may need to be implemented prior to onboarding the vendor.

New AI Service Security Request Decision

To proceed to the final decision stage, all new requests must satisfy the following requirements:

  • Completed vendor security review: The vendor must have successfully completed the security review process to the satisfaction of the IT department.
  • Signed data processing agreement: Where applicable, the vendor must agree to enter into a data processing agreement (DPA) that meets the organization's requirements for the handling and protection of company and client data.
  • Prohibition on AI training with client data: A critical and non-negotiable condition for the approval of any new AI tool is a contractual guarantee from the vendor that it will not use any client data, whether anonymized or in aggregate, to train its AI, machine learning models, or any similar analytical systems. This prohibition must be explicitly defined in the governing legal agreement between the organization and the vendor.
  • Compliance with law: The vendor must agree to comply with all applicable laws relevant to the processing activity at the time the agreement is signed and as new laws come into effect.
  • Risk mitigation: The organization must have a documented plan in place to implement any necessary mitigation steps (e.g., notices).

The final decision to approve or reject will be made by the CTO, based on the findings of the vendor security review and the cybersecurity & compliance department, in consultation with the head of the requesting department and, as necessary, the legal department. All decisions, along with the supporting documentation from the review process, will be formally recorded and maintained by the cybersecurity & compliance department for audit and compliance purposes.

In-House AI Product Governance

AI System Processing Accuracy and Reliability

Ensuring the accuracy of the output of IANS AI systems and the reliability of the data they process requires a holistic and continuous approach. Enforcement processes and procedures to support data quality focus on the following dimensions (a minimal sketch of automated checks follows this list):

  • Accuracy: Data should correctly reflect the real-world objects or events it describes.
  • Completeness: Datasets should not have missing values or records.
  • Consistency: Data should be uniform and not contradictory across different datasets.
  • Timeliness: Data should be up-to-date and available when needed.
  • Uniqueness: There should be no duplicate records in the dataset.
  • Manual validation: All AI inputs and outputs should be regularly reviewed by human evaluators. Outputs with high uncertainty or inputs that differ significantly from the training data should be identified and flagged, prompting immediate human review before further use.
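
The sketch below shows how several of these dimensions might be automated in Python with pandas. The column names ("record_id", "updated_at") and the 90-day staleness threshold are illustrative assumptions, not prescribed values.

    import pandas as pd

    def data_quality_report(df: pd.DataFrame) -> dict:
        """Run basic completeness, uniqueness, and timeliness checks."""
        report = {}
        # Completeness: count missing values per column.
        report["missing_values"] = df.isnull().sum().to_dict()
        # Uniqueness: count fully duplicated records.
        report["duplicate_records"] = int(df.duplicated().sum())
        # Uniqueness of a hypothetical key column, if present.
        if "record_id" in df.columns:
            report["duplicate_ids"] = int(df["record_id"].duplicated().sum())
        # Timeliness: flag records not updated within an assumed 90-day window.
        if "updated_at" in df.columns:
            age = pd.Timestamp.now() - pd.to_datetime(df["updated_at"])
            report["stale_records"] = int((age > pd.Timedelta(days=90)).sum())
        return report
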
AI System Safety for Disaster Mitigation – Machine Unlearning

Any AI system used or developed by IANS must address one of the most complex and critical challenges in modern AI safety and governance: when an AI system learns information it should not have, whether private user data, sensitive client data, copyrighted material, harmful bias, or a factual inaccuracy, the system must be capable of machine unlearning. Apply the following principles:

  • LLM retraining from scratch:
    • Identify and isolate the data that must be removed.
    • Create a new training dataset that excludes this data.
    • Delete the current production model.
    • Train a brand-new model on the sanitized dataset.
    • Deploy the new model.
  • Retrieval-augmented generation (RAG) model unlearning:
    • Knowledge stored in external documents: RAG models retrieve information from an external database or corpus. To "unlearn" something, you can simply remove or update the source documents, without retraining the model.
    • No need to retrain the core LLM: Because the generation model does not memorize the retrieved facts internally, there is minimal residual learning, unlike with standard LLMs.
    • Immediate effect: Removing a document from the retrieval index immediately corrects the system's output (see the sketch after this list).
  • Accountability and human oversight: Define roles and responsibilities. Clearly outline who is responsible for the different stages of the AI lifecycle, from data collection to model deployment and monitoring.
  • Human-in-the-loop (HITL): Integrate human oversight into AI-driven decision-making processes, especially for critical or high-impact decisions. This ensures a human can intervene and override the AI when necessary.
  • Recourse and redress: Establish mechanisms for individuals to challenge or appeal decisions made by AI systems that affect them.
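
The sketch below illustrates the RAG unlearning principle with a toy in-memory index. The class and its naive keyword scoring are illustrative assumptions standing in for a production vector store, but the key operation, deleting the source document, is the same.

    class RetrievalIndex:
        """Toy in-memory retrieval index standing in for a vector store."""

        def __init__(self):
            self._docs = {}  # doc_id -> document text

        def add(self, doc_id: str, text: str) -> None:
            self._docs[doc_id] = text

        def remove(self, doc_id: str) -> None:
            # RAG "unlearning": once the source document is deleted, it can
            # never again be retrieved and passed to the generator.
            self._docs.pop(doc_id, None)

        def retrieve(self, query: str, k: int = 3) -> list:
            # Naive keyword-overlap scoring as a placeholder for embeddings.
            terms = set(query.lower().split())
            ranked = sorted(
                self._docs.values(),
                key=lambda doc: len(terms & set(doc.lower().split())),
                reverse=True,
            )
            return ranked[:k]

    index = RetrievalIndex()
    index.add("doc-1", "Client X quarterly risk summary")
    index.remove("doc-1")  # takes effect on the very next query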

AI Transparency and Privacy

In-house AI training dataset transparency is of utmost importance. Ways to foster transparency include:

  • Safeguarding corporate and client data from external AI training: AI models integrated into internal workflows or products must strictly prevent any exposure of sensitive corporate or client data to external AI systems or LLM training pipelines not fully controlled by the organization.
  • Making AI decisions easy to understand: Use explainability tools or purpose-built models to demonstrate why the AI made a particular decision.
  • Using simple stand-in models: Use easy-to-understand surrogate models (such as decision trees) to approximate and explain how a more complex model behaves (see the sketch after this list).
  • Providing clear documentation: Create model cards and dataset sheets to explain what the AI is used for, how well it works for different groups, where the data came from, and what the limits and risks are. Describe how the data was collected, labeled and checked so others understand where it came from and how reliable it is.
  • Setting up logging and audit trails: Record all model inputs, outputs, and relevant internal states to enable post-decision review. Establish auditability by giving auditors and regulators secure access to logs so they can verify compliance and investigate anomalies.
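
As a minimal sketch of the stand-in model approach, the example below trains a shallow decision tree to mimic a more complex model's predictions. The synthetic dataset and the specific scikit-learn models are illustrative assumptions, not mandated tooling.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic data standing in for a real training set.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

    # The complex model whose behavior needs explaining.
    complex_model = RandomForestClassifier(random_state=0).fit(X, y)

    # Surrogate: a shallow tree trained on the complex model's predictions,
    # not the original labels, so it approximates the model's behavior.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, complex_model.predict(X))

    # Fidelity: how often the surrogate agrees with the complex model.
    fidelity = surrogate.score(X, complex_model.predict(X))
    print(f"surrogate fidelity: {fidelity:.2%}")

    # Human-readable rules approximating the complex model.
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
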
User Disclosure and Consent to Use of AI

Ensure user consent is explicit. User consent must be “informed,” meaning the user must be provided with sufficient information to understand the nature, purpose, and implications of the data processing to which they are consenting. User consent must also be “specific,” meaning it must not be bundled with other consents. Regulatory frameworks, including but not limited to the General Data Protection Regulation (GDPR) and various U.S. state privacy laws, require specificity and transparency in data processing disclosures.

Refer to the IANS Privacy Policy for details on the products and services involving AI that require user consent.

AI Used by IANS Employees or Faculty for Personal Work Productivity

IANS’ value proposition to clients rests on offering unique, proprietary insights and perspectives drawn from our experience in the cybersecurity industry, not on repeating information that is already in the public domain.

To maintain this standard, all employees and Faculty members must disclose the use of public generative AI tools (such as ChatGPT) whenever such tools are used to draft, edit, or produce any part of client-facing content or deliverables.

  • Internal staff should report this usage to the COO.
  • All Faculty should report this usage to the SVP, Strategic Partnerships & Product Development.

Employees and Faculty may use public generative AI tools for background research, brainstorming, or summarization, provided the information is independently validated. Disclosure is not required for such background use.

In addition, inputting personally identifiable information (PII) or other sensitive corporate and client data into public generative AI tools is strictly prohibited (a minimal illustrative pre-submission screen appears at the end of this section). For all work involving such information, employees must use IANS’ corporate-approved and licensed AI service (Corporate AI). This platform has been vetted by compliance to ensure the security of IANS’ and its clients’ data:

  • Data privacy and security: Corporate AI adheres to strict privacy and security standards, including compliance with GDPR and other data protection regulations.
  • IANS control: IANS has control over its employee data, including how it is collected, used and stored.
  • Data residency: Corporate AI respects data residency requirements, ensuring data is stored and processed within specified geographic boundaries.
  • AI training: Data accessed through Corporate AI, along with user interactions, is not used to train the underlying LLMs, ensuring IANS corporate and client data remain private.
  • Tenant isolation: Corporate AI operates within IANS’ secure tenant, inheriting all existing security, privacy, identity and compliance requirements.
  • Transparency and compliance: Corporate AI provides transparency about how data is used and ensures compliance with all relevant data protection laws.
  • Audit trail: Use of Corporate AI enhances audit capabilities by providing robust logging, traceability, and access control, ensuring every decision and data interaction is recorded and accountable.
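
As a minimal illustrative sketch of the PII prohibition above, the screen below checks a prompt for obvious PII before it is sent to a public tool. The regex patterns are simplified assumptions and are no substitute for the organization's approved data loss prevention controls.

    import re

    # Simplified example patterns; a production control would rely on the
    # organization's approved DLP tooling, not ad hoc regexes.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def screen_prompt(text: str) -> list:
        """Return the names of PII patterns detected in a prompt."""
        return [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(text)]

    hits = screen_prompt("Contact jane.doe@example.com about the engagement")
    if hits:
        print(f"Blocked: possible PII detected ({hits}); use Corporate AI.")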