April 19, 2026 · 9 min read

AI Governance Policy: What to Include, Who Writes It, and How to Enforce It

A realistic AI governance policy is enforceable — not aspirational. Here's what to include, who should write it, and how to enforce it with modern platforms. Free downloadable template included.

TL;DR

  • An AI governance policy is the written document that says what AI usage is allowed, what's prohibited, and what's conditionally permitted. It's the scaffold your technical controls enforce.
  • A policy without enforcement is theater. Most AI policies fail because they're written in conference rooms disconnected from actual usage and have no technical backing.
  • Good policies are short, specific, and data-informed. Draft after shadow AI discovery, not before. Per-team rules where risk profiles differ. Clear on personal accounts, regulated data, and consequences.
  • Modern enforcement is real-time. Browser and endpoint DLP that blocks, warns, or audits prompts live — turning policy into a control. Strac's AI governance platform is purpose-built for this.

[Figure] Good AI policies are short, data-informed, and paired with real-time enforcement — not aspirational documents in a shared drive.

✨ What Is an AI Governance Policy?

An AI governance policy (often called AI acceptable use policy, GenAI policy, or AI usage policy) is the written document that defines:

  • Which AI tools employees are allowed to use
  • What data may be submitted to those tools
  • What approvals or reviews are required for specific use cases
  • What consequences apply to violations
  • How the policy is reviewed and updated as AI evolves

The policy is the top of the governance stack. Below it sit standards (more specific technical requirements), procedures (how people execute), and technical controls (what actually enforces). Without a policy, the whole stack lacks an anchor.

Without technical enforcement under the policy, the policy is decoration.

Why Most AI Policies Fail

The failure pattern is consistent:

  1. A cross-functional committee is formed after an AI incident or board inquiry.
  2. The committee drafts a policy over 60–90 days. Legal and HR tighten the language. Security adds aspirational controls.
  3. The policy is published to SharePoint. An all-hands meeting announces it. Employees sign an annual attestation.
  4. Nothing changes operationally. Employees still use ChatGPT with sensitive data; they still have personal accounts; shadow AI keeps growing.
  5. Six months later, another incident. The committee blames insufficient training. Cycle repeats.

Three reasons this fails:

  • Policies are written aspirationally, not based on actual usage data.
  • Enforcement is assumed to be human (employees will read and comply), not technical (systems detect and prevent).
  • There's no feedback loop — no dashboard showing policy adherence, no trend data on shadow AI, no evidence for auditors.

The fix is to invert the sequence: deploy discovery and technical enforcement first, then write a policy grounded in actual data.

✨ What to Include in a Working AI Governance Policy

Seven sections. Keep them short.

1. Scope and purpose

State who the policy applies to (employees, contractors, vendors with system access) and what it governs (AI systems your organization uses and AI systems it builds).

Two sentences. No legalese. Example: "This policy applies to all employees, contractors, and third parties with access to [Company] systems. It governs the use of AI tools — both those [Company] provides and external AI systems accessed from [Company] devices or accounts."

2. Sanctioned, conditional, and prohibited tools

A short list — named tools, with status:

  • Sanctioned: ChatGPT Enterprise, Microsoft Copilot E5, Claude for Enterprise. Approved for general use.
  • Conditional: Perplexity Enterprise (finance and legal use only, approval required), GitHub Copilot (developers only, no proprietary code).
  • Prohibited: Consumer-tier ChatGPT / Claude / Gemini on personal accounts, unsanctioned AI coding tools.

Keep this list alive — add new tools as they're reviewed, don't let it ossify.
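
One way to keep it alive is to maintain the list as data that your enforcement tooling reads directly, so an approval decision and its enforcement change in the same place. A minimal Python sketch, with illustrative tool names and a default-deny assumption (this is a sketch, not Strac's API):

from enum import Enum

class ToolStatus(Enum):
    SANCTIONED = "sanctioned"
    CONDITIONAL = "conditional"
    PROHIBITED = "prohibited"

# The section 2 list as data; names and statuses are illustrative.
TOOL_REGISTRY = {
    "ChatGPT Enterprise": ToolStatus.SANCTIONED,
    "Microsoft 365 Copilot": ToolStatus.SANCTIONED,
    "Claude for Enterprise": ToolStatus.SANCTIONED,
    "Perplexity Enterprise": ToolStatus.CONDITIONAL,    # finance/legal only
    "GitHub Copilot Business": ToolStatus.CONDITIONAL,  # developers only
    "ChatGPT (consumer tier)": ToolStatus.PROHIBITED,
}

def tool_status(tool_name: str) -> ToolStatus:
    # Anything not yet reviewed defaults to prohibited.
    return TOOL_REGISTRY.get(tool_name, ToolStatus.PROHIBITED)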

3. Data handling rules

What data may go into AI tools:

  • Always prohibited: PCI (cardholder data), PHI (health information), credentials and secrets, confidential M&A material, privileged communications.
  • Approval required: customer PII (non-public), internal financial data, proprietary source code.
  • Permitted: public information, marketing copy, general business questions, anonymized data.

The policy should say this. Technical controls should enforce it.
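
To make that enforcement concrete, the three tiers can be expressed as a machine-readable mapping from data categories to handling decisions. A sketch under assumed category names; in practice each category maps to detectors the DLP platform maintains:

# Section 3's three tiers as data. Category names are illustrative.
DATA_HANDLING = {
    "prohibited": ["pci_card_data", "phi", "credentials_secrets",
                   "mna_confidential", "privileged_comms"],
    "approval_required": ["customer_pii", "internal_financials",
                          "proprietary_source_code"],
    "permitted": ["public_info", "marketing_copy", "anonymized_data"],
}

def handling_for(category: str) -> str:
    for tier, categories in DATA_HANDLING.items():
        if category in categories:
            return tier
    return "approval_required"  # unknown categories default to human review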

4. Personal accounts

Explicit prohibition on using personal AI accounts (personal ChatGPT Plus, personal Claude, personal Gemini) for work. This is the single highest-leverage rule — it closes the shadow AI backdoor.

Example language: "Employees may not use personal AI accounts for [Company] business. When a corporate alternative is available, employees must use it. Where no corporate alternative exists, requests for sanctioned access go through [AI review process]."

5. Approval workflow

How to request new AI tool access, including what's evaluated (security, data handling, vendor BAA status, cost). Keep the workflow lightweight — if it takes 90 days to approve a tool, employees go around it.

6. Consequences for violations

Progressive and clear:

  • First violation: coaching, re-training
  • Repeated violations: formal discipline, access restrictions
  • Severe violations (willful regulated-data exfiltration): HR review, potentially termination

Integrated with existing HR/discipline policy — don't create a parallel system.

7. Review cadence

Policy review every 6–12 months, or whenever a new significant AI tool emerges (e.g., when a new model provider becomes relevant to your employee base). Named policy owner (usually the CISO or Chief AI Officer).

A Short, Effective Policy Template

Here's a compact working policy — treat it as a starting skeleton, not legal advice. Customize for your organization, legal jurisdiction, and regulatory profile.

AI Acceptable Use Policy — [Company Name]
Effective: [Date] | Owner: [Name/Role] | Review: Annually

1. SCOPE
This policy applies to all [Company] personnel using AI tools for work.

2. SANCTIONED TOOLS
- ChatGPT Enterprise (company SSO only)
- Microsoft 365 Copilot
- Claude for Enterprise
- GitHub Copilot Business (developers only)
[Update quarterly]

3. PROHIBITED DATA IN AI PROMPTS
Never paste the following into any AI tool:
- Payment card numbers, CVVs, bank account numbers
- Protected health information (medical records, patient data)
- Passwords, API keys, access tokens, private keys
- Social Security numbers unless specifically authorized
- Confidential M&A, legal, or HR matters

4. CONDITIONALLY ALLOWED (request approval)
- Customer PII → request via [channel]
- Internal financial data → request via [channel]
- Proprietary source code → developer-specific policy

5. PERSONAL ACCOUNTS
Do not use personal AI accounts (ChatGPT Plus, Claude, Gemini Free) for work.

6. APPROVAL FOR NEW TOOLS
Submit [AI tool review form] for new AI tools. Review typically 5 business days.

7. VIOLATIONS
Progressive: coaching → formal warning → discipline. Integrated with [HR policy reference].

8. QUESTIONS
Contact [email] or #ai-governance in Slack.

This fits on two pages. Employees will actually read it. Your technical controls enforce it.

✨ Making the Policy Enforceable

The gap between policy and reality is closed by technical enforcement. Four layers:

Layer 1: Real-time prompt inspection

Browser extension and endpoint agent inspect every AI prompt before submission. Sensitive data (PCI, PHI, credentials) is blocked. Borderline data generates a warning. Everything else passes. Logs capture the entire event stream.

This enforces sections 3 and 4 of the policy automatically.
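
As a rough illustration of the block/warn/allow decision path, here is a self-contained Python sketch. The regexes are deliberately naive placeholders; production inspection relies on validated detectors (Luhn checks, context, classifiers), not bare patterns:

import json
import re
import time

# Naive placeholder detectors, for illustration only.
BLOCK_PATTERNS = [
    re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),            # card-like numbers
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # private key material
]
WARN_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-like strings
]

def inspect_prompt(user: str, tool: str, prompt: str) -> str:
    """Decide block/warn/allow for one prompt and log the event."""
    if any(p.search(prompt) for p in BLOCK_PATTERNS):
        action = "block"
    elif any(p.search(prompt) for p in WARN_PATTERNS):
        action = "warn"
    else:
        action = "allow"
    # Allows are logged too: Layer 4 needs the entire event stream.
    print(json.dumps({"ts": time.time(), "user": user,
                      "tool": tool, "action": action}))
    return action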

Layer 2: Shadow AI discovery

Endpoint agent maps every AI tool running locally. Identifies personal-account usage. Generates reports for the AI review committee.

This enforces section 5 of the policy (personal account ban) and feeds section 6 (tool review workflow).
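
A simplified version of the discovery step, assuming you can export (user, domain) events from proxy or DNS logs. Domain alone cannot separate a personal ChatGPT account from the enterprise tenant, which is why the endpoint agent matters: it sees the logged-in account. The catalog below is an illustrative assumption:

# Illustrative catalog; a real one is vendor-maintained and far larger.
AI_TOOL_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
}
SANCTIONED = {"ChatGPT", "Claude"}  # assumed: enterprise tenants in place

def shadow_ai_report(events):
    """events: iterable of (user, domain) pairs from proxy/DNS logs.
    Returns {user: set of unsanctioned AI tools observed}."""
    report = {}
    for user, domain in events:
        tool = AI_TOOL_DOMAINS.get(domain)
        if tool and tool not in SANCTIONED:
            report.setdefault(user, set()).add(tool)
    return report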

Layer 3: Cross-SaaS controls

Redaction in Slack, Jira, Zendesk, Salesforce, Google Drive, SharePoint — the tools feeding AI connectors. If sensitive data is redacted before it ever reaches an AI tool, the prompt-level rules have far less to catch, and the two layers compound.
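
A toy redaction pass, with the same caveat about naive placeholder patterns. The point is that the sensitive span is replaced with a typed token before the record can ever sync to an AI connector:

import re

# Placeholder patterns; production redaction uses validated detectors.
REDACTIONS = [
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[REDACTED:CARD]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED:SSN]"),
]

def redact(text: str) -> str:
    """Replace sensitive spans with typed tokens before sync."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

# redact("Card 4111 1111 1111 1111 on file") -> "Card [REDACTED:CARD] on file"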

Layer 4: Evidence and audit trail

Every detection, block, warning, and override logged with full context. Mapped to NIST AI RMF, EU AI Act, ISO 42001, HIPAA, PCI, SOC 2. Exportable to SIEM. Auditor-ready.

This backs the violations process (section 7 of the template) with evidence, supports the review cadence, and makes the entire program auditable.
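
What one such event might look like, sketched in Python. Field names and framework tags are illustrative assumptions, not Strac's schema:

import json
from datetime import datetime, timezone

def audit_event(user: str, tool: str, action: str, rule: str) -> str:
    """Build one SIEM-exportable event; the schema is an assumption."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,  # detect | block | warn | override
        "rule": rule,      # which policy section fired
        "frameworks": ["NIST AI RMF", "ISO 42001"],  # mapping tags
    })

# audit_event("jdoe", "ChatGPT Enterprise", "block", "policy-s3-prohibited-data")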

Who Should Own the Policy?

In practice, several roles co-own the AI governance policy:

CISO (primary operational owner): drafts the security-technical sections, owns enforcement platform, reports metrics to executives.

General Counsel / Chief Privacy Officer: reviews for legal/regulatory soundness, especially around regulated data (HIPAA, GDPR, CPRA) and employment law implications.

Chief AI Officer (where the role exists): strategic coordination, business use case approvals, AI council leadership.

HR: integration with existing conduct/discipline policies, training, communication.

Cross-functional AI council: exceptions, emerging tool review, escalations.

The anti-pattern is a security-only policy written by security without legal or HR input — it won't survive contact with actual violations.

Policy Maintenance: The 90-Day Cadence

AI tools change fast. Policy that's static is stale.

Monthly: policy owner reviews incident data. New AI tools in use (from discovery reports). Emerging risks (e.g., new Copilot feature, prompt injection technique). Update sanctioned/conditional/prohibited list.

Quarterly: executive review of AI risk metrics, policy exceptions, enforcement trends. Update policy if patterns suggest changes.

Annually: full policy review with legal, HR, business stakeholders. Align to updated regulatory expectations (EU AI Act phases, NIST RMF updates, ISO 42001 adoption). Re-publish and re-train.

Ad-hoc: significant new regulatory obligation (new state law, new EU AI Act phase), significant new AI tool category (e.g., agentic AI mainstream), significant incident.

Where to Go Next

If you need to implement AI governance policy with technical enforcement behind it — book a 15-minute demo. We'll show you the platform that turns policy into a real-time control across ChatGPT, Copilot, Claude, Gemini, and 50+ AI tools.

Related reading: What Is AI Governance? · AI Governance Implementation · Best AI Governance Tools · AI Usage Governance vs Model Governance · ChatGPT Security Risks · Microsoft Copilot Security

Frequently Asked Questions

What should an AI governance policy include?

Seven core sections: scope and purpose, sanctioned/conditional/prohibited tools list, data handling rules (what's prohibited, what requires approval, what's permitted), personal account rules, approval workflow for new tools, consequences for violations, and review cadence. Keep it short — two pages is enough. Detail lives in standards and procedures below the policy.

How do I write an AI acceptable use policy?

Start with shadow AI discovery, not a blank page. Deploy a discovery tool for 10–15 days to baseline what AI tools your employees actually use with what data. Draft the policy based on the data — pragmatic allowances for legitimate use, clear prohibitions on regulated data and personal accounts. Review with legal, HR, and business leaders. Publish, train, and pair with technical enforcement.

Is an AI policy legally required?

It depends on jurisdiction and industry. The EU AI Act imposes documented policy requirements on deployers of high-risk AI. HIPAA, PCI DSS, GDPR, and SOC 2 all have implicit documentation requirements for any technology handling regulated data — AI falls under those. Many enterprise customers now require vendor AI policies as part of security reviews. Practically, yes — you need one, whether the law names it explicitly or not.

Who should write the AI policy?

A cross-functional team: CISO or security leadership (drafts technical sections), Legal or Privacy Officer (regulatory review), HR (integration with conduct policies, training, discipline), and business stakeholders (use case validation). A policy written by security alone usually fails stakeholder adoption. A policy written by legal alone is usually unenforceable.

How long should an AI governance policy be?

Two to four pages for the main policy document. Details (data classification rules, approval workflow steps, audit procedures) belong in supporting standards and procedures, not in the policy itself. Long policies don't get read; concise policies with linked detail get adopted.

How do I enforce an AI governance policy?

Technical enforcement via a modern AI governance platform. Real-time prompt inspection blocks prohibited data. Shadow AI discovery finds personal accounts. Cross-SaaS DLP prevents sensitive data from reaching AI connectors. Evidence generation produces audit-ready logs. Strac combines all four in one platform, deployable in under 10 minutes. Policies without technical enforcement are aspirational; with enforcement, they're controls.
