AI Acceptable Use Policy: Free Template + Enforcement Guide (2026)
A free, copy-ready AI Acceptable Use Policy template — plus the enforcement playbook that turns the policy into a real control. Covers ChatGPT, Copilot, Claude, Gemini, and shadow AI.
An AI Acceptable Use Policy (AI AUP) defines what employees may and may not do with AI tools — which tools are sanctioned, what data may be shared, what uses are prohibited, and what happens on violation.
Every enterprise needs one in 2026. Regulators, auditors, insurers, customers, and courts now treat the absence of an AI AUP as a governance failure, not a gap.
The policy is the easy part. Enforcement is the hard part. A policy that says "don't paste PHI into ChatGPT" without a control to block it is a sign, not a barrier.
This post includes a free template (copy-ready, framework-aligned to NIST AI RMF, EU AI Act, ISO 42001, HIPAA, and SOC 2) and the enforcement playbook that turns it into a real control.
An AI Acceptable Use Policy is only as strong as the enforcement layer running beneath it
✨ What Is an AI Acceptable Use Policy?
An AI Acceptable Use Policy (AI AUP) is a written policy that governs how employees, contractors, and — increasingly — agents and automated systems use AI tools inside an organization. A complete AI AUP answers six questions:
Which AI tools are sanctioned (and which are prohibited)?
What data may be shared with sanctioned AI tools (and which data is off-limits)?
What uses are prohibited even with sanctioned tools (e.g. generating content for regulated decisions)?
What oversight applies (when must a human review AI output before action)?
What happens on violation (reporting, remediation, disciplinary process)?
How is the policy enforced (technical controls, monitoring, audit)?
Most first-draft AI policies answer questions 1–5 in text and skip question 6 entirely. That's the gap this post exists to close.
Policy says "don't share PHI with ChatGPT." Enforcement is what actually stops it.
Why Every Enterprise Needs an AI AUP in 2026
Three shifts made the AI AUP non-optional:
1. Shadow AI is the default state. The average mid-market enterprise has 3–5× more AI tools in use than IT has sanctioned. Employees have already decided which tools they use. The AUP is the chance to intervene before the next data leak.
2. Regulatory and audit scope has caught up. SOC 2 auditors, HIPAA assessors, and PCI QSAs now routinely ask for the AI AUP. The EU AI Act (in force since August 2024) requires documented governance over AI usage for covered systems. NIST AI RMF and ISO 42001 both call out acceptable-use as a required control.
3. Litigation has moved upstream. The Samsung source code leak, multiple hospital HIPAA incidents, and repeated public ChatGPT data exposures all have one thing in common: no documented AI AUP, or one with no enforcement. Post-incident, the absence of a credible policy is the single most damaging fact in the record.
✨ The 8 Components of an Effective AI Acceptable Use Policy
Strong AI AUPs share the same eight components. Any missing component is a known failure mode.
A credible AI AUP covers every surface AI touches: Browser/GenAI, SaaS, Cloud, and Endpoint
Component 1 — Scope and Applicability. Who the policy applies to (employees, contractors, agents, vendors), which AI systems it covers (generative AI, agentic AI, embedded AI features in SaaS), and which devices (corporate-managed, BYOD, personal).
Component 2 — Definitions. Define generative AI, agentic AI, shadow AI, foundation model, prompt, model output, model context protocol (MCP), sensitive data (PII, PHI, PCI, trade secrets, source code, credentials). Ambiguous policies get argued around.
Component 3 — Sanctioned AI Tools List. An explicit list of approved AI tools (enterprise ChatGPT, Copilot, Claude for Business, Gemini for Workspace, internal AI copilots). Anything not on the list is unsanctioned.
Component 4 — Data Classification and Permitted Use. A clear mapping of which data classes may be used with which AI tools. Public data → any tool. Internal → sanctioned tools only. Confidential → sanctioned tools with enterprise contract. Restricted (PHI, PCI, secrets) → prohibited unless explicitly approved in writing for a specific use case.
Component 5 — Prohibited Uses. Specific uses that are prohibited regardless of tool or data class. Common entries: generating legal or medical advice for external parties, making employment decisions, generating regulated disclosures, impersonation, generating code to bypass security controls, circumventing authorization checks.
Component 6 — Oversight and Review. When human review is required before AI output is used for a consequential decision. Typically includes: external communications, legal documents, security and privacy decisions, employment actions, financial decisions above a threshold.
Component 7 — Incident Reporting. How employees report suspected AI-related incidents (inadvertent data sharing, harmful output, prompt injection). Timeline for reporting. Escalation path.
Component 8 — Enforcement. Technical and operational controls used to enforce the policy, and disciplinary consequences for violations. This is the component most policies skip.
✨ AI Acceptable Use Policy — Free Template
Copy this template into your policy system. Replace the [bracketed] placeholders with your organization's details. Sections are framework-aligned to NIST AI RMF, EU AI Act, ISO 42001, SOC 2, and HIPAA.
Live Slack redaction — a template on paper becomes a real control when it runs inline on every channel AI touches
[Company Name] — AI Acceptable Use Policy
Version: 1.0 · Owner: [Chief Information Security Officer] · Effective date: [Date] · Review cadence: Annual or upon material change
1. Purpose
This policy defines how employees, contractors, and other authorized users of [Company Name] may use artificial intelligence (AI) tools in the course of their work. It protects company and customer data, meets regulatory and contractual obligations, and supports responsible AI use aligned with [NIST AI RMF / EU AI Act / ISO 42001].
2. Scope
This policy applies to:
All employees, contractors, interns, and third parties acting on behalf of [Company Name]
All AI tools, generative AI services, agentic AI systems, and AI-embedded features in SaaS applications
All corporate-managed devices and any personal devices used for company work (BYOD)
All data classes (Public, Internal, Confidential, Restricted)
3. Definitions
AI Tool — any software system that uses machine learning, generative AI, or agentic AI capabilities, including but not limited to ChatGPT, Microsoft Copilot, Claude, Gemini, Perplexity, GitHub Copilot, Cursor, and embedded AI features in SaaS tools.
Generative AI — AI systems that generate text, code, images, audio, video, or other content.
Agentic AI — AI systems that act on behalf of a user, including multi-step workflows and model-context-protocol (MCP) agents.
Shadow AI — any AI tool in use at [Company Name] that is not on the sanctioned AI tools list.
Sensitive Data — includes but is not limited to Personally Identifiable Information (PII), Protected Health Information (PHI), Payment Card Information (PCI), trade secrets, source code, cryptographic keys, credentials, non-public financial data, customer data under NDA, and any data classified Confidential or Restricted.
4. Sanctioned AI Tools
Only the following AI tools are sanctioned for work use:
[List approved enterprise tools — e.g., ChatGPT Enterprise, Microsoft 365 Copilot, Claude for Business, Gemini for Workspace, GitHub Copilot Business]
Any AI tool not on this list is unsanctioned and prohibited for work use, including personal accounts of otherwise-sanctioned tools (e.g., personal ChatGPT Plus). Exceptions require written approval from [AI Review Board].
5. Data Classes and Permitted AI Use
| Data Class | Permitted AI Use |
| --- | --- |
| Public | Any sanctioned AI tool. |
| Internal | Sanctioned AI tools only. |
| Confidential | Sanctioned AI tools covered by an enterprise agreement with adequate data protection terms. |
| Restricted (PHI, PCI, secrets, source code with trade-secret content) | Prohibited unless explicitly approved in writing by [AI Review Board] for a specific use case. |
Employees are responsible for classifying data before AI use. Where classification is uncertain, default to the more restrictive class.
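The data-class mapping above can be sketched as a simple lookup that an enforcement layer might implement. The class names, tool attributes, and decision logic below are illustrative assumptions for this template, not any vendor's actual API:

```python
# Illustrative sketch of the Section 5 data-class mapping.
# Class names and tool attributes are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    sanctioned: bool
    enterprise_agreement: bool  # adequate data-protection terms in contract

def permitted(data_class: str, tool: AITool, written_approval: bool = False) -> bool:
    """Return True if this data class may be used with this tool."""
    if data_class in ("Public", "Internal"):
        return tool.sanctioned
    if data_class == "Confidential":
        return tool.sanctioned and tool.enterprise_agreement
    if data_class == "Restricted":
        return written_approval  # explicit written approval required
    # Unknown classification: default to the more restrictive treatment
    return False

personal_gpt = AITool("ChatGPT (personal)", sanctioned=False, enterprise_agreement=False)
enterprise_gpt = AITool("ChatGPT Enterprise", sanctioned=True, enterprise_agreement=True)
```

Note the final branch: an unclassified input fails closed, matching the template's "default to the more restrictive class" rule.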
6. Prohibited Uses
Regardless of tool or data class, the following are prohibited:
Submitting Restricted data (PHI, PCI, secrets, source code with trade-secret content) without prior written approval
Using AI output to make a consequential decision affecting an individual (employment, credit, healthcare, legal, regulatory) without documented human review
Generating content that misrepresents AI output as human-authored where disclosure is required
Generating code or content designed to bypass security controls, authorization, or compliance obligations
Using personal AI accounts for work (including signing into sanctioned AI tools with personal credentials)
Uploading data subject to customer NDA to AI tools not covered by an enterprise agreement
Using AI to impersonate any individual or organization
Training third-party AI models on [Company Name] data unless contractually permitted
7. Human Oversight and Review
For the following categories, human review is required before acting on AI output:
External communications to customers, partners, regulators, or the press
Legal documents, contracts, and regulatory filings
Security and privacy decisions (access grants, data sharing, policy exceptions)
Reviewers must document (in the relevant system of record) that the review occurred.
8. Shadow AI and Discovery
[Company Name] reserves the right to discover, audit, and remove unsanctioned AI tools from corporate-managed devices and networks. Shadow AI discovery is conducted by [Security Team] using [approved discovery tool — e.g., Strac Endpoint + Browser DLP].
9. Monitoring and Enforcement
[Company Name] uses technical controls to enforce this policy:
Real-time prompt inspection on sanctioned AI tools (Block, Warn, Audit modes)
Cross-SaaS redaction on Slack, Gmail, Google Drive, Zendesk, Salesforce, and other integrations
Endpoint monitoring for unsanctioned AI tools, local LLMs, and personal AI accounts on corporate devices
Audit logs retained for a minimum of [retention period]
Enforcement platform: [Strac / your chosen AI governance platform].
Employees are notified that AI tool usage may be logged and inspected in accordance with applicable law and [Company Name]'s privacy notice.
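The Block / Warn / Audit decision in Section 9 can be sketched as follows. The detector patterns, mode table, and event schema here are simplified assumptions for illustration; a production DLP engine uses far richer detection than two regexes:

```python
# Minimal sketch of a Block / Warn / Audit prompt-inspection decision
# that also emits a retainable audit event. Detectors and the event
# schema are illustrative assumptions, not a real engine's internals.
import re
from datetime import datetime, timezone

# Hypothetical detectors: sensitive-data type -> pattern.
DETECTORS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

# Per-data-type enforcement mode, per the policy.
MODES = {"SSN": "Block", "AWS_ACCESS_KEY": "Block"}

def inspect_prompt(user: str, tool: str, prompt: str) -> dict:
    """Inspect a prompt; return an audit event with the action taken."""
    findings = [name for name, rx in DETECTORS.items() if rx.search(prompt)]
    # Most restrictive mode wins: Block > Warn > Audit.
    order = {"Audit": 0, "Warn": 1, "Block": 2}
    action = "Audit"
    for name in findings:
        mode = MODES.get(name, "Audit")
        if order[mode] > order[action]:
            action = mode
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "findings": findings,
        "action": action,  # what the control did, retained for audit
    }
```

Every inspection produces an event, not just violations: the Audit-mode events are what let you answer "which AI tools did employees use last week?" later.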
10. Incident Reporting
Employees must report the following within [24 hours / 1 business day] to [security@company.com]:
Inadvertent submission of Restricted data to any AI tool
Receipt of AI output that appears to contain another party's confidential data
Suspected prompt injection or model manipulation
Any AI-generated content that caused, or could cause, harm
Discovery of shadow AI use by another employee or vendor
Good-faith reporting is protected; failure to report is a violation.
11. Training
All employees complete AI acceptable use training at onboarding and annually thereafter. Role-specific training applies to engineering, legal, security, and customer-facing teams.
12. Consequences for Violation
Violations of this policy may result in, depending on severity: additional training, removal of AI tool access, disciplinary action up to and including termination, and legal action. Certain violations (intentional exfiltration, repeated Restricted data violations) are grounds for immediate termination.
13. Roles and Responsibilities
AI Review Board — approves sanctioned tools, policy exceptions, and high-risk AI use cases
Security Team — operates enforcement controls, incident response, shadow AI discovery
Legal — regulatory mapping, vendor AI contract review
Privacy Officer — data protection impact assessments for AI use cases
People Team — training delivery, disciplinary process
Employees — compliance, reporting, training completion
14. Policy Review
This policy is reviewed annually and upon material change (new AI tool, new regulation, material incident). Material changes require approval by [AI Review Board].
15. Policy Acknowledgment
All employees acknowledge this policy at onboarding and annually thereafter. Acknowledgment is recorded in [HR system].
How to Roll Out an AI AUP (30 / 60 / 90 Days)
Days 0–30: Draft and Align
Customize the template above for your org
Run shadow AI discovery to build the sanctioned tools list based on reality, not wishes
Align with legal, security, privacy, HR, and a business sponsor
Decide on the enforcement platform
Days 30–60: Deploy Controls
Deploy real-time prompt DLP on sanctioned AI tools
Deploy cross-SaaS redaction on Slack, Gmail, Google Drive, and other data-rich channels
Deploy endpoint discovery on corporate-managed devices
Wire audit logs to your SIEM / GRC
Days 60–90: Communicate and Train
Roll out employee training and acknowledgment
Publish the sanctioned tools list and approval workflow
Publish the incident reporting channel
Run the first quarterly metrics review — tools in use, events blocked, incidents, remediation SLA
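The quarterly metrics review can be computed directly from the enforcement audit log. The event fields used below (`tool`, `action`, `resolved_hours`) are an assumed schema for illustration, not any specific platform's log format:

```python
# Illustrative sketch of the quarterly metrics review over audit events.
# The event schema is an assumption, not a specific platform's format.
from collections import Counter

def quarterly_metrics(events: list[dict], sla_hours: int = 24) -> dict:
    """Summarize tools in use, events blocked, incidents, and SLA compliance."""
    tools = sorted({e["tool"] for e in events})
    actions = Counter(e["action"] for e in events)
    # Treat every Block event as an incident requiring remediation.
    incidents = [e for e in events if e["action"] == "Block"]
    within_sla = sum(
        1 for e in incidents if e.get("resolved_hours", sla_hours + 1) <= sla_hours
    )
    return {
        "tools_in_use": tools,
        "events_blocked": actions.get("Block", 0),
        "events_warned": actions.get("Warn", 0),
        "incidents": len(incidents),
        "remediation_sla_met": within_sla,
    }
```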
✨ The Enforcement Gap — Why Most AI AUPs Fail
The AI AUP is the easy part. The hard part is that policy ≠ control.
A policy prohibits sharing PHI with AI. A control actually prevents it — and logs the attempt when someone tries.
Symptoms of a policy-only AI AUP:
No way to answer the question "which AI tools did employees use last week?"
No log of what prompts contained what data
No block event when a policy violation happens
Incident response depends on employees self-reporting violations
Annual acknowledgment, but nothing between acknowledgments
When the first incident happens — and it always happens — "we had a policy" is not a defense. Regulators ask: what control did you operate? Auditors ask: show me the logs. Courts ask: what was your reasonable standard of care?
A working AI AUP runs on five technical controls:
Real-time prompt inspection on sanctioned AI tools (Block / Warn / Audit)
Shadow AI discovery on endpoints, browsers, and networks
Cross-SaaS redaction on data-rich channels before data reaches AI
Endpoint data lineage tracing sensitive files to their AI destination
Continuous compliance evidence pre-mapped to your frameworks
Together, those five controls turn a policy binder into a governance program.
✨ How Strac Enforces Your AI Acceptable Use Policy
Strac is the operational layer underneath an AI AUP. The policy you write stays the policy you write — Strac generates the enforcement events and compliance evidence that prove it works.
Rated 5/5 on [G2](https://www.g2.com/products/strac/reviews) — deployed at UiPath, Crypto.com, Underdog Fantasy and 50+ other enterprises
What Strac does for your AI AUP
Enforces Section 4 (Sanctioned Tools): Browser extension enforces tool list across ChatGPT, Copilot, Claude, Gemini, Perplexity, and 50+ AI tools. Endpoint agent discovers shadow AI (personal ChatGPT Plus, local LLMs, unsanctioned extensions) on Mac, Windows, and Linux.
Enforces Section 5 (Data Classes): Real-time prompt inspection blocks, warns, or redacts content based on 100+ sensitive data types (PII, PHI, PCI, secrets, custom patterns) — including inside attachments (PDF, DOCX, XLSX, JPEG, PNG, screenshots).
Enforces Section 6 (Prohibited Uses): Custom detectors for company-specific prohibited patterns. MCP DLP for agentic AI workflows.
Enforces Section 9 (Monitoring): Cross-SaaS redaction on Slack, Gmail, Google Drive, Zendesk, Salesforce, SharePoint, OneDrive, Notion. Audit logs exported to SIEM.
Enforces Section 10 (Incident Reporting): Automated incident creation when Block events fire; SIEM-native log routing for your incident response workflow.
Generates compliance evidence pre-mapped to NIST AI RMF, EU AI Act, ISO 42001, SOC 2, HIPAA, PCI DSS, ISO 27001, GDPR, and CCPA — continuously, not before audits.
Deploys in under 10 minutes with no proxy and no TLS break.
Bottom Line
An AI Acceptable Use Policy is a foundational control for any enterprise using AI in 2026. But a policy is only the first half. The second half is enforcement — real-time prompt inspection, shadow AI discovery, cross-SaaS redaction, and continuous evidence generation.
Copy the template above. Customize it. Deploy the enforcement layer alongside it. The organizations that do both are the ones that survive the first AI incident without a six-figure legal bill.
Book a 15-minute demo to see how Strac enforces your AI AUP — with real controls, continuous evidence, and deployment in under 10 minutes.
Is an AI Acceptable Use Policy legally required?
Directly: not in most jurisdictions yet. Indirectly: yes, for most enterprises. SOC 2, HIPAA, PCI DSS, GDPR, and EU AI Act all require documented governance over data handling and system use — which covers AI usage. The absence of an AI AUP is routinely called out as a gap in 2026 audits. Several state privacy laws (CCPA / CPRA, Colorado, Virginia) and sectoral regulators (HHS, FTC, SEC, NYDFS) have issued AI-specific guidance that makes a documented AUP effectively mandatory for covered entities.
What's the difference between an AI AUP and an AI Governance Policy?
An AI Acceptable Use Policy governs how people use AI tools — which tools, what data, what prohibited uses, what oversight. An AI Governance Policy is the broader program-level policy governing how the organization governs AI as a whole — risk assessment, vendor review, model inventory, committee structure. The AUP is one document inside a complete AI Governance Policy program. A mature program has both.
Can I just use my existing Acceptable Use Policy for AI?
Generally no. Traditional AUPs focus on network, device, and SaaS use. They don't address data classes and AI, prompt content, model outputs, shadow AI discovery, MCP/agentic AI, or human oversight of AI-assisted decisions. Auditors and regulators increasingly expect a dedicated AI AUP. The template in this post can be adopted as a standalone policy or as an AI-specific annex to your existing AUP.
How do I enforce an AI Acceptable Use Policy in practice?
Enforcement requires five technical controls: real-time prompt inspection on sanctioned AI tools; shadow AI discovery on endpoints and browsers; cross-SaaS redaction on data-rich channels that feed AI connectors; endpoint data lineage; and continuous compliance evidence generation. Together they turn the policy from a written statement into an operational control. Strac is built specifically to operate all five.
What should the sanctioned AI tools list look like?
Short and explicit. Most mature programs sanction 3–6 tools: an enterprise ChatGPT or Copilot for general productivity, a coding assistant (GitHub Copilot, Cursor), and at most one or two specialist tools (Claude for long-context work, a domain-specific AI). Long lists become unenforceable; short lists force intentional choices. Run shadow AI discovery first so the list reflects real usage patterns — otherwise the policy will be violated on day one.
How often should the AI AUP be updated?
At minimum annually. Trigger an out-of-cycle review on any of: new AI tool onboarded, new regulation in your jurisdiction, material AI-related incident, change to data classification scheme, material change to the enterprise contract with any sanctioned AI vendor. AI moves faster than most policy review cycles — be explicit about the triggers that override the annual cadence.
How does Strac map to the AI AUP template in this post?
Strac enforces Sections 4 (sanctioned tools list, via browser extension and endpoint agent), 5 (data classes, via real-time prompt inspection across 100+ data types including inside attachments), 6 (prohibited uses, via custom detectors and MCP DLP), 8 (shadow AI discovery, via endpoint and browser), 9 (monitoring and audit logs, with SIEM-native export), and 10 (incident reporting, via automated incident creation on Block events). Compliance evidence is pre-mapped to NIST AI RMF, EU AI Act, ISO 42001, SOC 2, HIPAA, PCI DSS, ISO 27001, GDPR, and CCPA.