Securing ChatGPT Enterprise Guide

Published on
April 2, 2026
Contributors

TL;DR: 5 actions to take this week

ChatGPT has grown from a single chat interface into a family of products: core chat, custom GPTs, connected apps, Codex (cloud and local), Atlas browser, and agent mode. Each carries a different risk profile. If your team does nothing else this week, prioritize these five controls.

1. Enforce SSO and SCIM, and verify your domain

Configure SAML 2.0/OIDC SSO so all org users authenticate through your identity provider. Enable SCIM provisioning and domain verification to auto-capture shadow accounts. (Enterprise)

2. Integrate the Compliance API with your SIEM

Enable the Compliance Logs Platform to export audit, auth, and Codex usage logs as immutable JSONL files. Route to Splunk, Sentinel, or your SIEM. Set alerts for anomalous usage patterns. (Enterprise)

3. Lock down apps, GPTs, and connectors

Apps are disabled by default on Enterprise. Keep them off until IT reviews each OAuth scope. Use Workspace Settings > Apps to enable only approved integrations. Use RBAC to restrict which roles can access which apps. (All plans)

4. Restrict Codex and Agent Mode to pilot groups first

Codex and Agent Mode carry the highest autonomy of any ChatGPT product. Use RBAC to limit access to vetted pilot groups. Agent Mode navigates websites and takes actions on the user's behalf; it is the highest-exposure surface for prompt injection. The agent mode toggle controls access across both ChatGPT and Atlas simultaneously. (Enterprise)

5. Establish an AI acceptable-use policy and address shadow AI

Define what data categories (PII, IP, regulated data) may never be sent to AI models. Employees using personal ChatGPT accounts bypass all enterprise controls. Over 80% of Fortune 500 companies have ChatGPT accounts; most lack a formal AI use policy. (All plans)

⚠️ Critical foundation: Know which plan tier you are on. 

Consumer plans (Free, Plus, Pro, Go) may use your data for model training by default. Business, Enterprise, and Edu plans never train on your data. For sensitive business use, do not use consumer plans. The contractual protections are fundamentally different.

Understanding the ChatGPT product surface

Before you can prioritize your rollout sequence and control strategy, you need to understand how each product behaves. The risk profile is not uniform across the ChatGPT family.

Risk Profile
Conversational data, uploaded files, and search results all pass through OpenAI's servers. Primary risks are training data exposure, shadow use of personal accounts, and uncontrolled file uploads.
Use Enterprise or Business plan for business dataData Residency
Consumer plans (Free, Plus, Pro, Go) may use data for training by default. Enterprise and Business plans operate under Commercial Terms with no model training, ever.
Enable SSO and enforce domain captureIdentity
Configure SAML 2.0/OIDC SSO so all org users authenticate through your identity provider. Domain verification auto-enrolls users, preventing shadow personal accounts that bypass all enterprise controls.
Set a custom data retention periodCompliance
Enterprise admins can configure org-wide retention periods. Set this to match your compliance obligations. Shorter retention is better for regulated industries. Business plans have limited retention controls.
Restrict which apps and connectors are enabled org-wideIntegration Security
On Enterprise, apps are disabled by default. Manage the allowlist under Workspace Settings > Apps. Review each app's OAuth scopes before enabling. Use RBAC to control which user roles can access which apps.
Enable the Compliance Logs Platform for audit and monitoringAuditability
The Compliance Logs Platform exports audit, authentication, and usage logs as immutable JSONL files with minutes-level latency. Route to your SIEM for anomaly detection and compliance reporting.
Configure Enterprise Key Management (EKM)Encryption
EKM lets you control your own encryption keys, adding a layer of security beyond OpenAI's default AES-256 at-rest encryption. Available on Enterprise plans.
Risk Profile
Custom GPTs can include uploaded knowledge files, API actions that call external services, and custom instructions. Data leakage can occur through improperly scoped actions, over-shared GPTs, and extracted knowledge files.
Restrict GPT creation and sharing to approved rolesAccess Control
Use RBAC to control who can create and publish GPTs within your workspace. Limit sharing to 'Only people in your workspace' by default. Never allow public sharing of enterprise GPTs.
Disable third-party GPTs or allowlist specific onesIntegration Security
Enterprise admins can control access to the GPT Store and third-party GPTs. Disable all third-party GPTs by default, then allowlist only those vetted by your security team.
Review all API actions and OAuth scopes before deploymentData Governance
Any GPT with API actions can send data to external services. Audit every action's OAuth scopes, data flow, and destination before approving. Treat each action like a new SaaS integration.
Do not upload sensitive data as GPT knowledge filesData Exposure
Knowledge files can potentially be extracted through prompt manipulation. Never upload confidential documents, credentials, or regulated data as GPT context. Use API actions to retrieve data at runtime instead.
Audit GPT usage through the Compliance APIMonitoring
Monitor which GPTs are being used, by whom, and how frequently. Watch for GPTs accessing unexpected data sources or generating unusually high volumes of API calls.
Risk Profile
Each connection grants OAuth-scoped access to corporate data. Primary risks are excessive data surfacing, cross-system data mixing, and third-party data flow to external services.
Keep apps disabled by default; enable per roleLeast Privilege
On Enterprise, apps are disabled by default. Enable only specific apps your security team has reviewed. Business plans have apps enabled by default — review immediately.
Review and approve OAuth scopes for every appData Governance
Each app requests specific OAuth scopes (read, write, delete). Microsoft apps (Outlook, Teams, SharePoint) require separate Entra ID scope approvals.
Use 'Manage Actions' to restrict write operationsAccess Control
Navigate to Workspace Settings > Apps > [App] > Manage Actions. New actions are disabled by default. Scrutinize any write or delete actions before enabling.
Implement least privilege across connected data stores firstData Exposure
ChatGPT respects user-level permissions. Ensure Google Workspace, SharePoint, and Slack permissions already follow least privilege before enabling connectors.
Monitor for shadow AI alongside connector usageShadow AI
Even with enterprise connectors enabled, users may still copy sensitive data into consumer AI tools. Deploy visibility tools (Purview DSPM for AI, Harmonic, or equivalent) to detect unauthorized AI interactions.
Risk Profile
Autonomous software engineering agent that can write code, create PRs, and execute shell commands. CLI and Desktop versions run locally with your OS permissions. Recent vulnerabilities (patched Feb 2026) demonstrated DNS exfiltration and GitHub token extraction risks.
Roll out Codex access via RBAC to approved developer groups onlyAccess Control
Do not enable org-wide until your security team has evaluated sandbox configurations and established usage policies.
Configure sandbox network restrictions for Codex CLI and AppExecution Control
By default, agents can only edit files in their working directory and use cached web search. Require explicit permission for network access and elevated commands. Never auto-approve all operations.
Use project-level rules to restrict file access and commandsData Exposure
Define .codex rules files that restrict directory access and permitted commands. Deny access to .env, credentials, SSH keys, and sensitive config files.
Review all Codex-generated PRs before mergingHuman Oversight
Treat Codex output as draft code. Never auto-merge. Monitor for unusual PR patterns that might indicate prompt injection via branch names or code comments.
Monitor Codex usage through the Compliance Logs PlatformAuditability
Route to your SIEM and alert on: bulk file access, elevated permission requests, or access to sensitive repositories.
Rotate credentials after large autonomous Codex sessionsCredential Hygiene
After major runs, rotate any credentials the session could access — especially GitHub tokens and secrets referenced in environment variables.
Risk Profile
Agent Mode lets ChatGPT navigate websites, click buttons, fill forms, and complete multi-step transactions. Highest-exposure surface for prompt injection — any malicious instruction on a webpage can influence the agent's behavior.
Control Agent Mode access via RBAC; restrict to a pilot groupAccess Control
Enterprise admins can toggle Agent Mode on/off in workspace permission settings. This single toggle controls agent mode in both ChatGPT and Atlas.
Use logged-out mode for tasks involving untrusted websitesBrowser Hygiene
In logged-out mode, the agent will not use pre-existing cookies or sessions. This significantly reduces the risk of attackers exploiting agent access to authenticated sites.
Set custom instructions to define agent guardrailsGovernance
Configure instructions that define required approval checkpoints and prohibited actions (no purchases, no form submissions without confirmation).
Design for prompt injection; the agent reads untrusted web contentInjection Risk
Keep task scopes narrow. Avoid enabling agent mode on sites with user-generated content — public inboxes, support queues, forums.
Establish a human review policy for agent-completed transactionsHuman Oversight
Require human confirmation for any irreversible action — purchases, form submissions, account changes. Treat agent outputs as drafts.
Monitor agent usage through Compliance LogsAuditability
Set alerts for unexpected site visits, high-volume actions, or access to sensitive domains via the Compliance API.
Risk Profile
Most flexible and least opinionated about security. Developers control the system prompt, model selection, data flow, and access patterns. Primary risks: PII/IP in prompts, prompt injection from end users, and API key management.
Enable Zero Data Retention (ZDR) for sensitive workloadsCompliance
ZDR ensures no prompts, outputs, or metadata persist on OpenAI's servers. Essential for PHI, PII, financial data, or IP-sensitive applications.
Scope API keys per application; set spend limitsKey Hygiene
Create separate API keys for each integration. Never share a single key across systems. Anomalous token consumption spikes may indicate compromise.
Use a restrictive system prompt to define model behaviorAccess Control
Explicitly limit the model's role, forbid certain topics, and prevent disclosure of system prompt contents. First line of defense against misuse and data extraction.
Pre-screen inputs for prompt injection and PIIDLP & Injection
Route user inputs through a classifier or content filter before the main model. Check for injection patterns, jailbreak attempts, and PII. Block or sanitize before proceeding.
Consider data residency optionsData Residency
OpenAI offers residency across US, Europe, UK, Japan, Canada, Singapore, Australia, India, and UAE. Select the appropriate region for your compliance requirements.


Risk assessment methodology:
Autonomy measures how much independent action the product can take without human approval. Risk level reflects both the breadth of data exposure and the potential for irreversible actions. Products rated “Very High” should be deployed with explicit approval gates and restricted to pilot groups during initial rollout.

Product-by-product ChatGPT security settings

Each section below provides the 5–6 highest-impact security controls for a specific ChatGPT product. Risk profiles describe the primary threat surface; controls are organized by risk category.

A note on Atlas Browser

⚠️ Atlas is in beta and is not in SOC 2 or ISO scope. 

Atlas is OpenAI's AI-native browser, currently in beta for Enterprise. It is off by default for Enterprise workspaces. OpenAI's own documentation states: do not use Atlas with regulated, confidential, or production data during the beta period.

Most organizations are choosing not to enable Atlas at this time. If you do pilot it, restrict access via RBAC, use only low-risk data, and treat it as an evaluation rather than a production deployment. The Agent Mode controls described above apply to Atlas as well; the toggle is shared. Re-evaluate at general availability.

Security feature availability by plan

Understanding which controls are available on your specific plan tier is foundational before any deployment.

Capability ChatGPT GPTs Apps Codex Agent Atlas API
No training on data Biz+ Biz+ Biz+ Biz+ Biz+ Biz+ Always
SAML SSO Enterprise Enterprise Enterprise Enterprise Enterprise Enterprise N/A
SCIM provisioning Enterprise Enterprise Enterprise Enterprise Enterprise Enterprise N/A
RBAC controls Enterprise Enterprise Enterprise Enterprise Enterprise Enterprise N/A
Compliance API Enterprise Enterprise Enterprise Ent. Enterprise N/A Usage API
Custom retention Enterprise Enterprise Enterprise Enterprise Enterprise N/A ZDR avail.
EKM Enterprise Enterprise Enterprise
App/connector ctrl Admin Admin Admin Plugin N/A N/A Dev-defined
SOC 2 in scope Not yet
Data residency Ent. Ent. Ent.

Deployment framework for AI governance teams

Risk classification for ChatGPT products

Map each ChatGPT product to your organization's risk tiers before enabling access.

Risk Tier Products Approval Required Review Cadence
Standard ChatGPT Core (text chat, no connectors) IT admin approval Quarterly
Elevated Custom GPTs, Apps/Connectors, API Security team + data owner Monthly
High Codex, Agent Mode CISO / governance board Bi-weekly during pilot
Not Recommended Atlas (beta — not in SOC 2 scope) Do not enable until GA Re-evaluate at GA

Phased deployment roadmap

Phase 1
Foundation
Weeks 1–4
Start here before enabling any ChatGPT products for your users.
Configure SSO (SAML 2.0/OIDC) and enable SCIM provisioning
Verify company domains to capture shadow accounts
Define RBAC roles: Owner, Admin, Member with feature-specific permissions
Enable Compliance Logs Platform and route to SIEM
Establish an AI acceptable-use policy covering data classification, prohibited inputs, and shadow AI
Phase 2
Controlled Enablement
Weeks 5–8
Once identity and logging foundations are in place.
Enable ChatGPT Core for all managed users
Enable internal custom GPTs with sharing restricted to workspace
Approve limited apps/connectors after OAuth scope review
Deploy Microsoft Purview DSPM for AI (or equivalent) for monitoring
Conduct user training on responsible AI use, prompt hygiene, and data handling
Phase 3
Advanced Capabilities
Weeks 9–12+
Only after Phase 2 is stable and monitored.
Pilot Codex with a vetted developer group; configure sandbox restrictions and .codex rules
Enable Agent Mode for specific RBAC roles with custom instruction guardrails and logged-out mode as default
Establish a human review policy for all agent-completed transactions and irreversible actions
Establish a post-session credential rotation policy for Codex
Review and expand the app/connector allowlist based on pilot feedback
Re-evaluate Atlas browser at GA — do not enable during beta for regulated or production workloads

Key policy recommendations

  • Prohibit sensitive data categories (PII, PHI, financial data, source code, credentials) from being input into any ChatGPT interface unless ZDR is active on the API
  • Require human review for all AI-generated content before external distribution; treat all outputs as drafts
  • Mandate that all Codex-generated pull requests undergo standard code review before merging
  • Address shadow AI explicitly: employees using personal ChatGPT accounts for work bypass all enterprise controls
  • Maintain a register of all enabled apps, GPTs, and connectors; review quarterly or when new apps are released
  • Stay current with OpenAI's Enterprise release notes, which frequently introduce new controls, model transitions, and scope changes

Security practitioner checklists

Use these during initial deployment, quarterly reviews, and when enabling new products.

Atlas is not currently in SOC 2 or ISO scope. Do not enable for regulated or production workloads until GA.

Four universal rules for every ChatGPT product

Regardless of which products you deploy, these four principles apply everywhere.

Never paste credentials into ChatGPT. API keys, passwords, connection strings, and tokens sent as chat context are transmitted to OpenAI's servers. Use placeholder references and inject secrets at the infrastructure layer. This applies to all surfaces: chat, Codex, Agent Mode, and the API.

Treat apps and connectors like SaaS integrations. Vet every app before enabling. Review OAuth scopes, check the publisher, and understand the data flow. Malicious or misconfigured apps can surface sensitive data. Use 'Manage Actions' to disable write operations you do not need.

Establish an AI acceptable-use policy. Define what data categories may not be sent to AI models. Address shadow AI explicitly: employees using personal accounts bypass all enterprise controls. Over 80% of Fortune 500 companies have ChatGPT accounts, but most lack formal AI use policies.

Assume prompt injection is possible and design for it. Any content ChatGPT reads from the web, files, connectors, or emails could contain malicious instructions. For agentic tools (Codex, Agent Mode), apply sandbox restrictions, use logged-out mode, and require human-in-the-loop checkpoints for irreversible actions.

Key sources and references

  • OpenAI Enterprise Privacy: openai.com/enterprise-privacy
  • OpenAI Business Data Security: openai.com/business-data
  • Admin Controls for Apps: OpenAI Help Center (help.openai.com)
  • ChatGPT Enterprise Release Notes: OpenAI Help Center
  • Atlas for Enterprise: OpenAI Help Center
  • Codex Enterprise Guide: OpenAI Help Center
  • Introducing Codex: openai.com/index/introducing-codex
  • Introducing the Codex App: openai.com/index/introducing-the-codex-app
  • Introducing ChatGPT Atlas: openai.com/index/introducing-chatgpt-atlas
  • Microsoft Purview DSPM for ChatGPT Enterprise: Microsoft Learn
  • OpenAI Trust Portal: trust.openai.com
  • February 2026 Codex/ChatGPT Vulnerability Patches: The Hacker News

Build Your AI Guardrails Now

Gain the visibility and control you need to guide AI use with confidence.

Harmonic Security company logo
As every employee adopts AI in their work, organizations need control and visibility. Harmonic delivers AI Governance and Control (AIGC), the intelligent control layer that secures and enables the AI-First workforce. By understanding user intent and data context in real time, Harmonic gives security leaders all they need to help their companies innovate at pace.
© 2026 Harmonic Security