AI Assistant
According to OpenAI’s Data Usage for Consumer Services documentation, personal consumer accounts (Free, Plus, Pro, and personal workspaces) may have their chats, prompts, file uploads, and interactions used to improve OpenAI models unless training is turned off. The Data Controls FAQ clarifies that users can disable this by switching off the “Improve the model for everyone” toggle in Settings → Data Controls.
For Atlas browsing, OpenAI’s ChatGPT Atlas Data Controls and Privacy guide explains that a second toggle, “Include web browsing,” controls whether browsing data can be used for training. This browsing toggle is off by default, and Atlas data contributes to training only when both training toggles are enabled.
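To make the relationship between the two consumer toggles concrete, here is a minimal sketch of the documented logic. The function and argument names are editorial shorthand for the Settings UI labels, not programmatic identifiers:

```python
def browsing_data_used_for_training(improve_model: bool, include_web_browsing: bool) -> bool:
    """Per the Atlas Data Controls guide, Atlas browsing data can contribute
    to training only when BOTH consumer toggles are enabled."""
    return improve_model and include_web_browsing

# "Include web browsing" is off by default, so a fresh Atlas install contributes
# no browsing data even if "Improve the model for everyone" is left on.
assert browsing_data_used_for_training(improve_model=True, include_web_browsing=False) is False
```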
By contrast, OpenAI’s Enterprise Privacy policy states that Business, Enterprise, Education, and API users’ data is not used to train OpenAI models. This includes chats, uploads, browsing data, agent actions, and connector data. The ChatGPT Agent documentation reiterates that organizational data is excluded from training by default.
Across all account types, the Data Usage for Consumer Services page notes that content flagged for Trust & Safety review may be temporarily retained even if training is disabled.
According to the Web Browsing Settings on ChatGPT Atlas page, ChatGPT can only access webpage content when ChatGPT Page Visibility is turned on for that site. When enabled, the assistant can read the visible text on the page to provide summaries, context, and Browser Memory entries. The ChatGPT Atlas Data Controls and Privacy guide explains that Atlas briefly processes page content to generate memory summaries, and that sensitive information such as passwords, credit card numbers, and personal identifiers is automatically filtered out before summarization.
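OpenAI does not publish the actual filtering implementation, so purely as an illustration, a pre-summarization redaction pass might look like the sketch below; the patterns and function are hypothetical:

```python
import re

# Hypothetical patterns; OpenAI's real filter is not documented.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_before_summary(page_text: str) -> str:
    """Strip sensitive tokens before page text reaches summarization."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        page_text = pattern.sub(f"[REDACTED:{label}]", page_text)
    return page_text
```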
For deeper interaction, the ChatGPT Agent documentation outlines that the agent operates inside a virtual browser, where it “sees” the page through screenshots. In this mode, the agent can access what is visibly rendered inside that virtual window, but does not capture screenshots when users are entering sensitive information.
The sidebar surfaces can read selected or visible page text when users actively ask for assistance, but the documentation does not describe access to deeper DOM elements, the shadow DOM, iframe internals, or embedded PDFs.
OpenAI publishes no documentation on whether ChatGPT can access the shadow DOM, cross-origin iframes, element-level selectors, or PDF internals; these technical behaviors are not covered by any official source.
ChatGPT can only take actions inside Agent mode, not in Atlas or the browser sidebars. The ChatGPT Agent documentation explains that the agent operates inside a virtual browser, where it can click buttons, type into fields, navigate between pages, scroll, fill out and submit forms, and upload or download files. The agent bases its actions on screenshots of the virtual browser window, and the documentation notes that when sensitive information, such as passwords, is required, the agent pauses and hands control back to the user. Screenshots are not captured during those sensitive interactions.
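The documentation describes this loop only at a high level; schematically, the screenshot-driven cycle might look like the following sketch, where every identifier is hypothetical rather than a published API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", "scroll", "submit", "done", ...
    target: str = ""   # e.g., a coordinate or a field description

def run_agent_task(browser, model) -> None:
    """Illustrative screenshot -> decide -> act loop; `browser` and `model`
    are hypothetical objects, not a published OpenAI API."""
    while True:
        if browser.sensitive_field_focused():
            # Documented behavior: pause, stop capturing screenshots, and
            # return control to the user until the sensitive input is done.
            browser.hand_control_to_user(capture_screenshots=False)
            continue
        screenshot = browser.capture_screenshot()
        action: Action = model.decide_next_action(screenshot)
        if action.kind == "done":
            break
        browser.execute(action)
```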
In contrast, the ChatGPT Atlas Data Controls and Privacy page confirms that Atlas does not perform actions like clicking or typing. Atlas is limited to reading page content (when Page Visibility is enabled) and generating Browser Memories. No part of the Atlas documentation describes any ability to automate or interact with webpage elements.
Additionally, OpenAI does not document any clipboard access or OS-level actions. Because no official source describes these capabilities, there is no documented way for ChatGPT to read your clipboard or system files unless you explicitly upload a file.
OpenAI does not publish a complete metadata schema, but several help articles outline the types of data collected across ChatGPT, Atlas, Search, and Agent mode. The ChatGPT Atlas Data Controls and Privacy page explains that if “Help improve browsing & search” is enabled, Atlas may send diagnostic logs such as technical details and publicly known URLs. It also notes that Browser Memories store privacy-filtered summaries of visited pages, with sensitive information removed, and that raw web content is deleted immediately after summarization.
The ChatGPT Search for Enterprise & EDU documentation states that when using Bing, OpenAI sends disassociated search queries and approximate location data, and that no account IDs, device IDs, or session IDs are shared.
According to the ChatGPT Agent documentation, the system may capture screenshots of the virtual browser window, action logs, and task metadata, but screenshots are not taken while the user is entering sensitive information.
For general ChatGPT use, the Data Usage for Consumer Services FAQ and the OpenAI Privacy Policy state that OpenAI may collect device information, IP address, browser details, timestamps, and usage logs.
An important unresolved detail is that OpenAI does not specify whether it collects granular technical elements such as DOM selectors, deep iframe content, or shadow DOM metadata, leaving this level of metadata collection undocumented.
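Piecing those articles together, the documented metadata surface can be summarized in one editorial sketch; this grouping is a reconstruction for readability, not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class CollectedMetadata:
    """Editorial reconstruction of the metadata types named across OpenAI's
    help articles. Not an official schema."""
    # Atlas, when "Help improve browsing & search" is enabled
    diagnostic_logs: list[str] = field(default_factory=list)   # technical details, public URLs
    browser_memories: list[str] = field(default_factory=list)  # privacy-filtered summaries
    # Search via Bing (Enterprise/EDU): disassociated queries and approximate
    # location only; no account, device, or session IDs are shared
    search_queries: list[str] = field(default_factory=list)
    approximate_location: str | None = None
    # Agent mode: screenshots (never during sensitive input), action logs, task metadata
    screenshots: list[bytes] = field(default_factory=list)
    action_logs: list[str] = field(default_factory=list)
    # General ChatGPT use: device info, IP, browser details, timestamps, usage logs
    device_and_usage: dict[str, str] = field(default_factory=dict)

# DOM selectors, deep iframe content, and shadow DOM metadata: undocumented.
```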
Retention varies depending on the product surface and the type of account. For ChatGPT consumer accounts, the Data Usage for Consumer Services FAQ explains that deleted chats are removed from OpenAI systems within 30 days, and temporary chats created with chat history turned off may be retained for up to 30 days for safety monitoring.
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page states that raw webpage content is deleted immediately after summarization, while Browser Memory summaries are retained for up to 7 days. The Web Browsing Settings on ChatGPT Atlas page confirms that deleting browsing history also removes the associated Browser Memories.
For ChatGPT Agent mode, the ChatGPT Agent documentation notes that screenshots, action logs, and agent-task data remain available until the chat is deleted, after which they are removed from OpenAI systems within 90 days.
For Business, Enterprise, and Education accounts, the OpenAI Enterprise Privacy Policy states that administrators can control retention periods, and that deleted data is removed within 30 days unless required otherwise by organizational policy or law. The same page also explains that API inputs and outputs are stored for up to 30 days for abuse monitoring unless Zero-Data Retention is enabled.
Depending on the account type and compliance needs, data may be stored on OpenAI systems or with trusted service providers located in the United States or elsewhere.
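For quick reference, the retention windows cited above can be collapsed into a single summary; the values are taken directly from the documentation quoted in this answer:

```python
# Retention windows as stated in the cited OpenAI documentation.
RETENTION = {
    "consumer_deleted_chats":     "removed within 30 days",
    "temporary_chats":            "up to 30 days for safety monitoring",
    "atlas_raw_page_content":     "deleted immediately after summarization",
    "atlas_browser_memories":     "kept up to 7 days",
    "agent_screenshots_and_logs": "until chat deletion, then removed within 90 days",
    "enterprise_deleted_data":    "removed within 30 days (admin-configurable)",
    "api_abuse_monitoring":       "inputs/outputs up to 30 days unless ZDR is enabled",
}
```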
Where are the retention, export, and delete controls for the ChatGPT browser assistant?
The available controls differ depending on whether you are using ChatGPT on the web, the Atlas Browser, or Agent mode. For ChatGPT on the web and desktop, the Data Usage for Consumer Services FAQ and the Data Controls FAQ explain that you can delete individual conversations, clear your entire history, or disable chat history altogether. When history is disabled, chats are not used to improve models, and deleted conversations are removed from OpenAI systems within 30 days. These settings are located under Settings → Data Controls → Chat history & training.
For Atlas Browser, the Web Browsing Settings on ChatGPT Atlas page describes how users can delete browsing history from the History menu or through Settings → Privacy & Security, and that deleting history also removes associated Browser Memories. The ChatGPT Atlas Data Controls and Privacy page adds that page visibility can be enabled or disabled per site or globally from Settings → ChatGPT Page Visibility.
Regarding Browser Memories, the ChatGPT Atlas Data Controls and Privacy page also notes that users can delete individual memories directly from the Browser Memory panel, and that summaries automatically expire after seven days.
For ChatGPT Agent mode, the ChatGPT Agent documentation explains that deleting the chat removes the related logs, screenshots, and task metadata, and that agent data is deleted from OpenAI systems within 90 days.
For Business, Enterprise, and Education accounts, the OpenAI Enterprise Privacy Policy states that retention, export, and deletion settings are controlled by workspace administrators, and that organizational data is not used to train OpenAI models.
For personal consumer accounts, you can disable training without losing your chat history by turning off the “Improve the model for everyone” toggle under Settings → Data Controls. This prevents OpenAI from using your prompts and conversations for training but does not delete any of your saved chats.
For Atlas browsing, the ChatGPT Atlas Data Controls and Privacy page states that you must disable both “Improve the model for everyone” and “Include web browsing” to stop browsing activity from contributing to model training. The Web Browsing Settings on ChatGPT Atlas page confirms that turning these off still lets you keep your browsing history and Browser Memories.
For Business, Enterprise, and Education accounts, the OpenAI Enterprise Privacy Policy notes that training is disabled by default and cannot be turned on. Organizations can retain conversation history, audit trails, and logs without any of that data being used to train OpenAI models.
For API usage, the OpenAI Enterprise Privacy Policy also clarifies that training is off by default unless explicitly opted into, and that Zero-Data Retention (ZDR) is available when API users require it.
There are significant differences in how training, retention, telemetry, and data boundaries are handled. For Enterprise, Business, Team, and Education accounts, the OpenAI Enterprise Privacy Policy makes clear that prompts, chats, files, browsing data, and agent actions are never used to train OpenAI models. All data remains inside the organisation’s enterprise boundary and is subject to enterprise retention policies, DLP controls, auditing, and governance requirements. Administrators can set retention periods, export logs, disable history, restrict data surfaces, and enforce identity measures such as SSO and MFA. The ChatGPT Search for Enterprise & EDU documentation also notes that ChatGPT Search with Bing sends only disassociated queries and approximate location data.
For personal consumer accounts, the Data Usage for Consumer Services FAQ explains that chats and files may be used for model training unless the user turns off “Improve the model for everyone.” The ChatGPT Atlas Data Controls and Privacy page adds that Atlas browsing data may contribute to training if “Include web browsing” is enabled. Users control their own browsing history, Browser Memories, and chat histories.
For government cloud or regulated environments, OpenAI has not published specific documentation regarding Gov cloud implementations or FedRAMP scope.
Most ChatGPT browser-assistant features depend on cloud processing rather than on-device execution. For ChatGPT on the web or desktop, the OpenAI Privacy Policy and the Data Usage for Consumer Services FAQ explain that prompts, chats, uploads, and images are sent to OpenAI’s cloud systems. These pages also note that device-level information, such as IP address, browser details, and usage metadata, may be transmitted.
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page describes how webpage content is temporarily transmitted to OpenAI servers when Page Visibility is enabled, with raw content deleted immediately after Browser Memory summarization. The Web Browsing Settings on ChatGPT Atlas page further notes that technical diagnostics may be transmitted when “Help improve browsing & search” is enabled.
For ChatGPT Agent mode, the ChatGPT Agent documentation states that screenshots of the virtual browser window, agent actions, and task logs are transmitted to OpenAI for processing, with sensitive input screens excluded.
For Enterprise and Education accounts, the OpenAI Enterprise Privacy Policy makes clear that data remains within the organization’s enterprise boundary and does not flow into OpenAI’s model-training pipelines.
OpenAI has not published any documentation listing per-domain endpoints or describing on-device LLM execution for the ChatGPT browser assistant, so this level of detail is not available.
ChatGPT’s browser assistant offers granular controls through Atlas Page Visibility. The Web Browsing Settings on ChatGPT Atlas page explains that users can configure Page Visibility to allow ChatGPT on all sites, allow it only on specific sites, block it on specific sites, or block it on all sites. These options are available under Settings → ChatGPT Page Visibility. Users can also adjust visibility on a per-site basis using the ChatGPT shield icon in the browser’s address bar.
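Those four options reduce to a simple per-site allow/deny decision; here is a sketch of that evaluation, with mode names paraphrased from the Settings UI rather than taken from any published API:

```python
def page_visible(mode: str, domain: str, site_list: set[str]) -> bool:
    """Evaluate Atlas Page Visibility for one site. Mode names paraphrase
    the four documented options; this is not OpenAI's implementation."""
    if mode == "allow_all":
        return True
    if mode == "allow_specific":
        return domain in site_list
    if mode == "block_specific":
        return domain not in site_list
    if mode == "block_all":
        return False
    raise ValueError(f"unknown Page Visibility mode: {mode}")

# Example: block ChatGPT on an internal site while allowing it elsewhere.
assert page_visible("block_specific", "intranet.example.com", {"intranet.example.com"}) is False
```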
For Incognito Mode, the ChatGPT Atlas Data Controls and Privacy page notes that Atlas does not retain browsing history, Browser Memories, or site-based visibility preferences while in incognito mode. No data persists between sessions.
For Enterprise and Education environments, the OpenAI Enterprise Privacy Policy indicates that administrators can restrict domains and disable features using broader organizational controls such as SSO, DLP systems, and browser-level policies. However, OpenAI does not publish Atlas-specific MDM or GPO configuration options, so this level of fine-grained management is not documented.
Administrative controls mainly apply to Business, Enterprise, Education, and Team accounts rather than personal consumer users. The OpenAI Enterprise Privacy Policy explains that workspace administrators can configure organization-wide settings for conversation retention, data export permissions, identity enforcement (including SSO and MFA), and other workspace policies. These controls apply across all ChatGPT surfaces, including Atlas browsing and Agent mode, and identity integration can be managed through providers like Microsoft Entra ID or Okta.
For ChatGPT Search in Enterprise and Education environments, the ChatGPT Search for Enterprise & EDU documentation notes that administrators can enable or disable external search providers such as Bing at the organizational level.
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page clarifies that OpenAI does not provide GPO or MDM-style admin controls for Page Visibility, domain allow/deny lists, or Browser Memory policies. These visibility settings are controlled by end users, not administrators.
For the ChatGPT Desktop App, OpenAI has not published any MDM, plist, or enterprise configuration options for disabling browsing, Agent mode, or other data surfaces. This level of configuration is undocumented.
For browser extensions such as the Chrome and Edge sidebars, organizations can block or restrict extensions through standard browser GPO or MDM controls. These restrictions are enforced by Chrome or Edge, not by OpenAI.
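As one concrete example, Chrome reads managed policies from JSON files (on Linux, under /etc/opt/chrome/policies/managed/), and its standard ExtensionInstallBlocklist policy can block an extension by ID. The sketch below writes such a policy file; the extension ID is a placeholder, not a real sidebar extension:

```python
import json

# ExtensionInstallBlocklist is a standard Chrome enterprise policy.
# The 32-character ID below is a placeholder, not a real extension ID.
policy = {"ExtensionInstallBlocklist": ["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"]}

# On Linux this file would be deployed to /etc/opt/chrome/policies/managed/;
# Windows and macOS deployments use the registry or MDM profiles instead.
with open("block-sidebar-extension.json", "w") as f:
    json.dump(policy, f, indent=2)
```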
Audit capabilities differ significantly between consumer and enterprise offerings. For Enterprise and Business workspaces, the OpenAI Enterprise Privacy Policy explains that organizational prompts, responses, user-access events, and retention actions may appear in enterprise audit logs depending on how administrators configure identity systems such as Microsoft Entra ID or Okta. OpenAI notes that enterprise data remains within the organization’s compliance boundary and may be discoverable through eDiscovery-equivalent systems or admin export tools.
For ChatGPT Search in Enterprise and Education environments, the ChatGPT Search for Enterprise & EDU documentation states that search events and Bing query flows may be reflected in enterprise auditing systems, with all Bing-bound queries disassociated from user IDs.
For Agent mode, the ChatGPT Agent documentation notes that agent tasks include screenshots and action logs, which persist until the associated conversation is deleted. When retention is enabled by administrators, these items may appear in enterprise audit exports.
For consumer accounts, the Data Controls FAQ makes clear that there is no enterprise-grade auditing, SIEM connector, or organization-level log export. Users can export only their own chat histories.
OpenAI has not published SIEM-specific integration documentation for tools such as Splunk, Microsoft Purview, Elastic, or Google Chronicle, so this level of enterprise integration is currently not documented.
Assistant behaviour varies significantly across different ChatGPT surfaces. For ChatGPT on the web or desktop, the Data Usage for Consumer Services FAQ explains that chats and uploads may be used for model training unless the user disables the “Improve the model for everyone” toggle. Deleted conversations may be retained for up to 30 days.
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page notes that page content is accessed only when Page Visibility is enabled, and raw content is deleted immediately after generating a summary. The Web Browsing Settings on ChatGPT Atlas page adds that Browser Memory summaries are kept for up to seven days, and browsing may contribute to training only when both relevant training toggles are turned on.
For ChatGPT Agent mode, the ChatGPT Agent documentation states that the agent operates inside a virtual browser using screenshots. Agent data is retained for up to 90 days after deletion, and screenshot capture is paused whenever users enter sensitive information. Agent data follows the user’s model-training preference settings.
For API usage, the OpenAI Enterprise Privacy Policy confirms that training is always off by default. Inputs and outputs may be retained for up to 30 days for abuse monitoring unless Zero-Data Retention (ZDR) is enabled, and API requests do not interact with browser or assistant surfaces.
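ZDR itself is arranged at the account level rather than per request, but the Chat Completions API does expose a per-request store flag controlling whether a completion is persisted for later retrieval. A minimal sketch, assuming the official openai Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# `store=False` asks OpenAI not to persist this completion for later
# retrieval (e.g., for evals). Abuse-monitoring retention and Zero-Data
# Retention are account-level arrangements, not request flags.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    store=False,
)
print(response.choices[0].message.content)
```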
For Enterprise and Education accounts, the OpenAI Enterprise Privacy Policy highlights that organizational data is never used for model training, and retention and telemetry are governed by administrator-defined policies rather than consumer defaults.
Taken together, browsing and agent modes introduce additional content sources such as page visibility and screenshots, but enterprise data never contributes to model training.
OpenAI has not published a dedicated security model describing prompt-injection defenses for Atlas or Agent mode, but some safeguards are outlined in product documentation. For ChatGPT Agent mode, the ChatGPT Agent documentation explains that the agent automatically pauses when sensitive UI elements such as password fields appear. During these moments, screenshot capture is disabled, preventing the exfiltration of credentials or other sensitive inputs while the user is interacting with the page.
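OpenAI does not describe how the agent recognizes such fields; one plausible, purely illustrative heuristic would inspect the focused element's input type and autocomplete hints:

```python
# Purely illustrative heuristic; OpenAI does not document how the agent
# actually detects sensitive UI elements.
SENSITIVE_INPUT_TYPES = {"password"}
SENSITIVE_AUTOCOMPLETE = {"cc-number", "cc-csc", "one-time-code"}

def should_pause_screenshots(input_type: str, autocomplete: str | None) -> bool:
    """Decide whether screenshot capture should pause for the focused field."""
    return (input_type.lower() in SENSITIVE_INPUT_TYPES
            or (autocomplete or "").lower() in SENSITIVE_AUTOCOMPLETE)
```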
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page states that sensitive information, including passwords, credit card numbers, government IDs, and financial records, is automatically filtered out during Browser Memory summarization. The same documentation emphasizes that raw page data is deleted immediately after summarization, reducing the window of exposure.
At the model level, the Data Usage for Consumer Services FAQ notes that Trust & Safety systems evaluate inputs and outputs to detect unsafe, abusive, or harmful content, and that data flagged for policy enforcement may be temporarily retained.
However, OpenAI does not document prompt-injection-specific mitigations, automated exfiltration-prevention rules, or the context-sanitization logic applied to user-provided webpages. These technical details are not available in the current public documentation.
Compliance behavior varies depending on whether the account is a consumer account or part of an enterprise workspace. For Enterprise, Business, and Education customers, the OpenAI Enterprise Privacy Policy explains that all data remains inside the organization’s enterprise boundary and is subject to the organization’s own retention, DLP, auditing, and governance controls. OpenAI notes that enterprise data is never used to train models, and ChatGPT Search uses disassociated queries. Organizations can layer their own DLP tools, conditional access rules, eDiscovery systems, and compliance frameworks on top of ChatGPT usage.
For consumer accounts, the Data Controls FAQ makes clear that there are no built-in DLP controls. Users can delete conversations, disable history, or turn off training, but there are no enterprise-style policy restrictions or centrally enforced protections.
For Atlas Browser, the ChatGPT Atlas Data Controls and Privacy page describes how sensitive information on pages, such as passwords, financial details, or medical data, is automatically filtered out before Browser Memory summarization. Raw page content is deleted immediately after processing, reducing exposure.
Regarding HIPAA and BAAs, OpenAI does not claim HIPAA compliance, does not offer a Business Associate Agreement, and does not describe ChatGPT as suitable for HIPAA-regulated workflows. The OpenAI Security page is the relevant reference, and it makes no claims to the contrary.
For GDPR and DPA obligations, users can request deletion, access data-export options, and use transparency tools, while enterprise customers receive additional contractual data-protection terms. These details are outlined in the Data Controls FAQ and the OpenAI Enterprise Privacy Policy.
OpenAI does not document native copy/paste prevention, screenshot restrictions, or DLP enforcement for consumer or Atlas surfaces.
Whether ChatGPT’s browser assistant can operate in restricted contexts depends on Atlas Page Visibility, browser-level controls, and organizational policy, not on OpenAI enforcing internal restrictions. For Atlas Browser, the Web Browsing Settings on ChatGPT Atlas documentation explains that Atlas can only read webpages when ChatGPT Page Visibility is enabled for that domain. If Page Visibility is turned off, Atlas cannot access any content, even for pages involving SSO flows, MFA screens, login prompts, or internal intranet applications. The ChatGPT Atlas Data Controls and Privacy page adds that Page Visibility can be managed per site or globally and can be disabled entirely.
For ChatGPT Agent mode, the ChatGPT Agent documentation clarifies that the agent operates inside a virtual browser environment rather than directly within a user’s VDI session or corporate intranet. When the agent encounters sensitive authentication elements such as passwords or MFA fields, it automatically pauses, transfers control back to the user, and stops capturing screenshots during the interaction.
For Enterprise and Education admins, the OpenAI Enterprise Privacy Policy notes that OpenAI does not provide centralized tools for blocking ChatGPT on specific URLs, internal apps, or intranet paths. Blocking must instead be implemented at the organizational level using identity provider policies (SSO/MFA), browser extension controls, firewall rules, or conditional access policies.
There is no documentation describing OS-level restrictions, VDI detection mechanisms, or automatic blocking of high-security authentication surfaces, so these behaviors remain undocumented.
OpenAI’s documented escalation and incident-response paths differ for enterprise and consumer users. For Enterprise, Business, and Education customers, the OpenAI Enterprise Privacy Policy notes that organizations receive support through their dedicated OpenAI account team and enterprise support channels. SLAs, uptime guarantees, and escalation workflows depend on each organization’s contract and apply to security incidents, model misbehavior, data-exposure risks, and issues involving browsing or agent mode.
For reporting security vulnerabilities or safety-critical issues, the OpenAI Security page provides a responsible-disclosure program. This channel is used to report vulnerabilities, security incidents, or safety concerns tied to any OpenAI product surface.
For consumer accounts, issues can be reported through the Help Center’s “Contact Us” entry point. The OpenAI Help Center allows users to file support tickets for problems involving ChatGPT behavior, browsing issues, or agent-mode incidents. Unlike enterprise customers, consumer accounts do not have SLAs.
For abuse or safety incidents, the Data Usage for Consumer Services FAQ explains that content flagged for policy violations may be reviewed by Trust & Safety, and such content may be temporarily retained for enforcement.
OpenAI does not publish any consumer-facing kill-switch, emergency shutdown mechanism, or rollback instructions for Atlas or Agent mode. These capabilities are not documented.
OpenAI does not maintain a dedicated changelog for the ChatGPT browser assistant, Atlas browsing, or Agent mode. Instead, updates appear across multiple product surfaces.
Within the Help Center, articles include “Last updated” timestamps that indicate when documentation was modified, not necessarily when product behavior changed. Examples include the ChatGPT Atlas Data Controls and Privacy page and the ChatGPT Agent documentation.
For API models and versioning, updates are documented on the OpenAI Developer Documentation site, which tracks model releases and deprecations but does not specifically address the browser assistant.
For enterprise and education customers, policy updates or changes to contractual data-processing terms are communicated through the enterprise agreement, DPA, or administrator notifications. The OpenAI Enterprise Privacy Policy does not include a public changelog for browser-assistant evolution.
There is no single consolidated changelog or “last reviewed” index for the browser assistant. Changes must be inferred from Help Center timestamps and updates found on OpenAI’s blog and product pages.