
Claude Browser Assistant FAQs


Does Claude for Chrome train on what it reads by default?

Whether Claude for Chrome trains on what it reads depends on your account type and current privacy settings. For consumer accounts (Free, Pro, Max), Claude will only use your chats and coding sessions to train future models if you opt in via the “Help improve Claude” toggle. If this setting remains on or unchanged after you’re prompted, new and resumed sessions may become eligible for training.

If you turn the setting off, your chats are no longer used for training, with limited exceptions such as safety reviews or manually submitted feedback. Opting in permits Anthropic to retain your data for up to five years, while opting out limits retention to approximately 30 days. As of 2025, making this choice is a required step of onboarding for all new accounts.

For enterprise, commercial, and API-based deployments, model training is disabled by default. In these environments, including integrations via Amazon Bedrock or Google Cloud Vertex AI, Claude operates under customer-defined data use agreements and typically retains data for 30 days or less, unless explicit consent for training is provided.

Anthropic also clarifies that any training data used is de-identified, never sold, and never used for advertising or profiling. If you reopen an old chat after opting in, that conversation may fall under the current opt-in policy and be considered for training.

Exactly what can Claude for Chrome access on a page?

According to Anthropic’s official documentation, Claude for Chrome can view what’s visible in your active browser tab, execute clicks, navigate websites, and capture screenshots to provide context-aware responses. The assistant has access to all tabs in its assigned group and can interpret visual input from your screen, including uploaded screenshots or images. It includes built-in navigation support for popular platforms like Slack, Gmail, Google Docs, and GitHub.

You can manage how Claude acts on websites using permission settings - choosing between “Ask before acting” and “Act without asking.” Site-specific access is controlled through Settings → Site Permissions, where you can revoke access or set default behavior for particular domains.

What remains unclear is whether Claude can access hidden elements such as unsubmitted form fields, background iframes, shadow DOM nodes, or locally stored files unless you explicitly upload them. While the assistant takes screenshots of what’s visible in your browser, the documentation does not confirm that it parses or extracts hidden structures, embedded content, or unread DOM areas that aren’t actively displayed.
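
To make those terms concrete, the TypeScript sketch below shows what such hidden structures look like to a script with full DOM access. It is illustrative only; nothing in Anthropic’s documentation confirms that Claude for Chrome reads any of these.

    // Illustrative only: what "hidden" page structures look like to a script
    // with full DOM access. The docs do not confirm Claude parses any of these.

    // Unsubmitted form fields: their values exist in the DOM before any submit.
    const draftValues = Array.from(
      document.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea")
    ).map((el) => el.value);

    // Shadow DOM: open shadow roots are reachable via shadowRoot;
    // closed roots return null and stay hidden even from scripts.
    const shadowHosts = Array.from(document.querySelectorAll("*")).filter(
      (el) => el.shadowRoot !== null
    );

    // Iframes: same-origin frames expose their document, while cross-origin
    // frames return null for contentDocument and remain inaccessible.
    const frameTexts = Array.from(document.querySelectorAll("iframe")).map(
      (f) => f.contentDocument?.body?.innerText ?? "(cross-origin)"
    );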

Can Claude for Chrome take actions?

Yes, Claude for Chrome is capable of taking certain actions within your browser when permission is granted. The official “Getting Started” article states that Claude can read, click, and navigate websites on your behalf. It can also handle multi-step tasks like filling out forms or navigating between pages, and even submit files or images that you upload as part of a workflow.

Claude's behavior is guided by your selected permission level. When set to “Ask before acting,” Claude will seek your confirmation for each action. When set to “Act without asking,” it can execute steps autonomously, such as typing or clicking, within the boundaries of your configured permissions. This setup allows Claude to complete end-to-end tasks with user oversight or automation, depending on your preference.
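
Anthropic has not published how this gate is implemented. As a minimal TypeScript sketch of the pattern only, with the action type and the confirmation step as assumptions, the logic resembles:

    // Minimal sketch of the "Ask before acting" gate; the types and the
    // confirmation UI are assumptions, not Anthropic's implementation.
    type PermissionMode = "ask-before-acting" | "act-without-asking";

    interface BrowserAction {
      description: string; // e.g. "Click the Submit button"
      execute: () => Promise<void>;
    }

    async function runAction(action: BrowserAction, mode: PermissionMode): Promise<void> {
      if (mode === "ask-before-acting") {
        // The real extension shows its own prompt; window.confirm stands in here.
        if (!window.confirm(`Allow Claude to: ${action.description}?`)) return;
      }
      await action.execute(); // "act-without-asking" skips straight to execution
    }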

That said, certain actions are not documented in public materials. There’s no official confirmation that Claude can automatically download files to your device or read clipboard content. Similarly, it does not claim persistent access to local files unless those files are directly uploaded by the user within an active session.

What metadata is captured by Claude for Chrome?

Claude for Chrome captures visual context from your browser in the form of screenshots, allowing it to interpret and respond to the content you’re currently viewing. This means any visible page elements, including URL, title, text, and on-screen images, may be included in its analysis. Anthropic notes that the extension uses granular site permissions and blocklists to prevent unauthorized access and requires user approval before acting.
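
For context, the standard Chrome extension APIs for this kind of capture are chrome.tabs.query and chrome.tabs.captureVisibleTab. Whether Claude for Chrome uses exactly these calls, or records the URL and title fields shown in this sketch, is an assumption:

    // Sketch using standard Chrome extension APIs; whether Claude records
    // these exact fields is not documented.
    async function captureContext() {
      const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
      // captureVisibleTab returns a base64 data URL of the rendered viewport
      // only; off-screen and hidden elements are not part of the image.
      const screenshot = await chrome.tabs.captureVisibleTab({ format: "png" });
      return { url: tab.url, title: tab.title, screenshot };
    }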

However, the documentation does not offer a detailed breakdown of which metadata fields are explicitly captured or logged. It remains unclear whether Claude stores structured metadata such as CSS selectors, iframe identifiers, timestamps, or user/account IDs. The current support materials do not confirm how or if these elements are retained or associated with user sessions.

Given the lack of specificity, it is safe to assume that Claude captures what is visible in your tab, but Anthropic does not provide full transparency into its metadata logging practices. Without more detailed public documentation, it’s not possible to confirm the scope of metadata fields being stored beyond screenshot-derived visual context.

How long does Claude for Chrome keep page/context and chat logs, and where are they stored?

Claude’s data retention policies vary depending on your account type and whether you’ve opted into model training. For users on Free, Pro, or Max plans, opting in to the “Help improve Claude” setting allows Anthropic to retain your chats and coding sessions for up to five years. This applies to both newly created sessions and any resumed chats after opting in.

If you opt out of model training, Claude still retains your session data, but typically for a much shorter window of about 30 days, measured from when you delete a chat or from when the session becomes inactive. This opt-out state significantly limits data retention but does not eliminate temporary storage entirely.

For enterprise, team, or API accounts, Anthropic enforces a stricter policy. By default, data is retained for 30 days or less, and in some contractual scenarios, organizations may enter into a zero-data retention agreement. These arrangements ensure that both input and output data are deleted more rapidly than in consumer contexts.

Anthropic’s privacy policy states that personal data is only retained “as long as reasonably necessary” based on your tier and configuration. This means there is no single retention rule that applies to all users; duration and storage depend on your plan and your chosen settings.

Where are retention, export, and delete controls for this assistant?

Claude provides different retention and data control options depending on whether you’re using a consumer or enterprise account. For individual users on Free, Pro, or Max plans, data export can be initiated by navigating to Settings → Privacy → Export Data. Once triggered, you’ll receive a download link via email, typically valid for 24 hours. For Team plan Primary Owners, the same export process applies, using the Admin Settings under the Data & Privacy section.

For enterprise users, the Admin Settings panel allows Primary Owners to export organization-level data and to configure retention settings to suit specific policy or compliance requirements. These tools give enterprise customers direct access to both user data and administrative audit records.

Delete controls vary by account. Personal users can delete their account manually, which triggers the removal of associated data in accordance with Anthropic’s deletion policy. 

For organizations, data controls are managed centrally by administrators. These include retention configuration, deletion rules, and export capabilities, all found in the enterprise-level Admin dashboard.

How do I opt out of model training and still retain organization history/logs?

If you are using Claude for Chrome on a consumer plan - Free, Pro, or Max - you can opt out of model training by switching off the “Help improve Claude” setting found in your Privacy Settings. Once this toggle is disabled, your future chats and code interactions will no longer be included in Anthropic’s model training datasets. Past conversations remain accessible unless manually deleted; note, however, that if you later opt back in and resume an older chat, that conversation could become training-eligible again.

For enterprise, API, or commercial accounts, no opt-out is needed because model training is disabled by default. These deployments operate under contractual terms that prohibit data from being used to train or fine-tune models. This includes prompts, file uploads, and assistant responses across all usage contexts covered by the agreement.

Organizations can retain their chat logs and session history based on their configured retention policies, which are controlled through the Admin dashboard. This enables enterprises to keep data for auditing or compliance purposes without participating in model training workflows.

What changes for Enterprise/Work/Gov vs personal accounts?

The main differences between Claude for Chrome’s enterprise and personal plans lie in how training, retention, and administrative controls are handled. In enterprise, work, or government contracts, Anthropic guarantees that user data is not used to train or fine-tune models by default. These agreements apply across tools like Claude for Chrome, Claude for Work, and API integrations. By contrast, on personal accounts (Free, Pro, Max), training applies whenever the “Help improve Claude” setting is left on, so users must explicitly opt out through the privacy settings.

Retention policies also differ significantly. Enterprise deployments support configurable retention windows that can be adjusted by administrators and are governed by contractual terms. These controls may include options for zero-data retention or fixed durations based on regulatory needs. Consumer accounts follow standard retention protocols, where opting in can extend storage to five years, and opting out limits it to around 30 days.

Enterprise accounts also unlock advanced administrative features. These include centralized permission management, support for SSO and identity integration, audit logs, and broader organizational governance. Personal users, on the other hand, are limited to individual account-level settings, such as toggling training or deleting specific threads and uploads.

What leaves the device when using Claude for Chrome? 

Claude for Chrome processes user interactions in the cloud, not locally on your device. When active, the assistant captures a screenshot of your current browser tab to understand the visible context and provide relevant responses. This screenshot is then uploaded to Anthropic’s servers for interpretation. Based on official documentation, Claude can also perform navigation and suggest actions using the captured visual data, implying that elements like tab structure and typed content may be included in the data sent to the cloud.
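
The extension’s own network protocol is not published. The closest documented analogue is Anthropic’s public Messages API, which accepts screenshots as base64 image blocks. The sketch below is modeled on that API; the model ID is an example and the key handling is an assumption:

    // Modeled on the public Anthropic Messages API, not the extension's
    // unpublished transport.
    declare const ANTHROPIC_API_KEY: string; // assumption: supplied by the environment

    async function askAboutScreenshot(pngDataUrl: string, question: string) {
      const base64 = pngDataUrl.split(",")[1]; // strip the "data:image/png;base64," prefix
      const res = await fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "x-api-key": ANTHROPIC_API_KEY,
          "anthropic-version": "2023-06-01",
          "content-type": "application/json",
        },
        body: JSON.stringify({
          model: "claude-sonnet-4-5", // example model ID
          max_tokens: 1024,
          messages: [{
            role: "user",
            content: [
              { type: "image", source: { type: "base64", media_type: "image/png", data: base64 } },
              { type: "text", text: question },
            ],
          }],
        }),
      });
      return res.json();
    }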

The Claude blog confirms that the assistant can “see what you’re looking at in the browser,” which includes content like emails, documents, shopping carts, and other webpage elements visible on the active tab. However, there is no detailed documentation confirming whether Claude collects or accesses elements beyond what’s visible, such as hidden iframes, shadow DOMs, or system clipboard data. The tool’s behavior appears to be limited to what is shown on screen or directly interacted with by the user.

Anthropic does not publish a full manifest of the network endpoints or domains that Claude for Chrome connects to during operation. While traffic is clearly routed through Anthropic’s infrastructure for processing, the exact list of contacted domains or third-party services remains unspecified in public-facing materials.

How do we scope or restrict site access for Claude for Chrome?

Claude for Chrome includes per-site permission controls that let users manage where the assistant is allowed to act. Within the extension’s settings, users can access the Site Permissions section to view which domains are approved, revoke access to individual sites, or set defaults like “Always allow” or “Ask before acting.” These tools allow users to control when and where Claude can interact with webpage content on a domain-by-domain basis.

When visiting a new site with “Ask before acting” enabled, Claude will prompt you before performing any action. You’ll have the option to allow the action once, always allow on that site, or decline entirely. These prompts offer a safeguard against accidental activation on sensitive or unfamiliar websites, reinforcing user control over the assistant’s behavior.
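
A plausible shape for such a per-site store, sketched with the real chrome.storage API but with the data layout as a pure assumption, might look like:

    // Illustrative per-site permission store; chrome.storage is a real API,
    // but this data shape is an assumption, not Claude's actual settings.
    type SiteMode = "always-allow" | "ask-before-acting" | "blocked";

    async function setSiteMode(domain: string, mode: SiteMode): Promise<void> {
      const { sitePermissions = {} } = await chrome.storage.local.get("sitePermissions");
      sitePermissions[domain] = mode;
      await chrome.storage.local.set({ sitePermissions });
    }

    async function getSiteMode(domain: string): Promise<SiteMode> {
      const { sitePermissions = {} } = await chrome.storage.local.get("sitePermissions");
      return sitePermissions[domain] ?? "ask-before-acting"; // safest default
    }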

Anthropic’s blog also notes that Claude has built-in blocks for specific categories of high-risk websites, including adult content, financial services, and pirated content domains. This default restriction reduces the likelihood of Claude engaging in sensitive or potentially harmful environments without explicit approval. However, incognito behavior for Claude for Chrome is not extensively detailed in the current public documentation.

What admin controls exist for Claude for Chrome?

Claude’s enterprise ecosystem includes a variety of administrative controls, but the documentation for browser extension-specific governance remains limited. For enterprise deployments, Anthropic allows admins to configure tool permissions, restrict file access, and define Model Context Protocol (MCP) server connections. These controls are available within the admin dashboard and can be enforced across all users in a Team or Enterprise plan.

In addition, Anthropic offers a Compliance API that provides programmatic access to usage data and user interactions. This enables organizations to monitor activity, manage content visibility, and build custom governance workflows. Site-level permissions are also referenced in Claude’s browser extension documentation, where users can control access to specific domains via the Site Permissions panel.

What is not publicly documented is support for GPO (Group Policy Object) templates or MDM (Mobile Device Management) controls for centrally deploying or restricting the Claude extension across enterprise-managed browsers. Similarly, there is no mention of domain allow-lists or deny-lists enforced at the browser extension level, nor are update channels for extension versions, such as beta/stable release control, currently listed in official resources.

What auditability do we get?

Claude’s enterprise offerings include detailed audit log capabilities designed to support monitoring, compliance, and usage analysis. Organization owners or primary admins can export audit logs from the Admin dashboard by navigating to Data & Privacy → Export Logs. These logs cover a rolling 180-day period and include key metadata such as timestamps, IP addresses, user agents, device IDs, and event types like sign-ins, chat creation, and file uploads.

A separate guide titled “Creating Usage Analytics with Claude for Enterprise Audit Logs” explains how exported logs in CSV or JSON format can be used to build custom dashboards. These may include metrics like daily active users, files submitted, or assistant-triggered actions, allowing organizations to derive insights from usage patterns while meeting regulatory or operational reporting needs.
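
As a concrete example of one such metric, the TypeScript sketch below computes daily active users from a JSON export. The field names (timestamp, user_id, event_type) are assumptions based on the metadata listed above, not a documented schema.

    // Hedged sketch: daily active users from an exported JSON audit log.
    import { readFileSync } from "node:fs";

    interface AuditEvent {
      timestamp: string;  // ISO 8601 (assumed)
      user_id: string;    // assumed field name
      event_type: string; // e.g. "sign_in", "chat_created", "file_uploaded"
    }

    const events: AuditEvent[] = JSON.parse(readFileSync("audit-export.json", "utf8"));

    const dailyUsers = new Map<string, Set<string>>();
    for (const e of events) {
      const day = e.timestamp.slice(0, 10); // YYYY-MM-DD
      if (!dailyUsers.has(day)) dailyUsers.set(day, new Set());
      dailyUsers.get(day)!.add(e.user_id);
    }

    for (const [day, users] of [...dailyUsers.entries()].sort()) {
      console.log(`${day}: ${users.size} daily active users`);
    }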

Anthropic also references audit logging in its broader enterprise documentation, noting the availability of data export and retention configuration for customers in regulated industries. While the core logging system is described in detail, the documentation does not currently mention direct integrations with third-party platforms like SIEM systems, Microsoft Purview, or Splunk. It is unclear whether log delivery to those platforms is natively supported or requires manual configuration.

How is assistant behavior different from web app or API usage?

Claude behaves differently depending on how you access it: through the browser extension, the consumer web app, or an API/enterprise deployment. For consumer accounts on the Free, Pro, or Max plans, your data may be used to improve Claude’s models if you opt in via the “Help improve Claude” setting. This includes data from browser extension use as well as chat or coding sessions on the website. By contrast, if you use Claude in an enterprise or API context, such as through Claude for Work or Claude via Amazon Bedrock, then data is never used for model training unless your contract explicitly states otherwise.

Retention policies also differ by usage tier. When training is enabled, data from consumers can be retained for up to five years; when training is disabled, retention typically lasts around 30 days. Enterprise or API use cases may include shorter or zero-retention policies, depending on your agreement. Anthropic allows organizations to define specific data handling terms, such as limiting how long session inputs and outputs are stored on servers.

In terms of telemetry, prompts, responses, and context from the extension or app are sent to Anthropic’s cloud infrastructure for processing. These logs may be included in product telemetry unless the user or organization opts out. API documentation tends to offer greater detail on telemetry controls and data flow transparency than the consumer-facing experience.

What protections exist against prompt-injection and data exfiltration? 

Claude for Chrome includes multiple layers of built-in protection to reduce the risk of prompt injection and data exfiltration, especially in its current “research preview” state. Users have fine-grained control over which websites Claude can access through the Site Permissions settings. This allows you to grant or revoke access to specific sites or configure Claude to always ask for permission before acting on a new page.

Anthropic also enforces a system of confirmation prompts for sensitive actions. Before Claude submits forms, shares personal information, or takes action that could lead to data disclosure, such as making a purchase or posting content, the assistant requires you to explicitly confirm the instruction. This interaction model helps ensure that actions are not triggered by manipulated prompts or hidden instructions embedded in webpage content.

Further, Anthropic has implemented classifier-based protections to mitigate prompt injection attempts. The Claude team conducted red-teaming tests and trained detection models that screen for suspicious patterns like deceptive tab titles, hidden form fields, malicious DOM manipulations, or code disguised as user input. These classifiers help reduce the success rate of prompt injection attacks by filtering unsafe instructions before action.
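
Anthropic’s actual classifiers are trained models and are not public. As a toy illustration of the general idea only (screen page-derived text before treating it as instructions), consider:

    // Toy illustration only; Anthropic's real defenses are trained
    // classifiers, not regex lists.
    const SUSPICIOUS: RegExp[] = [
      /ignore (all )?previous instructions/i,
      /disregard your (system )?prompt/i,
      /<input[^>]*type=["']hidden["']/i, // hidden form fields carrying instructions
    ];

    function looksLikeInjection(pageText: string): boolean {
      return SUSPICIOUS.some((re) => re.test(pageText));
    }

    // Example: fall back to asking the user when a page looks suspicious.
    if (looksLikeInjection(document.body.innerHTML)) {
      console.warn("Possible prompt injection; reverting to ask-before-acting.");
    }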

Although these mechanisms are effective in many cases, the assistant’s documentation acknowledges that the protections are still evolving. There is no fully centralized enterprise-grade safe-mode dashboard yet, and the detailed breakdown of every mitigation layer has not been made public. As the product matures beyond preview, more controls may be introduced for regulated environments.

How do DLP and compliance apply?

Claude for Chrome supports basic compliance through contract-backed enterprise controls, but it does not include native DLP features like clipboard blocking or screenshot suppression. Instead, Anthropic recommends organizations pair the extension with endpoint-level or OS-level tools such as Microsoft Purview or third-party DLP agents if they require copy/paste or screen recording restrictions while using Claude.

Anthropic emphasizes data-boundary best practices across its products. This includes session isolation, limitations on telemetry when opted out, and safeguards to prevent inadvertent retention of sensitive data such as medical or financial records. These principles guide the Claude Enterprise architecture and inform Anthropic’s compliance framework with global standards such as GDPR.

Anthropic also supports HIPAA compliance, but only under strict conditions. A Business Associate Agreement (BAA) is available to commercial customers using HIPAA-eligible Claude APIs under a Zero Data Retention (ZDR) contract. The BAA does not apply to consumer accounts or general Claude for Chrome usage. Organizations must complete a legal and security review with Anthropic to enter into a BAA or similar arrangement.

In terms of GDPR and general data protection, Anthropic offers a Data Processing Addendum (DPA) to enterprise clients entering into commercial agreements. This DPA covers user rights, deletion, retention, and subprocessor transparency. While Claude for Chrome is not turnkey-ready for highly regulated industries by default, Anthropic provides contractual paths and optional controls that help enterprise customers manage compliance obligations through coordinated reviews and data processing terms.

Does Claude for Chrome run in restricted contexts?

Claude for Chrome allows users and admins to limit its access in restricted contexts, though it does not yet include automatic detection or lockdown modes for highly sensitive environments such as MFA flows or virtual desktop interfaces (VDI). The extension provides per-site permission settings under Settings → Site Permissions, where you can configure whether Claude is allowed to act, prompt for confirmation, or be blocked entirely on specific websites.

The assistant also uses built-in category filtering to block access to high-risk domains, including financial sites, adult content, and pirated material. This helps reduce potential data exposure in domains commonly associated with sensitive information or higher security requirements. However, it does not currently include specific triggers that would automatically disable Claude when visiting SSO login pages or internal dashboards unless such sites are manually restricted.

To block Claude in restricted contexts, users can revoke access through the extension settings, while enterprises may apply Chrome enterprise policies via GPO or MDM to prevent installation or to scope use to allow-listed domains. In practice, this often requires combining Claude’s settings with external browser-level security policies for full control in managed IT environments.
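
For reference, Chrome’s ExtensionSettings policy is a real, documented enterprise policy that can be delivered via GPO or MDM and expresses both outcomes. The sketch below builds its JSON value in TypeScript; the extension ID is a placeholder, not Claude’s real ID:

    // Sketch of Chrome's ExtensionSettings enterprise policy value.
    const extensionSettings = {
      aaaabbbbccccddddeeeeffffgggghhhh: {                      // placeholder extension ID
        installation_mode: "allowed",                          // "blocked" prevents installation
        runtime_blocked_hosts: ["*://*.internal.example.com"], // deny on intranet hosts
      },
    };

    // Serialize and deploy this object as the ExtensionSettings policy value.
    console.log(JSON.stringify(extensionSettings, null, 2));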

What’s the incident path?

Claude for Chrome provides a basic user-facing incident response mechanism through its reporting tools and support documentation. If you encounter errors, unsafe responses, or misuse, you can flag conversations directly within the Claude interface. You can also contact Anthropic Support using the guidance provided in their Help Center or initiate content removal from your device by adjusting extension or browser settings.

There is no published kill-switch or central rollback system dedicated specifically to Claude for Chrome. However, you can disable or uninstall the extension at any time through the browser's extension manager. You may also revoke Claude’s access to specific websites from the Site Permissions menu if you're concerned about behavior on certain domains. These tools functionally serve as immediate containment or rollback options in the absence of a centralized kill mechanism.

As of now, Anthropic has not disclosed any formal Service Level Agreements (SLAs) covering Claude for Chrome. No guaranteed response times or issue resolution timelines are published in official documentation. However, Claude’s release notes are actively updated with bug fixes, feature enhancements, and version-level changes, which serve as an indirect form of ongoing support transparency.

In the absence of formal SLAs, users can still track incident resolution progress by monitoring changelogs or communicating with Anthropic support. The available controls, while limited, offer a path for escalating issues and regaining control over the assistant’s actions if needed.

Where’s the changelog and ‘last reviewed’ date for this assistant’s behavior/policy?

Anthropic provides release notes for Claude for Chrome through a dedicated changelog page, which includes clearly dated entries for new features, improvements, and bug fixes. These entries may also reference model updates (e.g., Sonnet 4.5 or Haiku 4.5) tied to browser extension capabilities. This changelog represents the best public source for tracking changes to assistant behavior over time.

However, the changelog does not centralize all policy or permissions changes. Behavioral documentation for Claude, including how it interacts with browser content, handles privacy, and processes inputs, is spread across multiple support articles and blog posts. Each of these pages may include its own “last updated” timestamp, which indicates recent revisions but does not offer a unified “last reviewed” date across all functionality.

Anthropic’s Terms of Service and Privacy Policies for consumer and enterprise users typically include an “Updated on” label, helping users track changes to retention, training, or usage terms. While helpful, these legal documents are not specific to the extension’s behavior in real-time web environments.

In summary, users can rely on the changelog and dated support documents to track how Claude for Chrome evolves. Still, there is no single dashboard or consolidated policy timeline that summarizes all behavior and permission changes. Staying informed requires monitoring multiple locations, including the release notes and Help Center.