Perplexity Browser Assistant FAQs

Does the Perplexity Extension train on what it reads by default?

For individual users on Free or Pro plans, Perplexity enables data use for model training by default. This means your interactions, such as prompts and responses, may be used to train its AI models unless you opt out. The setting responsible for this, labelled “AI Data Retention,” is turned on by default in these accounts and governs whether your session data is used to improve the product.

You can opt out by toggling off the AI Data Retention option in your Account Settings. Once you disable it, any new data you generate is no longer eligible for training. Note, however, that interaction data collected before you opted out may already have been included in training datasets.

For enterprise and organizational customers, Perplexity makes it clear that model training is never performed on user data. These contracts operate under enterprise-specific privacy commitments, which exclude training by default and offer enhanced data handling assurances.

Exactly what can the Perplexity Extension access on a page?

Perplexity's documentation confirms that users can upload various file types, including PDFs, DOCX, TXT, code files, images, audio, and video, via the extension or interface. Once submitted, these uploads are processed by the assistant, which can extract and respond based on the contents of those files. This functionality forms the core of Perplexity’s file analysis capability.

However, the documentation does not state that the extension automatically scrapes or interacts with form fields, iframes, shadow DOM elements, or locally stored files on a web page. Perplexity appears to restrict its data access to explicitly submitted content and doesn’t infer or extract contextual data unless the user provides it directly.

For example, when a user uploads a PDF, the assistant can analyse the full document content. But if you’re simply viewing a page with embedded forms or interactive DOM elements, there’s no indication in public resources that the extension reads or collects from those elements in the background.

Can the Perplexity Extension take actions?

Current documentation does not confirm that the Perplexity Extension can perform active page interactions such as clicking, typing, submitting forms, or reading clipboard contents. The extension’s permissions, as described in its store listing, allow access to “Your data for perplexity.ai,” which implies the ability to read site content, not to manipulate it.

Perplexity's help center and support articles focus instead on supported manual actions, like file uploads. These can be done through the interface, where users explicitly provide documents or text for analysis, but there is no mention of automated or background actions like clipboard monitoring or dynamic DOM interaction.

As a result, users should assume the extension has no built-in or authorized functionality to drive interactions on web pages, or to monitor content such as form entries or clipboard data, unless that content is explicitly provided through the UI.

What metadata is captured by the Perplexity Extension?

Perplexity states that it collects certain site and device interaction metadata, such as usage data and search queries, to improve service functionality, support feature development, and troubleshoot issues. However, the documentation does not provide a complete breakdown of the specific metadata fields that are captured.

While the help center article mentions aggregated and anonymized usage data for enterprise customers, and more general data for personal users, it stops short of confirming whether detailed metadata such as page URLs, selectors, clipboard contents, timestamps, or account/user IDs are included in logs.

As of now, there is no public-facing specification that enumerates each metadata category collected through the extension or browser interaction. Without this transparency, it’s unclear which exact metadata points are processed beyond general usage metrics.

How long does Perplexity keep page/context and chat logs, and where are they stored? 

Perplexity retains user data based on the account type and activity status. For personal accounts, chat Threads are stored in your Library for as long as your account remains active. Users may delete individual Threads manually at any time, and if the account itself is deleted, Perplexity states that all associated personal data is removed from its systems within approximately 30 days. For users who access the service without signing in, conversations are stored as anonymous Threads and automatically deleted after 14 days.

Enterprise, Enterprise Pro, and organizational accounts follow a different retention policy, where data storage is governed by administrative settings. In these cases, the organization's administrator can configure a specific retention window, commonly set to 30 days or a custom value. Once the set duration is reached, the corresponding search or chat history becomes inaccessible and is permanently deleted within about 7 days.

This separation ensures that personal users manage their own data lifecycle, while enterprise customers have centralized control over data retention in line with internal policies. No additional information is publicly disclosed about physical storage locations or server geography.

Where are retention, export, and delete controls for the Perplexity Extension? 

For Enterprise / Work / Organization accounts, Perplexity delegates data retention and deletion control to administrators. Admins have the ability to define how long Threads are stored and to enforce file deletion policies. Perplexity documentation states that uploaded files in Enterprise accounts are deleted automatically after seven days unless otherwise configured. These controls are accessible through organizational settings, allowing for granular lifecycle governance.

In the case of personal users, Perplexity allows deletion of individual Threads from the Library as well as full account deletion. If a user deletes their account, all personal data is removed from Perplexity’s systems within approximately 30 days. However, there is limited public documentation about whether personal users can export their entire data history or configure long-term retention preferences beyond manual deletion.

Users who need full access to stored data are advised to either use account-level deletion options or contact Perplexity support for assistance with export or removal requests. Current public-facing resources do not describe a formal self-serve export flow.

How do I opt out of model training and still retain organization history/logs? 

Personal users can prevent their activity from being used for model training by disabling the “AI Data Retention” setting within Account Settings. When this toggle is turned off, future prompts and responses are excluded from Perplexity’s model improvement processes. Opting out does not delete your previously created Threads or uploads, so you can continue using the product with full access to your history unless you choose to remove it manually.

For enterprise and organizational users, Perplexity makes clear that training is disabled by default. Under its enterprise privacy policy, data submitted through enterprise plans is never used to train or fine-tune models. This policy applies to all prompts, Threads, and uploads, and does not require any opt-out action by the user or administrator.

In addition to training exclusions, enterprise admins can configure data retention settings to preserve or delete logs based on operational needs. For example, administrators might retain Threads for 30 days before triggering permanent deletion. These configurations enable organizations to preserve logs without impacting the model training boundary.

What changes for Enterprise/Work/Gov vs personal accounts?

Perplexity applies different policies to enterprise and personal accounts with respect to model training, data retention, and administrative control. For training, enterprise and organizational data are never used to train or fine-tune models. This restriction covers all prompts, uploads, and usage interactions, and is enforced by default. On the other hand, Free and Pro consumer accounts have model training enabled by default, requiring users to manually disable it through the “AI Data Retention” setting if they wish to opt out.

When it comes to retention, Enterprise plans support configurable retention windows for Threads and uploads. Common durations include 7, 30, or 90 days, and certain features like granular retention control may require at least 50 seats. Files attached to Threads are automatically deleted after seven days unless otherwise specified. By contrast, personal user data is retained indefinitely while the account remains active, and is purged within 30 days of account deletion.

Enterprise accounts also include advanced permission management through an admin-level dashboard. This allows for configuration of upload restrictions, retention policies, export settings, and sharing permissions, often aligning with compliance frameworks such as GDPR or internal security standards. Personal accounts are limited to user-level control, such as deleting Threads or toggling training preferences.

What leaves the device when using the Perplexity Extension?

Perplexity’s architecture relies on cloud-based processing rather than performing operations locally on the user’s device. Whether using the browser extension, web interface, or Enterprise Pro, all prompts, uploaded files, and contextual inputs are sent to Perplexity’s servers for processing and response generation. There is no indication in available documentation that any computation occurs directly on the user’s machine.

The types of data transmitted from the device include user-entered queries, uploaded files and their associated metadata, page context shared via the Lens feature, and technical metadata such as device type, browser information, referrer URLs, and diagnostic logs. Perplexity may also collect account identifiers like user ID and IP address for session management and service analytics. For enterprise users, this data is aggregated and anonymized, and it is not used for model training under current policies.

While Perplexity confirms the flow of data to its servers, it does not publish a full list of network endpoints or domains used by the extension or underlying services. Known domains include perplexity.ai and related API endpoints, but third-party infrastructure providers and analytics tools are not listed in detail. The Lens feature, which extracts content from user-provided URLs, sends those URLs to Perplexity's backend for summarization.

How do we scope or restrict site access in the Perplexity Extension?

There is currently no publicly documented support for per-site allow or deny lists, domain-level access controls, or administrative scoping within the Perplexity Chrome extension. The extension does not appear to offer built-in configuration for limiting its operation to specific domains or excluding sensitive web environments by default.

With respect to Incognito mode, Perplexity’s documentation states that searches conducted in private browsing sessions are not saved, and no browsing history or download activity is collected in that context. This policy applies to both the browser-based Comet assistant and, by extension, the Chrome plugin, although the extension-specific behavior is not explicitly restated. For users concerned about session privacy, Incognito mode offers a temporary layer of protection by design.

Perplexity’s API-based tools do offer advanced content filtering via domain configuration for developers, but these controls are not exposed in the standard browser extension interface. Organizations seeking strict content governance or scoped deployment should consider implementing browser-level enforcement via Chrome Enterprise policies or other admin tooling outside of Perplexity's native environment.
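For teams that do enforce scoping at the browser level, Chrome’s ExtensionSettings policy is one option. The sketch below is illustrative only: the extension ID and blocked domains are placeholders, and the runtime_blocked_hosts key prevents the named extension from running on the listed hosts. Verify the policy schema against Google’s Chrome Enterprise documentation before deploying.

    {
      "ExtensionSettings": {
        "<perplexity-extension-id>": {
          "installation_mode": "allowed",
          "runtime_blocked_hosts": [
            "*://*.internal.example.com",
            "*://hr.example.com"
          ]
        }
      }
    }

Deployed via Chrome Enterprise (GPO, Intune, or the Google Admin console), a policy like this keeps the extension installed while blocking it from reading or injecting into the listed domains.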

What admin controls exist?

Perplexity Enterprise Pro includes several organization-level administrative controls through its Security Hub, allowing enterprise IT teams to configure and manage the assistant’s behavior across users. Available tools include connector permissions for integrations like Google Drive and Notion, visibility and sharing rules for internal and external content, and customizable data retention policies, typically available for enterprise accounts with 50 or more seats. These controls give admins operational oversight while aligning with internal compliance frameworks.

However, certain controls commonly requested in enterprise settings are not currently documented. These include the use of GPO (Group Policy Objects) or MDM (Mobile Device Management) systems to govern browser extension deployment, support for domain blocklists or allowlists, and management of update channels (such as switching between stable or beta releases or controlling auto-updates). As of now, these features are not publicly described for either the browser extension or the broader Enterprise product suite.

Organizations requiring centralized deployment governance may need to use external browser-level admin tools, such as Chrome's GPO templates or Edge’s enterprise settings, to apply restrictions or enforce installation policies for the Perplexity extension. These tools operate independently of Perplexity's internal controls but can help IT teams manage access in regulated environments.
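As a concrete illustration, a Windows GPO or registry-based deployment can block installation of a specific extension ID through Chrome’s standard policy keys. The extension ID below is a placeholder; look it up in the relevant web store listing before use.

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallBlocklist]
    "1"="<perplexity-extension-id>"

Edge honors the same policy names under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge. These keys govern only installation; runtime scoping requires the ExtensionSettings policy sketched earlier.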

What auditability do we get?

Perplexity Enterprise provides audit logging capabilities designed for organizational transparency and compliance. Enterprise admins can access detailed audit logs that record user activity and system events, including timestamps, user emails, IP addresses, browser and device metadata, and, in some cases, file upload details. These logs offer visibility into how the assistant is used across an organization and help enforce accountability.

Organizations with at least 50 seats or an “Enterprise Max” license can enable audit logging through the admin dashboard. Within the settings menu, admins can activate logging and set up a webhook endpoint to receive logs in real time. This webhook-based export enables integration with external systems and allows the organization to ingest logs into third-party security tools or compliance dashboards.
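The webhook payload format is not publicly specified in detail, so any receiver must be written against whatever schema your tenant actually emits. The following is a minimal Python sketch of a receiving endpoint; the field names it reads (timestamp, user_email, event_type) are assumptions for illustration, not a documented schema.

    # Minimal webhook receiver sketch for audit-log export (stdlib only).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AuditLogHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            try:
                event = json.loads(self.rfile.read(length))
            except json.JSONDecodeError:
                self.send_response(400)
                self.end_headers()
                return
            # Forward to a SIEM or durable store here; these field
            # names are assumptions, not a documented schema.
            print(event.get("timestamp"), event.get("user_email"),
                  event.get("event_type"))
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), AuditLogHandler).serve_forever()

In production, the endpoint would also need TLS termination and request authentication (for example, a shared-secret header), since audit logs contain user emails and IP addresses.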

Perplexity also notes that its audit logs are compatible with SIEM platforms. According to the company’s security documentation, logs are processed through systems like Panther SIEM and can include AWS CloudTrail data and application-level events. While full documentation for all integrations isn’t publicly listed, the architecture supports enterprise-grade log ingestion and review workflows.

How does assistant behavior differ across web/app and API usage?

The assistant’s behavior in Perplexity varies significantly depending on whether it is accessed through the consumer web app, enterprise interface, or API. For API usage, including the Sonar API, Perplexity applies a Zero Data Retention Policy, which means that prompts and responses are not stored after processing. In addition, content submitted through the API is not used to train or fine-tune models, and telemetry is kept minimal to support privacy-sensitive or enterprise-integrated workflows.
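For orientation, the Sonar API is an OpenAI-style chat-completions endpoint. The sketch below reflects Perplexity’s public API shape at the time of writing (endpoint URL, bearer auth, model name “sonar”); treat the specifics as assumptions to verify against the current API reference.

    # Illustrative Sonar API call; endpoint and model name should be
    # verified against Perplexity's current API documentation.
    import os
    import requests

    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",
            "messages": [{"role": "user",
                          "content": "Summarize your data retention policy."}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Under the Zero Data Retention policy described above, a request/response pair like this is processed and returned but, per Perplexity’s stated policy, not stored or used for training.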

When using the assistant via the standard web or app interface in a personal account, data retention and training policies differ. Chat logs and interactions are retained while the account is active, and training is enabled by default unless the user disables it via the “AI Data Retention” setting. In these environments, telemetry includes interaction logs and other derived signals that may be used to improve model performance unless opted out.

Enterprise environments introduce further distinctions. Retention is fully configurable by the organization’s administrator, who can set limits such as 7, 30, or 90 days for data storage. Enterprise data is never used for model training by default, and any telemetry collected is limited to operational metrics necessary for service monitoring and compliance, not for training purposes.

What protections exist against prompt injection and data exfiltration?

Perplexity’s assistant, specifically in its Comet browser tool, incorporates several safeguards to reduce the risk of prompt injection and unauthorised data exfiltration. These include tool-specific execution boundaries that prevent the assistant from performing unintended actions. Each tool, such as the summarizer or Q&A module, is designed to only perform functions it is explicitly allowed to, limiting how any manipulated prompt might affect behavior.

To further protect users, Perplexity requires explicit confirmation before executing sensitive actions. When Comet is prompted to analyse selected page content or run a summarization tool, it will ask for user confirmation before proceeding. This ensures that hidden or manipulated prompts embedded in page content cannot silently trigger tool behaviors without the user’s knowledge.

Comet also includes a structured prompt interpretation layer, which filters and interprets user inputs before executing any action. This design helps separate the visible user interface from the tool execution logic, reducing the potential for prompt abuse or data leakage from embedded or adversarial content.

Together, these layered defences are intended to mitigate prompt injection risks and prevent unintentional assistant actions, especially in environments where users interact with complex or untrusted page content.

How do DLP and compliance apply?

Perplexity Enterprise Pro is built with several compliance and data protection features that address common enterprise concerns such as DLP, auditability, and regulatory alignment. It has undergone a SOC 2 Type II audit, indicating that it adheres to secure operational practices and data handling standards over time. Additionally, Perplexity has completed a HIPAA gap assessment to evaluate its readiness for handling health-related data.

To support data loss prevention (DLP), enterprise admins can control file access, sharing rules, and connector permissions for tools such as Google Drive and OneDrive. These controls help limit the distribution of sensitive outputs and govern which users or groups can upload and share data within the platform. Admin dashboards allow for the enforcement of content-sharing boundaries that align with organizational policies.

Perplexity also states that it complies with GDPR, including lawful bases for data processing and user rights management. However, full public documentation on its Data Processing Agreements (DPAs), sub-processor disclosures, or cross-border transfer mechanisms like SCCs is not currently available. Similarly, certifications such as ISO 27001 are not referenced in available documentation.

It’s important to note that these features primarily apply to enterprise accounts. Consumer users (Free or Pro) have limited control over data-sharing settings, and their data may be used for training unless they explicitly opt out.

Does the assistant run in restricted contexts?

Perplexity Enterprise Pro offers some administrative control over where the assistant can be used, but specific enforcement mechanisms for restricted browser contexts such as SSO, MFA, or VDI environments are not comprehensively documented. While tools like Comet operate with user-defined access levels, there is no indication that they automatically disable themselves on high-sensitivity web environments by default.

Administrators can define user roles and restrict access to specific tools, datasets, or connectors. This allows for scoped access within an organization, providing a layer of protection in controlled environments. In addition, Perplexity supports both local and remote MCP (Model Context Protocol) connectors, which can help delineate where and how data is accessed, whether locally or via cloud-based integrations.

Although there is no built-in sandboxing or explicit detection of environments such as HRMS dashboards, ERP systems, or SSO pages, users can limit tracking in Incognito mode, and organizations are encouraged to apply browser-level controls or domain filtering to manage extension behavior.

What’s the incident path?

To report an issue related to the Perplexity Extension or any other Perplexity product, users can contact support through email at support@perplexity.ai or by using the in-app Intercom chat feature. When submitting a report, it’s recommended to include your account email, browser or device details, and a clear description of the issue, along with screenshots or videos if possible, to assist with troubleshooting.

For security-related issues, Perplexity provides a dedicated vulnerability disclosure channel, often referred to as the Security Center or VDP, which is accessible via the Help Center. This allows researchers or users to report bugs or vulnerabilities responsibly and receive follow-up from the appropriate team.

There are no publicly documented Service Level Agreements (SLAs) defining specific response or resolution times for reported issues. Language in the documentation generally commits to responding “as soon as possible,” without stating formal deadlines or guarantees. However, enterprise customers do have access to stronger incident controls.

For Enterprise users, Perplexity includes a rollback or kill-switch capability via the Security Hub. Admins can disable features, restrict connector access, or globally shut down functions in real time to mitigate potential risks while the issue is under review.

Where’s the changelog and “last reviewed” date for this assistant’s behavior/policy?

Perplexity displays “last updated” or “last reviewed” dates across several of its key policy and help center documents, but it does not maintain a centralized changelog for the browser extension or assistant behavior. This means version-specific updates to permissions, UI behavior, or data-handling rules are not tracked publicly in a structured release log.

Some documents, such as the Data Collection at Perplexity page, show only a relative timestamp, such as “Updated over 2 weeks ago.” The Comet Privacy Notice is explicitly dated, showing “Last updated: July 08, 2025,” and outlines how the assistant interacts with user content in the browser. Similarly, the general Privacy Policy is dated October 31, 2025, providing a reference point for its latest revision.

Despite these visible timestamps, there is currently no dedicated page listing historical changes, behavioral updates, or patch notes for the assistant or Chrome extension. As a result, users looking for update history must rely on revisiting documentation pages manually or subscribing to policy update notices where available.