It depends on your account type and settings. For consumer users (Free, Pro, Max), Anthropic will use your Claude chats and coding sessions to train future models only if you opt in via the “Help improve Claude” toggle. If the toggle is left on or unchanged after the consent prompt, new and resumed sessions may be used for training.
If you disable the setting, your chats won’t be used for training, except in limited cases such as safety reviews or when you explicitly submit feedback. Opting in allows Anthropic to retain data for up to five years; opting out limits retention to about 30 days. Making this choice became mandatory for all consumer users in 2025, and it is now part of the sign-up flow for new accounts.
For enterprise, commercial, and API users, data is not used for model training by default. When Claude is deployed through the API, Amazon Bedrock, or Google Cloud Vertex AI, it operates under customer-defined data terms and typically retains inputs for 30 days or less, unless training use is explicitly permitted.
Anthropic states that any training data is de-identified, never sold, and never used for advertising or profiling. If you resume an old session after opting in, that session may become eligible for training under the new policy.
In short, consumer users are included only if they opt in, while enterprise and API users are excluded unless they explicitly agree otherwise.
Retention depends on your account type and privacy settings. For individual users (Free, Pro, Max), if you opt in to allow chats or coding sessions to be used for model improvement, Anthropic may retain that data for up to five years for new or resumed sessions.
If you opt out of training use, session data is typically retained for about 30 days, measured from when you delete a conversation or from when a session goes inactive.
For enterprise, team, or API usage, the default retention window is 30 days for both inputs and outputs, unless a separate agreement, such as a zero-data retention agreement, is in place.
More broadly, Anthropic says it retains personal data “as long as reasonably necessary” under the terms outlined in its Privacy Policy. There’s no single retention rule across all use cases; it depends on your tier and configuration.
Yes, you can prevent your chats and coding sessions from being used for future model training by Anthropic and still retain access to your chat history in your account. When you sign up, or when the update prompt appears, you’ll see a toggle labelled something like “Help improve Claude” in the Privacy Settings. Switching this toggle off opts you out of having new or resumed chats and sessions used for model training.
After opting out, your past history remains accessible in your account and isn’t automatically purged; it simply means that, going forward, those chats won’t be included in training data sets, apart from the limited exceptions the policy still allows (for example, chats flagged for safety review).
Do note: the setting covers only future or resumed chats. If training remains enabled and you later reopen an older conversation (thus “resuming” it), that chat may become eligible for training under the policy.
There are several key differences between how Claude AI operates under enterprise/commercial/government terms versus standard consumer plans (Free, Pro, Max).
In enterprise or API use, Anthropic acts as a data processor, while the organization (the customer) serves as the data controller, meaning the organization governs how user data is handled and bears responsibility under applicable laws and internal policies.
Commercial customers also retain ownership of their outputs, and Anthropic includes indemnification protections, such as coverage against third-party copyright claims tied to authorized use of Claude’s responses. These terms aren’t offered under consumer plans.
Enterprise agreements include stricter compliance and legal frameworks, such as support for HIPAA-eligible use via a Business Associate Agreement (BAA), which is only available under commercial contracts.
Data usage, model training, and retention policies also differ significantly. Consumer accounts may allow training (if opted in), with retention windows up to five years. Enterprise defaults to no model training, shorter retention windows (typically ~30 days), and tighter control over input handling, unless otherwise agreed in writing.
Commercial terms often include additional contractual restrictions, such as limits on creating competing products, reverse-engineering, resale, or redistribution of the service.
Claude AI gives you direct access to tools for exporting, deleting, and managing your data, but the exact controls differ slightly by account type.
Exporting your data:
For consumer users on the web or desktop app, go to Settings → Privacy (via your avatar or initials, bottom-left) and select “Export data”. This will include your chat history and profile metadata. For Team or Enterprise plans, the export path is Admin Settings → Data and Privacy → Export Data. Once requested, you’ll receive a download link (usually via email) that expires after a set window, typically 24 hours.
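If you want to inspect an export programmatically, the sketch below is one way to do it. It assumes the downloaded archive is a ZIP containing a conversations.json file whose entries carry name, created_at, and chat_messages fields; the exact filename and layout aren’t formally documented and may change over time.

```python
import json
import zipfile

# Minimal sketch: skim a downloaded Claude data export.
# Assumption: the archive holds a top-level conversations.json whose entries
# include "name", "created_at", and "chat_messages". Adjust to the layout
# you actually receive.
EXPORT_PATH = "claude-export.zip"  # hypothetical filename of the downloaded archive

with zipfile.ZipFile(EXPORT_PATH) as archive:
    with archive.open("conversations.json") as fh:
        conversations = json.load(fh)

print(f"{len(conversations)} conversations in the export")
for convo in conversations[:5]:
    title = convo.get("name") or "(untitled)"
    created = convo.get("created_at", "unknown date")
    messages = convo.get("chat_messages", [])
    print(f"- {title} | created {created} | {len(messages)} messages")
```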
Deleting chat history:
You can delete individual threads by opening a conversation, clicking its name or top bar, and selecting Delete. For bulk deletion, go to your Chats list, select multiple conversations, and choose Delete Selected. Anthropic notes that deleted consumer conversations are fully removed from its backend within 30 days.
Deleting your account:
To permanently delete your account and all associated data, go to Account Settings → Account → Delete Account. This action is typically irreversible and will remove all stored chats.
Training opt-out (linked to retention):
Under Settings → Privacy, you can toggle off the “Help improve Claude” option to prevent your chats from being used for model training. This also reduces the length of time your data is retained.
For enterprise and API users:
Export, deletion, and retention controls may be governed by contractual terms and are not fully detailed in public documentation. Customers should refer to their agreements or admin dashboards for specifics.
Yes, Claude’s API usage is governed by different data handling, training, and retention policies than the consumer app.
When used via API or under enterprise, commercial, or government contracts, Anthropic acts as a data processor, not a data controller. The organization using Claude is the customer, and it defines how data is processed under its own policies and agreements.
In these environments, including Claude API, Team/Enterprise plans, government deployments, and integrations like Amazon Bedrock or Google Cloud Vertex AI, user data (prompts, outputs, code) is not used for model training by default. Training is only allowed if the customer explicitly opts into a Development Partner Program or gives written permission.
Data retention is also stricter: the default window is 30 days, with an available zero-data-retention configuration for high-sensitivity use cases.
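To make the API path concrete, here is a minimal sketch of a direct call using Anthropic’s Python SDK; the model ID is only an example, and nothing about the request itself opts the data into training. The inputs and outputs fall under the commercial retention terms described above.

```python
import anthropic  # official Python SDK: pip install anthropic

# Minimal sketch of a direct Claude API call. Under the Commercial Terms,
# this prompt and completion are not used for training by default and sit
# inside the ~30-day (or zero-data-retention, if contracted) window.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model ID; substitute your own
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize our data-retention options."}],
)
print(response.content[0].text)
```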
By contrast, the consumer plans (Free, Pro, Max) fall under different terms: unless you opt out, user data from chats and coding sessions may be used for training and retained for longer periods (up to five years if opted in).
Anthropic has explicitly stated that these extended training and retention policies do not apply to API, enterprise, or government users.
In short, if you're using Claude via the API or under a commercial/government contract, you're working under a different, more controlled set of terms. For full clarity, organizations should review the Commercial Terms of Service and their specific contractual agreements with Anthropic.
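For the cloud-marketplace deployments mentioned above, requests run through the customer’s own cloud account rather than Anthropic’s consumer service. A minimal sketch of calling Claude through Amazon Bedrock with boto3 might look like the following; the model ID and region are examples, and your account must already have access to the model.

```python
import boto3

# Minimal sketch: reach Claude via Amazon Bedrock instead of the Anthropic API.
# Prompts and outputs here are handled under the customer's cloud agreement,
# not Anthropic's consumer terms.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example Bedrock model ID
    messages=[{"role": "user", "content": [{"text": "What retention terms apply to this request?"}]}],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```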