AI Engine
For personal Google accounts using the Gemini app, your chats and uploads may be used to improve Google services, including training AI models, unless you opt out. This is controlled through your Web & App Activity / “Help improve Gemini Apps” settings. When activity tracking is on, Google may use your interactions to enhance model performance. When activity tracking is off, the data from your chats is not used for model training, and Google states that such conversations may be stored only briefly and treated as temporary chats, which are not used for training.
However, this changes in enterprise, education, and Workspace environments. In Google Workspace (including paid tiers, organizational accounts, and Google Workspace for Education), Google states that prompts, responses, and associated data from Gemini for Workspace are not used to train Google’s foundation models unless the organization explicitly opts in. That means enterprise and school administrators control whether data is allowed for training, and by default, it is not used.
The retention period for Gemini conversations depends on your account type and whether activity saving is enabled. For personal Google accounts, if you turn “Help Improve Gemini Apps” / Web & App Activity (Keep Activity) off, the Gemini Apps Privacy Hub explains that your chats may be stored only temporarily (typically up to around 72 hours) for service reliability and safety checks, and they are not used to train models in this state. If you turn activity saving on, your chat history is stored and can be set to auto-delete after 3, 18, or 36 months, or kept indefinitely, depending on your chosen Web & App Activity auto-delete settings. In this case, your chats may be used to improve Google services, including AI model training, unless you opt out.
For Google Workspace / Enterprise / Education users, data handling is different. The Generative AI in Google Workspace Privacy documentation states that Gemini in Workspace does not use prompts or responses to train Google’s foundation models by default, unless an organization explicitly opts in. Additionally, retention is controlled by the Workspace admin. If conversation history is turned off, responses are generally not retained beyond the session (other than temporary short-term storage for safety and reliability, similar to the 72-hour window). If conversation history is on, retention follows the organization-level auto-delete policy, typically 18 months by default, but adjustable to 3 or 36 months.
For personal Google accounts, you can prevent future chats from being used to improve Google’s models by turning off the “Keep Activity” (also referred to as Gemini Apps Activity / Web & App Activity) setting. When this setting is off, Google states that your chats are not used to train models, although they may still be stored temporarily (typically up to ~72 hours) for safety and operational reliability. However, turning this setting off limits how your history is saved: chats generally will not be stored in your visible conversation history for later reference, because activity saving is what powers persistent history. If you want to remove previously stored chats from being used for training, you can also delete past activity, which revokes their eligibility for model improvement.
It is important to note that disabling “Keep Activity” does not mean chats disappear instantly; short-term retention still applies for abuse prevention and service quality review. Additionally, some personalization features may decrease, since Gemini no longer remembers past context across sessions once activity saving is off.
For Google Workspace / Enterprise / Education accounts, these settings work differently: prompts and responses in Workspace Gemini are not used for model training by default, and the organization administrator controls retention and history settings. Individual users in Workspace cannot override admin-imposed data policies.
The policies for Gemini differ significantly depending on whether you are using a personal Google account or a managed Google Workspace account. In the personal (consumer) version of Gemini, chats, uploads, and other interactions can be used to improve Google services and train underlying generative models by default, unless you manually turn off “Keep Activity”. When activity saving is on, your history is retained according to your own auto-delete setting, which you can choose to keep for 3, 18, or 36 months, with 18 months as a common default.
In contrast, Google Workspace, work, school, and government accounts operate under a different privacy and data-use model. The Generative AI in Google Workspace Privacy Hub specifies that prompts and responses in Gemini for Workspace are not used to train Google’s foundation models unless an organization explicitly opts in. Additionally, retention and chat-history controls in enterprise environments are set by the organization’s administrator, not by the individual user. If conversation history is enabled, retention typically follows the organization’s auto-delete schedule (again, 3, 18, or 36 months, with 18 months often the default). If conversation history is disabled, chats generally are not saved and are retained only temporarily (usually up to ~72 hours) for operational safety.
Google also notes that Workspace accounts provide enhanced data protection assurances, including stronger restrictions on human review and guarantees that customer data is not used to train other customers’ models. Users can typically identify that enterprise-grade controls are active when they see features such as a shield icon in the interface or when using Gemini via a managed work or school account.
For personal Google accounts, you can manage how long Gemini keeps your chat history by adjusting your Activity / Gemini Apps Activity settings. In the Gemini app or through Google account activity controls, you can choose to auto-delete activity after 3, 18, or 36 months, with 18 months commonly set as the default, or turn activity saving off entirely. When activity saving is turned off, chats may no longer appear in your visible history but can still be held briefly (typically up to ~72 hours) for safety and operational purposes. You can also manually delete individual chats or all Gemini activity, and you may export your data at any time using Google Takeout, which allows you to download your Gemini-related account data.
For Google Workspace / Work / School / Government accounts, these controls are handled at the organization (admin) level, not the individual user. The Workspace admin can open the Admin Console → Generative AI → Gemini and choose whether conversation history is enabled, and if enabled, specify the conversation retention window (3, 18, or 36 months, with 18 months typically the default). If conversation history is disabled, chats are generally not stored beyond the temporary operational window. Export and deletion capabilities for Workspace data also depend on the admin’s configurations, and data export is typically done either through Workspace data export tools or Takeout, if permitted by the organization’s access policies.
In short, personal users control their own retention and deletion, while enterprise retention and export policies are controlled by the organization, and cannot be overridden by end users.
Yes, Gemini used through the API is governed by different data-use and training terms than the consumer Gemini app. When using Gemini in the consumer app, chats and uploads may be used to improve Google services and train generative models by default, unless you turn off the Keep Activity / Gemini Apps Activity setting. In contrast, when using Gemini via the API, the policies depend on the billing and deployment context. The Gemini API Additional Terms of Service and Google AI developer documentation explain that API usage is covered under separate terms, and when Gemini is used in a paid, project-based, or enterprise environment, prompts and outputs are not used to train models unless there is explicit permission to do so. Additionally, for Google Cloud-based enterprise deployments (for example, using Gemini through Google Cloud services, Vertex AI, or Workspace integrations under covered agreements), Google states that Gemini does not use customer prompts or responses as model training data, and customer data is processed under cloud and organizational privacy controls rather than consumer-app training defaults.
However, some free, non-billed developer or “try in browser/AI Studio” usage may behave more like the consumer version, where data can be used for model-improvement unless the environment is tied to a billed project or an enterprise data governance boundary. Because not all public documentation covers every possible account/billing tier combination, users relying on the API, especially in business, research, compliance, or production contexts, should review the Gemini API Terms, the Google Cloud privacy terms, or the Workspace/enterprise agreement associated with their project.
Google’s documentation (Gemini Workspace and Privacy documents) does not currently address prompt injection or jailbreak attacks as part of its Gemini security posture. These attacks occur when hidden or manipulated prompts are embedded in user-visible content (like emails or documents) and then interpreted by the model in unintended ways.
Because prompt injection relies on the model’s behavior rather than malicious file attachments or links, it often bypasses traditional email filtering, endpoint protection, and DLP tools. In tools like Gemini’s Gmail summarizer, users may be exposed to altered or misleading summaries without any warning or visibility.
Enterprises should evaluate whether current monitoring and policy controls are sufficient to detect or mitigate such behaviors, especially when they occur inside trusted applications like Gmail.