Gemini can read your emails, look at your photos, and search your YouTube and Search history. But should you let it?
Google has enabled a feature in its ecosystem that allows its AI assistant Gemini to read your emails, see your Photos, and view your YouTube and Search history to give contextual responses.
It’s an opt-in feature, meaning it’s turned off by default. It’s rolling out first to paid subscribers on the Google AI Plus plan and above, with free users to follow.
So should you switch on the new feature? Here’s what you need to know.

What does Personal Intelligence do?
Personal Intelligence connects Gmail, Google Photos, YouTube, and Search to Gemini with your permission.
The additional context allows Gemini to answer questions that span multiple apps. For example, you could ask, “What is on the itinerary for my trip to Melbourne?” and it would pull together the arrangements from your email, any restaurants you looked up in Search, and related images or screenshots from Photos.
The feature layers additional reasoning over the connected sources and is designed to retrieve specific details from your personal history.
Opting in to Personal Intelligence is done on a per-app basis. For example, you could connect only your Search history and Gmail, leaving your Photos and YouTube history untouched.
It’s available on Android, iOS, and web platforms.
Gemini Personal Intelligence and Privacy
An important thing to note up front is that this feature is turned off by default. You have to manually turn it on before your accounts are affected.
Google says your emails and photos are not used to train the AI model and are only used to answer specific requests.
The company admits to using “limited information” from your prompts and responses to improve functionality after personal data is filtered out.
You can disconnect apps, delete chat history, and use temporary chats to improve privacy, but by its very nature Gemini Personal Intelligence is a privacy risk.

Google’s Privacy Distinction
Google’s explanation of the boundaries Gemini uses is nuanced and relies on users to take the company at its word.
While your data is used to answer your question, Gemini doesn’t use it for training. Instead, the model is trained on “prompts and responses in the Gemini app” as well as “summaries, excerpts, and inferences used to help answer your prompts”.
The distinction is narrower than it sounds. Your raw data isn’t going into the training set, but the summaries and inferences drawn from it might be.
That said, if you’re already using Gmail, Google Photos, and YouTube, then Google already has this information and the opt-in only changes what Gemini can do with it.
Re-identification
One big issue to discuss here is that Gemini doesn’t even need personal information to identify you. Re-identification is a well-documented privacy risk whereby non-identifying information can be used to identify a user.
A peer-reviewed study on web browsing habits found that just knowing the four most visited web domains is enough to identify 95 percent of individuals.
In a separate study, researchers from the University of Texas in 2007 found that matching as few as two movie reviews from a Netflix dataset with publicly available reviews from IMDb could identify individual users with a 68 percent success rate. Six or more reviews identified 99 percent of people.
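As a toy illustration of the matching idea behind these studies (the data below is entirely invented, not drawn from either dataset), a handful of attributes can single out one record in an “anonymised” table:

```python
# Toy re-identification sketch: a ratings table with names stripped can
# still be matched against reviews posted publicly under a real name.
anonymised = {
    "user_a": {"Heat": 5, "Se7en": 4, "Amelie": 3},
    "user_b": {"Heat": 2, "Up": 5, "Se7en": 4},
    "user_c": {"Amelie": 5, "Up": 4, "Heat": 3},
}

# Two reviews the target posted publicly, e.g. on a film site.
public_reviews = {"Heat": 2, "Se7en": 4}

def matches(anon_ratings, known):
    """A record matches if it agrees on every publicly known rating."""
    return all(anon_ratings.get(title) == score for title, score in known.items())

candidates = [uid for uid, ratings in anonymised.items()
              if matches(ratings, public_reviews)]

# Only one pseudonym survives the filter, so the "anonymous" record
# is linked back to a named person from just two data points.
print(candidates)  # → ['user_b']
```

Real datasets are vastly larger, but the sparsity of individual taste means surprisingly few data points are needed before only one candidate remains.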
Even if Google’s claim that it doesn’t train on your personal data holds, it’s not a privacy guarantee: your pattern of usage alone is identifying, let alone the inferences and summaries Gemini derives from it.

Gemini Personal Intelligence Limitations
Google has stated that the software is still in beta and has bugs to be ironed out. Some of these include:
- Overpersonalisation: If you enjoy coffee shops and ask it to plan a vacation, it may plot an entire trip around coffee shops.
- Mistaken preferences: If you buy a concert ticket as a gift, it may treat that as your preference rather than the recipient’s preference.
- Mixed timelines: LLMs have a limited concept of time. Past deadlines may be flagged as upcoming and future events may be tagged as past.
- Corrections don’t always stick: If you tell the model something specific about yourself, such as becoming vegetarian, it may not remember this and continue to recommend steakhouses.
The feature is also unavailable in the European Union and the United Kingdom due to the regulatory environment created by GDPR and equivalent UK data protection laws.
Our Take: Should you turn on Gemini Personal Intelligence?
If it were me, I would simply say no – this is a solution looking for a problem.
I just don’t see the value in using Gemini for these tasks when it takes the same amount of interaction to do the job manually. Why ask the LLM to summarise my itinerary when I could just open the email myself? I’m opening an app either way.
Getting the AI to make vacation recommendations based on contextual knowledge is a fine use case, but again, it doesn’t accomplish anything you can’t do by stating your preferences in a prompt. You don’t need Gemini reading your inbox for that.
The people most likely to benefit are already so deep in Google’s ecosystem that Personal Intelligence might add a marginal convenience to tasks they’re already doing.
Otherwise, the use cases that justify the privacy trade-off are narrower than Google would like you to think.
