I Work at Google in AI Security: Things I Would Never Tell Chatbots

By IQ TIMES MEDIA | December 13, 2025 | 5 Mins Read


This as-told-to essay is based on a conversation with 31-year-old Harsh Varshney, who works at Google and lives in New York. The following has been edited for length and clarity.

AI has quickly become a silent partner in our daily lives, and I can’t imagine life without AI tools.

Day-to-day, they help me with deep research, note-taking, coding, and online searches.

But my job means I’m very aware of the privacy concerns associated with using AI. I’ve worked at Google since 2023 and spent two years as a software engineer on the privacy team, building infrastructure to protect user data. I then switched to the Chrome AI security team, where I help secure Google Chrome from malicious threats, like hackers and those who use AI agents to conduct phishing campaigns.

AI models use data to generate helpful responses, and as users we need to protect our private information so that harmful entities, like cybercriminals and data brokers, can't access it.

Here are four habits I've built that I believe are essential for protecting my data while using AI.

Treat AI like a public postcard

Sometimes, a false sense of intimacy with AI can lead people to share information online that they never would otherwise. AI companies may have employees who work on improving the privacy aspects of their models, but it’s not advisable to share credit card details, Social Security numbers, your home address, personal medical history, or other personally identifiable information with AI chatbots.

Depending on the version being used, the information shared with public AI chatbots can be used to train future models and generate responses that are more relevant. This could result in “training leakage,” where the model memorizes personal information about one user and later regurgitates it in responses to another. Plus, there’s the risk of data breaches, which would expose what you’ve shared with a chatbot.

I treat AI chatbots like a public postcard. If I wouldn’t write a piece of information on a postcard that could be seen by anyone, I wouldn’t share it with a public AI tool. I’m not confident about how my data could be used for future training.
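
To make the postcard rule concrete, here is a minimal, illustrative Python sketch that scans a prompt for obvious personally identifiable information before it gets pasted into a public chatbot. The scrub_prompt helper and its patterns are hypothetical examples, not a tool from Google or any chatbot vendor, and they only catch the most clear-cut cases.

import re

# Illustrative-only patterns: they catch obvious cases, not every form of PII.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    # Replace anything that looks like PII with a placeholder before sending.
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "Draft a reply to jane@example.com about card 4111 1111 1111 1111."
    print(scrub_prompt(risky))
    # Prints: Draft a reply to [email removed] about card [card_number removed].

A local check like this is only a last line of defense; the safer habit is simply not putting the sensitive detail into the prompt in the first place.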

Know which ‘room’ you’re in

It’s important to identify whether you’re using a more public AI tool or an enterprise-grade one.

While it's often unclear how conversations with public AI models are used for training, companies can pay for “enterprise” models. These typically aren't trained on user conversations, so it's safer for employees to discuss their work and company projects.

Think of it like having a conversation in a crowded coffee shop where you could be overheard, versus a confidential meeting in your office that stays within the room.

There have reportedly been instances where employees have accidentally leaked company data to ChatGPT. If you work on unreleased company projects or are trying to get a patent, you probably don’t want to discuss your plans with a non-enterprise-grade chatbot due to the risk of leakage.

I don’t discuss projects I’m working on at Google with public chatbots. Instead, I use an enterprise model, even for tasks as small as editing a work email. I’m much more comfortable sharing my information because my conversations aren’t used for training, but I still minimize the personal information I share.

Delete your history regularly

AI chatbots usually keep a history of your conversations, but I recommend regularly deleting it on both enterprise and public models to protect your privacy in the long term. Given the risk of your account being compromised, it's a good precautionary habit even if you're confident you aren't putting private data into the tools.

Once, I was surprised that an enterprise Gemini chatbot was able to tell me my exact address, even though I didn't remember sharing it. It turned out I had previously asked it to help me refine an email that included my address. Because the tool has long-term memory features that let it remember information from previous conversations, it had retained my address and could recall it.

Sometimes, if I’m searching for things I don’t want the chatbot to remember, I’ll use a special mode, a bit like incognito mode, where the bots don’t store my history or use the information to train models. ChatGPT and Gemini call this the “temporary chat” feature.

Use well-known AI tools

It’s better to use AI tools that are well-known and are more likely to have clear privacy frameworks and other guardrails in place.

Other than Google’s products, I like to use OpenAI’s ChatGPT and Anthropic’s Claude.

It’s also helpful to review the privacy policies of any tools you use. Sometimes, they’ll explain more about how your data is used to train the model. In the privacy settings, you can also look for a section with the option to “improve the model for everyone.” By making sure that setting is turned off, you’re preventing your conversations from being used for training.

AI technology is incredibly powerful, but we must be cautious to ensure our data and identities are safe when we use it.

Do you have a story to share about using AI to help you at work? Contact this reporter at ccheong@businessinsider.com


