IQ Times Media – Smart News for a Smarter You
Tech

I Tried to Get ChatGPT and Google’s Gemini to Make up Lies About Me

By IQ TIMES MEDIA | February 25, 2026 | 5 Mins Read


If there are two things I love, it’s processed meats and not-exactly-maliciously messing around with internet tools. So I was determined to beat the BBC’s Thomas Germain at his own game of exploiting ChatGPT and Google Gemini results to be crowned tech journalism’s No. 1 hot dog eater.

To my great embarrassment and the shame it brought upon the House of Business Insider, I failed.

Last week, Germain published a fun article for the BBC about how he created a page on his personal website claiming he was a hot dog-eating champion and had beaten several other tech journalists in a competition. He wrote:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously.

His page was quickly ingested (no chewing required) by the bots that crawl the web for new information to feed LLMs, and treated as fact by ChatGPT and Google Gemini. It worked:

When I asked about the best hot dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled. Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire." For a while after, the AIs seemed to take it more seriously.

Of course, Germain’s point wasn’t merely to show that if you write information on a webpage, it will show up in AI. The broader issue here is that influencing AI results is becoming the new SEO — a tactic brands and companies use — oftentimes completely legitimately! — to boost their profiles within search results.

More and more people are turning to AI chatbots instead of Google to get product recommendations or search for information. This all isn’t brand new; my colleague Alistair Barr wrote about how “AEO” is the new SEO last May. AEO is “answer-engine optimization” to SEO’s “search-engine optimization.”

Is it easier to persuade people that your product is the best using AI instead of traditional SEO? I suspect it probably is. People rarely click on the source links for information given in chatbots, and seeing a small link that goes to a random personal website might be less obviously untrustworthy in the context of an AI chatbot answer than when you’re looking at a page of Google results. Basically, AI results look more convincing than search results, even if we all know in the back of our minds that AI chatbots aren’t always right.

I was impressed by, and envious of, the hot dog prank, so I wanted to see how I could build on it. I created a page on my own personal website that said I won the 2026 Paris Hot Dog Eating Contest for Tech Reporters, beating out reigning champ Thomas Germain. (I didn't publish this on BI because we wouldn't knowingly publish something that's false — even for a fun story.)

After two days, I queried Gemini and ChatGPT about who had won the Paris Hot Dog Eating Contest. Unfortunately, I wasn’t able to get either to say it was me. Because of the BBC article describing the prank, the AI chatbots now understood it to be a joke and any information about it to be satirical. Fooey.

However, that didn't stop Gemini from hallucinating entirely new information, adding a bunch of details that appeared in neither my fake account nor Germain's. For example, Gemini said:

In the original prank, Germain didn't just list rankings; he created "recap" stories for fictional events. One of the fabricated updates claimed that the "league" of tech journalists moved their circuit to Europe for a "Parisian Glizzy Gala."

  • The Claim: The fake data suggested that Katie Notopoulos dominated the Paris event by using a "revolutionary" technique involving dipping buns in espresso instead of water.

(This is completely made up. Not just because it didn’t actually happen, but because this description also doesn’t exist anywhere on the web — at least that I could find.)

When my editor asked Gemini about my eating feats, it told him that I’d won a grilled cheese-eating contest in 2012 by finishing three sandwiches. In reality, in 2012, I wrote an article about competitive eater Takeru Kobayashi eating 30 grilled cheese sandwiches.

So what have we learned here? It’s not really huge news that “sometimes chatbots get facts wrong, especially when there’s little information on a particular topic on the web.” You (hopefully) knew that already.

And yes, I guess we learned it can be easy to manipulate your AI results — but it's easiest for the person who gets there first, a sort of AEO land rush, perhaps. And it's certainly a lot harder to manipulate after a large, credible news site publishes an article saying it was all a joke.

I will have to figure out some other way to mess with AI, I suppose.


© 2026 iqtimes.