AI

AI slop and fake reports are exhausting some security bug bounties

By IQ Times Media · July 24, 2025 · 5 min read


So-called AI slop, meaning low-quality LLM-generated images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events. 

The world of cybersecurity is not immune to this problem, either. Over the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports: reports that claim to have found vulnerabilities that do not actually exist, because a large language model simply invented the vulnerability and packaged it into a professional-looking writeup. 

“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch. 

“It turns out it was just a hallucination all along. The technical details were just made up by the LLM,” said Ionescu. 

Ionescu, who used to work at Meta’s red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” said Ionescu. 

“That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said Ionescu. 

Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source security project Curl received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”

In response to Sintonen's post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: that their inbox is "flooded with AI garbage." 

One open-source developer, who maintains the CycloneDX project on GitHub, pulled their bug bounty down entirely earlier this year after receiving “almost entirely AI slop reports.”

The leading bug bounty platforms, which essentially work as intermediaries between bug bounty hackers and companies who are willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned. 

Contact Us

Do you have more information about how AI is impacting the cybersecurity industry? We’d love to hear from you. From a non-work device and network, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email.

Michiel Prins, the co-founder and senior director of product management at HackerOne, told TechCrunch that the company has encountered some AI slop. 

“We’ve also seen a rise in false positives — vulnerabilities that appear real but are generated by LLMs and lack real-world impact,” said Prins. “These low-signal submissions can create noise that undermines the efficiency of security programs.”

Prins added that reports that contain “hallucinated vulnerabilities, vague technical content, or other forms of low-effort noise are treated as spam.”

Casey Ellis, the founder of Bugcrowd, said that there are definitely researchers who use AI to find bugs and write the reports that they then submit to the company. Ellis said they are seeing an overall increase of 500 submissions per week. 

“AI is widely used in most submissions, but it hasn’t yet caused a significant spike in low-quality ‘slop’ reports,” Ellis told TechCrunch. “This’ll probably escalate in the future, but it’s not here yet.”

Ellis said that the Bugcrowd team that analyzes submissions reviews the reports manually using established playbooks and workflows, as well as with machine learning and AI "assistance."

To see if other companies, including those who run their own bug bounty programs, are also receiving an increase in invalid reports or reports containing non-existent vulnerabilities hallucinated by LLMs, TechCrunch contacted Google, Meta, Microsoft, and Mozilla. 

Damiano DeMonte, a spokesperson for Mozilla, which develops the Firefox browser, said that the company has “not seen a substantial increase in invalid or low quality bug reports that would appear to be AI-generated,” and the rejection rate of reports — meaning how many reports get flagged as invalid — has remained steady at 5 or 6 reports per month, or less than 10% of all monthly reports.

"Mozilla's employees who review bug reports for Firefox don't use AI to filter reports, as it would likely be difficult to do so without the risk of rejecting a legitimate bug report," DeMonte said in an email.

Microsoft and Meta, companies that have both bet heavily on AI, declined to comment. Google did not respond to a request for comment. 

Ionescu predicts that one of the solutions to the problem of rising AI slop will be to keep investing in AI-powered systems that can at least perform a preliminary review and filter submissions for accuracy. 

In fact, on Tuesday, HackerOne launched Hai Triage, a new triaging system that combines humans and AI. According to HackerOne, this new system leverages "AI security agents to cut through noise, flag duplicates, and prioritize real threats." Human analysts then step in to validate the bug reports and escalate as needed.
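The human-plus-AI pipeline described above can be sketched in a few lines. The sketch below is purely illustrative and is not HackerOne's or Bugcrowd's implementation; the `Report` fields, the `ai_signal_score` model output, and the filtering thresholds are all invented assumptions. It shows the general shape: an automated pass drops duplicates and low-signal submissions, and whatever survives is ranked for human validation.

```python
# Hypothetical sketch of an AI-assisted bug bounty triage pipeline.
# All names and heuristics here are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Report:
    report_id: str
    title: str
    body: str
    ai_signal_score: float  # 0.0 (likely slop) .. 1.0 (likely real), from some model


def triage(reports, seen_titles=None, threshold=0.5):
    """Split submissions into a human-review queue and an auto-rejected pile."""
    seen = set(t.lower() for t in (seen_titles or []))
    queue, rejected = [], []
    for r in reports:
        if r.title.lower() in seen:           # flag duplicates
            rejected.append((r, "duplicate"))
        elif r.ai_signal_score < threshold:   # cut through low-signal noise
            rejected.append((r, "low-signal"))
        else:                                 # escalate for human validation
            queue.append(r)
            seen.add(r.title.lower())
    # Highest-confidence reports first, so analysts see likely real threats sooner
    queue.sort(key=lambda r: r.ai_signal_score, reverse=True)
    return queue, rejected
```

The key design point, per the article, is that the AI pass only filters and ranks; the final accept/reject decision stays with a human analyst, which is also why Mozilla says it avoids automated filtering entirely.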

As hackers increasingly use LLMs and companies rely on AI to triage those reports, it remains to be seen which of the two AIs will prevail.


