IQ Times Media – Smart News for a Smarter You

Tech

Checkr Eyes Government Contracts to Help Reduce ‘Fraud and Waste’

By IQ TIMES MEDIA · February 25, 2026 · 5 Mins Read


A San Francisco AI startup focused on identity verification has set a lofty goal: securing government contracts.

Daniel Yanisse, the CEO of Checkr, told Business Insider that the company wants to help the government reduce “fraud and waste” by not only screening new employees but also verifying people’s eligibility for benefits such as Medicare and Social Security.

Though Yanisse said the company isn’t ready to make any product announcement yet, he said a frictionless government assistance system may be just years away.

AI and safety experts, however, told Business Insider that there are legal and technical hurdles for any company to undertake the task of automating benefit and welfare systems with AI.

Checkr primarily uses AI to run background checks and surface information such as criminal records and motor vehicle reports. The company has major contracts with Uber and Lyft to screen new drivers, and is valued at more than $5.7 billion after raising $120 million in funding in 2022. In 2025, Checkr reported over $800 million in revenue and surpassed 120,000 customers.


When asked what Checkr wants to do for the government, Yanisse said that for Medicare and other programs, “there’s a lot of fraud happening and just bad actors getting the government dollars instead of the right people who need help,” adding that it’s very hard for the government to actually verify people’s employment status and income.

The Medicare Fee-for-Service program estimated $28.83 billion in improper payments in 2025, an improper-payment rate of 6.55%, though not all such cases result from intentional fraud. Payments made to individuals who did not submit sufficient documentation or whose income levels are unverified are also counted as improper under Medicaid.

“With AI, unfortunately, there’s going to be even more fraud, identity theft, and scams,” said Yanisse. “It’s a lot of friction, it’s a lot of repetition, and now there are also deepfakes.”

Checkr’s spokesperson told Business Insider that the company’s potential involvement in government is “still conceptual at this point.”

The company also pointed to a study by Middesk, a business identity verification platform, which found that of the $1.09 trillion in Medicaid payments made to around 1.6 million providers between 2018 and 2024, $563 million went to providers blacklisted from federal healthcare programs for criminal activity or misconduct.

Automating identity verification can be challenging

Stuart Russell, professor of computer science at UC Berkeley and an AI pioneer, told Business Insider that he is “not optimistic” that the plan to use AI to determine benefits eligibility will work as advertised.

“An AI system of this kind, some version of an LLM, is incapable of producing veridical explanations of its decisions, making it impossible to challenge false decisions,” Russell said.

Russell also cited the General Data Protection Regulation in the European Union, which bars decisions with significant legal effects on individuals from being made entirely by automated systems.

Baobao Zhang, the Maxwell Dean associate professor of the politics of AI at Syracuse University, told Business Insider that while she cannot assess exactly how good Checkr's verification system is right now, past government attempts to automate people's benefits serve as cautionary tales.

“If the federal government or other state governments are trying to contract with a vendor to automate welfare fraud detection, they need to have a serious evaluation in the real world before they deploy it, because the stakes are high, as history has proven,” said Zhang.

In Indiana, an attempt to streamline and automate the state's welfare eligibility system through an outsourcing contract with IBM ended in a legal battle: in 2010, the state sued the company for $1.3 billion over the scrapped project. According to court records, the Indiana Family and Social Services Administration said processing errors by IBM led to faulty benefit denials that harmed the needy.

In Australia, an automated government program called Robodebt, designed to detect welfare fraud, sent recipients letters claiming they owed thousands of dollars in debt, based on a flawed algorithm. A royal commission, Australia's highest form of public inquiry, found that at least three people died by suicide after Robodebt falsely told them to repay debts they did not owe. A court ruled the system illegal in 2019.

Ifeoma Ajunwa, the founding director of the AI and the Future of Work Program at Emory University, told Business Insider that if any government agency is to adopt AI, there should be an advisory council made up of technologists and social scientists, and affected constituencies should be given a say.

“I think we need to move cautiously when delegating governmental functions to AI technologies,” said Ajunwa. “While these tools are touted to increase efficiency and lower costs, we also need to establish guardrails for their use to protect citizens.”


