IQ Times Media – Smart News for a Smarter You
  • Home
  • AI
  • Education
  • Entertainment
  • Food Health
  • Health
  • Sports
  • Tech
  • Well Being
IQ Times Media – Smart News for a Smarter YouIQ Times Media – Smart News for a Smarter You
Home » A new AI coding challenge just published its first results – and they aren’t pretty
AI

A new AI coding challenge just published its first results – and they aren’t pretty

By IQ TIMES MEDIA | July 24, 2025 | 3 Mins Read


A new AI coding challenge has revealed its first winner — and set a new bar for AI-powered software engineers. 

On Wednesday at 5pm PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive the $50,000 prize. But more surprising than the win itself was his final score: he won with correct answers to just 7.5% of the questions on the test.

“We’re glad we built a benchmark that is actually hard,” said Konwinski. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their biggest models. But that’s kind of the point. K Prize runs offline with limited compute, so it favors smaller and open models. I love that. It levels the playing field.”

Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.

Similar to the well-known SWE-Bench benchmark, the K Prize pits models against flagged GitHub issues to measure how well they can deal with real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, submissions were due by March 12. The K Prize organizers then built the test using only GitHub issues flagged after that date.
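The article doesn't describe the organizers' actual pipeline, but the timed-entry idea can be sketched in a few lines: fix a submission cutoff, then build the benchmark only from issues created after that cutoff, so nothing in the test set could have appeared in any entrant's training data. The issue structure below is a hypothetical stand-in, not the K Prize's real data format.

```python
from datetime import datetime, timezone

# Round-one submission deadline mentioned in the article (March 12).
SUBMISSION_CUTOFF = datetime(2025, 3, 12, tzinfo=timezone.utc)

def build_benchmark(issues):
    """Keep only issues flagged after the submission deadline,
    so no submitted model could have trained on them."""
    return [
        issue for issue in issues
        if issue["created_at"] > SUBMISSION_CUTOFF
    ]

# Hypothetical issues: one flagged before the cutoff, one after.
issues = [
    {"id": 101, "created_at": datetime(2025, 3, 1, tzinfo=timezone.utc)},
    {"id": 102, "created_at": datetime(2025, 4, 2, tzinfo=timezone.utc)},
]

benchmark = build_benchmark(issues)
# Only issue 102 survives the cutoff filter.
```

Because the test set doesn't exist until after entries close, "training on the benchmark" is impossible by construction; repeating the process every round keeps the leaderboard contamination-free.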

The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier ‘Verified’ test and 34% on its harder ‘Full’ test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or just the challenge of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon.

“As we get more runs of the thing, we’ll have a better sense,” he told TechCrunch, “because we expect people to adapt to the dynamics of competing on this every few months.”


It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available – but with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.

“I’m quite bullish about building new tests for existing benchmarks,” says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.”

For Konwinski, it’s not just a better benchmark but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”


