Technology & AI

“Tokenmaxxing” makes developers less productive than they think

There is an old saw in management: what you measure matters. And, usually, you get more of what you measure.

Software developers have discussed productivity metrics for decades, starting with lines of code. But as a new generation of AI coding agents delivers more code than ever before, what their managers should measure is less clear.

Large token budgets—essentially, the amount of AI processing power a developer is authorized to use—have become a badge of honor among Silicon Valley engineers, but that’s a weird way to think about productivity. Measuring the input to a process makes no sense if you care more about the output. It might make sense if you’re trying to encourage more AI adoption (or sell tokens), but not if you’re trying to be more efficient.

Consider the evidence from a new class of companies operating in the “engineering knowledge generation” space. They have found that developers using tools like Claude Code, Cursor, and Codex are producing more accepted code than ever before. But they have also found that developers must return to revise that accepted code more often than before, undercutting claims of productivity gains.

Alex Circei, CEO and founder of Waydev, built an intelligence layer to track these changes; his company works with 50 clients employing more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never met him before.)

He says engineering managers see code acceptance rates of 80% to 90%—meaning the share of AI-generated code that engineers approve and maintain—but they miss what happens when engineers have to review that code in the weeks that follow, reducing the real-world acceptance rate to between 10% and 30% of generated code.
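The gap between those two numbers is simple arithmetic. A minimal sketch of the idea, with invented figures rather than Waydev's actual methodology:

```python
# Illustrative only: how a headline acceptance rate shrinks once code
# that engineers later rewrite or revert is subtracted back out.

def effective_acceptance(generated_lines, accepted_lines, revised_lines):
    """Return (headline rate, rate surviving later review)."""
    headline = accepted_lines / generated_lines
    surviving = (accepted_lines - revised_lines) / generated_lines
    return headline, surviving

headline, surviving = effective_acceptance(
    generated_lines=10_000,  # lines the agent produced
    accepted_lines=8_500,    # 85% headline acceptance
    revised_lines=6_000,     # later rewritten or reverted by engineers
)
print(f"headline: {headline:.0%}, surviving: {surviving:.0%}")
# headline: 85%, surviving: 25%
```

With these made-up numbers, an 85% headline rate lands at 25% once revisions are counted, inside the 10% to 30% range Circei describes.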

The rise of AI coding tools led Waydev, founded in 2017 to provide developer analytics, to completely overhaul its platform over the past six months. The company is now releasing new tools that track the metadata generated by AI agents, reporting on the quality and cost of their code to give engineering managers more insight into both AI adoption and effectiveness.


While analytics companies have incentives to highlight the problems they find, evidence is mounting that large organizations are still figuring out how to use AI tools properly. Big companies are taking notice—Atlassian acquired DX, another engineering intelligence startup, for $1 billion last year, to help its customers understand the return on investment in coding agents.

Data from across the industry tells a consistent story: More code is being written, but a disproportionate amount of it isn’t sticking.

GitClear, another company in this space, published a report in January finding that AI tools have increased productivity, but also that its data showed “typical AI users code 9.4x better than their non-AI counterparts”—more than double the productivity gains typically attributed to these tools.

Faros AI, an engineering analytics platform, analyzed two years of customer data for its March 2026 report. The finding: code churn (lines of code removed relative to lines added) increased 861% under high AI adoption.

Jellyfish, which bills itself as an AI-integrated engineering platform, collected data from 7,548 engineers in the first quarter of 2026. The company found that developers with larger token budgets produced more pull requests (proposed changes to the shared codebase), but productivity did not scale with spending: roughly twice the output came at ten times the token cost. In other words, the tools produce volume, not value.
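That pattern, output growing more slowly than spending, is the familiar shape of diminishing returns. A toy sketch (every number here is invented, not Jellyfish data):

```python
# Invented illustration of diminishing returns on token budgets:
# pull-request output flattens while cost per PR climbs.

budgets = [100, 200, 400, 800]  # token budget per developer, arbitrary units
prs = [10, 14, 17, 19]          # pull requests produced at each budget

for budget, count in zip(budgets, prs):
    print(f"budget {budget}: {count} PRs, {budget / count:.1f} tokens per PR")
```

An eightfold budget here buys less than twice the pull requests, so the token cost of each additional PR keeps rising.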

These kinds of statistics ring true when you talk to developers, who find that code reviews and technical debt pile up as they enjoy the freedom of the new tools. A common finding is a gap between large and small teams, with the latter adopting more AI-generated code and facing a greater amount of rewrites as a result.

Still, as developers work to better understand what their agents are doing, they don’t expect to go back any time soon.

“This is a new era of software development, and you have to adapt, and you’re forced to adapt to the company,” Circei told TechCrunch. “It’s not like it’s going to be a cycle that’s going to pass.”
