On March 19, Cursor launched its new coding AI model, Composer 2.
The announcement called it “frontier-level coding intelligence.” It beat Anthropic’s Claude Opus 4.6 on coding benchmarks. It costs one-tenth the price of the competition. Developers celebrated.
Within hours, a developer named Fynn decided to look under the hood.
He set up a simple debugging proxy on his machine, routed Cursor’s traffic through it, and inspected the model identifier returned in the API responses. What he found was this:
accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast
Kimi K2.5. An open-source model from Moonshot AI. A Chinese company backed by Alibaba and HongShan.
Fynn posted it on X. Elon Musk quote-tweeted it: “Yeah, it’s Kimi 2.5.”
The post got 2.6 million views. The celebration was over.
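The check Fynn ran amounts to reading a single field out of intercepted traffic. Here is a minimal sketch, assuming an OpenAI-style JSON response body; the payload shape and field names are illustrative, not the exact bytes he captured:

```python
import json

# Hypothetical shape of a captured chat-completion response body.
# Only the "model" field matters for this check.
captured_body = json.dumps({
    "id": "cmpl-123",
    "model": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
})

def extract_model_id(body: str) -> str:
    """Pull the model identifier out of a JSON API response body."""
    return json.loads(body).get("model", "")

model_id = extract_model_id(captured_body)
print(model_id)                       # accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast
print("kimi" in model_id.lower())     # True
```

In practice a tool like mitmproxy does the interception; the point is that the upstream model is visible in plain sight once you look at the wire.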
What Cursor Actually Built
Cursor’s VP of Developer Education, Lee Robinson, responded publicly within hours. He confirmed the model ID was real.
He explained that Cursor had taken Kimi K2.5 as its starting point, then added continued pretraining and reinforcement learning on top of it.
About 25% of the compute in the final model came from Kimi. The other 75% was Cursor’s own training work.
Co-founder Aman Sanger followed up on X: “It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model.”
Moonshot AI, to its credit, was gracious about it. “We are proud to see Kimi K2.5 provide the foundation,” the company posted. “Seeing our model integrated effectively through Cursor’s continued pretraining and high-compute RL training is the open model ecosystem we love to support.”
That statement was more generous than many in the developer community felt was warranted.
Why Cursor Didn’t Say Anything
This is the question everyone is asking. And the answer is uncomfortable.
Using Kimi K2.5 as a base model is entirely legal. The model is released under an open-source license. Cursor accessed it through a licensed commercial partner called Fireworks AI. No laws were broken.
But Kimi K2.5’s modified MIT license has a specific requirement. Any product using the model must prominently display “Kimi K2.5” in its interface if it exceeds one million monthly active users or $20 million in monthly revenue.
Cursor has over one million daily active users. Its annualized revenue exceeds $2 billion, which is more than $166 million per month. That is more than eight times the revenue threshold that triggers the branding requirement.
Cursor’s interface does not display “Kimi K2.5” anywhere.
This is not just a transparency problem. It may be a licensing violation.
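The threshold arithmetic is easy to verify from the figures above (numbers as reported in this piece; the license's exact wording should be checked against the actual license text):

```python
# Publicly reported figures and the thresholds as described in this article.
ANNUALIZED_REVENUE_USD = 2_000_000_000    # "exceeds $2 billion"
MONTHLY_REVENUE_THRESHOLD_USD = 20_000_000  # license's monthly revenue trigger

monthly_revenue = ANNUALIZED_REVENUE_USD / 12
print(f"monthly revenue: ${monthly_revenue / 1e6:.1f}M")  # monthly revenue: $166.7M

# Exceeding either the user count or the revenue threshold triggers
# the prominent-display requirement; revenue alone is enough here.
triggered = monthly_revenue > MONTHLY_REVENUE_THRESHOLD_USD
ratio = monthly_revenue / MONTHLY_REVENUE_THRESHOLD_USD
print(f"branding clause triggered: {triggered} ({ratio:.1f}x the threshold)")
```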
Then there is the geopolitical dimension. AI development is routinely framed as a US vs. China race. Admitting that your “frontier-level” US product is built on a Chinese foundation model is not a great look in that environment. The silence was almost certainly a choice, not an oversight.
Why Kimi K2.5 Was the Right Call Technically
Here is the part that explains why Cursor made the decision it did, even if its disclosure was wrong.
Kimi K2.5 is genuinely excellent. It is a one-trillion-parameter model with 32 billion active parameters and a 256,000-token context window.
It scored first among all models on the MathVista benchmark when it was released. For complex, multi-step coding tasks across large codebases, it has an advantage over many Western open-source alternatives.
The Western open-source alternatives have been disappointing. Meta’s Llama 4 Behemoth, the flagship model everyone was waiting for, has been indefinitely delayed.
As of this week, there is still no release date. Google’s Gemma 3 tops out at 27 billion parameters, which is useful but not frontier class.
When Cursor needed a strong open-weight foundation model for continued pretraining and reinforcement learning, the best option available was made in Beijing.
“Kimi K2.5 had the best perplexity scores,” Sanger said. “So we chose it.”
That is a straightforward technical decision. The problem was the silence about it.
This Is Not the First Time for Cursor
One accidental omission is a mistake. Two starts to look like a pattern.
When Cursor’s previous model, Composer 1.5, was released last year, developers noticed it was using DeepSeek’s tokenizer underneath.
DeepSeek is another Chinese AI company. Cursor did not disclose that either. That, too, came to light only because developers went looking.
“The bigger question is why Cursor kept quiet in the first place,” The Decoder wrote. “The most likely answer: admitting it would mean conceding that, unlike Anthropic and OpenAI, Cursor can’t build its own frontier model.”
That is a direct and uncomfortable assessment. Cursor is valued at $29.3 billion. It has raised $2.3 billion in funding. Its customers, who pay $20 a month for Pro or hundreds per seat for Business, believed they were using a product built by a cutting-edge American AI research lab.
The reality is that Cursor is a very good product layer on top of other people’s models.
That is still valuable. But it is not what the company’s marketing implies.
The Bigger Problem This Exposes
Cursor is not alone in this. It is just the one that got caught.
Chinese open-source models, primarily from Moonshot AI, Alibaba’s Qwen team, and DeepSeek, have become some of the best freely available foundation models in the world. They are technically excellent. They are commercially permissive. They are free to use.
American startups building AI products are using them. Quietly. Without saying so. Because saying so creates PR problems in an environment where “AI is a US-China competition” is the dominant political narrative.
The result is a strange double standard. US policymakers warn about Chinese AI dominance. US companies quietly build on Chinese AI foundations. Nobody discloses it. A developer with a proxy server finds out in minutes.
“The industry needs an AI Bill of Materials,” one commentator wrote this week. Something that makes the underlying models in any AI product visible, the same way food labels list ingredients. Right now, there are no such requirements. Companies tell you what their product does, not what it is built on.
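No such standard exists today, so what follows is purely illustrative: every field name here is invented to show what an AI Bill of Materials entry might look like, modeled loosely on how software bills of materials list dependencies.

```python
# Hypothetical AI-BOM entry -- field names are invented for illustration.
ai_bom = {
    "product": "Example AI Coding Assistant",
    "models": [
        {
            "name": "kimi-k2.5",
            "provider": "Moonshot AI",
            "role": "base model",
            "license": "Modified MIT",
            "modifications": ["continued pretraining", "reinforcement learning"],
        }
    ],
}

def base_models(bom: dict) -> list[str]:
    """List the upstream models a product discloses, like ingredients on a label."""
    return [m["name"] for m in bom.get("models", [])]

print(base_models(ai_bom))  # ['kimi-k2.5']
```

The design question is less the schema than the mandate: a manifest like this only matters if displaying it is required, the way the Kimi license already tries to require prominent attribution.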
Cursor says it will build its own foundation model for Composer 3 and will be transparent about what it uses. That is a better answer than silence.
Whether the industry’s broader transparency problem gets fixed is a separate question. For now, the only people reliably finding out what models power the AI tools you pay for are the developers curious enough to set up a proxy and look.
