According to 1M AI News monitoring, Cursor has released the Composer 2 technical report, disclosing the complete training scheme for the first time. The base model, Kimi K2.5, is built on a MoE architecture with 1.04 trillion total parameters and 32 billion activated parameters. Training proceeds in two phases: continued pretraining on code data to strengthen coding knowledge, followed by large-scale reinforcement learning to improve end-to-end coding ability. The RL environment fully simulates real Cursor usage scenarios, including file editing, terminal operations, code search, and tool calls, so the model learns under conditions close to production.
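The report as summarized here does not specify the environment's interface, but the tool categories it names (file editing, terminal operations, code search) suggest an action space along these lines. This is a minimal sketch only; every name and signature below is hypothetical, not Cursor's actual API.

```python
# Hypothetical sketch of an RL-environment action space covering the
# tool categories mentioned in the report. Names and signatures are
# illustrative assumptions, not Cursor's real interface.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str                      # e.g. "edit_file", "run_terminal", "search_code"
    args: dict = field(default_factory=dict)


def dispatch(call: ToolCall) -> str:
    # In a real environment each branch would mutate a sandboxed
    # workspace and return an observation string for the model.
    if call.name == "edit_file":
        return f"edited {call.args['path']}"
    if call.name == "run_terminal":
        return f"ran `{call.args['cmd']}`"
    if call.name == "search_code":
        return f"searched for {call.args['query']!r}"
    raise ValueError(f"unknown tool: {call.name}")


print(dispatch(ToolCall("run_terminal", {"cmd": "pytest -q"})))
```

Under this framing, an RL episode would alternate between the model emitting a `ToolCall` and the environment returning the resulting observation, with a reward based on whether the coding task was completed.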
The report also discloses how the self-developed benchmark CursorBench was constructed: tasks are collected from real coding sessions of the engineering team rather than written artificially. The base Kimi K2.5 scored only 36.0 on this benchmark, while after two-phase training Composer 2 reached 61.3, a roughly 70% relative improvement. Cursor states that Composer 2's inference cost is significantly lower than that of frontier models such as GPT-5.4 and Claude Opus 4.6, putting it on the Pareto frontier of accuracy versus cost.
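The quoted improvement can be checked as a relative gain over the base score:

```python
# Relative improvement of Composer 2 (61.3) over base Kimi K2.5 (36.0)
# on CursorBench, as reported above.
base, trained = 36.0, 61.3
relative_gain = (trained - base) / base
print(f"{relative_gain:.1%}")  # → 70.3%
```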