According to Beating, Microsoft recently open-sourced the Phi-Ground model family, built to answer a deceptively simple question: where should an AI agent click on a computer screen? Paired with larger language models for instruction planning, the 4-billion-parameter version exceeded the clicking accuracy of OpenAI Operator and Claude Computer Use on the Showdown benchmark and ranked first among all sub-100-billion-parameter models across five evaluations, including ScreenSpot-Pro.
The team trained on over 40 million samples and found that three training techniques common in academic papers stopped working at that scale. The approach that did work proved simple: output coordinates as plain numbers, such as "523, 417." Earlier research had invented specialized position vocabularies for coordinates, but these failed to scale. The team also found that placing the text instruction before the image improved performance, since the model already knows the target while it processes the pixels. Finally, reinforcement learning methods such as DPO improved accuracy even after fine-tuning.
Related Articles
Anthropic Code Mode’s MCP vs CLI battle: tools pin runtime, tokens drop from 150K to 2K
Throughout 2025, AI engineering communities have debated whether MCP or the CLI is better suited for agent tool calling. In November 2025, Anthropic's paper "Code execution with MCP" reframed the problem from first principles. Akshay Pachaar summarized the thread, explaining that the issue has never been the protocol itself but the old habit of stuffing every tool's description into the context at the start of a session. Anthropic's solution i…
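The core idea, as described in the summary above, can be sketched minimally: expose only tool names in the starting context and fetch a tool's full description on demand. All names and schemas below are illustrative assumptions, not Anthropic's or MCP's actual API.

```python
# Hypothetical tool registry standing in for an MCP server's catalog.
TOOL_SCHEMAS = {
    "search_docs": "search_docs(query: str) -> list[str]  # full JSON schema would go here",
    "send_email": "send_email(to: str, body: str) -> bool  # full JSON schema would go here",
    # ...a real server might expose hundreds more
}

def initial_context() -> str:
    """Only tool *names* enter the starting context, not full schemas --
    this is where the claimed token savings come from."""
    return "Available tools: " + ", ".join(TOOL_SCHEMAS)

def load_tool(name: str) -> str:
    """Fetch one tool's full description only when the agent needs it."""
    return TOOL_SCHEMAS[name]

print(initial_context())
```

With hundreds of tools, deferring schema loading this way is what shrinks the upfront context from the order of 150K tokens to a few thousand.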
ChainNews · Abmedia · 18m ago
ByteDance Plans 25% Increase in AI Infrastructure Spending to 200 Billion Yuan This Year
According to ChainCatcher citing Golden Data, ByteDance plans to increase AI infrastructure spending by 25% to 200 billion yuan this year, driven by rising memory chip costs and accelerated artificial intelligence…
GateNews · 55m ago
Enterprise AI Platform Pit Closes $16M Series Funding Led by a16z
According to Odaily, enterprise AI platform Pit announced the completion of a $16 million funding round led by a16z, with participation from Lakestar and executives from OpenAI, Anthropic, Google, Deel, and Revolut. Pit positions itself as "AI product team as a service," designed to replace…
GateNews · 1h ago
Google Pilots Hiring Exams That Let Engineers Use AI Tools
According to The Chosun Daily, Google is piloting hiring exams that let US software engineer candidates use AI tools in selected entry-level and mid-level positions. The trial includes code comprehension tasks where applicants review existing code, fix bugs, and improve performance. Interviewers…
GateNews · 3h ago
OpenAI Discontinues Fine-tuning API Effective Immediately, Existing Users Can Access Until January 6, 2027
According to OpenAI's official announcement monitored by Beating, the company is discontinuing its self-serve Fine-tuning API for developers effective immediately. New users can no longer create fine-tuning tasks, while existing active users can access the service until January 6, 2027. Deployed fin…
GateNews · 3h ago
Sakana AI and Nvidia Achieve 30% Faster H100 Inference by Skipping 80% of Invalid Computations
Sakana AI and Nvidia have open-sourced TwELL, a sparse data format that enables H100 GPUs to skip 80% of invalid computations in large language models without sacrificing accuracy. The solution delivers up to 30% faster inference and 24% faster training on H100s while reducing peak memory usage.
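The principle behind skipping invalid computations can be illustrated with a generic sparse representation. This is a minimal sketch of sparsity in general, not the actual TwELL format: only nonzero entries are stored, so a dot product never touches the skipped positions.

```python
def to_sparse(dense: list[float]) -> list[tuple[int, float]]:
    """Keep only nonzero entries, each with its original position."""
    return [(i, v) for i, v in enumerate(dense) if v != 0.0]

def sparse_dot(sparse: list[tuple[int, float]], x: list[float]) -> float:
    """Multiply-accumulate over stored entries only; zeroed-out
    positions are skipped entirely rather than multiplied by zero."""
    return sum(v * x[i] for i, v in sparse)

# A row where most weights are zero: only 2 of 5 entries do any work,
# analogous to skipping the bulk of invalid computations on the GPU.
weights = [0.0, 0.0, 2.0, 0.0, -1.0]
x = [1.0, 1.0, 3.0, 1.0, 4.0]
print(sparse_dot(to_sparse(weights), x))  # 2.0*3.0 + (-1.0)*4.0 = 2.0
```

A GPU-side format faces extra constraints (memory layout, warp-level scheduling) that this toy list-of-pairs version ignores, but the arithmetic saved is the same idea.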
GateNews · 4h ago