"How much computing power to buy? All of it": OpenAI co-founder says $110 billion still can't meet demand; pretraining has shifted toward joint cost optimization
According to monitoring by 1M AI News, OpenAI co-founder Greg Brockman reflected in an interview on a step-change improvement in AI programming capability in December 2025. He measures progress with a test prompt he has kept for years: asking the AI to build a website that took him several months to complete when he first learned to program. For most of 2025, the task still required multiple rounds of prompting and about four hours; by December, a single prompt was enough, and the quality was good. He said the new model lifted the AI from completing roughly 20% of tasks to roughly 80%, a leap that forced everyone to “reorganize their workflows around AI.”
As for where the $11 billion in funding goes, Brockman likened buying computing power to hiring salespeople: as long as the product has a scalable sales channel, each additional salesperson brings in more revenue. By that logic, computing power isn’t a cost center; it’s a revenue center. He recalled a conversation with his team on the eve of ChatGPT’s launch: “They asked, ‘How much compute should we buy?’ I said, ‘All of it.’ They said, ‘No no no, seriously, how much should we buy?’ I said, ‘No matter how we build, we can’t keep up with demand.’” That judgment still holds today, he said, and compute procurement has to be locked in 18 to 24 months in advance.
On how that computing power is used, Brockman said OpenAI is no longer simply chasing the largest possible pretraining run. Instead, it treats pretraining capability and inference cost as a joint optimization target: “You don’t necessarily want to go as large as possible, because you also have to account for the many downstream inference use cases. What you really want is the optimal solution for intelligence multiplied by cost.” But he explicitly pushed back on the claim that pretraining no longer matters: the smarter the base model, the more efficient the subsequent reinforcement learning and inference phases become, and large-scale centralized training still “absolutely” requires NVIDIA GPUs.
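To make that tradeoff concrete, here is a minimal sketch of what joint training-and-inference optimization can look like. It assumes the standard rules of thumb that training costs roughly 6ND FLOPs and serving costs roughly 2N FLOPs per token, plus a Chinchilla-style loss fit (constants from Hoffmann et al. 2022); the budget and inference-demand figures are invented for illustration, not OpenAI numbers or Brockman’s actual method.

```python
# Toy model: pick a model size N under a fixed compute budget that must also
# cover lifetime inference, instead of maximizing pretraining scale alone.
# Loss-fit constants are the commonly cited Chinchilla values (Hoffmann et al.
# 2022); BUDGET_FLOPS and INFER_TOKENS are invented illustrative numbers.

E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

BUDGET_FLOPS = 1e24   # hypothetical total compute budget (training + serving)
INFER_TOKENS = 5e12   # hypothetical lifetime inference demand, in tokens

def loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style scaling law; lower loss stands in for 'intelligence'."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def best_model(budget: float, infer_tokens: float):
    """Scan model sizes; whatever compute inference doesn't consume goes to
    training tokens (training ~6*N*D FLOPs, serving ~2*N FLOPs per token)."""
    best = None
    n = 1e8
    while n < 1e13:
        train_flops = budget - 2 * n * infer_tokens
        if train_flops > 0:
            d = train_flops / (6 * n)
            cand = (loss(n, d), n, d)
            best = cand if best is None else min(best, cand)
        n *= 1.05
    return best

l0, n0, d0 = best_model(BUDGET_FLOPS, 0)             # pretraining-only optimum
l1, n1, d1 = best_model(BUDGET_FLOPS, INFER_TOKENS)  # joint optimum
print(f"ignoring inference:   N={n0:.2e} params, D={d0:.2e} tokens, loss={l0:.3f}")
print(f"accounting inference: N={n1:.2e} params, D={d1:.2e} tokens, loss={l1:.3f}")
```

Run both ways, the joint optimum lands on a smaller model trained on more tokens than the pretraining-only optimum, which is the direction of the shift Brockman describes.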