OpenAI Co-Founder Says $110 Billion Still Can't Meet Compute Demand, Pre-Training Shifts to Joint Optimization with Inference Cost
According to monitoring by 1M AI News, OpenAI co-founder Greg Brockman discussed in an interview the leap in AI programming capabilities seen by December 2025. He has used the same test prompt for years to measure progress: asking the AI to build a website that took him months to complete when he was first learning to program. For most of 2025, the task still required multiple prompts and about four hours; by December, it could be completed with a single prompt, and to a high standard. He said the new model let AI jump from being able to complete roughly 20% of tasks to roughly 80%, a shift that forces everyone to "reorganize workflows around AI."

On the allocation of the $110 billion in funding, Brockman compared buying computing power to hiring salespeople: as long as the product has a scalable sales channel, hiring more salespeople generates more revenue, so computing power is a revenue center rather than a cost center. He recalled a conversation with his team on the eve of ChatGPT's release: "They asked, 'How much computing power should we buy?' I said, 'All of it.' They replied, 'No, no, no, seriously, how much should we buy?' I said, 'No matter how we build, we won't keep up with demand.'" He believes that judgment still holds today, and that compute procurement needs to be locked in 18 to 24 months in advance.

On how this computing power will be used, Brockman revealed that OpenAI is no longer purely pursuing the largest possible pre-training scale but is instead jointly optimizing pre-training capability and inference cost: "You don't necessarily have to make it as large as possible, because you also need to consider the numerous downstream inference use cases; what you really want is the optimum of intelligence multiplied by cost." He nonetheless firmly rejected the notion that pre-training no longer matters, arguing that the smarter the foundation model, the more efficient the subsequent reinforcement learning and inference stages, and that there is still an "absolute need" for Nvidia GPUs to support large-scale centralized training.