I've noticed an interesting pattern over the past few weeks. While AI company stocks on Nasdaq remain relatively stable, a significant recalibration is happening beneath the surface. DeepSeek R1 uses Nvidia Blackwell for training, creating a curious paradox: the cutting-edge chip is being used to demonstrate that advanced chips can be much cheaper than investors think.
It all starts with a simple question of cost. DeepSeek has shown that competitive AI models can be built at significantly lower cost. This directly threatens the investment thesis that only hyperscale companies with huge budgets can play this game. If models become cheaper, the entire AI supply chain must rethink its revenue expectations.
For chip manufacturers, this means potential pressure. Nvidia, AMD, and Broadcom may face questions about whether the most expensive accelerators are really necessary. If DeepSeek uses Blackwell more efficiently than expected, it could slow demand for premium hardware and delay procurement timelines. There is a nuance, though: migration from H100 to Blackwell might support demand in the short term, even as margins come under pressure.
What’s truly interesting: if training and inference become cheaper, capital expenditures might shift from GPUs toward software, orchestration, and scaling. Microsoft, Alphabet, and Meta could benefit while chipmakers see their valuations reassessed. This isn’t a disaster for hardware but rather a shift from a “more chips” model to a “smarter use of chips” approach.
What to watch next: capital expenditure guidance for AI in 2026–2027 will be key. The mix of Blackwell versus older H100/H200 deployments will show how serious hyperscale companies are about optimization. Regulatory dynamics matter too: if export controls tighten, the entire demand landscape for American chips could change.
For now, this looks like a correction, not a crash. The market is rethinking but not rejecting AI. The question is which companies will win in this new cost model.