Google’s TurboQuant breakthrough is rattling memory chip stocks
Shares of memory hardware producers fell this week after Alphabet announced a technology designed to drastically lower the working-memory requirements of artificial intelligence models.
In South Korea, Samsung dropped nearly 5 percent and SK Hynix lost 6 percent. Kioxia, a Japanese flash-storage manufacturer, declined almost 6 percent. In Wednesday’s U.S. session, shares of Sandisk and Micron also moved lower.
Google Research published the technology on March 24. The algorithm compresses the key-value cache, the memory region that stores the results of earlier computations so they do not have to be redone for each new token, without degrading model accuracy. According to the researchers, performance on tasks such as code generation, question answering, and text summarization remained fully intact even as the cache storage shrank by a factor of at least six.
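The article does not describe how TurboQuant itself works, so the following is only a generic illustration of the underlying idea: quantizing a key-value cache to fewer bits per value to shrink inference memory. This sketch uses a simple per-token symmetric 8-bit scheme (roughly 4x smaller than float32; the reported method achieves at least 6x, presumably with more aggressive techniques). All names and shapes here are hypothetical.

```python
import numpy as np

def quantize_kv(cache: np.ndarray):
    """Per-token symmetric int8 quantization of a KV-cache slice.

    cache: float32 array of shape (num_tokens, head_dim).
    Returns int8 values plus one float16 scale per token.
    """
    scale = np.abs(cache).max(axis=-1, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)  # guard against all-zero rows
    q = np.clip(np.round(cache / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate float32 cache from the quantized form."""
    return q.astype(np.float32) * scale.astype(np.float32)

# Hypothetical cache: 1024 cached tokens, head dimension 128.
rng = np.random.default_rng(0)
kv = rng.standard_normal((1024, 128)).astype(np.float32)

q, scale = quantize_kv(kv)
recon = dequantize_kv(q, scale)

orig_bytes = kv.nbytes
comp_bytes = q.nbytes + scale.nbytes
print(f"compression: {orig_bytes / comp_bytes:.1f}x")
print(f"max abs error: {np.abs(kv - recon).max():.4f}")
```

The design point this illustrates is the one the article hinges on: compression happens only to cached activations at inference time, which is why it does nothing for the memory demands of training.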
Comparisons quickly emerged between this development and the industry-wide shockwaves caused last year by DeepSeek, a China-based AI firm. Posting on the social media platform X, Cloudflare CEO Matthew Prince likened the new algorithm to “Google’s DeepSeek,” adding that the industry still has vast room to improve “speed, memory usage, power consumption, and multi-tenant utilization” in artificial intelligence inference.
Analysts cautioned against reading too much into the sell-off. Addressing CNBC, SemiAnalysis researcher Ray Wang pointed out that alleviating technical constraints frequently paves the way for advanced models that ultimately demand increased hardware support. “When the model becomes more powerful, you require better hardware to support it,” he said.
The recent drop in share prices is likely the result of shareholders cashing out after a period of sustained growth in a cyclical market, Quilter Cheviot technology research lead Ben Barringer explained to CNBC. TurboQuant “added to the pressure, but this is evolutionary, not revolutionary,” he said. “It does not alter the industry’s long-term demand picture.”
The algorithm has limits. A TechCrunch analysis noted that it offers no relief for the enormous memory required to train AI models, since it compresses data only at the inference stage. For now, the compression tool is a research prototype and has not been widely deployed.
An analysis published by Forbes theorized that decreasing hardware barriers might actually accelerate localized artificial intelligence projects, a shift that could paradoxically drive up total long-term chip consumption.
Details of the algorithm are slated for a formal presentation at the upcoming ICLR 2026 event in April.