Storage chip stocks rise against the trend; institutions see a buying opportunity
A Google paper on a new algorithm has severely impacted storage chip concept stocks!
On Friday, amidst a significant downturn in major U.S. stock indices, storage chip concept stocks rose against the trend. During the session, SanDisk briefly rose more than 5% and Micron Technology more than 3%. By the close, SanDisk was up 2.10%, Micron Technology 0.50%, Seagate Technology 0.34%, and Western Digital 0.73%. The day before, these stocks had faced a major sell-off: by Thursday's close, SanDisk had plummeted over 11%, Seagate Technology had fallen over 8%, Western Digital had dropped over 7%, and Micron Technology was down nearly 7%.
Some analysts commented that the significant drop in storage chip stocks on Thursday might have been due to a market misinterpretation. The ultra-efficient AI memory compression algorithm TurboQuant mentioned in Google’s paper only affects the key-value cache during the inference stage and does not impact the high-bandwidth memory (HBM) used for model weights, nor is it related to AI training tasks.
Another analyst stated that advanced compression technology merely reduces bottlenecks and will not destroy the demand for DRAM/flash memory. Investors may have taken profits on Google's announcement, but memory demand remains very strong. The short-term pullback in memory stocks is an "entry opportunity" rather than a turning point in stock prices.
Storage chip stocks are impacted by Google’s new algorithm.
AI market “ghost stories” are back; Google has publicly released research on a new algorithm that can significantly reduce memory usage, leading to a heavy decline in storage chip stocks recently.
On Thursday, SanDisk fell over 11%, Micron Technology dropped nearly 7%, SK Hynix decreased over 6%, Samsung Electronics fell nearly 5%, and Kioxia declined nearly 6%. Estimates suggest that the market capitalization of major global memory manufacturers evaporated by over $90 billion in a single day on Thursday. On Friday, in the U.S. stock market, storage chip concept stocks rose against the trend, with SanDisk up over 2% and Micron Technology rising 0.50%.
In the past few months, storage chip companies have performed strongly as surging investment in AI infrastructure led to supply shortages, sending chip prices soaring and profits growing. As of this Wednesday, the shares of SK Hynix and Samsung Electronics had surged more than 50% this year, while Kioxia's stock price had more than doubled.
The trigger for this decline is the paper "TurboQuant," which Google's research team will formally present at the International Conference on Learning Representations (ICLR 2026). The Google team claims that two techniques, PolarQuant (polar-coordinate quantization) and QJL (quantized JL transform), compress the KV cache to 3-bit precision with effectively "zero loss," cutting memory usage by at least six times. Compared with unquantized key-value pairs, the algorithm also delivered up to an eightfold performance improvement on H100 GPU accelerators.
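To make the idea concrete, the sketch below shows generic low-bit KV-cache quantization: each cached vector is mapped to 3-bit integer codes plus a per-row float scale and offset. This is a plain uniform quantizer for illustration only; it is not Google's PolarQuant or QJL method, and all shapes and names are hypothetical.

```python
# Minimal sketch of low-bit KV-cache quantization (illustration only;
# NOT Google's TurboQuant/PolarQuant/QJL implementation).
import numpy as np

def quantize_kv(kv: np.ndarray, bits: int = 3):
    """Uniformly quantize each row of a KV-cache tensor to `bits`-bit codes,
    keeping one float (scale, offset) pair per row for dequantization."""
    lo = kv.min(axis=-1, keepdims=True)
    hi = kv.max(axis=-1, keepdims=True)
    levels = (1 << bits) - 1                      # 3 bits -> 7 levels above zero
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0)
    q = np.round((kv - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_kv(q, scale, lo):
    # Reconstruction error is bounded by scale / 2 per element.
    return q.astype(np.float32) * scale + lo

# Hypothetical cache: 4 cached token rows with head dimension 128.
kv = np.random.randn(4, 128).astype(np.float32)
q, scale, lo = quantize_kv(kv, bits=3)
approx = dequantize_kv(q, scale, lo)
# Going from 16-bit floats to 3-bit codes is roughly a 16/3 ≈ 5.3x raw
# reduction before packing; the (scale, offset) overhead is amortized over
# each 128-element row, which is how "sixfold or more" claims are framed.
```

The per-row scale and offset is the standard trick that keeps such aggressive quantization usable: the integer codes carry the shape of the vector, while the two floats restore its range.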
Google promoted this research on the X platform this week, despite the fact that the research was initially published last year. Investors may be concerned that this will reduce the demand for memory from hyperscale data center operators, thereby lowering the prices of components used in smartphones and consumer electronics.
Institutions: The market may have misread the news.
Morgan Stanley stated in its latest research report that the market may have misread the situation. The technology only affects the key-value cache during the inference stage and does not impact the high-bandwidth memory (HBM) used for model weights, nor is it related to AI training tasks. Analysts emphasized that the so-called “sixfold compression” does not indicate a reduction in total storage demand, but rather an increase in throughput per GPU due to efficiency improvements.
Morgan Stanley analyst Shawn Kim pointed out that Google’s research should be viewed as having a more positive impact on the industry since it addresses a key bottleneck. This technology improves the efficiency of the so-called key-value cache used for inference (i.e., running AI models). He wrote, “If models can run with significantly reduced memory requirements without sacrificing performance, then the service cost per query will drop significantly, making AI deployment more profitable.” Kim stated that TurboQuant is a positive development for hyperscale enterprises considering the investment return opportunities. In the long run, this could also benefit memory manufacturers, as “lower per-token costs could lead to higher product adoption demand.”
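The back-of-envelope arithmetic below illustrates the analysts' point that compression raises per-GPU throughput rather than shrinking the memory a GPU ships with. Every number here (layer count, head count, HBM budget, context length) is a hypothetical round figure, not taken from the report.

```python
# Illustration of the throughput-vs-demand argument: with a fixed HBM budget,
# a smaller per-token KV cache lets one GPU serve more concurrent sequences.
# All model and hardware numbers below are hypothetical.

def kv_bytes_per_token(layers: int, heads: int, head_dim: int, bits: int) -> float:
    # One key vector and one value vector per layer per head.
    return 2 * layers * heads * head_dim * bits / 8

hbm_budget = 40e9   # hypothetical 40 GB of HBM reserved for the KV cache
ctx = 8192          # tokens of context per sequence

per_tok_fp16 = kv_bytes_per_token(layers=32, heads=32, head_dim=128, bits=16)
per_tok_3bit = kv_bytes_per_token(layers=32, heads=32, head_dim=128, bits=3)

seqs_fp16 = int(hbm_budget // (per_tok_fp16 * ctx))   # sequences at 16-bit
seqs_3bit = int(hbm_budget // (per_tok_3bit * ctx))   # sequences at 3-bit

# The same GPU now serves roughly 5x more concurrent sequences. Operators
# typically spend that headroom on more traffic or longer contexts, not on
# buying less memory, which is the Jevons-style point the analysts make.
```

The sketch makes the report's distinction explicit: "sixfold compression" changes how much work fits in the memory already installed, not how much memory hyperscalers order.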
Morgan Stanley referenced the “Jevons Paradox” in economics to explain the long-term impact: while technological efficiency improvements lower unit costs, they often lead to an expansion in overall demand due to reduced usage barriers.
KC Rajkumar, an analyst at Lynx Equity Strategies, noted that some media reports exaggerate the situation. Current inference models widely use 4-bit quantized data, and Google’s so-called “eightfold performance improvement” is based on comparisons with outdated 32-bit models. “However, due to extreme supply constraints, this will hardly reduce the demand for memory and flash in the next three to five years,” Rajkumar wrote, emphasizing that advanced compression technology merely reduces bottlenecks and will not destroy the demand for DRAM/flash memory.
Wells Fargo analyst Andrew Rocha pointed out that the existence of compression algorithms has never fundamentally changed the overall scale of hardware procurement. By significantly lowering the service cost per query, such technologies allow models that could only run on expensive cloud clusters to migrate locally, effectively lowering the barriers to AI scaling deployment.
Four hyperscale enterprises, led by Amazon and Google, plan to invest approximately $650 billion this year in building data centers and purchasing NVIDIA’s AI accelerators and related storage chips. SK Group Chairman Chey Tae-won recently stated that the tight supply of storage chips will continue until 2030.
From a supply chain perspective, DRAM demand for servers is expected to grow by 39% in 2026, and HBM demand is projected to increase by 58% annually. The optimization effects of TurboQuant may be overshadowed by the wave of industry growth.
Jordan Klein, an expert at Mizuho Securities, believes that the current pullback in memory stocks is more of an “entry opportunity” rather than a turning point in stock prices. In a report, Klein wrote that after a strong rise in late 2025 and early 2026, the bulls in memory stocks have started to waver. Although the memory industry is known for its dramatic cyclical fluctuations, he emphasized that the recent sell-off follows a familiar pattern.
Mizuho noted that such sell-offs occur every few months and are neither a signal of reaching a peak nor a reason for selling. In fact, buying on dips can yield profits.