Anthropic has developed the most powerful AI model in history but is hesitant to release it...
Original | Odaily Planet Daily (@OdailyChina)
Author | Azuma (@azuma_eth)
On April 8, Anthropic, the AI company behind Claude, officially announced a new initiative called “Project Glasswing,” to be advanced jointly with leading tech companies including Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Anthropic described the move as urgent, aimed at protecting the world’s most critical software: the participants will jointly use the Mythos Preview version to discover and fix potential flaws in the systems the world now depends on.
Mythos is the next-generation AI model Anthropic is developing. It is billed as the first model in history to break through the trillion-parameter level (for comparison, today’s mainstream models range from hundreds of billions to around one trillion parameters), and its training cost reached a staggering $10 billion. Compared with Claude’s current flagship model, Opus 4.6, Mythos scores far higher on tests of software coding, academic reasoning, and cybersecurity.
Rumors about Mythos had already been circulating in the market last week. At the time, the industry’s main concerns were: will Mythos, which specializes in cybersecurity, reshape the current balance of security offense and defense? And if it is maliciously misused, could it lead to larger-scale security incidents? Odaily reported on this at the time and discussed the potential impact on crypto-industry security with experts in the field, including Cosine (余弦), the founder of SlowMist (see “Odaily Exclusive Interview with Cosine: Anthropic’s Nuke-Level New Model Leak: How Will It Affect Crypto Security Offense and Defense?”). But since Anthropic had not yet publicly acknowledged Mythos’s existence, the available information remained limited.
On April 8, alongside the unveiling of the “Glasswing” plan, Anthropic disclosed more details about Mythos. Judging from the real-world case studies it has published, the company has not exaggerated Mythos’s capabilities; in fact, it did not dare to release the model publicly for fear it could be maliciously exploited by hacker groups. Instead, it plans to let leading companies trial and stress-test it through the “Glasswing” plan first, so potential vulnerabilities can be patched in advance.
Mythos flexes its muscles: thousands of “zero-day vulnerabilities” unearthed in just a few weeks
Speaking about Mythos’s strength, Anthropic stated plainly that the model’s creation marks the arrival of a harsh reality: AI models’ coding ability has reached an extremely high level, and in finding and exploiting software vulnerabilities they can now surpass all but the most skilled humans.
According to Anthropic, within just a few weeks it used Mythos to identify thousands of zero-day vulnerabilities (flaws previously unknown even to the software’s own developers), many of them high-severity. The problems span every mainstream operating system and browser, and also affect a range of other critical software.
Anthropic provided several representative cases.
Even more worrying, Anthropic said that for most of these vulnerabilities, Mythos discovered the flaw and constructed the exploitation path autonomously, with almost no human intervention. This may mean AI has begun to possess automated offense-and-defense capabilities comparable to those of top-tier hacker teams.
On evaluation benchmarks, Mythos shows a generational leap over Opus 4.6. In cybersecurity vulnerability-reproduction tests, for example, Mythos scored 83.1% versus Opus 4.6’s 66.6%; it also leads clearly across multiple coding and reasoning tests.
Perhaps because Mythos’s capabilities are simply too powerful, Anthropic chose not to open the model directly. Instead, it rolled out the “Glasswing” plan first, so the broader internet can be “hardened” in advance.
Under this plan, Anthropic will give participants early access to the Mythos Preview version to discover and fix vulnerabilities or weaknesses in their underlying systems, focusing in particular on tasks such as local vulnerability detection, black-box testing of binary programs, endpoint security hardening, and system penetration testing.
Anthropic has also promised participants a total of $100 million in model-usage credits to cover the research preview phase. After that, the Mythos Preview version will be available to participants at $25 per million input tokens and $125 per million output tokens (participants can also access the model via the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry). Beyond the usage credits, Anthropic will donate $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation, to help open-source maintainers cope with an ever-shifting security environment.
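The stated per-token prices translate directly into per-request cost. The sketch below is only illustrative: the prices are those quoted above, but the `preview_cost` helper and the token counts in the example are hypothetical, not part of any real API.

```python
# Illustrative cost arithmetic at the Mythos Preview prices quoted above:
# $25 per 1M input tokens, $125 per 1M output tokens.
# `preview_cost` and the token counts below are hypothetical examples.

INPUT_PRICE_PER_M = 25.0    # USD per million input tokens
OUTPUT_PRICE_PER_M = 125.0  # USD per million output tokens

def preview_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the preview prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a vulnerability-analysis run that reads 200k tokens of source
# code and emits a 20k-token report.
cost = preview_cost(200_000, 20_000)
print(f"${cost:.2f}")  # 200k * $25/M + 20k * $125/M = $5.00 + $2.50 = $7.50
```

At these rates, the $100 million in credits would cover a very large volume of usage; the asymmetry between input and output pricing also means long-report workloads (like audit write-ups) cost disproportionately more than code-reading ones.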
Anthropic plans to gradually expand participation in “Glasswing,” continue the effort for months to come, and share its experience as widely as possible so that other organizations can apply it to their own security programs. Within 90 days, Anthropic will publish interim reports on the results, including the vulnerabilities that have been fixed and any security improvements that can be disclosed.
Technology will only keep upgrading, but there’s no need to worry too much
AI is irreversibly changing the world as we know it, including the cybersecurity field this article focuses on. With the barriers to discovering and exploiting vulnerabilities dropping dramatically, people can’t help but worry: will AI become a weapon in the hands of malicious actors, threatening the current balance in cybersecurity? (PS: For crypto users who must entrust real money to wallet systems and on-chain protocols, this concern is especially acute.)
In response, Anthropic believes “we still have reasons to stay optimistic.” AI models are dangerous precisely because they can cause harm in the hands of bad actors; but by the same token, AI has immeasurable value in finding and fixing critical software defects and in developing safer new software.
It is reasonable to expect AI capabilities to keep evolving rapidly in the coming years. But as new attack methods emerge, new defense mechanisms will appear in parallel. Technological upgrades are inevitable, yet that does not mean risk must spiral out of control, so long as defenses evolve in step and can even leverage AI to build a stronger cybersecurity moat.