Gate News message: On April 12, Moore Threads announced that its AI training and inference integrated GPU MTT S5000 has completed Day-0 adaptation for the MiniMax M2.7 large model. This adaptation demonstrates the technical support capability of domestic GPUs for AI large models.
Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to the Disclaimer.
Related Articles
Protum Raises $2 Million Seed Round for AI Governance Platform, Targeting June 2026 Close
According to TechCrunch Startup Spotlight, Protum, an AI governance startup, is raising a $2 million seed round aimed at closing by June 2026. Founded by Sandeep J., who brings 25 years of enterprise transformation experience, Protum provides a platform designed to give enterprises continuous
GateNews · 24m ago
A study suggests the wave of corporate layoffs could be a lose-lose for both labor and management, and proposes an AI automation tax
Research points to the externalities of AI layoffs: layoff costs are borne exclusively by companies, but the loss in consumer purchasing power is shouldered by the entire market. The more layoffs occur, the more demand shrinks, leaving both sides worse off. The study proposes imposing an AI automation tax to internalize these external costs, and using tax revenues to fund retraining, restoring demand and stabilizing the economy.
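The internalization argument can be made concrete with toy numbers (illustrative only, not figures from the study): a Pigouvian-style tax set equal to the external demand loss forces the firm to weigh the full social cost of a layoff, and the revenue funds retraining.

```python
# Toy numbers, purely illustrative: internalizing the externality of
# an AI-driven layoff with a tax equal to the external demand loss.
wage_savings = 100_000   # what the firm saves per laid-off worker
demand_loss = 60_000     # purchasing power the wider market loses

# Without the tax, the firm sees only its private gain.
private_gain_untaxed = wage_savings

# With a tax equal to the external cost, the firm faces the full
# social cost of the layoff when deciding whether to automate.
automation_tax = demand_loss
private_gain_taxed = wage_savings - automation_tax

# Revenue is recycled into retraining, as the study proposes.
retraining_fund = automation_tax

print(private_gain_taxed, retraining_fund)  # 40000 60000
```

Automation still proceeds when its private gain exceeds the social cost; the tax only screens out layoffs whose savings are smaller than the demand they destroy.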
ChainNews Abmedia · 28m ago
Claude will charge a language tax? Research reveals that translating Chinese, Japanese, and Korean content consumes the most tokens, nearly three times more
Researcher Komatsuzaki said on X that mainstream LLM tokenizers impose a non-English "language tax." In tests translating "The Bitter Lesson," Claude's token-count increases for Hindi, Arabic, Russian, and Chinese were approximately 3.24×, 2.86×, 2.04×, and 1.71× respectively, clearly higher than OpenAI's. Chinese local models handle Chinese more efficiently, indicating that English-skewed training data creates cost inequality and has become a barrier to adoption.
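One mechanism behind the language tax can be shown with a stdlib-only sketch: byte-level BPE tokenizers whose merges were learned mostly from English fall back to more, shorter units on non-Latin scripts, and UTF-8 already encodes most Chinese, Hindi, or Arabic characters in 2-3 bytes versus 1 for ASCII. The sample strings below are illustrative renderings, not the exact test texts from the research.

```python
# Illustrative sketch: why non-Latin scripts are "heavier" for a
# byte-level tokenizer. ASCII is 1 byte per character in UTF-8,
# while most Chinese and Devanagari characters take 3 bytes, so an
# English-skewed tokenizer sees more raw units to merge.

def bytes_per_char(text: str) -> float:
    """Average UTF-8 bytes per character: a rough lower bound on how
    many byte-level units a tokenizer must cover per character."""
    return len(text.encode("utf-8")) / len(text)

samples = {
    "English": "The Bitter Lesson",
    "Chinese": "惨痛的教训",   # illustrative rendering
    "Hindi":   "कड़वा सबक",    # illustrative rendering
}

for label, s in samples.items():
    print(f"{label}: {bytes_per_char(s):.2f} bytes/char")
```

This understates the real gap: the measured 1.7-3.2× token multipliers also reflect vocabulary coverage, since tokenizers trained mostly on English merge English bytes into long tokens but leave other scripts fragmented.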
ChainNews Abmedia · 31m ago
Microsoft AI Business Doubles to $370B ARR; Plans $190B Capex for 2026
On April 29, Microsoft reported third-quarter fiscal 2026 results for the period ended March 31, beating market expectations. Q3 revenue reached $82.886 billion, up 18% year-over-year and above the expected $81.4 billion; GAAP net profit grew 23% to $31.778 billion; non-GAAP diluted earnings per
GateNews · 44m ago
OpenAI DevDay 2026 will be held in San Francisco on September 29
OpenAI announced on April 29 that its flagship developer conference, DevDay 2026, will be held on September 29 in San Francisco, returning to an in-person format after many years. It also announced a submission campaign in which developers create works with GPT-5.5 and Image Gen and submit them; each week, Codex will select 2-3 creative submissions, and the submitters will receive free DevDay tickets (including cross-city airfare and hotel expenses).
Conference theme: building a developer ecosystem around GPT-5.5 + Image Gen
The core application stack for this DevDay is clearly centered on GPT-5.5. GPT-5.5 launched on April 23 and its API was fully opened on April 24; as of late April, GPT-5.4
ChainNews Abmedia · 1h ago
BioMysteryBench: Mythos solves 29.6% of questions human experts couldn't
Anthropic published BioMysteryBench, an evaluation benchmark for AI bioinformatics analysis capabilities, in an official research announcement on April 29. It consists of open-ended questions drawn from real research scenarios. The most notable result: among the questions that a panel of human experts still could not solve, Anthropic's flagship model Mythos solved 29.6%, while Opus 4.7 solved 27.0%.
Benchmark design: two tracks—questions solvable and unsolvable by experts
BioMysteryBench is made up of two types of questions. The first is "solvable questions": analysis tasks designed by bioinformatics researchers, with standard answers for comparison. The second is "expert-unsolvable questions": questions that human expert panels tried but still could not find a credible solution for, used to test whether models can go beyond the boundaries of current domain knowledge.
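The two-track design described above can be sketched as follows. All names here are assumptions for illustration, not Anthropic's actual schema: solvable items carry a reference answer and can be graded automatically, while expert-unsolvable items have no ground truth and must be routed to human review.

```python
# Minimal sketch of a two-track benchmark item (names are
# hypothetical, not Anthropic's API). Solvable-track items are
# checked against a reference answer; expert-unsolvable items are
# flagged for human adjudication instead.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchItem:
    question: str
    track: str                        # "solvable" or "expert_unsolvable"
    reference: Optional[str] = None   # present only on the solvable track

def grade(item: BenchItem, model_answer: str) -> str:
    if item.track == "solvable":
        # Automatic comparison against the reference answer.
        return "correct" if model_answer.strip() == item.reference else "incorrect"
    # No ground truth exists: route to expert review.
    return "needs_expert_review"

items = [
    BenchItem("Which gene drives expression cluster 3?", "solvable", "TP53"),
    BenchItem("Why does sample 7 show bimodal coverage?", "expert_unsolvable"),
]
print(grade(items[0], "TP53"))   # correct
print(grade(items[1], "..."))    # needs_expert_review
```

The split matters for scoring: a headline number like "29.6% of expert-unsolvable questions" cannot come from automatic grading alone, since by construction those items lack reference answers.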
ChainNews Abmedia · 1h ago