[Trend Spotlight] Asset Management AI Investment Research Model Needs to Break Out of the “Black Box Trap”
(Original title: [Times Windfall] Asset Management AI Research and Investment Mode Needs to Escape the “Black Box Trap”)
Jiang Guangxiang
Starting in 2026, AI agents have advanced by leaps and bounds, led by the “crayfish” (the nickname of the AI agent OpenClaw). Anyone who hasn’t “raised” one feels the anxiety of falling behind, and public funds and the rest of the asset management industry are no exception. The core value of AI agents such as OpenClaw lies in covering the “last mile” from massive data to practical investment research applications, letting practitioners experience first-hand a dramatic improvement in work efficiency. In core investment research roles, for example, AI agents can capture data, clean information, extract factors, and generate reports around the clock, freeing analysts from heavy repetitive labor so they can focus on higher-level strategic thinking.
However, anyone who has worked through DeepSeek, Yuanbao, Doubao, and now the “crayfish” will vividly remember the jolt of watching an AI model “talk nonsense with a straight face.” In asset management, where firms are entrusted to manage money on clients’ behalf, such nonsense can translate directly into real monetary losses for investors. The darkly humorous part is that the boundary of legal responsibility between today’s AI tools and their users remains a gray area. If a human or institutional investment advisor gives poor recommendations, investors still have somewhere to press their case; if investors are “hurt” by an AI tool they downloaded themselves, it seems they can only swallow the loss in silence. And compared with nonsense that people can spot immediately, investment research conclusions that sound plausible and logically self-consistent, but whose errors take time to verify, do far more harm. The “black box risk” that industry insiders are especially wary of is a representative example, and it is widely regarded as the most fundamental risk of today’s AI models.
To put it plainly, most of today’s advanced AI models, deep learning models in particular, still operate on “unexplainable” logic. We know what data goes in and what results come out, but we know almost nothing about how the model reasons its way to a conclusion in between. In investment research, this “black box” characteristic can translate into fatal risk. When these models “learn” from massive amounts of internet text and data, they inevitably inherit the cognitive biases, market noise, and outright errors embedded in that material. Many of the “mysterious factors” they unearth are mere statistical coincidences, yet they create the false impression of a “holy grail” discovered. Unfortunately, when AI packages such material into an investment analysis report or recommendation, investors without professional training find the fallacies hard to spot, and even an experienced investment manager who relies on such recommendations blindly can be led into disastrous decisions.
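To see why screened-for factors so often turn out to be coincidences, consider a minimal Python sketch (my own illustration, not the column’s; all sizes and parameters are assumptions). It tests a thousand purely random candidate “factors” against purely random returns: the best in-sample correlation looks like a discovery, then evaporates on unseen data.

```python
# Minimal sketch of "factor mining" as multiple testing: every series here is
# pure noise, so no factor can genuinely predict returns. All names, sizes,
# and parameters are illustrative assumptions, not the column's methodology.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_factors = 500, 1000
split = n_days // 2                              # first half in-sample, rest out

returns = rng.normal(size=n_days)                # random "returns"
factors = rng.normal(size=(n_factors, n_days))   # random candidate "factors"

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# "Discover" the factor with the strongest in-sample correlation.
in_sample = np.array([corr(f[:split], returns[:split]) for f in factors])
best = int(np.argmax(np.abs(in_sample)))
print(f"factor #{best} in-sample corr:  {in_sample[best]:+.3f}")   # looks real

# The same factor on unseen data collapses back to noise.
print(f"factor #{best} out-of-sample:   "
      f"{corr(factors[best, split:], returns[split:]):+.3f}")
```

With 250 in-sample days and 1,000 candidates, the winning correlation typically lands above 0.2 while the out-of-sample value hovers near zero: exactly the “holy grail” illusion described above.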
A deeper challenge for AI models comes from the complexity of financial markets themselves. The market is not a static laboratory; it is a complex adaptive system in which participants’ behaviors interact and continuously evolve. The twist is that the historical data used to train an AI model already embeds the past behavior of every market participant. Once the model starts trading on the patterns it has found, its own trades become new data for the market, influencing and changing the market’s future trajectory. This forms a self-referential feedback loop. Even setting aside whether a model has been “poisoned,” this adaptive dynamic leads to a harsh reality: any effective pattern that AI can quickly mine from public data has an extremely short life as a source of excess returns. Without proprietary insight, or a logic that runs deeper than the market’s common understanding, expecting everyone to get rich together by leaning on AI tools is a pipe dream.
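A stylized simulation (again my own illustration; the mean-reverting “mispricing” process and the linear price impact are assumptions) shows the consequence of that feedback loop: a pattern that is profitable when traded at small size loses money once the strategy’s own orders start moving the prices it trades at.

```python
# Stylized sketch of alpha decay through self-impact: a mean-reversion edge
# that pays at small size turns negative once the strategy's own orders move
# its fill prices. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, rho, sigma = 100_000, 0.9, 1.0            # AR(1) "mispricing" process

def avg_pnl(size, impact):
    p, pnl = 0.0, 0.0
    for _ in range(T):
        trade = -size * p                    # buy when cheap, sell when rich
        fill = p + impact * trade            # our own order shifts the fill
        p_next = rho * p + sigma * rng.normal()   # mispricing reverts anyway
        pnl += trade * (p_next - fill)       # profit on this round of trading
        p = p_next
    return pnl / T

for size in (0.1, 1.0, 5.0):
    print(f"size {size:>4}: no impact {avg_pnl(size, 0.0):+8.3f}   "
          f"with impact {avg_pnl(size, 0.2):+8.3f}")
```

In this toy model the expected profit per step is proportional to size × (1 − rho − impact × size), so scaling up past a point flips the sign of returns: the pattern is arbitraged away in part by the very strategy mining it.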
The deep integration of AI and investment research is irreversible. For the asset management industry, though, the key to easing the anxiety is not raising a few “crayfish” but building a new ecosystem that balances efficiency with risk and deeply integrates human and machine capabilities. For now, top-tier financial institutions and financial regulators alike take a cautious stance toward installing open-source AI agents such as OpenClaw on company devices and internal networks. For institutions managing hundreds of billions or even trillions in assets, an uncontrollable “black box” tool is a threat that no risk control system can tolerate.
For the asset management industry, the question is no longer whether to use AI, but who can integrate AI more deeply and more effectively with research, data, engineering, and risk control. Whether it is today’s “crayfish” or whatever new AI species comes next, the top priority is to keep core judgments firmly in human hands: humans must remain the “commander” of the AI strategy group and the hand on the “risk switch.”
This column represents the author’s personal views only.