From Doubao Controversy to Big Tech Battles: Decoding the Legal and Compliance Dilemmas of AI Phones
Author: Mankiw
Introduction: A Systemic Conflict Triggered by “Proxy Operations”
Recently, a subtle user experience issue has sparked heightened tension between the AI industry and internet platforms: some smartphones equipped with AI assistants, when attempting to automatically complete tasks such as sending WeChat red envelopes or placing e-commerce orders via voice commands, are being flagged by platform risk-control systems as “suspected use of third-party plugins,” triggering risk alerts and even account restrictions.
On the surface, this appears to be a technical compatibility issue; but in the broader industry context, it essentially unveils a structural conflict over “who has the right to operate the phone and who controls user access.”
One side comprises smartphone manufacturers and large model teams aiming to deeply embed AI into operating systems to achieve “seamless interaction”; the other side consists of internet platforms that rely long-term on app entry points, user pathways, and data closed loops to build their business ecosystems.
When the “universal assistant” begins to act on users’ behalf, is it merely an efficiency tool or a rule breaker? This question is now being pushed into the legal arena.
“The Future Has Arrived” or “Risk Warning”: The “Code War” Behind the Smartphone Screen
Recently, users of the latest AI smartphones have experienced a dramatic whiplash: one moment they are marveling at the future, the next they are staring at a warning. Just as they admire the assistant’s convenience, a risk alert arrives from a platform like WeChat.
It all began with ByteDance’s “Doubao” large model and its deep cooperation with certain smartphone manufacturers. Today’s voice assistants are no longer just for weather checks; they are super butlers capable of “seeing the screen and simulating operations.”
Imagine the scenario: you simply tell your phone “Send a red envelope in the Qingfei football group” or “Buy me the most cost-effective new Adidas football shoes,” and the phone opens the relevant apps, compares prices, and pays, all without a single manual tap.
This technology, built on “simulated clicks” and “screen semantic understanding,” is the first to truly let AI take over the phone. But this smoothness quickly hit a wall erected by the internet platforms.
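To make that mechanism concrete, below is a minimal, purely illustrative sketch of how a system-level assistant could drive another app on Android: an accessibility service reads the active window’s node tree (“screen semantic understanding”) and dispatches a click (“simulated clicks”). This is an assumption about the general technique, not any vendor’s actual code; the class name and target text are hypothetical.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Illustrative only: a system assistant exposed as an accessibility service.
class AssistantService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // "Screen semantic understanding": walk the current window's node
        // tree looking for a target control, e.g. a "红包" (red envelope) button.
        val root = rootInActiveWindow ?: return
        val candidates = root.findAccessibilityNodeInfosByText("红包")

        // "Simulated click": dispatch a click on the first clickable match.
        // At the UI layer this is hard to distinguish from a human tap.
        candidates.firstOrNull { it.isClickable }
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onInterrupt() {
        // Required override; nothing to clean up in this sketch.
    }
}
```

Because the resulting tap arrives through the same UI layer as a human touch, the target app cannot easily tell an authorized assistant from a malicious plugin, which is exactly the ambiguity behind the warnings described next.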
Many users find that using Doubao AI to operate WeChat triggers warnings of “suspected use of third-party plugins,” or even account restrictions. E-commerce platforms like Taobao are equally vigilant about such automated access. One blogger compared the AI to a butler running errands for you who gets stopped by mall security: “We don’t serve robots.”
Users are puzzled: Why can’t I use my own phone, with my authorized AI, to do tasks for me?
The platforms’ defense is equally blunt: my ecosystem, my security; external “proxy operations” are not allowed.
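How does a platform app even know that software, rather than a finger, is doing the tapping? One plausible client-side signal, sketched below under stated assumptions (the function name and allowlist are hypothetical, and real risk-control systems combine many more server-side signals such as timing and request patterns), is simply to check which accessibility services are enabled on the device:

```kotlin
import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Hypothetical allowlist of packages the platform treats as legitimate
// assistive tools (e.g. the TalkBack screen reader).
private val KNOWN_SCREEN_READERS = setOf("com.google.android.marvin.talkback")

fun hasSuspiciousAccessibilityService(context: Context): Boolean {
    val manager = context.getSystemService(Context.ACCESSIBILITY_SERVICE)
            as AccessibilityManager
    // Enumerate every enabled accessibility service, regardless of the
    // feedback type it declares.
    val enabled = manager.getEnabledAccessibilityServiceList(
        AccessibilityServiceInfo.FEEDBACK_ALL_MASK
    )
    // Anything outside the allowlist could be an automation agent. Note that
    // this heuristic cannot distinguish a user-authorized assistant from a
    // malicious plugin, which is the heart of the dispute.
    return enabled.any {
        it.resolveInfo.serviceInfo.packageName !in KNOWN_SCREEN_READERS
    }
}
```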
What seems like a minor technical compatibility friction is actually a milestone in China’s internet history—a direct confrontation over “digital sovereignty” between operating systems (OS) and super apps.
A “Dimensionality-Reduction Strike” on Business Logic: When “Walled Gardens” Meet “Wall Breakers”
Why are giants like Tencent and Alibaba reacting so fiercely? The answer lies in the core business model of mobile internet—“walled gardens.”
The business foundation of social, e-commerce, and content platforms depends on exclusive access points and user engagement time. Every click, every browsing step, is crucial for ad monetization and data accumulation. The emergence of “system-level AI assistants” like Doubao directly challenges this model.
This is a profound contest over “entry points” and “data.” AI smartphones threaten the core commercial lifelines of the internet giants in three main ways:
The “Iconless” Crisis:
When users only need to speak, AI can directly complete tasks, potentially bypassing apps altogether. Users no longer need to open apps to browse products or watch ads, significantly weakening the platform’s reliance on ad exposure and the attention economy.
Parasitic Acquisition of Data Assets:
AI operates by “seeing” the screen to read content and execute commands, without going through platform-provided open APIs. This amounts to bypassing traditional cooperation rules and directly accessing the content, products, and data that platforms have invested heavily to build. From the platforms’ perspective, this is free-riding, and the harvested data might even be used to train the AI models themselves.
The “Gatekeeper” of Traffic Distribution Changes Hands:
In the past, the power to distribute traffic was held by super apps. Now, system-level AI is becoming the new “main switch.” When users ask “what to recommend,” the AI’s answer will directly determine the flow of business traffic, potentially reshaping the competitive landscape.
Platform warnings and defenses are therefore not merely technical rejection but a fundamental defense of their business ecosystems. They expose a deep, unresolved contradiction between technological innovation and platform rules.
Preparing for the Storm—An In-Depth Analysis of the Four Legal Risks of AI Smartphones
As legal practitioners, when we look at this conflict between AI smartphones and big tech companies, four unavoidable core legal risks emerge:
1. Competition Boundaries: Technical Neutrality Does Not Equal No Responsibility
The current controversy centers on whether AI operations constitute unfair competition. Under the Anti-Unfair Competition Law, using technical means to obstruct or disrupt the normal operation of network products or services lawfully provided by others may constitute an infringing act.
“Plugin” Risks: In cases such as “Tencent v. 360” and several recent “automatic red-envelope-grabbing plugin” cases, judicial practice has established a principle: unauthorized modification of or interference with another software product’s operating logic, or increasing server load through automation, may constitute unfair competition. If AI’s “simulated clicks” skip ads or bypass interaction verification in ways that affect platform services or business logic, a court may likewise find infringement.
Traffic and Compatibility Issues: If the AI steers users away from the original platform toward services it recommends, this may amount to “traffic hijacking.” Conversely, if a platform bans all AI operations outright, it must be able to justify the ban as a necessary and reasonable self-protection measure.
2. Data Security: Screen Information as Sensitive Personal Data
AI needs to “see” screen content to execute commands, directly touching on the strict regulations of the Personal Information Protection Law.
Sensitive Information Handling: Screen content often includes chat logs, account details, location traces, and other sensitive personal data, which legally requires separate user consent. The “bundled authorization” common on AI smartphones is therefore of questionable validity. If, while executing a ticket-booking command, the AI “sees” and processes private chat messages, it may violate the “minimum necessary” principle (a concrete mitigation is sketched after this list).
Blurred Responsibility: Does data processing happen locally on the phone or in the cloud? If data leaks, how is liability divided between the phone manufacturer and the AI service provider? Current user agreements often leave these questions unanswered, creating compliance risk.
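To illustrate what a “minimum necessary” safeguard could look like in practice, here is a hedged sketch of a client-side redaction pass that masks obvious identifiers in screen-derived text before anything leaves the device. The patterns and function name are hypothetical and far from exhaustive; real compliance would also require consent flows, on-device processing guarantees, and audit logs.

```kotlin
// Hypothetical redaction pass over text extracted from the screen.
// The patterns target common mainland-China formats and are illustrative only.
private val SENSITIVE_PATTERNS = listOf(
    Regex("""1[3-9]\d{9}"""),      // mobile phone numbers
    Regex("""\d{17}[\dXx]"""),     // 18-digit national ID numbers
    Regex("""\d{16,19}""")         // bank card numbers
)

fun redactScreenText(raw: String): String =
    SENSITIVE_PATTERNS.fold(raw) { text, pattern ->
        pattern.replace(text, "[REDACTED]")
    }

// Usage: only the redacted text, never the raw screen dump, would be
// uploaded to a cloud model.
// val payload = redactScreenText(extractedScreenText)
```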
3. Antitrust Disputes: Can Platforms Refuse AI Access?
Future litigation may revolve around “essential facilities” and “refusal to deal.”
AI smartphone manufacturers may argue that WeChat and Taobao have become quasi-public infrastructure, so an unjustified refusal to allow AI access constitutes an abuse of market dominance and hampers technological innovation.
Platforms may counter that data openness must rest on security and property rights: unauthorized AI access that reads data may circumvent technical protection measures and harm the interests of both users and the platform.
4. User Responsibility: Who Pays When AI Makes Mistakes?
As AI shifts from a tool to an “agent,” a series of civil liability issues arise.
Agency Effectiveness: If the AI misinterprets an instruction and buys the wrong product (e.g., fulfilling “a cheap phone” with a counterfeit handset), is that a material misunderstanding or unauthorized agency? Can users demand refunds on the grounds that the order was “not personally placed”?
Account Bans and Losses: If a user’s third-party account is banned because of AI use, the user may seek compensation from the phone manufacturer. The key question is whether such risks were clearly disclosed at the point of sale; insufficient disclosure could invite collective rights-protection actions.
This contest is not only a technological battle but also a legal redefinition of data property rights, platform responsibilities, and user authorization in practice. Both AI vendors and platforms need to find a clear balance between innovation and compliance.
Conclusion: Boundaries of Rights and the Spirit of Contracts
The friction between Doubao and the big tech companies looks on the surface like a product conflict, but it actually reveals a rupture between old and new orders: the app-centric past is being challenged by AI-driven, interconnected experiences.
As legal practitioners, we can see clearly that the existing legal system is increasingly strained by the arrival of general-purpose AI. Neither blanket “bans” nor clever “bypasses” are sustainable. The way forward may lie not in continuing to evade detection with “simulated clicks,” but in promoting standardized AI interaction interface protocols.
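What might such a protocol look like? Purely as a speculative sketch (every name below is hypothetical; no vendor or regulator has proposed this exact interface), a platform could expose declared, auditable capabilities that an assistant invokes with an explicit, scoped consent token, instead of being screen-scraped:

```kotlin
// Speculative shape of a standardized agent interface. All names are
// hypothetical illustrations, not an existing API.
data class AgentIntent(
    val capability: String,               // e.g. "send_red_envelope"
    val parameters: Map<String, String>,  // structured, declared arguments
    val userConsentToken: String          // proof of explicit, scoped consent
)

interface AgentGateway {
    // Capabilities the platform chooses to expose to assistants.
    fun listCapabilities(): List<String>

    // Execute a declared action. Unlike an anonymous simulated tap, the
    // platform can authenticate, audit, and rate-limit the request.
    fun execute(intent: AgentIntent): Result<String>
}
```

Under such a scheme the platform regains control and attribution, the assistant gains legitimacy, and the user’s authorization becomes explicit rather than inferred from screen access.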
In today’s unsettled regulatory environment, we pay tribute to the pioneers who keep exploring at the frontier of AI in the spirit of “technology for good.” At the same time, we must recognize that respecting boundaries often yields more lasting progress than outright subversion.