[The Path of Rule of Law] The Legal Risks Behind "AI Workers" Cannot Be Ignored


(Original headline: [The Path of the Rule of Law] The Legal Risks Behind "AI Office Workers" Should Not Be Taken Lightly)

Liu Shaohua

Recently, a game media company in Shandong drew attention for attempting to turn departing employees into "AI people" who can continue working. A company employee, Xiaoyu, told reporters that the colleague in question had indeed resigned, that the experiment was carried out with the colleague's consent, and that the colleague himself found it rather fun. According to Xiaoyu, before resigning the colleague was an HR specialist, and his digital doppelganger can currently handle simple tasks such as answering inquiries, making outreach calls and bookings, and creating PPTs and spreadsheets.

On the surface, this is a harmless technical experiment: the departing employee "consented" and "found it fun," while the company gains low-cost, high-efficiency "digital labor." But beneath the surface, this seemingly benign attempt touches a gray area of workplace rights and technology ethics in the AI era, and it deserves sober scrutiny.

From a legal perspective, although the company appears to have sidestepped compliance risks by obtaining the individual's consent, that is no reason for complacency. An employee's chat logs, work emails, and personal work habits all fall within the scope of personal information as defined by the Personal Information Protection Law; they are not a company "asset." If a departing employee casually signs away these rights merely because it seems "fun," real risks may follow: because such a "digital doppelganger" can easily be linked back to the person it imitates, the individual could be required to bear joint liability if the doppelganger infringes on others' rights.

In addition, whether this "consent" is truly informed and freely given deserves questioning. In the employment relationship, employees typically occupy the weaker position. At the moment of resignation, might "consent" be swayed by the unspoken rule of "parting on good terms," or by concerns about future references and industry reputation? And where are the boundaries of this consent? Does it cover only the somewhat "awkward" doppelganger in its current form, or also a more "advanced" version that future iterations of the technology might produce, one that simulates his thinking and emotions far more deeply? When a person's work habits, communication style, and even part of their reasoning are converted into data and stored indefinitely, does such "digital immortality" deprive workers of the right to close one chapter and begin another?

When a company turns departing employees into "AI people," it blurs the line between "person" and "tool" and further "commodifies" the laborer. Employees are no longer individuals with unique emotions, creativity, and irreplaceable traits; they become "functional modules" that can be broken down, analyzed, recombined, and reused indefinitely. When a company can easily "distill" an employee's experience and style into AI, the message it sends is cold: the individual is replaceable, and their core value lies in whatever can be turned into data. Over time, the workplace may devolve into a dreary algorithmic assembly line, with workers' agency seriously weakened.

The Interim Measures for the Administration of Information Services for Digital Virtual Humans (hereinafter the "Measures"), currently open for public comment, provide important guidance for regulating such conduct. The Measures emphasize that providing digital-human services requires obtaining the individual's consent and establishing mechanisms such as risk identification and graded, categorized management, with particular protection for special groups such as minors. This is a reminder that even where "consent" has been obtained, the company still bears corresponding management responsibilities to ensure that use of a "digital doppelganger" stays within bounds and is not abused. Otherwise, if a "digital doppelganger" infringes on others' rights or the related data is leaked, not only could the individual be drawn into disputes, but the company would also face significant legal risk.

In the final analysis, technological progress is a double-edged sword, and the hilt should rest in the individual's own hands. Facing the wave of artificial intelligence, workers need to learn to protect their own data rights and interests, for example by proactively signing data-use restriction clauses when resigning. Companies need to strike a balance between pursuing efficiency and respecting human dignity. Regulators, for their part, need to accelerate the improvement of relevant laws and regulations and build a line of defense for personal dignity in the digital age.

This column article represents only the author’s personal views
