
China’s AI Regulation Debate Enters The Agent Era

13.03.2026

China’s annual Two Sessions gathering has long served as a showcase for growth priorities, industrial strategy and technological ambition. This year, however, one theme is becoming harder to ignore: artificial intelligence is increasingly being treated as a governance problem.

That shift matters. For much of the past decade, China’s AI story was framed around scale — who had the most data, the strongest engineering talent, the deepest industrial base and the fastest path to commercialization. Now the debate is changing because the technology itself is changing. The rise of AI agents — systems designed to take actions across apps, devices and services — is forcing policymakers to confront a more complicated set of legal, economic and social questions.

That challenge is especially urgent as Beijing promotes its broader “AI+” strategy to accelerate adoption across the economy. The state wants AI to raise efficiency, strengthen industrial upgrading and support growth. But the more widely these systems are deployed, the more complicated the governance questions become.

The urgency is already visible in the OpenClaw frenzy that has swept China in recent days. In Shenzhen, crowds lined up outside Tencent’s office for help installing the viral open-source agent. Local governments in Shenzhen and Wuxi moved to subsidize OpenClaw-related projects.

Meanwhile, a U.S. federal judge has just issued a preliminary injunction in Amazon v. Perplexity AI, ordering the company to stop its Comet browser AI agent from accessing password-protected Amazon accounts. The court signaled that user permission alone may not be enough for AI agents to operate on third-party platforms, suggesting that both user consent and platform authorization could be required. This “dual authorization” principle may mark the beginning of a new legal framework for agentic AI.


Traditional chatbots mostly stayed within a single interface: they summarized documents or generated text, for instance. AI agents promise something more consequential. They are being built to compare products, organize calendars, book trips, summarize meetings, manage files and interact with multiple applications with limited human supervision.

That may sound like a natural evolution of the digital assistant. But it fundamentally changes the risk profile. Once an AI system can move across apps, access permissions, read screens and trigger actions, it becomes an operational layer between the user and the digital economy.


AI Agents Raise More Difficult Policy Issues Than Chatbots

How much data should an AI assistant be allowed to access? What counts as meaningful user consent when permissions are bundled into complex app ecosystems? If an AI agent makes a purchase, cancels a booking, mishandles sensitive data or gives a flawed recommendation that causes financial harm, who is legally responsible — the developer, the platform, the device maker or the user?

Over the past several years, Chinese regulators have already developed a visible framework for emerging digital technologies. Rules on recommendation algorithms, deep synthesis and generative AI services have established a broader pattern: Beijing generally allows innovation to move forward, but under a clear structure of state supervision, cybersecurity compliance and content-related responsibility.

China’s Governance Approach: Balancing Innovation and Control

That model reflects a familiar balancing act. China sees AI as strategically important for robotics, manufacturing, semiconductors, consumer electronics and long-term productivity growth. It wants domestic firms to compete globally while deploying quickly at home. But powerful digital systems can create risks if commercialization moves too far ahead of governance. The result is a regulatory philosophy that tries to combine rapid adoption with political and institutional control.

That balancing act makes China’s AI debate important beyond China itself. Around the world, governments are still struggling to answer the same question: What exactly should AI regulation regulate? Europe has moved furthest with the EU AI Act, using a risk-based framework. The United States has taken a more fragmented route, relying on executive actions, agency intervention and sector-specific enforcement. China is hardly alone in facing that challenge, but the scale of its digital economy gives the issue unusual weight.

Major Governance Challenges Emerging in China

The first challenge is data governance. AI agents are only as useful as the information they can access. But that same logic raises the risk of overcollection, weak consent and misuse. In the agent era, privacy is less about what data is collected than about what AI systems can infer, combine and act upon.

The second pressure point is market structure. If AI assistants become the main interface through which users discover products, compare services, make bookings or manage daily tasks, then the power dynamics of China’s platform economy could shift again. Search, e-commerce, payments and local services could all be reshaped by whoever controls the assistant layer. That would raise competition questions about access, ranking, self-preferencing and the distribution of traffic between large platforms and smaller developers.

The third is liability. The more autonomy AI systems gain, the harder it becomes to assign responsibility when something goes wrong. Minor recommendation errors are one thing. Financial losses, privacy breaches or security failures are another. Regulators need to define clearer boundaries of accountability, especially as AI systems begin making decisions in more sensitive commercial or industrial settings.

Security is another concern. AI systems can be manipulated by malicious prompts, compromised through poisoned data or exploited through software vulnerabilities. As they spread into enterprise tools, connected devices and industrial environments, the consequences of those weaknesses become much greater. An unreliable chatbot is inconvenient. A compromised AI agent embedded in logistics, finance or critical systems is far more serious.

The larger point is that China’s AI debate is no longer just about innovation. It is about institutional readiness. The rise of increasingly autonomous systems is testing whether current legal and regulatory frameworks are robust enough for the next phase of the technology cycle.

That is why the conversation now unfolding matters. As these systems begin to act, not just generate, policymakers are being forced to confront a harder reality: the countries that lead in AI will also need to learn fastest how to govern it.


© Forbes