China’s New AI Governance Initiatives Shouldn’t Be Ignored

Over the past six months, the Chinese government has rolled out a series of policy documents and public pronouncements that are finally putting meat on the bones of the country’s governance regime for artificial intelligence (AI). Given China’s track record of leveraging AI for mass surveillance, it’s tempting to view these initiatives as little more than a fig leaf to cover widespread abuses of human rights. But that response risks ignoring regulatory changes with major implications for global AI development and national security. Anyone who wants to compete against, cooperate with, or simply understand China’s AI ecosystem must examine these moves closely.

These recent initiatives show the emergence of three different approaches to AI governance, each championed by a different branch of the Chinese bureaucracy, and each at a different level of maturity. Their backers also pack very different bureaucratic punches. It’s worth examining the three approaches and their backers, along with how they will both complement and compete with each other, to better understand where China’s AI governance is heading.

Three Approaches to Chinese AI Governance

| Organization | Focus of Approach | Relevant Documents |
| --- | --- | --- |
| Cyberspace Administration of China | Rules for online algorithms, with a focus on public opinion | Internet Information Service Algorithmic Recommendation Management Provisions; Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms |
| China Academy of Information and Communications Technology | Tools for testing and certification of “trustworthy AI” systems | Trustworthy AI white paper; Trustworthy Facial Recognition Applications and Protections Plan |
| Ministry of Science and Technology | Establishing AI ethics principles and creating tech ethics review boards within companies and research institutions | Guiding Opinions on Strengthening Ethical Governance of Science and Technology; Ethical Norms for New Generation Artificial Intelligence |

The strongest and most immediately influential moves in AI governance have been made by the Cyberspace Administration of China (CAC), a relatively new but very powerful regulator that writes the rules governing certain applications of AI. The CAC’s approach is the most mature, the most rule-based, and the most concerned with AI’s role in disseminating information.

The CAC made headlines in August 2021 when it released a draft set of thirty rules for regulating internet recommendation algorithms, the software powering everything from TikTok to news apps and search engines. Some of those rules are China-specific, such as the one stipulating that recommendation algorithms “vigorously disseminate positive energy.” But other provisions break ground in ongoing international debates, such as the requirement that algorithm providers be able to “give an explanation” and “remedy” situations in which algorithms have infringed on user rights and interests. If put into practice, these types of provisions could spur Chinese companies to experiment with new kinds of disclosure and methods for algorithmic interpretability, an emerging but very immature area of machine learning research.
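
To make the stakes concrete, consider what “giving an explanation” might look like in the simplest possible case. The sketch below is purely illustrative and not drawn from any CAC guidance or actual recommender: it assumes a toy linear scoring model, and the feature names, weights, and the score_and_explain helper are all inventions for this example.

```python
# Minimal sketch: a per-item "explanation" for a linear recommendation score.
# All feature names and weights here are hypothetical illustrations.

# Hypothetical model weights: how much each user signal contributes to a score.
WEIGHTS = {
    "watched_similar_video": 2.0,
    "followed_creator": 1.5,
    "time_of_day_match": 0.5,
    "trending_in_region": 1.0,
}

def score_and_explain(user_signals: dict[str, float]) -> tuple[float, list[str]]:
    """Score one candidate item and return the top contributing signals.

    For a linear model, each feature's contribution is exactly
    weight * signal value, so the attribution is trivial. Deep
    recommenders offer no such exact decomposition, which is why
    faithful explanation remains an open research problem.
    """
    contributions = {
        name: WEIGHTS.get(name, 0.0) * value
        for name, value in user_signals.items()
    }
    total = sum(contributions.values())
    # Rank signals by how much they pushed the item up the feed.
    top = sorted(contributions, key=contributions.get, reverse=True)[:2]
    reasons = [f"{name} (+{contributions[name]:.2f})" for name in top]
    return total, reasons

# Example: why did this user see this video?
score, reasons = score_and_explain(
    {"watched_similar_video": 1.0, "trending_in_region": 0.8}
)
print(f"score={score:.2f}; shown because: {', '.join(reasons)}")
```

The point of the toy is the contrast: for a linear model the attribution is trivial, while for the deep networks that actually power feeds like TikTok’s, producing a faithful explanation is precisely the unresolved research problem the provisions would force companies to confront.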

Soon after releasing its recommendation algorithm rules, the CAC came out with a much more ambitious effort: a three-year road map for governing all internet algorithms. Following through on that road map will require input from many of the nine regulators that co-signed the project, including the Ministry of Industry and Information Technology (MIIT).

The second approach to AI governance has emerged out of the China Academy of Information and Communications Technology (CAICT), an influential think tank under the MIIT. Active in policy formulation and many aspects of technology testing and certification, the CAICT has distinguished its method through a focus on creating the tools for measuring and testing AI systems. This work remains in its infancy, both from technical and regulatory perspectives. But if successful, it could lay the foundations for China’s larger AI governance regime, ensuring that deployed systems are robust, reliable, and controllable.

In July 2021, the CAICT teamed up with a research lab at the Chinese e-commerce giant JD to release the country’s first white paper on “trustworthy AI.” Already popular in European and U.S. discussions, trustworthy AI refers to many of the more technical aspects of AI governance, such as testing systems for robustness, bias, and explainability. The way the CAICT defines trustworthy AI in its core principles looks very similar to the definitions that have come out of U.S. and European institutions, but the paper is notable for how quickly those principles are being converted into concrete action.

The CAICT is working with China’s AI Industry Alliance, a government-sponsored industry body, to test and certify different kinds of AI systems. In November 2021, it issued its first batch of trustworthy AI certifications for facial recognition systems. Depending on the technical rigor of implementation, these types of certifications could help accelerate progress on algorithmic interpretability—or they could simply turn into a form of bureaucratic rent seeking. On policy impact, the CAICT is often viewed as representing the views of the powerful MIIT, but the MIIT’s leadership has yet to issue its own policy documents on trustworthy AI. Whether it does will be a strong indicator of the bureaucratic momentum behind this approach.
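
For a sense of what such certification testing could involve at its most basic, here is a minimal sketch of one plausible check: comparing a face recognition system’s false non-match rate across demographic groups. Everything in it is a hypothetical illustration, since the CAICT’s actual test criteria are not public; the evaluation records, group labels, and the 1.5x disparity threshold are assumptions.

```python
# Minimal sketch of one check a "trustworthy AI" certification suite might
# run: comparing a face recognition system's error rates across demographic
# groups. The records, group labels, and 1.5x threshold are all hypothetical.

from collections import defaultdict

# Hypothetical evaluation records: (group, ground_truth_match, model_said_match)
RESULTS = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_non_match_rates(results):
    """False non-match rate (genuine pairs the model rejected) per group."""
    genuine = defaultdict(int)
    missed = defaultdict(int)
    for group, truth, predicted in results:
        if truth:  # a genuine pair that should have matched
            genuine[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / genuine[g] for g in genuine}

rates = false_non_match_rates(RESULTS)
worst, best = max(rates.values()), min(rates.values())
print(rates)
if best > 0 and worst / best > 1.5:  # hypothetical disparity threshold
    print("FAIL: error-rate disparity across groups exceeds threshold")
```

A certification regime with real teeth would run many such checks, covering robustness and explainability as well as error-rate parity; whether the CAICT’s tests reach that level of rigor is the open question raised above.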

Finally, the Ministry of Science and Technology (MOST) has taken the lightest of the three approaches to AI governance. Its highest-profile publications have focused on laying down ethical guidelines, relying on companies and researchers to supervise themselves in applying those principles to their work.

In July 2021, MOST published guidelines that called for universities, labs, and companies to set up internal review committees to oversee and adjudicate technology ethics issues. Two months later, the main AI expert committee operating under MOST released its own set of ethical norms for AI, with a special focus on weaving ethics into the entire life cycle of development. Since then, MOST has been encouraging leading tech companies to establish their own ethics review committees and audit their own products.

MOST’s approach is similar to those of international organizations such as the United Nations Educational, Scientific and Cultural Organization and the Organisation for Economic Co-operation and Development, which have released AI principles and encouraged countries and companies to adopt them. But in the Chinese context, that tactic feels quite out of step with the country’s increasingly hands-on approach to technology governance, a disconnect that could undermine the impact of MOST’s efforts.

One unanswered question is how these three approaches will fit together. Chinese ministries and administrative bodies are notoriously competitive with one another, constantly jostling to get their pet initiatives in front of the country’s central leadership in hopes that they become the chosen policies of the party-state. In this contest, the CAC’s approach appears to have the clear upper hand: It is the most mature, the most in tune with the regulatory zeitgeist, and it comes from the organization with the most bureaucratic heft. But its approach can’t succeed entirely on its own. The CAC requires that companies be able to explain how their recommendation algorithms function, and the tools or certifications for what constitutes explainable AI are likely to come from the CAICT. In addition, given the sprawling and rapidly evolving nature of the technology, many practical aspects of trustworthy AI will first surface in the MOST-inspired ethics committees of individual companies.

The three-year road map for algorithmic governance offers a glimpse of some bureaucratic collaboration. Though the CAC is clearly the lead author, the document includes new references to algorithms being trustworthy and to companies setting up ethics review committees, additions likely made at the behest of the other two ministries. There may also be substantial shifts in bureaucratic power as AI governance expands to cover many industrial and social applications of AI. The CAC is traditionally an internet-focused regulator, and future regulations for autonomous vehicles or medical AI may create an opening for a ministry like the MIIT to seize the regulatory reins.

The potential impact of these regulatory currents extends far beyond China. If the CAC follows through on certain requirements for algorithmic transparency and explainability, China will be running some of the world’s largest regulatory experiments on topics that European regulators have long debated. Whether Chinese companies are able to meet these new demands could inform analogous debates in Europe over the right to explanation.

On the security side, as AI systems are woven deeper into the fabric of militaries around the world, governments want to ensure those systems are robust, reliable, and controllable for the sake of international stability. The CAICT’s current experiments in certifying AI systems are likely not yet ready for those kinds of high-stakes deployment decisions. But developing an early understanding of how Chinese institutions and technologists approach these questions could prove valuable for governments that may soon find themselves negotiating over aspects of autonomous weapons and arms control.

With 2022 marking a major year in the Chinese political calendar, the people and bureaucracies building out Chinese AI governance are likely to continue jostling for position and influence. The results of that jostling warrant close attention from AI experts and China watchers. If China’s attempts to rein in algorithms prove successful, they could imbue these approaches with a kind of technological and regulatory soft power that shapes AI governance regimes around the globe.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.