Responding to the risks and challenges behind AI empowerment, China accelerates the construction of its AI security standards system
2026-04-07
As China's "Artificial Intelligence+" initiative deepens, intelligent agents and AI applications have spread widely into production and daily life. A recent spate of AI security incidents has not only drawn public attention but also become a priority for joint work by industry and academia. The National Cybersecurity Standardization Technical Committee (hereinafter the "Cybersecurity Standardization Committee") recently established the Artificial Intelligence Security Standards Working Group (WG9), marking the stage of systematic advancement in building China's AI security standards system.

Security incidents have recently become frequent across the global AI industry as the contest between attack and defense escalates. At the end of March, the source code of Claude Code, an AI programming tool from the AI company Anthropic, was leaked, in what is regarded as the first core code leak in the AI industry. According to Zhang Lei, a security expert at Qianxin, analysis of public information and Anthropic's official response indicates that the leak was a typical human error in the release process and constitutes a supply-chain security incident. "It is like shipping the complete set of production drawings along with the finished product that should have gone to the customer. Once a product's core logic and protective baseline are made public, its entire operation becomes transparent. Competitors can directly study its architecture, functional design, and agent logic, and can quickly imitate, catch up with, or even improve on it. At the same time, once the security rules are exposed, it becomes easier to find vulnerabilities, bypass constraints, and crack usage restrictions," Zhang Lei said.
Since the beginning of this year, the popular intelligent agent tool OpenClaw (commonly known as "lobster") has also repeatedly been found to carry multiple security risks. On April 3, the China National Vulnerability Database of Information Security (CNNVD) issued a notice stating that from March 10 to April 2 it had collected a total of 155 OpenClaw vulnerabilities, including 11 critical and 53 high-risk ones, affecting multiple versions of OpenClaw. "We spent only an afternoon breaking OpenClaw," said Lu Chen, Director of Security Innovation at DARKNAVY, a well-known Chinese white-hat security team. He noted that mainstream "lobster" offerings in China fall into two categories: those that wrap OpenClaw in a hosted dialogue interface, and those that provide servers for users to configure themselves. The former carries the higher risk: once breached, hackers can directly obtain server permissions and even access large models on the internal network. Liu Shaoxuan, Vice Dean of the Antai College of Economics and Management at Shanghai Jiao Tong University, revealed that a manufacturing company recently suffered a 72-hour production halt after hastily deploying OpenClaw, with direct losses that may exceed 20 million yuan; a legal-services company that failed to take risk-prevention and data-security measures saw a large amount of client privacy data leaked. The head of security at AsiaInfo also pointed out that network attacks are evolving toward intelligence and automation: hackers use AI to dynamically generate ransomware payloads and create high-fidelity phishing content, greatly improving attack efficiency and concealment. AI-driven autonomous attacks on intelligent agents and deepfake-based business fraud, he said, will be the most pressing security challenges of 2026.
According to data released by the State-owned Assets Supervision and Administration Commission at the end of January, central state-owned enterprises have created more than 1,000 AI application scenarios in key areas such as industrial manufacturing, energy and power, and intelligent connected vehicles; the trend of AI empowering industrial transformation is increasingly evident. At the same time, industry concerns over AI security have generated new security demand, driving sustained effort on the AI security supply side. Dongguan Securities believes that the rapid deployment of intelligent agent technologies such as OpenClaw has created new security demand scenarios and that, coupled with continued policy support for cybersecurity, the industry is expected to see new growth opportunities. Changjiang Securities forecasts that China's cybersecurity market will exceed 150 billion yuan by 2026 and reach 300 billion yuan by 2030, a compound annual growth rate of 18% to 20%, putting the industry in a golden period of development.

Meanwhile, new AI security products and services are being released continuously. Shanghai Artificial Intelligence Laboratory recently launched SafeClaw, a high-security, industrial-grade intelligent agent platform focused on the intelligent transformation of industries with high security requirements, aiming to move the sector from "after-the-fact security" to "endogenous security". The laboratory has also open-sourced an intelligent agent defense model that can rapidly diagnose risks, and has explored an "endogenous evolution" governance framework that embeds security criteria into the decision-making layer of intelligent agents. "The most dangerous thing is not the known risks but the 'unforeseen dangers'. The core task now is therefore to proactively build an endogenous security system even as AI capabilities soar," said Hu Xia, a leading scientist at Shanghai Artificial Intelligence Laboratory. These efforts, he added, aim to embed security capabilities deeply across the entire AI development chain, providing systematic "endogenous security" solutions for the era of intelligent agents.

As AI governance continues to improve, the development of security standards is accelerating. With the widespread application of artificial intelligence, AI governance is drawing increasing attention. This year's government work report explicitly proposes to "improve the governance of artificial intelligence", and the work report of the Standing Committee of the National People's Congress calls for "strengthening legislative research in the field of artificial intelligence". Against this backdrop, the development of AI security standards is picking up pace. On March 25, the Ministry of Industry and Information Technology publicly solicited comments on proposed industry standards, including the "Application Security Requirements for the Context Protocol of Artificial Intelligence Security Governance Models". In early April, the Artificial Intelligence Security Standards Working Group (WG9) said it would focus on advancing core standards such as the "Maturity Assessment Method for Cybersecurity Technology and Artificial Intelligence Security Capability", the "Classification and Grading Method for Cybersecurity Technology and Artificial Intelligence Application Security", and the "Guidelines for the Application Security of Cybersecurity Technology and Artificial Intelligence Technology Involving Minors".
At the same time, under the unified deployment of the Cybersecurity Standardization Committee, efforts will be concentrated on national standards in areas such as endogenous security and data foundations, security of new forms and services, system and application security, and scientific evaluation. "Addressing the new risks brought by AI requires coordinated efforts at three levels: policies and regulations, technical standards, and implementation mechanisms," said Zhang Yong, Vice President of Qianxin. Looking ahead, he believes, first, security will be upgraded from "optional" to "mandatory", with security compliance shifting from recommended to compulsory; industry benchmarks could be set, such as "AI security investment of no less than 15% of total AI application investment". Second, cybersecurity will shift from "single-point protection" to "full-chain collaboration", so that "when an attack is detected at one point, the entire network is automatically immunized". Third, defense will move from "human defense" to "technical defense plus intelligent defense", with AI-versus-AI becoming the norm in attack and defense. Fourth, the field will move from "passive emergency response" to "active immunity", building a resilient defense system that can "recover quickly even under attack, without losing core data". (New Society)
Editor: He Chuanning | Responsible editor: Su Suiyue
Source: Economic Information Daily