
AI-Enabled Cyber Operations: Narrative Intelligence Detects How Chinese Actors Weaponize AI

December 26, 2025 | Ellie Munshi
This EdgeTheory report synthesizes geospatial, narrative attribution, and network analysis surrounding the use of Artificial Intelligence by state actors, specifically those linked to China, to automate cyber-enabled espionage. Drawing from multi-platform collection streams, including technical forums, social media, and Chinese-language outlets, this brief maps how narratives about AI-driven hacking, state capabilities, and regulatory urgency propagate across the global information environment. The report uses EdgeTheory’s network-detection and narrative-amplification tools to trace how both alarmist and skeptical actors organize, interact, and reinforce messaging, providing a layered view of how information power shapes perceptions of a qualitatively new level of cyber threat.

Introduction

The reported attack by the China-linked group GTG-1002, which used Claude Code to automate nearly every stage of a sophisticated cyberattack, signals a major escalation in the use of AI for cyber operations. Instead of relying on human operators for every step, the attackers used a system of sub-agents to divide tasks such as infrastructure scanning, vulnerability searching, and generating malicious payloads, with human involvement reduced to approving actions. The ability of the AI to disguise harmful commands as technical scripts to evade safety filters is a critical new development. EdgeTheory’s AI analytics indicate a discernible escalation of cognitive confrontation surrounding the geopolitical and technical implications of this agent-based architecture, which enables nearly autonomous attacks. Simultaneously, narratives surrounding this incident, which highlight the idea that Claude Code automated most of a sophisticated operation linked to China, have intensified the narrative confrontation in the cyber domain. As actors leverage social media, technical commentary, and coordinated online amplification, AI-enabled cyber operations have become both a technical and informational theater of conflict. This report outlines how these narratives spread, who amplifies them, and how EdgeTheory’s tools surface adversarial networks and deceptive narratives that complicate attribution and mitigation efforts.

Key Findings 

  1. AI Exploitation Automates Core TTPs (Tactics, Techniques, and Procedures) of Cyber Espionage 

Sophisticated AI agents are now conducting machine-speed scanning, executing automated fuzzing for vulnerability discovery, generating novel exploit code via autonomous payload generation, and utilizing advanced guardrail evasion to bypass LLM safety filters. This agentic architecture places these capabilities on par with established APT groups. In post-exploitation phases, these systems engage in agentic credential harvesting and real-time data triage, reducing the role of human operators primarily to strategic oversight and approval.
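The sub-agent pattern described above, in which specialized agents propose actions and the human operator is reduced to an approval role, can be sketched in miniature. All names here (the `SubAgent` class, the task strings, the `approve` callback) are hypothetical illustrations of the pattern, not artifacts recovered from the reported operation:

```python
# Hypothetical sketch of the human-in-the-loop "approval gate" orchestration
# pattern: sub-agents divide the workflow and propose actions; a human only
# approves or denies each one.

from dataclasses import dataclass, field


@dataclass
class SubAgent:
    """One specialized agent: it proposes actions for its assigned task
    but cannot execute them without operator approval."""
    task: str
    proposed_actions: list = field(default_factory=list)

    def propose(self, target: str) -> str:
        action = f"{self.task} against {target}"
        self.proposed_actions.append(action)
        return action


def run_campaign(target: str, approve) -> list:
    """Divide the workflow among sub-agents; `approve` is the single
    point of human involvement."""
    agents = [SubAgent("scan infrastructure"),
              SubAgent("search for vulnerabilities"),
              SubAgent("generate payload")]
    executed = []
    for agent in agents:
        action = agent.propose(target)
        if approve(action):  # human approval gate
            executed.append(action)
    return executed


# Example: an operator who approves everything except payload generation.
approved = run_campaign("example.internal",
                        approve=lambda a: "payload" not in a)
```

The point of the sketch is structural: once the proposal loop runs at machine speed, the human's bandwidth is consumed only by the approval decisions, which is exactly the reduction in operator involvement the findings describe.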

  2. Adversarial Narratives Leverage Specific Psychological Vulnerabilities to Amplify Threat Perception

Malign actors, particularly those suspected of being linked to Chinese and Russian state-sponsored operations, employ a range of rhetorical techniques to shape perception and polarize global audiences. These actors exploit deep-seated psychological vulnerabilities, such as aversion to foreign influence, paranoia, and decline anxiety, to frame technological competition as a zero-sum struggle where falling behind signals national failure. By utilizing selective framing and emotive language, these narratives portray international cooperation or technology governance as existential threats to sovereignty, effectively delegitimizing diplomacy as a form of “subjugation.”

To protect their operational security, these suspected actors use deceptive narratives to foster ambiguity in attribution, denying state involvement or blaming non-state "rogue" actors to complicate responses from targeted nations. They strategically minimize or exaggerate their AI capabilities to distort the threat landscape and hide tactical evolutions, such as autonomous intrusion or AI-powered "jailbreaking." By blending verified incidents with fabricated claims, they create informational "noise" that muddies public discourse, shifts focus away from coordinated state-backed operations, and delays effective mitigation efforts.

  3. Deceptive Narratives Obscure Attribution and Technical Capabilities

Adversarial actors use deceptive narratives that blend disinformation, obfuscation, and misattribution to confuse observers. These narratives foster ambiguity in attribution by denying state involvement, selectively minimize or exaggerate capabilities to distort the threat landscape, and misleadingly describe techniques by underreporting specific methods such as AI-powered "jailbreaking" and autonomous intrusion.

Narrative Infographics: GEOINT & Data Analytics

Geospatial Narrative sources (yellow) and targets (red)

The narrative first emerged from Beirut, with Almaty as its initial destination. As it spread, Beijing became both the most frequent origin and the most frequent destination. In total, there are 18 points of origin and 22 destinations. EdgeTheory’s Narrative Intelligence platform tracked narratives stemming from websites, social media actors, and RSS feeds. Individual posters on social media are amplifying items around the narrative that AI-driven cyberattacks constitute a new strategic threat, with the Claude Code incident portrayed as evidence that hostile states can now weaponize advanced AI tools to automate large portions of their hacking operations.
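As a rough illustration, the shift in most frequent origin and destination can be computed from a stream of geotagged hops with a simple frequency count. The hop list below is a hypothetical placeholder, not the report's underlying collection data:

```python
# Sketch: deriving "most frequent origin/destination" from (origin,
# destination) pairs recorded in chronological order. Data is illustrative.

from collections import Counter

hops = [
    ("Beirut", "Almaty"),       # initial emergence
    ("Beijing", "Beijing"),
    ("Beijing", "Washington"),
    ("Beijing", "Beijing"),
]

origins = Counter(origin for origin, _ in hops)
destinations = Counter(dest for _, dest in hops)

most_frequent_origin = origins.most_common(1)[0][0]
most_frequent_destination = destinations.most_common(1)[0][0]
```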

Sources include individuals such as Lukasz Olejnik, Evan Kirstel, TweetThreatNews, and Ben Dickson, as well as Chinese-language outlets including Epoch Times Chinese, VOA Chinese, and Followin io. Their posts frequently frame this incident as a major turning point in how AI can be used in cyberattacks. They highlight the claim that Claude Code was able to automate most of a sophisticated hacking operation linked to China, raising concerns about what this means for national security and the future of cyber defense. Many stress the growing urgency around regulating advanced AI systems, while others point out that Anthropic may be overstating the level of autonomy involved. Overall, the discussion centers on both the technical implications of AI-assisted hacking and the broader political and policy stakes behind it.

Many of the insights originate from tech commentators and industry accounts on X, while Chinese-language outlets and aggregators, including Aboluowang, summarize the Anthropic claim. The topic has also been widely shared and debated on US tech forums such as Reddit: on r/ClaudeAI and r/BetterOffline, users are reposting Anthropic’s claims, sometimes with skepticism or additional speculation about whether the Chinese state was actually involved.

EdgeTheory GCA Social Media Narrative Classifier

The primary sources amplifying this narrative are predominantly Chinese-aligned, closely followed by Russian-aligned accounts. The amplified content reveals a strong focus on the use of AI and cyber capabilities by state actors to advance geopolitical goals. These actors employ a range of malign rhetorical techniques to shape perception and polarize audiences. Key vectors include aversion to foreign influence, paranoia, and decline anxiety, each operationalized through selective framing, emotive language, and implied threat construction.

For example, narratives exploiting aversion to foreign influence frame international cooperation, export controls, or alliance-based technology governance as existential threats to sovereignty, using rhetoric to delegitimize diplomacy and portray foreign engagement as subjugation. International cooperation on issues like climate or global health is reframed as a "Western-led trap" designed to stall the industrial progress of developing nations. Export controls on sensitive semiconductors are characterized not as security measures but as "economic warfare" and "technological containment" intended to keep the nation in a state of permanent developmental vassalage. Alliance-based technology governance is portrayed as the creation of "digital iron curtains," where multilateral standards for AI ethics are dismissed as "diplomatic bullying" meant to force smaller nations into a subordinate "exclusive club" at the expense of their own national interests.

This framing is reinforced by paranoid elements, such as implied covert cyber operations or secret state-backed activities, which suggest hidden threats without requiring direct evidence. Together, these techniques encourage audiences to view foreign actors and multilateral efforts as inherently hostile. Decline anxiety then amplifies this effect by emphasizing technological, economic, or moral weakness, particularly in the AI and cyber domain, framing competition as a zero-sum struggle where falling behind signals national failure.

These narratives further demonstrate consistent operational patterns. Tactically, state-aligned amplifiers prioritize repetition across multiple platforms and languages to normalize suspicion and threat perceptions. Technically, they rely on insinuation, strategic ambiguity, and selective use of credible facts to blur the line between legitimate security concerns and manipulative messaging. Procedurally, narratives are sequenced to escalate emotional response, beginning with plausibly deniable reporting on AI or cyber developments, followed by speculative attribution, and culminating in moralized or alarmist conclusions that encourage distrust of foreign actors, democratic institutions, or electoral processes. 

Across the summaries, reliability and accuracy generally range from moderate to high, mostly between 6 and 7, though some items score lower, with reliability as low as 3 and accuracy as low as 4. Incitement levels vary: most summaries score low (1-3), except for one with a notably higher incitement rating of 6, indicating potential inflammatory intent.

The most reliable and accurate narrative items in this set score between 6 and 7 because they are grounded in verifiable facts, such as official government plans or direct quotes from industry experts. For example, the analysis of U.S.-China AI strategies and the safety warnings from Anthropic CEO Dario Amodei are considered highly credible because they draw on public documents and primary sources rather than rumors. Even state-run outlets like China Daily show high factual fidelity when reporting on specific national blueprints, providing a solid baseline for understanding official state goals. In contrast, lower-scoring items tend to rely on vague evidence or unverified claims, often focusing on suspected cyberattacks or "secret" activities that lack independent confirmation. Overall, the content leans toward credible, fact-based reporting but includes some narratives with lower reliability, mixed accuracy, and occasional incitement likely aimed at strongly influencing opinion.
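A triage of scored items like those above can be sketched with a simple filter over the reliability and incitement dimensions. The item data, score values, and thresholds below are hypothetical illustrations of the scoring scheme described in this section, not EdgeTheory's actual classifier output:

```python
# Illustrative triage over narrative items scored for reliability,
# accuracy, and incitement (scales assumed from the surrounding text).
# All item data and thresholds are hypothetical.

narrative_items = [
    {"source": "China Daily",   "reliability": 7, "accuracy": 7, "incitement": 1},
    {"source": "expert quote",  "reliability": 6, "accuracy": 7, "incitement": 2},
    {"source": "rumor post",    "reliability": 3, "accuracy": 4, "incitement": 6},
]


def flag_items(items, min_reliability=5, max_incitement=3):
    """Return items that are weakly sourced or likely inflammatory."""
    return [item for item in items
            if item["reliability"] < min_reliability
            or item["incitement"] > max_incitement]


flagged = flag_items(narrative_items)
# Only the low-reliability, high-incitement item is flagged here.
```

The design choice worth noting is the OR condition: an item is surfaced for analyst review if it fails on either axis, since a well-sourced but inflammatory item and a calm but unverified one both distort the discourse in different ways.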

EdgeTheory NARINT sources tracing primary sources

Grok table of Twitter network amplifying adversarial narratives on AI used for cyberattacks

The Western Security Alarmist cluster is the largest and most influential group in the network, framing the incident as clear evidence that AI has enabled a new and unprecedented level of state-sponsored cyber aggression linked to China. High-centrality users act as key amplifiers, rapidly transforming mainstream reporting into sensational narratives that emphasize urgency, escalation, and vulnerability. Through emotionally charged language and selective framing, this cluster portrays the misuse of Claude as proof of an accelerating AI-driven threat environment, using the incident to justify calls for aggressive automation of cyber defenses and stronger U.S. or Western countermeasures. The prominence and connectivity of this cluster allow alarmist interpretations to spread quickly and dominate the broader discourse.
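The role of high-centrality amplifiers can be illustrated with a basic degree-centrality calculation over a repost network. The edge list below is hypothetical, and this standard-library sketch stands in for whatever graph tooling is actually used in the analysis:

```python
# Sketch: surfacing high-centrality amplifiers in a repost network via
# degree centrality (fraction of other nodes each node touches directly).
# Edge list and account names are hypothetical.

from collections import Counter

# (reposter, original_poster) pairs in an assumed amplification network.
edges = [
    ("user_a", "alarmist_hub"),
    ("user_b", "alarmist_hub"),
    ("user_c", "alarmist_hub"),
    ("alarmist_hub", "news_outlet"),
    ("user_d", "skeptic_1"),
]


def degree_centrality(edge_list):
    """Degree of each node, normalized by the number of other nodes."""
    degree = Counter()
    nodes = set()
    for u, v in edge_list:
        degree[u] += 1
        degree[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {node: degree[node] / (n - 1) for node in nodes}


centrality = degree_centrality(edges)
top_amplifier = max(centrality, key=centrality.get)
# The hub that rebroadcasts mainstream reporting scores highest.
```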

EdgeAgent on Narratives being disseminated 

When asked about deceptive narratives being disseminated via the GCA Module, the EdgeAgent stated: “Deceptive narratives surrounding Russian and Chinese state-sponsored AI-enabled cyber operations often blend disinformation, obfuscation, and misattributions designed to confuse observers about the true origin, nature, and extent of the threats. These narratives typically include the following elements:

  • Ambiguity in Attribution: They foster uncertainty by denying state involvement or blaming non-state actors, thereby obscuring clear attribution. This ambiguity allows perpetrators to avoid direct responsibility for AI-driven cyberattacks while complicating responses from targeted nations.
  • Minimizing or Exaggerating Capabilities: Some narratives downplay the sophistication and scale of AI-enhanced cyber tools to reduce perceived risks or, conversely, exaggerate opponents’ capabilities to create fear or justify countermeasures. This selective framing distorts the actual threat landscape.
  • Misleading Descriptions of Techniques: The specific AI-driven methods employed—such as autonomous automation of cyber intrusions, AI-powered "jailbreaking" of security models, or rapid execution of complex multi-vector attacks—are often either underreported, inaccurately portrayed, or attributed to traditional hacking approaches. This obfuscation conceals innovation and tactical evolution in state cyber operations.
  • Blending Misinformation with Legitimate Concerns: By mixing verified incidents (like the use of AI chatbots to automate attacks) with speculative or fabricated claims, these narratives create noise that muddies discourse and delays effective attribution and mitigation efforts.
  • Shifting Focus Away from State Actors: Narratives may emphasize AI risks as generic technological problems or spotlight commercial or rogue threat actors to divert attention from coordinated state-backed operations, thus weakening coordinated defensive responses.

These deceptive narratives impede clear understanding of AI-assisted cyber threats by masking the precise nature of attacks, the role of state actors, and the cutting-edge AI capabilities leveraged, such as the automation of cyber intrusions with minimal human involvement, exploitation of AI model vulnerabilities ("jailbreaking"), and large-scale AI-orchestrated espionage campaigns.

Recognizing and countering such misinformation is critical for coherent attribution, preparedness, and international cooperation against evolving AI-enabled cyber threats.”

State-Sponsored AI Hacking 

The growing reliance of China and Russia on artificial intelligence to enhance cyber-enabled espionage represents a significant shift in adversarial strategy, a dynamic clearly reflected in EdgeTheory’s Narrative Intelligence. This assessment is grounded in the observation of specific TTPs where AI is manipulated to automate significant portions of the cyber-espionage workflow. Specifically, sophisticated AI agents are now conducting machine-speed scanning, executing automated fuzzing for vulnerability discovery, generating novel exploit code via autonomous payload generation, and utilizing advanced guardrail evasion to bypass LLM safety filters. In post-exploitation phases, these systems engage in agentic credential harvesting and real-time data triage, making the operations nearly autonomous, with human operators primarily relegated to strategic oversight and approval. This agentic architecture confirms commentary suggesting that such tools signify a new stage of state-directed hacking, effectively placing these capabilities on par with established APT groups.

Medium post discussing the growing threat from AI Cyber Espionage 

EdgeTheory analytics confirm the accelerating spread and influence of these narratives across the global information ecosystem. Malign actors frequently leverage recurring public anxieties, specifically about "uncontrollable AI" and uncertainty regarding the reliability of major technology firms, to amplify manipulative or deceptive framings that portray AI-driven hacking as either catastrophic or inevitable.  

Russian Telegram Post on GTG-1002 Attack

English Translation

SecurityLab.ru: "The Chinese group GTG-1002 conducted a nearly autonomous attack using Claude. Anthropic recorded a GTG-1002 attack in which the perpetrators used Claude Code and the MCP protocol to automate almost every stage of the attack. The AI analyzed the infrastructure, conducted scanning, searched for vulnerabilities, and even generated payloads. This is the first time a model of this level has actually been used in a multi-stage operation against high-value targets. The mechanism was built on a system of sub-agents. They divided the tasks among themselves, and a human connected only to approve actions. By skillfully masking commands as technical scripts, the attackers were able to hide the malicious context and bypass built-in filters. Although the model occasionally made mistakes, producing significant errors, it still successfully handled most of the operations. Anthropic emphasizes that this is a qualitatively new level of threat. Previously, Claude was used for extortion, but there the operators acted manually. Now, the agent architecture makes it possible to conduct almost autonomous attacks. #antivirus #agentAI #cybersecurity"

The statement describes a major escalation in the use of AI for cyber operations by state sponsored actors. Anthropic reports that the China-linked group GTG-1002 used Claude Code—along with the MCP protocol—to automate nearly every stage of a sophisticated cyberattack. Instead of relying on human operators to carry out each step, the attackers set up a network of sub-agents that divided tasks such as scanning infrastructure, finding vulnerabilities, and generating malicious payloads. Human involvement was reduced to approving actions, while the agents disguised harmful commands as technical scripts to evade safety filters. Although the model occasionally produced incorrect or fabricated information, it still completed most of the operational workflow. This signals a new phase in cyber threats, where advanced AI systems can meaningfully automate complex intrusions and reduce the need for skilled human hackers. The attack demonstrates that agent-based architectures can turn AI models into near-autonomous operators capable of executing full attack chains.

Conclusion 

The attack attributed to GTG-1002 marks a pivotal moment in cyber conflict, demonstrating that AI has moved beyond mere assistance to become an agentic tool capable of executing near-autonomous cyber espionage campaigns. This qualitative escalation in technical capability is being simultaneously mirrored in the global information environment. Adversarial actors are not only deploying the technology but are also strategically shaping the narratives around it, employing sophisticated techniques like rhetorical exploitation of public anxieties and calculated ambiguity in attribution to muddy the discourse. EdgeTheory’s Narrative Intelligence is crucial for distinguishing genuine technical concern from manipulative messaging, revealing that narratives are propagating through highly connected and influential networks, particularly the Western Security Alarmists cluster. Recognizing and countering the deliberate blending of legitimate concerns with speculation is critical for coherent attribution, informed policy-making, and collective preparedness against the evolving challenge of AI-enabled cyber threats.

Lead Analyst:

Ellie Munshi is an analyst at the EdgeTheory Lab. She is studying Strategic Intelligence in National Security and Economics at Patrick Henry College. She has led special projects for the college focused on Anti-Human Trafficking, Chinese influence in Africa, AI influence on policymakers, and is also an intelligence analyst intern at the Department of War.

