North Korean Hackers Weaponize AI Coding Agents in New Supply-Chain Attack Campaign


Security researchers have uncovered a coordinated campaign where North Korean hackers are manipulating AI coding agents to install malicious software dependencies. The attack, dubbed PromptMink, exploits the autonomous behavior of AI tools that scan package registries for code components.

Source: www.infoworld.com

ReversingLabs analysts say the operation targets developers working with cryptocurrency and fintech applications. The threat actors aim to generate funds for the North Korean regime through data theft and system compromise.

Attack Methodology: Bait Packages and Dependency Confusion

AI coding agents regularly pull packages from registries like NPM and PyPI. Attackers publish packages with persuasive descriptions and legitimate functionality, making them attractive for integration.

The PromptMink campaign uses a two-layer approach: a bait package with real features and a secondary malicious dependency that executes an information stealer. Researchers at ReversingLabs explain: "This campaign presents the new frontier in software supply chain security: AI coding agents manipulated into installing and using malicious dependencies in the code they generate."

Another vector exploits hallucinated package names: dependencies that AI agents invent but that do not exist on any registry. Attackers register those names with malicious code and wait for agents to download them automatically.
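One practical defense against hallucinated dependencies is to verify that every AI-suggested package actually resolves on the registry before it is installed. A minimal sketch in Python, assuming npm's public metadata endpoint (registry.npmjs.org); the function names are illustrative, not part of any reported tooling:

```python
import json
import urllib.error
import urllib.parse
import urllib.request

NPM_REGISTRY = "https://registry.npmjs.org/"  # public package metadata endpoint

def dependencies_from_manifest(manifest_text: str) -> list:
    """Collect runtime and dev dependency names from a package.json string."""
    manifest = json.loads(manifest_text)
    names = []
    for section in ("dependencies", "devDependencies"):
        names.extend(manifest.get(section, {}))
    return names

def exists_on_registry(name: str) -> bool:
    """True if the npm registry resolves this package name; a 404 means it does not."""
    url = NPM_REGISTRY + urllib.parse.quote(name, safe="@")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Flag AI-added dependencies that resolve to nothing (candidate hallucinations):
# for dep in dependencies_from_manifest(open("package.json").read()):
#     if not exists_on_registry(dep):
#         print("possibly hallucinated:", dep)
```

Note that a name existing on the registry is only a first filter: as the campaign shows, attackers register plausible names precisely so that this check passes.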

Background: North Korea's Ongoing Cyber Operations

The campaign is attributed to Famous Chollima, an advanced persistent threat (APT) group linked to North Korea. This group has long used social engineering—fake job interviews, rogue software components—to trick developers into installing malware.


The PromptMink attack began in September 2024 with two packages: @hash-validator/v2 and @solana-launchpad/sdk. The SDK served as bait with genuine functionality, while hash-validator contained a JavaScript infostealer. This bait-dependency combo allows the campaign to persist undetected, accumulating downloads and credibility.
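In practice, the bait/dependency split can be as simple as the bait package declaring the stealer in its manifest, so that installing the SDK transitively pulls in the malicious code. An illustrative package.json for the bait SDK (the version numbers, description, and entry point here are hypothetical; only the package names come from the researchers' report):

```json
{
  "name": "@solana-launchpad/sdk",
  "version": "1.2.0",
  "description": "Token launch helpers for Solana",
  "main": "index.js",
  "dependencies": {
    "@hash-validator/v2": "^2.0.0"
  }
}
```

An AI agent that selects the bait for its genuine functionality installs the stealer as an ordinary, unremarkable dependency.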

Over time, multiple secondary malicious packages were rotated, including aes-create-ipheriv, jito-proper-excutor, and @validate-sdk/v2. The operation expanded to Python and Rust registries, with packages like @validate-ethereum-address/core appearing.

What This Means for Developers and Security Teams

AI coding agents are becoming a prime vector for supply-chain attacks. Unlike in traditional social engineering, where a lure's effectiveness can only be judged against human targets, threat actors can test their bait packages against AI models before deployment, making the attacks more efficient.

Developers must manually verify every dependency pulled by AI agents, especially those related to cryptocurrency or cryptographic functions. Security scanners should be configured to flag packages from unknown or recently created publishers.
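One way to implement the "recently created" check is to compare a package's first-publish timestamp against a minimum-age threshold; for npm, that timestamp is the `time.created` field in the registry's metadata document. A minimal sketch in Python, with an illustrative 90-day threshold and hypothetical function name:

```python
from datetime import datetime, timedelta

def flag_young_package(created_iso: str, now_iso: str, min_age_days: int = 90) -> bool:
    """Flag a package whose first publish is newer than min_age_days.

    `created_iso` would come from registry metadata, e.g. the `time.created`
    field of https://registry.npmjs.org/<name>; both arguments are ISO 8601.
    """
    created = datetime.fromisoformat(created_iso.replace("Z", "+00:00"))
    now = datetime.fromisoformat(now_iso.replace("Z", "+00:00"))
    return now - created < timedelta(days=min_age_days)

# A two-week-old package trips the flag; a four-year-old one does not.
```

Package age is a heuristic, not proof of safety: the article notes the PromptMink bait packages deliberately accumulated downloads and credibility over time, so age checks should be combined with publisher reputation and code scanning.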

ReversingLabs researchers warn: "The underlying problem is not much different from established patterns of social engineering—but the scale and automation of AI agents amplify the risk significantly." Organizations using AI coding assistants should enforce strict access controls and maintain a registry of approved packages.
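An approved-package registry reduces the decision to a simple set difference: anything an AI agent adds that is not on the allowlist gets escalated for review. A minimal sketch in Python; the allowlist contents and function name are illustrative:

```python
def audit_against_allowlist(deps, approved):
    """Return dependencies that are not on the organization's approved list."""
    return sorted(set(deps) - set(approved))

# Illustrative allowlist; a real one would be maintained centrally.
approved = {"express", "react", "lodash"}

# An AI-generated manifest containing one unapproved package:
suspect = audit_against_allowlist(["react", "aes-create-ipheriv"], approved)
```

Here `suspect` would contain the typosquat-style name `aes-create-ipheriv` for human review, while the approved `react` passes silently.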
