
Checkmarx and Bitwarden Just Showed That Your Pipeline Is the Attack Surface
Security teams spent years being told to watch their dependencies, pin their Actions, and rotate their secrets. The advice was correct. Most teams nodded along and did none of it.
On April 22, that gap became very expensive. A threat actor pushed a single commit to Bitwarden's CI/CD pipeline at 21:18 UTC. Four minutes later, a backdoored Bitwarden CLI was live on npm. The whole thing was possible because Bitwarden's pipeline was running a Checkmarx GitHub Action that had been compromised hours earlier. Developers who installed it that evening sent their GitHub tokens, AWS keys, and SSH credentials to an attacker-controlled server without a single error message.
What the Checkmarx Bitwarden supply chain attack reveals is not a new technique or a novel exploit. It reveals how much privileged access your pipeline has already handed to tools you stopped verifying the day you installed them.
This is the same attack class we covered in the Axios npm supply chain attack and the LiteLLM PyPI compromise. The pattern is consistent: developer tooling with privileged pipeline access becomes the entry point. If you want to understand how Strobes AI maps a supply chain zero-day to your environment in under 30 minutes, that breakdown is worth reading alongside this one.
Checkmarx, the scanner that scanned you back
To understand how a backdoored Bitwarden CLI ended up on npm, you have to start six hours earlier with Checkmarx.
How the attacker got in
At 14:17 UTC on April 22, a threat actor used valid Checkmarx publisher credentials to authenticate to DockerHub and overwrote five trusted KICS image tags. Not fake tags, not a typosquat package — the real checkmarx/kics repository, the one teams had been pulling from for months. The window lasted 84 minutes. Any pipeline that ran between 14:17 and 15:41 UTC pulled the malware without knowing it.
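Mutable tags are what made the overwrite invisible: the reference stayed the same while the bytes behind it changed. Pinning by sha256 digest removes that ambiguity. As a rough illustration, not a complete linter, here is a sketch that flags image references in a Dockerfile or CI config that are pulled by tag rather than by digest. The regex and function name are ours, not from any tool mentioned in this incident:

```python
import re

# Matches "FROM <image>" (Dockerfile) or "image: <image>" (CI YAML).
# Simplified: ignores FROM flags like --platform and build stages.
IMAGE_REF = re.compile(r"^\s*(?:FROM|image:)\s+([^\s]+)", re.MULTILINE)

def unpinned_images(text: str) -> list[str]:
    """Return image references pulled by mutable tag instead of by digest."""
    return [ref for ref in IMAGE_REF.findall(text) if "@sha256:" not in ref]
```

A digest-pinned reference like `checkmarx/kics@sha256:...` would have kept pulling the clean image through the entire 84-minute window, because a digest names content, not a label an attacker can move.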
Why it was invisible
The modified binary did everything it was supposed to do. It ran the IaC scan, returned results, and raised no alerts. While that was happening, it encrypted those scan results and transmitted them to audit.checkmarx[.]cx, a domain built to impersonate legitimate Checkmarx infrastructure. If your pipeline allowed outbound connections from your scanner, and most do, nothing looked wrong.
How it spread
The infection did not stay on the machine it landed on. Using the GitHub tokens it harvested, the malware automatically found repositories the victim had write access to and injected a malicious workflow at .github/workflows/format-check.yml. It then parsed .npmrc files, identified npm packages the victim maintained, and republished those packages with the payload embedded. One compromised pipeline became the source for every downstream repository and package it could touch.
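An injected workflow file is noisy if anyone is listening for it. One way to listen is to diff every repository's `.github/workflows/` contents against a team-maintained allowlist. A minimal sketch, operating on a file listing such as `git ls-files` output; the allowlist names are hypothetical:

```python
# Hypothetical allowlist a team would maintain for its own repos.
APPROVED_WORKFLOWS = {"build.yml", "test.yml", "release.yml"}

def unexpected_workflows(paths: list[str],
                         approved: set[str] = APPROVED_WORKFLOWS) -> list[str]:
    """Flag files under .github/workflows/ whose names are not pre-approved."""
    hits = []
    for p in paths:
        if p.startswith(".github/workflows/"):
            if p.rsplit("/", 1)[-1] not in approved:
                hits.append(p)
    return sorted(hits)
```

Run across every repository a compromised token could write to, a check like this surfaces an injected `format-check.yml` the moment it lands, rather than days later.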
Who was behind it
This was TeamPCP, a group that had run the same playbook against Trivy and LiteLLM before Checkmarx. Same payload, same encryption scheme, same tpcp.tar.gz naming convention. The targeting was not random. Security and developer tooling sit inside pipelines with privileged access to everything else. That is the point.
What they took
SSH keys, Git credentials, AWS, GCP, and Azure keys, Kubernetes and Docker configs, .env files, VPN configurations, CI/CD tokens, cryptocurrency wallets, Slack and Discord webhook URLs. On non-CI machines, the malware installed a systemd user service polling attacker infrastructure every 50 minutes, maintaining persistence well after the 84-minute window closed.
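Persistence of this shape is findable on the endpoint. A rough heuristic sketch, operating on unit-file text you might collect from `~/.config/systemd/user`; the keyword list is our assumption for illustration, not an IOC set from this incident:

```python
import re

# Heuristic: user units rarely have a legitimate reason to shell out
# to download tooling. Keyword list is illustrative, not exhaustive.
SUSPICIOUS = re.compile(r"\b(curl|wget|nc|base64)\b")

def suspicious_units(units: dict[str, str]) -> list[str]:
    """Given {unit_name: unit_file_text}, flag units whose ExecStart
    invokes network or decoding tooling."""
    flagged = []
    for name, text in units.items():
        for line in text.splitlines():
            if line.strip().startswith("ExecStart") and SUSPICIOUS.search(line):
                flagged.append(name)
                break
    return sorted(flagged)
```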
Bitwarden CLI, the password manager that became the pipeline
Bitwarden did not get hacked in the traditional sense. No credentials were phished, no zero-day was exploited, no insider made a mistake. The Bitwarden CLI was compromised because one GitHub Action in its CI/CD pipeline was no longer trustworthy, and nothing in the pipeline was checking.
The four-minute window
At 21:18 UTC, a malicious commit was pushed to bitwarden/clients/.github/workflows/publish-cli.yml. At 21:22 UTC, @bitwarden/cli@2026.4.0 was live on npm. Four minutes from pipeline compromise to a malicious package available for download. That gap is important because it tells you this was not a slow, exploratory attack. The workflow was pre-written, the package was pre-built, and the publish credentials were already in hand. The commit was the trigger, not the beginning.
The repackaging trick
The malicious version was not a fresh build of the Bitwarden CLI. It was 2026.3.0 wrapped in a malicious outer shell claiming to be 2026.4.0. Two files, bwsetup.js and bw1.js, appeared that did not exist in the prior clean release. Internal package metadata still pointed to 2026.3.0. Anyone running a diff against the previous release would have seen the discrepancy immediately. Automated integrity checks would have caught it. Neither happened.
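Both signals, the unexpected files and the version mismatch, are mechanical to check. A sketch of what that release diff might look like, given the file lists of two releases and the new release's package.json; the function and findings format are illustrative:

```python
import json

def release_anomalies(prev_files: set[str], new_files: set[str],
                      new_pkg_json: str, claimed_version: str) -> list[str]:
    """Report files absent from the previous release, plus any disagreement
    between the declared package.json version and the published version."""
    findings = [f"new file: {f}" for f in sorted(new_files - prev_files)]
    declared = json.loads(new_pkg_json).get("version")
    if declared != claimed_version:
        findings.append(f"version mismatch: package.json says {declared}, "
                        f"release claims {claimed_version}")
    return findings
```

Against this incident's facts, the check would have returned two new-file findings and a 2026.3.0-versus-2026.4.0 mismatch before the package ever reached a pipeline.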
Why you did not need to run it
This is the detail that matters most for understanding blast radius. The malware triggered via a preinstall hook. Not on login, not on first use, on installation. Every CI pipeline running an automated npm install of the Bitwarden CLI during that 90-minute window was compromised at the moment the package was pulled. The developer never had to open a terminal.
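Two direct mitigations exist: run `npm install --ignore-scripts` in pipelines where feasible, and audit packages for install-time hooks before they enter a pipeline at all. A minimal sketch of the latter, checking a package.json for the lifecycle scripts npm runs automatically at install:

```python
import json

# npm lifecycle scripts that execute during `npm install`.
LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall"}

def install_hooks(package_json: str) -> dict[str, str]:
    """Return any scripts that will run automatically at install time."""
    scripts = json.loads(package_json).get("scripts", {})
    return {k: v for k, v in scripts.items() if k in LIFECYCLE_HOOKS}
```

Neither measure is free: some legitimate packages need install scripts to build native modules, which is exactly why attackers hide there.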
What this means for everyone downstream
A single developer with the malicious version installed handed an attacker GitHub token access to every repository they could write to. Every pipeline those tokens could reach was now a potential injection point. The Checkmarx attack seeded this. The Bitwarden pipeline carried it. And every developer environment that ran npm install that evening became the next potential source.
Bitwarden confirmed no end-user vault data was accessed. The compromise was contained in the CLI build pipeline. But the credentials harvested from developer machines that evening, including GitHub tokens, AWS keys, SSH keys, and MCP server configs such as ~/.claude.json, were already gone.
This attack shares the same worm-like propagation pattern we documented in the Vercel AI tool supply chain breach, where a single compromised tool cascaded across infrastructure serving millions of websites. The blast radius calculus is the same: one poisoned tool, every downstream pipeline it can touch.
What a program built for this reality actually looks like
The Checkmarx IOCs were public within hours. Bitwarden issued a CVE the same day. The intelligence existed. The teams affected were not missing information. They were missing the ability to map that information to their own environment fast enough to matter.
That is a program design problem. Here is what the program needs to look like.
The toolchain is attack surface, not infrastructure
GitHub Actions, npm dependencies, and Docker base images sit inside pipelines with authenticated access to credentials, cloud environments and deployment infrastructure. None of that tooling was in the asset inventory of the teams affected. A program that inventories production applications but not the toolchain building and deploying them has the same gap TeamPCP spent two months exploiting.
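Inventorying that layer can start with something as plain as enumerating every Action a workflow runs. This sketch extracts `uses:` references from workflow YAML and reports whether each is pinned to a full commit SHA rather than a movable tag or branch; it is a simplification (real workflows also reference local paths and reusable workflows):

```python
import re

# Matches "uses: owner/repo@ref" lines in workflow YAML (simplified).
USES = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)", re.MULTILINE)

def action_inventory(workflow_texts: list[str]) -> dict[str, bool]:
    """Map each referenced Action to whether its ref is a full 40-char
    commit SHA (immutable) rather than a tag or branch (movable)."""
    inventory = {}
    for text in workflow_texts:
        for ref in USES.findall(text):
            _, _, version = ref.partition("@")
            pinned = (len(version) == 40
                      and all(c in "0123456789abcdef" for c in version.lower()))
            inventory[ref] = pinned
    return inventory
```

An inventory like this is the prerequisite for every other control in this section: you cannot alert on a compromised Action you do not know you run.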
Continuous SCA mapped to assets changes the response window
A quarterly scan would have told you @bitwarden/cli@2026.4.0 was in your environment sometime in Q2. Continuous SCA mapped to your actual asset inventory tells you within minutes whether you pulled it, which pipeline touched it, and what credentials were in scope. The teams still investigating days after the incident were asking developers to self-report whether they had run npm install in a specific 90-minute window four days ago.
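"Did we pull it" should be a query, not a survey. Assuming you can collect package-lock.json files from your repositories, here is a sketch of the per-lockfile check, using the lockfile v2/v3 `packages` map:

```python
import json

def pulled_compromised(lockfile_json: str, package: str,
                       bad_version: str) -> bool:
    """Check an npm package-lock.json (v2/v3 'packages' map) for an
    exact compromised package@version."""
    lock = json.loads(lockfile_json)
    for path, meta in lock.get("packages", {}).items():
        if (path.endswith(f"node_modules/{package}")
                and meta.get("version") == bad_version):
            return True
    return False
```

Run against every lockfile in the org, this turns the four-day self-reporting exercise into a minutes-long scan, though it still misses ad-hoc installs that never touched a lockfile, which is where pipeline-level telemetry has to pick up.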
Knowing a package is in your environment is not enough
You need to know which pipeline pulled it, what secrets were mounted in that environment, and which downstream repositories those tokens could reach. Pattern based repository analysis answers those questions directly. A flat dependency list does not.
The toolchain was inside the perimeter, it had privileged access, and nothing in the security program was watching it. A program built for this environment starts by putting that layer on the map.
Strobes VI now tracks supply chain attacks, ransomware groups, and threat actors — including TeamPCP campaign activity — so your exposure assessment automatically maps new intelligence to your actual asset inventory rather than leaving your team to self-report from memory four days later.
If you cannot answer within the hour which pipelines in your environment pulled a malicious package during a 90-minute window four days ago, your program has the same blind spot these teams had. Strobes can help you close it.