
AI-Accelerated Offense: The Cyberattack Your Security Program Was Never Built to Stop
A frontier AI model called Mythos found thousands of zero-day vulnerabilities across every major operating system and web browser in a matter of weeks.
One of those flaws had been sitting undetected in OpenBSD for 27 years. Another in FFmpeg had survived five million automated scans. The model did not just find them. It developed working exploits and mapped paths to use them.
This is not a research preview of something coming in five years. It happened in April 2026, and it changed the baseline assumptions that most security teams are still operating on.
This is AI-Accelerated Offense. The attackers are not coming. They are already inside.
What Is AI-Accelerated Offense?
AI-Accelerated Offense is the use of AI models and autonomous agents to dramatically reduce the time, skill, and cost required to discover vulnerabilities, develop working exploits, and execute AI-powered cyberattacks end-to-end. Where a skilled attacker once needed weeks to move from reconnaissance to a working proof-of-concept, an AI agent completes the same sequence in hours, with minimal human involvement at each stage.
This is different from “AI-powered cyberattacks” as a broad category, which covers phishing generation, deepfakes, and credential stuffing. It is also different from “offensive AI,” an older framing that describes AI used as a supporting tool for human attackers. AI-Accelerated Offense is specifically about the speed and autonomy of the full attack chain itself. The attacker is no longer the bottleneck. The AI is running the operation, and the human is supervising it at a handful of decision points.
The defenses built to stop a slow, human-driven attacker do not stop an autonomous agent that never sleeps, never makes impatient mistakes, and can run parallel attack sequences against dozens of targets simultaneously. The threat model has changed. Most security programs have not.
How AI Executes a Cyberattack From Start to Finish
Every stage that previously required skilled human effort has been compressed or automated. What used to be a weeks-long operation involving multiple specialists is now a single agent running end-to-end.
Reconnaissance
An AI agent maps your external perimeter the same way an attacker would: identifying reachable hosts, fingerprinting services, surfacing forgotten subdomains, exposed management interfaces, default credentials, and misconfigured storage. This used to take a human team days of careful work. An agent does it in minutes and does not move on until the map is complete. The output is a precise, prioritized map of your attack surface handed directly to the next phase.
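To make the fingerprinting step concrete, here is a minimal sketch of banner-to-service matching, the core of what any recon agent does after connecting to a port. The signature patterns below are illustrative assumptions; a real scanner ships thousands of them.

```python
import re

# Illustrative banner-to-service patterns; a production scanner
# maintains a far larger signature database.
SERVICE_SIGNATURES = [
    (re.compile(r"^SSH-2\.0-OpenSSH_([\d.]+)"), "openssh"),
    (re.compile(r"^220 .*ESMTP"), "smtp"),
    (re.compile(r"Server: Apache/([\d.]+)", re.IGNORECASE), "apache-httpd"),
    (re.compile(r"Server: nginx/([\d.]+)", re.IGNORECASE), "nginx"),
]

def fingerprint_banner(banner: str) -> dict:
    """Map a raw service banner to a (service, version) guess."""
    for pattern, service in SERVICE_SIGNATURES:
        match = pattern.search(banner)
        if match:
            version = match.group(1) if match.groups() else None
            return {"service": service, "version": version}
    return {"service": "unknown", "version": None}

print(fingerprint_banner("SSH-2.0-OpenSSH_7.4"))
# -> {'service': 'openssh', 'version': '7.4'}
# An OpenSSH 7.4 banner on today's perimeter is itself a finding:
# that release shipped in 2016.
```

The point is not the pattern matching, which is trivial, but that an agent feeds the resulting service-and-version map straight into vulnerability lookup with no human in between.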
Vulnerability Discovery
This is where AI-Accelerated Offense becomes genuinely alarming. Models like Mythos do not just scan for known CVEs. They analyze code paths, reason about logic flaws, and chain multiple weaknesses together into viable attack paths that no individual signature would catch. The 27-year-old OpenBSD flaw and the 16-year-old FFmpeg bug were not found by matching patterns. They were found by a model reasoning through code the way a senior vulnerability researcher would, except faster and at a scale no human team can replicate. Project Glasswing, the coordinated vulnerability research initiative that deployed Mythos, found thousands of high-severity vulnerabilities across every major OS and browser in weeks.
Exploit Development
Once a vulnerability is identified, exploit development follows without the weeks-long pause that used to separate discovery from weaponization. Offensive AI compresses this by reverse-engineering patches, identifying what changed, inferring the original flaw, and generating working exploit code to confirm it. Published technical documentation from the Project Glasswing research shows Mythos can take a CVE identifier and a git commit hash and autonomously produce a working exploit within hours.
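To illustrate the patch-diffing idea (this is a toy sketch, not how Mythos actually works internally), the logic starts by parsing a unified diff and flagging hunks that add validation: the check a fix introduces is usually the check the original code was missing.

```python
import re

HUNK_RE = re.compile(r"^@@ .* @@\s*(?P<context>.*)$")
# Added lines that introduce validation are a common fingerprint of a
# security fix: the missing check is the original flaw.
VALIDATION_RE = re.compile(r"^\+.*\b(if|assert|bounds|len|sizeof|NULL)\b")

def flag_suspect_hunks(diff_text: str) -> list[str]:
    """Return function contexts whose hunks add validation-style lines."""
    suspects, context = [], ""
    for line in diff_text.splitlines():
        m = HUNK_RE.match(line)
        if m:
            context = m.group("context") or "<unknown>"
        elif VALIDATION_RE.match(line) and context not in suspects:
            suspects.append(context)
    return suspects

# Hypothetical security patch: the fix adds a length check before a copy.
patch = """\
@@ -102,6 +102,8 @@ static int parse_frame(uint8_t *buf, size_t n)
+    if (n < HEADER_LEN)
+        return -1;
     memcpy(hdr, buf, HEADER_LEN);
"""
print(flag_suspect_hunks(patch))
# -> ['static int parse_frame(uint8_t *buf, size_t n)']
```

A model doing this for real reasons about the semantics of the change, not just its shape, and then writes and runs candidate exploits against the unpatched build until one works.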
Lateral Movement and Credential Harvesting
After gaining initial access, the agent does not stop. It maps internal systems, identifies the highest-value targets, and harvests credentials autonomously. In the Chinese state-sponsored espionage campaign documented in September 2025, Claude Code was weaponized to inspect target systems, identify the highest-value databases, escalate privileges, and create backdoors without pausing for human instruction. The agent worked across roughly thirty global targets. Everything between the operator’s high-level directives was handled by AI.
Exfiltration and Documentation
The final phase is where AI-Accelerated Offense reveals how thoroughly it has replaced human skill. The agent did not just extract data. It categorized stolen information by intelligence value, prioritized the highest-privilege accounts, and wrote its own attack documentation to guide the next phase of operations. A human operator received a structured summary of what had been taken and where to go next. The fraction of the operation requiring human judgment was roughly four to six decision points across the entire campaign.
The Numbers That Should Change How You Think About Risk
If the kill chain above felt abstract, the data does not.
The patch window is now measured in days, not weeks
The average time to weaponize a disclosed vulnerability has fallen to five days, according to a recent analysis. That is not enough time for most organizations to complete a standard change approval workflow, let alone test and deploy a fix across production infrastructure. Meanwhile, 131 new CVEs are being disclosed every single day, according to a security report released in 2026. No vulnerability management process built around weekly triage meetings and manual prioritization survives that volume.
AI drives the surge in application attacks
IBM X-Force observed a 44 percent increase in attacks beginning in 2025, driven largely by exploitation of vulnerabilities in public-facing applications, many of them marked by missing authentication controls, and by AI-enabled vulnerability discovery.
AI makes it cheap and fast to scan large numbers of internet-facing services for known weaknesses, which means attackers do not need to be selective. They run against everything and let the agent sort out what yields a foothold.
Most organizations are already behind on patching
The 2025 Verizon Data Breach Investigations Report found that only 54 percent of vulnerable devices were fully remediated within the year, with a median patch time of 32 days. An attacker running offensive AI does not need to find a new flaw. They need a list of unpatched systems and a few hours. The gap between patching speed and weaponization speed is where most breaches happen, and AI has made that gap wider on the attacker’s side, while most defenders are still running the same processes they had three years ago.
The threat is already active, not theoretical
According to the State of AI Cybersecurity 2026 report, 73 percent of security professionals say AI-powered threats are already actively hitting their organizations, with automated vulnerability scanning and exploit chaining cited as the second biggest concern after hyper-personalized phishing. This is not a future problem that can be scheduled for next quarter’s planning cycle.
Why Most Security Programs Are Not Built for This
Most security programs were designed around an attacker who moves slowly. Quarterly penetration tests made sense when the threat moved on a human timescale. Monthly patching cycles were defensible when the window between disclosure and active vulnerability exploitation was measured in weeks. Manual triage was acceptable when the daily volume of CVEs was something an analyst team could process in a working day.
None of those assumptions hold anymore. AI-Accelerated Offense does not introduce new vulnerabilities into your environment. It removes the time buffer that defenders quietly depended on to function. The core gaps it exploits are specific and structural:
Patch latency: The median organizational patch time is 32 days. Offensive AI produces a working exploit in under a week. That gap is the attack surface.
Triage bottlenecks: Manual triage cannot scale to 131 new CVEs per day. Risk-based prioritization that was acceptable last year is not fast enough when AI is scanning for high-EPSS vulnerabilities within hours of disclosure.
Periodic over continuous: Penetration testing done four times a year tells you about your exposure on those four days. Offensive AI scans your perimeter continuously. The mismatch is structural, not a resourcing problem.
Signature-based detection: Most endpoint security is built on recognizing things that look like known attacks. Offensive AI generates novel variants of its own tooling in hours, iterating against detection until it finds a combination that does not trigger existing rules.
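The triage bottleneck is, at bottom, a sorting problem that should not need a human. Here is a minimal sketch of risk-based prioritization over exploitability and exposure signals; the weighting is an illustrative assumption, not Strobes' actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    epss: float             # exploit prediction score, 0.0-1.0 (FIRST EPSS)
    on_kev: bool            # listed in the CISA KEV catalog
    internet_facing: bool
    asset_criticality: int  # 1 (low) to 5 (crown jewels)

def risk_score(f: Finding) -> float:
    """Illustrative weighting: known exploitation and exposure dominate."""
    score = f.epss * f.asset_criticality
    if f.on_kev:
        score += 10          # confirmed in-the-wild exploitation
    if f.internet_facing:
        score *= 2           # reachable by an autonomous agent today
    return score

findings = [
    Finding("CVE-2026-0001", epss=0.02, on_kev=False, internet_facing=False, asset_criticality=5),
    Finding("CVE-2026-0002", epss=0.91, on_kev=True,  internet_facing=True,  asset_criticality=3),
    Finding("CVE-2026-0003", epss=0.40, on_kev=False, internet_facing=True,  asset_criticality=4),
]

queue = sorted(findings, key=risk_score, reverse=True)
print([f.cve for f in queue])
# -> ['CVE-2026-0002', 'CVE-2026-0003', 'CVE-2026-0001']
```

A scoring pass like this runs in microseconds per finding, which is what lets triage keep pace with 131 new CVEs a day where a weekly meeting cannot.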
Rob T. Lee, Chief AI Officer at SANS Institute, put it directly at RSAC 2026: AI has reduced exploit timelines from years to days, with attackers now moving toward execution in hours or minutes. Security programs built on human-speed assumptions are not just behind. They are structurally incompatible with the AI-powered cyberattacks they are trying to defend against.
Closing the Gap Before Offensive AI Finds It
Every gap in that list (patch latency, triage volume, periodic testing, signature-based detection) shares the same root cause: security programs built around snapshots of risk rather than a continuous, live picture of it. Closing those gaps is not a tool problem. It is an operational model problem.
Strobes is built on the premise that vulnerability management has to be continuous, not periodic. The platform centralizes findings from scanners, penetration tests, bug bounty programs, and external attack surface data into a single risk-prioritized queue, so security teams are always working from an accurate, current picture of their exposure rather than a snapshot that is already weeks old by the time it reaches a triage meeting.
The question security teams face is no longer “what do we have to fix” but “what do we fix in the next four hours before an agent finds it first.” Strobes maps findings against exploitability data, asset criticality, and real-time threat intelligence so that when a new CVE lands on the CISA KEV catalog, teams know within minutes whether it touches something they own and how exposed they are. That is the operational change AI-Accelerated Offense demands: not faster humans doing the same manual process, but a continuous, automated system that surfaces the right risk at the right moment before offensive AI finds it first.
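The KEV lookup described above is, at its core, a set intersection between a live asset inventory and the catalog. A minimal sketch follows; the inventory shape and catalog entries here are hypothetical, and in practice the KEV data is pulled from CISA's published JSON feed on a schedule.

```python
# Hypothetical asset inventory: hostname -> CVEs detected on that asset.
inventory = {
    "vpn.example.com":   {"CVE-2024-21887", "CVE-2023-46805"},
    "app01.internal":    {"CVE-2022-22965"},
    "build.example.com": {"CVE-2021-44228"},
}

# In production this set is refreshed from CISA's KEV JSON feed;
# these entries are illustrative.
kev_catalog = {"CVE-2021-44228", "CVE-2024-21887"}

def kev_exposure(inventory: dict[str, set[str]], kev: set[str]) -> dict[str, set[str]]:
    """Which assets carry a CVE known to be exploited in the wild?"""
    hits = {host: cves & kev for host, cves in inventory.items()}
    return {host: cves for host, cves in hits.items() if cves}

print(kev_exposure(inventory, kev_catalog))
```

The intersection itself is trivial; the hard operational requirement is that both sides of it stay current, which is exactly what periodic scanning fails to provide.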
The 24-Month Window You Cannot Afford to Ignore
The Project Glasswing technical announcement states plainly that within the next 24 months, vast numbers of bugs that have sat unnoticed in code for years will be found by AI models and chained into working exploits. This is not speculation from an outside observer. It is a projection from the researchers who built and tested the model, based on what it already does. An equivalent offensive AI capability will be available to anyone within two years.
Mythos is currently restricted to roughly 50 vetted organizations. That restriction is deliberate and temporary. The same capability trajectory that produced a model capable of autonomous zero-day discovery and exploit development in months will apply to every major AI lab building in this direction. When that changes, the current window where defenders can still outpace attackers on vulnerability discovery closes permanently.
What this means for security teams right now is specific:
Patch prioritization needs to move to hours, not days. Anything on the CISA Known Exploited Vulnerabilities catalog with internet exposure should be treated as an emergency, not a scheduled item.
Vulnerability management needs to become continuous. A process that checks exposure quarterly is not a security program built for AI-powered cyberattacks. It is a liability.
Autonomous red-teaming needs to be part of your program. Review the best AI pentesting tools available today and point an AI agent at your own perimeter from the outside with no credentials, running it the way an attacker would. It is not an advanced capability anymore. It is the new baseline.
The organizations that act on this now are not just ahead of the threat. They are the ones still operating when everyone else is writing the incident report.
The next blog in this series, How to Prepare Your Security Program for AI-Speed Attacks, covers the specific changes to patch workflows, triage processes, and continuous threat exposure management that separate organizations that survive this shift from those that become case studies in why it matters.
Frequently Asked Questions
Can smaller, cheaper AI models carry out AI-Accelerated Offense, or does it require a frontier model?
Frontier models set the capability ceiling, but the bar for meaningful offensive use is lower than most assume. Independent security research shows cheaper, lighter models handle reconnaissance, CVE triage, and n-day exploit adaptation at a fraction of the cost. The full autonomous kill chain documented in the 2025 Chinese espionage campaign used a widely available agentic coding tool, not a restricted frontier model. Defenders should plan for AI-Accelerated Offense as a broadly accessible capability, not one reserved for nation-states.
Does AI-Accelerated Offense primarily target unpatched vulnerabilities, or does it also threaten systems where patches exist?
Both, but the mechanism differs. Against unpatched systems, offensive AI weaponizes known CVEs faster than most patch cycles can respond. Against patched systems, it uses patch diffing: reverse-engineering the fix to infer the original flaw and generating a working exploit before organizations have deployed the update. Research into 2026 exploit timelines found the mean time to exploit dropped from 61 days to 28.5 days year over year. The patch window itself is the attack surface.
How can Strobes detect and respond to AI-Accelerated Offense before damage is done?
AI-driven attacks move at machine speed. Systematic enumeration, chained exploits, and lateral movement outpace any manual triage workflow. The answer is autonomous AI running on your side of the equation.
Strobes continuously maps your attack surface, correlates every asset against live exploitability data, EPSS scores, and the CISA KEV catalog, and surfaces your highest-risk exposures within minutes of a new threat emerging. Background agents then triage findings autonomously, checking exploit availability and asset sensitivity, without waiting for a human to open a ticket. Learn more about how adversarial exposure validation closes the gap between discovery and remediation.
Findings flow directly into GitHub, Jira, or team alerts with full remediation context attached. Every high-impact action requires human approval before it executes, so the right people stay in control of the decisions that matter. Your exposure window shrinks before an attacker’s automated toolchain has time to exploit it.
Is n-day exploitation now a bigger risk than zero-days in an AI-accelerated world?
For most organizations, yes. Zero-days get the headlines, but n-day exploitation is the larger operational risk at scale. AI is particularly effective at patch diffing, which means the window between a vendor publishing a fix and an attacker having a working exploit has collapsed to days. Organizations running 30-day patch cycles are carrying known exploitable vulnerabilities for most of that window.
Can defenders use offensive AI techniques against their own systems to find flaws before attackers do?
Yes. Pointing an AI agent at your own perimeter from the outside with no credentials, running it the way an attacker would, is now accessible to security teams that previously could not afford continuous external testing. Start with your highest-exposure services: internet-facing authentication endpoints, input-handling code, and anything reachable without credentials. Running this on the same cadence as your asset inventory refresh is the minimum defensible posture, given what offensive AI can now do.