The Lyceum: Cybersecurity Weekly — Mar 18, 2026
Photo: lyceumnews.com
Week of March 18, 2026
The Big Picture
This week, the things you trust turned out to be the things that hurt you. Iran-linked hackers didn't deploy exotic malware against a Fortune 500 medical device company — they just logged into its device management console and pressed "factory reset" on everything. Researchers published blueprints showing how AI agents can infect each other like network worms from 2003. And a U.S. defense contractor's iPhone hacking toolkit surfaced in the hands of Russian spies and Chinese criminals, after reports that an insider removed it from secure control. The theme isn't "new threats." It's that the infrastructure we already depend on — admin tools, AI agents, contractor relationships — is leaking capability in every direction.
What Just Shipped
- CERT-UA Advisory on APT28 / CVE-2026-21509 (CERT-UA): Ukraine's CERT published a bulletin detailing APT28 (UAC-0001) exploiting a new vulnerability, CVE-2026-21509, in campaigns targeting Ukraine and EU countries.
- CERT-UA Advisory on UAC-0252 / SHADOWSNIFF & SALATSTEALER (CERT-UA): New advisory on UAC-0252 deploying two stealers — SHADOWSNIFF and SALATSTEALER — against Ukrainian targets.
- Handala Hack Modus Operandi Report (Check Point Research): Deep-dive into the Iranian MOIS-linked group behind the Stryker wiper attack, detailing VPN brute-force entry, NetBird lateral movement, and multiple wiper variants.
- Claude Code RCE Vulnerabilities — CVE-2025-59536 & CVE-2026-21852 (Check Point Research): Disclosed remote code execution flaws in Anthropic's Claude Code project files that enable API token exfiltration.
- Silver Dragon Espionage Campaign (Check Point Research): China-linked Silver Dragon targeting organizations in Southeast Asia and Europe with PlugX and Cobalt Strike via war-themed lures.
- Iranian MOIS–Cybercrime Convergence Report (Check Point Research): Documents Iranian state actors adopting criminal tools like the Rhadamanthys infostealer alongside destructive wipers.
- FreeBuf: AI Vulnerabilities in Amazon Bedrock, LangSmith, and SGLang (FreeBuf): Chinese-language reporting on exploitable flaws in major AI platforms enabling data theft and remote code execution — not yet covered in English-language press.
This Week's Stories
Iran Didn't Ransomware Stryker. They Just Erased It.
Most cyberattacks come with a ransom note. This one came with a manifesto.
On March 11, Stryker Corporation — a Fortune 500 medical technology company that makes everything from surgical robots to defibrillators — suffered a catastrophic wiper attack. The group responsible, Handala, is linked to Iran's Ministry of Intelligence and Security (MOIS) and assessed by Check Point Research as a persona of Void Manticore, an MOIS-affiliated actor.
Here's the twist that should keep every IT director awake tonight: the attackers didn't write custom malware. They gained administrative access to Stryker's Microsoft Intune environment — the platform companies use to remotely manage employee laptops, phones, and servers — and used its built-in "remote wipe" and "factory reset" features at industrial scale. According to ProArch's analysis, some departments saw up to 95% of devices erased during the attack. Reporting on the total number varies — some outlets say tens of thousands, others cite over 200,000 systems — but the technical lesson is the same either way.
Stryker says its medical products like LIFEPAK defibrillators remain safe, but hospital workflows were disrupted: some emergency departments fell back to radio calls and manual processes. The company still can't process orders or ship devices.
The terrifying part isn't the sophistication. It's the simplicity. Any organization that uses centralized device management — which is most large organizations — has this same capability sitting in its admin console right now, one compromised credential away from catastrophe.
The U.S. Built an iPhone Hacking Toolkit. Russia and China Used It.
A sophisticated iPhone exploitation toolkit called "Coruna" — 23 components designed to break into iPhones — was built inside L3Harris, a major U.S. defense contractor. It was intended for Western intelligence operations. It ended up everywhere else.
According to TechCrunch, corroborated by Google's Threat Intelligence Group and mobile security firm iVerify, the toolkit was first used in highly targeted government operations, then deployed by Russian intelligence against Ukrainians, and finally picked up by Chinese cybercriminals running broad cryptocurrency theft campaigns. The leak vector: a former L3Harris employee named Peter Williams, who pleaded guilty to selling at least eight exploits to a Russian broker — a broker the U.S. Treasury sanctioned last month.
The good news: Coruna targets older iOS versions, and iPhones running iOS 17.3 or newer are not vulnerable. The bad news: government-grade exploit toolkits are now cycling from intelligence agencies into criminal networks within months, not years. This is the spyware supply chain working exactly as critics warned it would — and a single insider removal was enough to accelerate the spread.
Update your iPhone today.
Researchers Just Built a Worm That Jumps Between AI Agents
Remember computer worms — the self-replicating malware that tore through networks in the early 2000s? Researchers have now demonstrated the same concept working across AI agents, and the results are uncomfortable.
A new preprint called ClawWorm (not yet peer-reviewed) shows how one compromised AI agent can embed hidden instructions in its output — a document summary, a task description, metadata in a file — and when another agent reads that output, it gets silently "infected" and begins spreading the payload further. With modern AI agents wired into real tools (email, Slack, ticketing systems, file storage), the worm can exfiltrate data, create tasks, or open tickets that trigger still more agents.
This isn't theoretical hand-waving. Separate research showed that indirect prompt injection — hiding malicious instructions inside data an agent reads — works reliably at scale. Over 60,000 successful injections were documented across 1.8 million attempts.
Meanwhile, malicious "skills" uploaded to ClawHub, an open marketplace for AI agent capabilities, are already delivering real malware. A separate architectural flaw means any website you visit can silently connect to your local AI agent through your browser. Microsoft has issued an advisory.
The practical upshot: AI agents need the same network-segmentation and input-sanitization discipline we use for regular software — isolation between agents, read-only credentials, human approval gates, and the assumption that every piece of data an agent touches might be hostile. Most organizations deploying agents aren't doing any of this yet.
Your Browser Is Still Ground Zero: Chrome Patches Multiple Zero-Days Under Active Attack
While everyone debates AI threats, old-fashioned browser bugs remain one of the fastest ways to get compromised. Google pushed an urgent Chrome update this week to fix at least two zero-day vulnerabilities — flaws attackers were already exploiting before the patch existed — with a third high-severity bug (CVE-2026-3909, in Chrome's Skia graphics engine) flagged as actively exploited days later.
These are memory-safety problems: "use-after-free" and out-of-bounds write bugs that let a malicious website run code on your machine just by getting you to visit a page. No clicks required beyond loading the URL. This is classic drive-by territory — malvertising, poisoned search results, or compromised legitimate sites.
Chrome zero-days have been chained with other flaws in the past to install spyware on journalists, dissidents, and executives. The fix is simple but non-negotiable: update Chrome (and any Chromium-based browser like Edge or Brave) right now, and make auto-update a policy, not a suggestion. For security teams, "who hasn't updated yet?" should be treated as a live-fire exercise.
CISA's Exploit List Keeps Growing — And the Targets Are Your Admin Tools
CISA's Known Exploited Vulnerabilities catalog — the U.S. government's official list of "things attackers are definitely using right now" — added several new entries over the past week, and the pattern tells a story. The additions include a VMware Aria Operations for Networks command-injection bug, remote-code-execution flaws in SolarWinds and Ivanti Endpoint Manager, and vulnerabilities in Wing FTP Server and the n8n workflow automation platform.
Notice what these have in common: they're not customer-facing apps. They're the tools that manage your environment — the platforms that sit deep in the network with high privileges, where a single unpatched instance gives someone the keys to everything else. Federal agencies have remediation deadlines in late March. Private companies don't get a memo, but the risk is identical.
After the Stryker attack showed what happens when attackers weaponize admin consoles, this pattern in CISA's catalog is a flashing signal: patching your "plumbing" — the monitoring, management, and automation tools — is now as urgent as patching the apps your customers see. That's where attackers are already inside.
⚡ What Most People Missed
UK military engineers are calling Palantir a national security risk. Two senior Ministry of Defence systems engineers told The Nerve that Palantir's deep integration across British government data creates a "mosaic effect" — combining individually unclassified datasets to reveal classified information, like nuclear submarine locations. Separately, the Good Law Project found that about a third of NHS trusts were not meeting minimum data security standards at the time of its review. Palantir denies the claims. This is Europe's defining tech-sovereignty fight in slow motion.
AI-generated malware just got a name. IBM researchers identified a malware family called "Slopoly" that they believe was authored with LLM assistance and tied it to the Hive0163 ransomware group. The code patterns suggest AI-assisted generation and obfuscation — meaning attackers aren't just using AI for phishing anymore; they're using it to write evasive tooling.
The 2026 International AI Safety Report quietly elevated "persuasion risk." The report elevated persuasion risk from a theoretical concern to a concrete, near-term threat, citing behavioral studies showing LLM-generated messages can measurably shift opinions, and that model explanations — even wrong ones — make misinformation more persuasive. If your product generates or personalizes messaging, start documenting guardrails before regulators force your hand.
Agent tool chains are failing basic fuzzing tests. A study that ran automated fuzzing against popular LangChain tools found roughly twenty times more erroneous behaviors than simple prompt tests revealed. Many tools had ambiguous documentation that caused silent mis-executions when driven by an LLM. If you're deploying agents, fuzz your tool endpoints like you'd fuzz any untrusted API.
📅 What to Watch
- If CISA issues advisories specifically about Microsoft Intune abuse, it could prompt formal remediation guidance or deadlines for federal agencies and increase compliance scrutiny for government contractors, raising immediate hardening and audit costs across sectors that rely on centralized MDM.
- If AI platform vendors start restricting tool-invocation capabilities or adding agent isolation features, expect product roadmaps to require token-scoping changes, stronger sandboxing, and migration of existing deployments to architectures with explicit human-approval gates — not just patching individual flaws.
- If the House of Commons Science and Technology Committee (a Commons select committee conducting inquiries) formally requests documents on Palantir's Ministry of Defence contracts, the story moves from investigative journalism into political accountability — and the precedent could reshape how European governments evaluate U.S. tech vendors handling sovereign data.
- If Meta doesn't issue a detailed technical clarification on its encryption policy changes within the next week, security researchers will likely publish independent analyses of what metadata was being collected under the old "end-to-end encrypted" regime — analyses that may be significantly less favorable to Meta's public framing.
- If underground forums begin advertising new proxy services after the SocksEscort/Avrecon takedown of 360,000 infected devices, watch for a targeted spike in credential-stuffing, ephemeral cloud-VM rentals, and rapid turnover of anonymization infrastructure that attackers use to monetize crypto-theft campaigns.
The Closer
An Iranian hacker group pressing "factory reset" on a Fortune 500 company through its own admin console. A defense contractor's crown-jewel iPhone exploits traveling from Virginia to Moscow to a Chinese crypto scam. AI agents cheerfully infecting each other like it's 2003 and someone just opened a Blaster worm email attachment, except this time the worm is a polite paragraph hidden in a meeting summary.
The most secure text editor in computing history held that title for 41 years, then someone added Markdown support.
Stay patched, stay skeptical.
If someone you know is deploying AI agents without reading any of this — do them a favor and forward it.