AI Weekly — Mar 12, 2026
The Big Picture
This was the week AI stopped being a debate topic and started being infrastructure — for wars, for hospitals, for the power grid, and for the political fights over all three. The U.S. military confirmed it's using AI to plan airstrikes in Iran, Nvidia bet $26 billion on becoming more than a chip company, and Bernie Sanders introduced legislation to freeze the entire data center boom. The question is no longer "what will AI change?" This week delivered some of the answers, and not all of them are comfortable.
This Week's Stories
The AI That's Helping Bomb Iran — and the Company That Still Objects
Here's a sentence that captures the strangeness of this moment: the U.S. military is actively using an AI model built by a company it has blacklisted and that is suing it in federal court.
NBC News confirmed that the military is using AI systems from Palantir — software that relies in part on Anthropic's Claude — to identify potential targets in the ongoing strikes against Iran. This is happening even as Defense Department officials have clashed with Anthropic over limitations on how Claude can be used in military contexts.
The scale is staggering. Admiral Brad Cooper said U.S. forces have struck more than 5,500 targets inside Iran, with warfighters using "a variety of advanced AI tools" to "cut through the noise and make smarter decisions faster than the enemy can react." The Pentagon insists humans always make the final call. But critics point out that when a commander has minutes to review an AI-generated strike recommendation assembled from data no human could have synthesized alone, "independent judgment" starts to look more like "approval." Targeting systems used in prior conflicts have shown error rates around ten percent in post-conflict analyses.
This is the first public admission at this scale that AI is part of active combat planning — not an experiment, not a pilot, but doctrine. Project Maven and commercial models have been named in follow-up reporting. Congress is calling for oversight hearings, but no dates are set. The governance framework hasn't just fallen behind — it was never in the race.
Nvidia Just Spent $26 Billion to Stop Being "Just a Chip Company"
You know Nvidia as the company that sells shovels in the AI gold rush. This week, they walked into the mine.
Nvidia announced plans to invest $26 billion over five years to build open AI models — a strategic pivot from chipmaker to frontier AI lab. The opening salvo: Nemotron 3 Super, a 120-billion-parameter open model designed for "agentic AI" — systems where AI doesn't just answer questions but takes actions, uses tools, and coordinates with other AI systems.
The design is clever. Only 12 billion of those 120 billion parameters are active during any given task, using a "mixture of experts" architecture — think of a Swiss Army knife that only unfolds the blade you're actually using. It supports a million-token context window (roughly the length of several novels), giving AI agents sustained memory across long tasks. And it's fully open: weights, datasets, and training recipes, so anyone can customize and deploy it.
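For readers who want the mechanics, here's a minimal sketch of mixture-of-experts routing in NumPy. Toy sizes and random stand-in weights throughout; this is illustrative only, not Nemotron 3's actual architecture or code.

```python
# Toy mixture-of-experts routing in NumPy. Random stand-in weights and toy
# sizes; illustrative only, not Nemotron 3's actual architecture or code.
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 64, 8, 2                 # hidden size, experts, experts active per token

router_w = rng.normal(size=(D, N_EXPERTS))     # router weights (learned in a real model)
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

def moe_layer(x):
    """Route one token through only TOP_K of the N_EXPERTS sub-networks."""
    scores = x @ router_w                      # one routing score per expert for this token
    top = np.argsort(scores)[-TOP_K:]          # indices of the TOP_K best-scoring experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                       # softmax weights over the chosen experts
    # Only TOP_K expert matmuls actually run; every other expert sits idle.
    # That is the sense in which 12B of the 120B parameters are "active."
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

print(moe_layer(rng.normal(size=D)).shape)     # (64,): same shape out, 2 of 8 experts used
```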
The strategic punchline: every company that builds on Nvidia's open models becomes a likelier buyer of Nvidia's chips. It's a flywheel wrapped in a gift. By giving governments and enterprises a model they can run and inspect locally, Nvidia positions itself to become the default vendor for "sovereign AI" — countries that insist on controlling their own AI stack. The launch comes ahead of Nvidia's GTC conference kicking off March 16, where next-generation GPU details are expected. Community reaction has been blunt: this is an attempt to redefine who controls AI's future by owning both the hardware and the models that run on it.
Bernie Sanders Wants to Freeze the AI Infrastructure Boom
At the exact moment hyperscalers are writing hundred-billion-dollar checks for new data centers, a U.S. senator wants to stop them from breaking ground.
Sen. Bernie Sanders introduced legislation calling for a moratorium on AI data center construction. The bill was introduced in the Senate on March 10, 2026, and as of publication was awaiting referral to a Senate committee. Senators Elizabeth Warren, Chris Van Hollen, and Richard Blumenthal are separately investigating ties between data center energy use and rising electricity bills. The numbers backing their case are real: residential electricity prices are forecast to rise another 4% nationwide in 2026, with $23 billion in power-capacity costs attributable to data centers — costs ultimately passed to consumers.
What makes this politically interesting is the coalition. In conservative Florida, Governor Ron DeSantis announced an AI "bill of rights" giving local communities the right to limit new data center construction. Florida's House passed a bill this week to regulate AI data centers. A left-wing senator and a right-wing governor agreeing on anything in 2026 is a genuine signal.
The bill has essentially zero chance of passing the current Senate. But it's normalizing the idea that AI infrastructure is a legitimate target for environmental and community regulation — the same political path that slowed nuclear power plant construction in the 1970s, not through a single ban but through accumulated local opposition. Watch state-level versions, not the federal one. Watch Virginia, Georgia, and Michigan.
Anthropic Is Losing to the Pentagon — and Winning Everywhere Else
While Anthropic has spent weeks in a public feud with the Department of Defense — getting blacklisted, filing federal lawsuits, watching military contracts evaporate — something quiet has been happening in the commercial world. Anthropic is eating OpenAI's lunch.
Ramp, a corporate finance platform that tracks real card transactions from thousands of companies, published data showing Anthropic has significantly closed the gap with OpenAI in enterprise spending — and in some categories, overtaken it. As of March 2026, nearly one in four businesses on Ramp's platform pays for Anthropic, up sharply from a year ago. Separate data from the same month shows Anthropic winning roughly 70% of first-time enterprise AI purchases when businesses choose between Claude and OpenAI's offerings.
The pattern makes sense. Claude has built a reputation for reliability on long, complex documents — the kind of thing lawyers, analysts, and compliance teams depend on. Enterprise buyers aren't picking one AI; they're using several, and Claude is increasingly the one they trust with the serious work.
The irony is sharp: the safety limits Anthropic refused to drop for the Pentagon are the same ones making enterprise buyers feel comfortable deploying Claude on sensitive internal data. The military standoff has coincided with, not slowed, its commercial rise.
AI Agents Have Moved Into Hospital Paperwork — Quietly
Nobody held a press conference. But at HIMSS this week — the annual healthcare IT conference where hospital CIOs decide what they're buying next — something notable happened: major vendors showed up not with chatbot demos, but with named, deployable AI agents built for live hospital workflows.
The biggest target: prior authorization, the grinding process where hospitals ask insurers for permission before performing procedures. It's exactly the kind of task AI agents excel at — repetitive, rule-based, and time-consuming, yet intolerant enough of errors that vendors keep humans in the loop. UiPath demonstrated agentic solutions already running parts of these flows in production, with human sign-off at key decision points. Artera reported hundreds of providers have deployed its AI agents for patient intake, reminders, and routing, claiming millions of staff hours saved as of HIMSS 2026.
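To make "human sign-off at key decision points" concrete, here's a minimal sketch of an approval gate. Hypothetical names throughout; this is not UiPath's or Artera's actual design, just the shape of the pattern.

```python
# Minimal approval gate for an agent-drafted prior-auth request.
# Hypothetical names throughout; not UiPath's or Artera's actual APIs.
from dataclasses import dataclass

@dataclass
class PriorAuthDraft:
    patient_id: str
    procedure_code: str
    justification: str              # in production, assembled by the agent from chart data

def agent_draft(patient_id, procedure_code):
    # Stubbed: a real agent would build the justification from the record.
    return PriorAuthDraft(patient_id, procedure_code, "Meets payer criteria X and Y.")

def human_review(draft):
    """The key decision point: nothing reaches the payer without sign-off."""
    print(f"Review {draft.procedure_code} for {draft.patient_id}: {draft.justification}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def submit(draft):
    print("Submitted to payer.")    # stub for the actual payer API call

draft = agent_draft("pt-0001", "CPT-70553")
if human_review(draft):
    submit(draft)
else:
    print("Held for manual handling.")
```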
The pattern emerging is that the first serious production agents are administrative clerks, not robot doctors — systems living in inboxes and scheduling tools, not operating rooms. That's less dramatic and far more consequential for how hospitals are actually staffed. Early adopters are already hiring "agent operations" staff — a hybrid role blending IT admin, governance, and workflow design to monitor fleets of AI agents. When a job title appears, the pilot phase is over.
Hospitals are one of the most regulated, liability-sensitive environments in the economy. If AI agents are trusted with administrative workflows there, the bar for enterprise adoption everywhere else just dropped.
New Products & Launches
Apple M5 Max MacBook Pro shipped March 11 with up to 128GB of unified memory and Neural Accelerators in every GPU core. Early benchmarks show it running 70-billion-parameter AI models on battery at 60–90 watts — new territory for laptop-based AI. Dedicated GPUs still win on raw speed by 2.5–3x, but the M5 Max never runs out of memory, which matters the moment your model exceeds a GPU's capacity. Serious local AI just moved off server racks and into backpacks.
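The memory claim is easy to sanity-check with back-of-envelope math, using standard bytes-per-parameter figures and ignoring activation and KV-cache overhead:

```python
# Back-of-envelope memory math behind the unified-memory advantage.
# Rough figures only: ignores activations, KV cache, and runtime overhead.
PARAMS = 70e9                                   # a 70B-parameter model
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for fmt, b in BYTES_PER_PARAM.items():
    gb = PARAMS * b / 1024**3
    print(f"{fmt}: {gb:6.1f} GB | 24GB GPU: {'fits' if gb <= 24 else 'no'} "
          f"| 128GB unified: {'fits' if gb <= 128 else 'no'}")
# fp16: ~130 GB (nothing consumer fits); int4: ~33 GB (fits easily in 128GB
# unified memory, but not in a typical 24GB consumer GPU without tricks).
```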
Nemotron 3 Super from Nvidia launched as an open-weight agentic model with 120B parameters, 12B active per task, and a million-token context window — positioned for enterprises wanting to run multi-agent systems on their own infrastructure rather than renting cloud time.
ntransformer, an open-source project, showed how to run a 70B-parameter model on a single gaming GPU by streaming data from fast storage, trading speed (~0.5 tokens/second) for accessibility. It's slow, but it means a $600 graphics card can now touch frontier-scale models.
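Conceptually, the trick looks like this: hold one layer's weights in memory at a time, reading each layer off fast storage as the forward pass reaches it. A toy sketch with stand-in layer math, not ntransformer's actual code:

```python
# Conceptual sketch of weight streaming. Toy sizes and stand-in layer math;
# not ntransformer's actual code.
import numpy as np

N_LAYERS, D = 4, 256                           # a real 70B model has ~80 layers
rng = np.random.default_rng(0)

# One-time setup: pretend these files are the model shards sitting on NVMe.
for i in range(N_LAYERS):
    np.save(f"layer_{i}.npy", rng.normal(size=(D, D)).astype(np.float32))

def forward_streaming(x):
    """Forward pass with only one layer's weights resident at a time."""
    for i in range(N_LAYERS):
        w = np.load(f"layer_{i}.npy")          # stream this layer in from storage
        x = np.tanh(x @ w)                     # stand-in for the real layer computation
        del w                                  # release it before loading the next
    return x

print(forward_streaming(rng.normal(size=D).astype(np.float32)).shape)   # (256,)
# The trade: every token re-reads the whole model from disk, which is why
# throughput lands near half a token per second rather than dozens.
```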
⚡ What Most People Missed
The FTC's quiet deadline already passed. A December 2025 Executive Order set March 11 as the deadline for the FTC to issue a formal policy statement on how its "unfair and deceptive" practices authority applies to AI. The real play: the federal government is exploring whether it can use the FTC to override state AI laws in California, Colorado, and elsewhere. The Department of Justice has established an AI Litigation Task Force. The patchwork of state AI regulation may be about to get a federal buzzsaw.
The people who built the most famous AI agent say function calling is broken. A former backend lead at Manus — the Chinese AI agent startup that went viral earlier this year — posted on Reddit that after two years of production work, they abandoned function calling entirely. Instead of having the AI pick from a menu of tools, they let it write and execute code directly — an approach called CodeAct. Multiple independent builders are quietly reporting the same thing: the standard technique looks clean in demos and breaks in production.
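To see the difference, compare the two shapes side by side. A toy illustration with hypothetical tool names, not Manus's implementation:

```python
# Toy contrast between function calling and CodeAct. Hypothetical tool
# names; not Manus's actual code.

def search_flights(origin, dest):
    return [{"flight": "XY123", "price": 420},  # stubbed tool
            {"flight": "XY456", "price": 380}]

def book(flight):
    return f"booked {flight}"                   # stubbed tool

# Function calling: the model emits one structured call per turn, e.g.
#   {"name": "search_flights", "arguments": {"origin": "SFO", "dest": "JFK"}}
# and the host parses, validates, and dispatches it. Chaining steps means
# a round trip through the model for every call.

# CodeAct: the model emits a code snippet that composes the tools directly.
model_output = """
options = search_flights("SFO", "JFK")
cheapest = min(options, key=lambda f: f["price"])
result = book(cheapest["flight"])
"""

# The host runs it in a namespace exposing only the tools. (A production
# system would sandbox this far more seriously than stripping builtins.)
ns = {"__builtins__": {}, "min": min, "search_flights": search_flights, "book": book}
exec(model_output, ns)
print(ns["result"])                             # booked XY456
```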
Meta bought a social network built for bots. Moltbook is a Reddit-style forum where only AI agents can post and humans can only watch. Meta acquired it this week, folding the founders into its Superintelligence Labs. If Meta can harvest patterns from millions of agents arguing and collaborating in public, that's a training and product-testing loop nobody else has at scale.
AI "brain fry" is getting quantified. A Harvard Business Review piece (March 2026 survey) reported that roughly 1,500 workers said while AI offloads repetitive tasks, the overhead of managing multiple AI tools increased decision fatigue by about 33% and produced a measurable bump in errors. As hospitals and enterprises deploy agent fleets, this human-factors signal matters: successful AI at scale won't just be about accuracy — it'll be about how much supervision costs the people doing the supervising.
A fruit fly's brain was copied to a computer — and it walked. Researchers at Eon Systems mapped all 125,000 neurons and 50 million synaptic connections of a fruit fly brain, loaded the map into a physics simulation, and watched it produce walking behavior without being programmed to walk. Zero practical application today. But it's the first time biological intelligence has been meaningfully ported to software, and both neuroscientists and AI researchers are paying close attention.
📅 What to Watch
- If a federal judge grants Anthropic a preliminary injunction limiting the Department of Defense's use of Claude, it would establish that certain AI safety limits are legally protected — and every defense contractor will quietly rewrite their government contracts overnight to avoid exposure.
- If Nvidia's Nemotron models show up quickly on open-model leaderboards, it would indicate Nvidia is succeeding at locking developers at the software layer too — forcing competitors to choose between partnering with Nvidia or building entirely separate stacks.
- If the FTC pairs its new AI policy statement with a major enforcement action, every company with a consumer-facing model would need a compliance audit far faster than many legal teams can schedule.
- If Amazon quantifies AI productivity gains in upcoming earnings alongside increased "review work" complaints, that will become the most credible, granular real-world data on whether enterprise AI actually reduces labor hours per task or simply reallocates work into supervision and review.
- If state-level data center bills in Florida and Colorado survive committee, hyperscalers will start negotiating permits like power plants — offering local jobs and grid investments in exchange for political cover.
A robot tidied a living room without human help, a fruit fly's brain walked inside a computer, and the U.S. military confirmed it's using AI to pick bombing targets while a senator tries to stop anyone from plugging in a new server. Meanwhile, the builders of Manus say the technique everyone teaches doesn't work — which feels about right for a field where the laptops just got smarter and the humans just got more tired.
Until next week.