AI Daily — Mar 08, 2026
Sunday, March 8, 2026
The AI industry is arguing with itself today — and the arguments are the story. OpenAI's robotics chief walked out over a Pentagon deal she says was signed before the rules were written. Block employees are telling reporters that AI can't actually do their jobs, undercutting Jack Dorsey's dramatic layoff narrative. And quietly, Alibaba's latest open-source model is making developers wonder why they're still paying for API access. Control is the thread: who decides where AI goes, who it replaces, and who gets to sell it to the government.
OpenAI's Robotics Chief Quit Over the Pentagon Deal — and Said Exactly Why
Robotic arm and engineer in research lab
Caitlin Kalinowski, who led OpenAI's robotics and hardware team, resigned yesterday and posted a precise, public explanation. She didn't object to AI in defense broadly — she objected to the process. "The announcement was rushed without the guardrails defined," she wrote. "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
The backstory matters. The Pentagon had been negotiating with Anthropic first, but those talks collapsed after Anthropic pushed for strict limits on domestic surveillance and autonomous weapons. OpenAI stepped in and signed instead. Before the deal was announced, 91 OpenAI employees had signed an internal letter urging leadership to reject military AI contracts.
OpenAI told Engadget there are no plans to replace Kalinowski — a strange sentence for a team of roughly 100 people that was just hitting its stride. The Brussels Times picked up the story, giving it international visibility. This is the most senior ethics-driven departure from a major AI lab in recent memory, and it lands at the exact moment these companies are being asked to show where their red lines actually are — not just where they say they are.
"You Can't Really AI That" — Block Workers Push Back on Dorsey's Layoff Story
Empty office floor after employee layoffs
Jack Dorsey told the world that AI made 4,000 jobs at Block unnecessary. His remaining employees are telling a different story.
In February, Block cut about 40% of its staff; Dorsey attributed the move to "intelligence tools" and predicted that "within the next year, the majority of companies will reach the same conclusion." Block's shares closed up 17% in the trading session after the announcement.
But today, The Guardian reported that current and former Block workers are publicly disputing the premise. They describe specialized, judgment-heavy work that AI tools simply cannot replicate — the kind of context-rich, relationship-dependent labor that doesn't reduce to a prompt. A former head of communications wrote an op-ed suggesting the layoffs are more about Dorsey proving his "A.I. credentials" than a true signal that AI is doing the work. One data scientist who left voluntarily said she was offered a 75% pay increase to stay — casting doubt on how much Block is actually saving.
Here's the detail that should make other CEOs nervous: Block has gone through three rounds of layoffs since 2024, and in the earlier rounds Dorsey cited performance reasons with no mention of AI, despite the same tools being available. The "AI made me do it" narrative is getting picked apart in real time.
The US Government Just Moved to Classify Anthropic as a "Supply Chain Risk"
The Trump administration is moving to designate Anthropic — maker of the Claude chatbot — as a "supply chain risk," a label normally reserved for telecom and defense firms. If it sticks, federal contractors would need to stop using Claude in their systems, and enterprises would face real legal questions about depending on it.
This is the subplot of the OpenAI/Pentagon story that deserves its own headline. After Anthropic's negotiations with the Pentagon collapsed over safeguards, the government moved to apply a bureaucratic tool to limit the company's use in sensitive supply chains. Anthropic has signaled it intends to challenge the designation legally, arguing its caution was protective, not obstructive.
The dynamic looks a lot like the early 2010s encryption backdoor debates: one company takes the principled stand and pays a market cost; another takes the contract and absorbs the reputational hit. Both are betting their brand on opposite theories of what customers will value in 2026. If the designation is finalized this month, expect procurement teams across government and enterprise to start re-reading vendor contracts for clauses that could force sudden model swaps.
Alibaba's Qwen 3.5 27B Is Beating Models Three Times Its Size
Something is happening in the open-source AI community that deserves more attention than it's getting.
Alibaba's Qwen team launched the full Qwen 3.5 family: nine models over 16 days. The 27B model (27 billion parameters, a rough measure of a model's size) has become the talk of r/LocalLLaMA this week, the community where developers run AI on their own hardware. Members there report that it outperforms much larger competitors on coding and reasoning tasks.
The architectural innovation doing the heavy lifting is "Gated Delta Networks," an attention mechanism that processes long documents in near-linear rather than quadratic time, meaning it doesn't grind to a halt on long inputs the way standard transformers do. The 35B-A3B variant uses Mixture-of-Experts (only a fraction of the model activates for each input), and the Qwen team says it outperforms the previous generation's 235B model. The 9B model scored 70.1 on the MMMU-Pro visual reasoning benchmark, beating both Gemini 2.5 Flash-Lite and GPT-5-Nano, though these are Alibaba's own numbers, not independently verified.
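The quadratic-versus-linear distinction is easy to see with a back-of-envelope cost model. A minimal sketch follows; the constants are illustrative, not measured from Qwen 3.5 or Gated Delta Networks specifically:

```python
# Toy comparison: why near-linear attention matters for long inputs.
# Standard self-attention cost grows roughly quadratically with sequence
# length n; linear-attention-style mechanisms grow roughly linearly.
# d is a hypothetical per-head feature dimension, chosen for illustration.

def attention_ops(n: int, d: int = 128) -> int:
    """Approximate multiply-adds for standard attention: O(n^2 * d)."""
    return n * n * d

def linear_attention_ops(n: int, d: int = 128) -> int:
    """Approximate multiply-adds for a linear-time mechanism: O(n * d^2)."""
    return n * d * d

for n in (1_000, 10_000, 100_000):
    ratio = attention_ops(n) / linear_attention_ops(n)
    print(f"n={n:>7}: quadratic/linear cost ratio is about {ratio:,.0f}x")
```

The ratio grows with input length, which is why the savings only become dramatic on book-length documents and long chat histories.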
Cheap, capable, runs locally: that combination is what breaks the dependency on cloud AI. Developers are already sharing crash reports and fixes for running it on consumer hardware — the messy, beautiful reality of open models hitting real infrastructure. The 2B model runs on any recent iPhone in airplane mode. Watch whether enterprise developers begin shifting workloads off OpenAI and Anthropic APIs toward self-hosted Qwen over the next quarter.
Shenzhen's "OpenClaw" Shows What Grassroots AI Adoption Actually Looks Like
A viral clip from Shenzhen today is basically a time-lapse of automation creeping into everyday life — and the crowd is cheering.
Nearly 1,000 people lined up near Tencent's Shenzhen campus for a public installation event for "OpenClaw": an AI agent system that configures, tunes, monitors, and maintains fleets of claw machines without on-site staff. Tencent engineers were reportedly helping attendees install and configure the software for free. Students, developers, and retirees stood in the same line. The underlying product, from iSetupClaw, uses an AI agent to remotely handle everything from setup to diagnostics across thousands of locations that used to need technicians.
It's just claw machines. But the pattern scales: one AI agent quietly replaces a whole chain of specialized roles, then rolls out to thousands of locations at once. The Reddit thread drew 700+ upvotes and a mix of awe and anxiety. Chinese retail and entertainment spaces are becoming the living lab for "small but everywhere" automations — and the fact that people are lining up to participate, rather than protesting, tells you something about the cultural gap in how different countries are absorbing this technology.
⚡ What Most People Missed
Qwen's lead researcher quietly resigned on March 4 — and it may matter more than the model releases. Junyang Lin, described as "key to releasing their open weight models from 2024 onwards," departed after a re-org put a new hire from Google's Gemini team in charge. The open-source community built its trust in Qwen around Lin's leadership. The 3.5 family may be the team's swan song.
Claude briefly led the App Store amid the Pentagon backlash. Users reportedly canceled ChatGPT subscriptions and #CancelChatGPT trended; as of March 7, Anthropic's Claude was the top AI app on Apple's U.S. App Store. Whether that's a lasting shift or a protest spike is the real question.
A solo developer classified 3.5 million US patents on one GPU. A patent lawyer ran NVIDIA's Nemotron-9B on a single RTX 5090, tagged every US patent from 2016–2025 into 100 categories over 48 hours, and built a free search tool on top of it. "AI-grade search over millions of documents" just moved from FAANG-only to weekend-project territory.
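The pattern behind that weekend project — stream documents through a small local model with a one-line classification prompt, then bucket the answers — can be sketched as follows. Everything here is illustrative: the category list, the prompt wording, and the stubbed model call (a real run would swap in local inference, e.g. via llama.cpp or a similar runtime) are assumptions, not details from the project.

```python
# Sketch of batch document classification with a local model.
# classify_stub stands in for actual LLM inference; a keyword matcher
# keeps the example runnable without a GPU. Categories are hypothetical.

CATEGORIES = ["machine learning", "battery", "medical device"]  # the real project used 100

def build_prompt(abstract: str) -> str:
    # What would be sent to the model for each document.
    options = "; ".join(CATEGORIES)
    return f"Classify this patent abstract into one of [{options}].\nAbstract: {abstract}\nCategory:"

def classify_stub(abstract: str) -> str:
    # Stand-in for model inference: first category keyword found in the text wins.
    text = abstract.lower()
    for category in CATEGORIES:
        if category in text:
            return category
    return "unclassified"

def classify_corpus(abstracts: list[str]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {}
    for abstract in abstracts:
        _ = build_prompt(abstract)  # in a real run, this prompt goes to the model
        buckets.setdefault(classify_stub(abstract), []).append(abstract)
    return buckets

result = classify_corpus([
    "A battery electrode with improved cathode chemistry.",
    "A machine learning method for detecting anomalies.",
])
print({category: len(docs) for category, docs in result.items()})
```

The economics, not the code, are the story: with a 9B model generating one short label per document, 3.5 million documents over 48 hours works out to roughly 20 classifications per second, well within reach of a single high-end consumer GPU.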
An open-source deepfake detector now shows you where the face was faked. VeridisQuo combines spatial and frequency analysis to render heatmaps of manipulated regions — designed for journalists and investigators who need to show their work, not just get a confidence score. It's research-grade, not production-ready, but 462 upvotes on r/MachineLearning suggest real practitioner interest.
AI data centers are sparking local backlash over power lines. As hyperscalers race to add GPU capacity, utilities are stringing high-voltage transmission lines across rural counties. Fortune documented residents calling one project "hell" as towers march across farmland. These fights can delay expansions and raise costs — the physical infrastructure bill for training next-generation models is now colliding with voters.
📅 What to Watch
- If more OpenAI employees resign over the Pentagon deal this week, it means internal consensus around the contract is weaker than leadership has acknowledged — and the talent redistribution to smaller labs and competitors could accelerate OpenAI's robotics slowdown at the worst possible time.
- If Claude's App Store lead holds through next weekend, it signals that user sentiment around AI ethics has become a measurable competitive variable, not just a PR concern — which would reshape how labs frame their government relationships and product positioning.
- If the Anthropic "supply chain risk" designation is finalized this month, enterprise procurement teams will face forced model swaps mid-contract, creating a compliance scramble that benefits whichever labs land on the "approved" list and complicates long-term vendor roadmaps.
- If another major company announces AI-driven layoffs in the next two weeks, it would suggest Block was an early mover rather than an outlier — and force HR and finance teams to accelerate re-skilling and contractor strategies.
- If agentic systems like OpenClaw start appearing in Western retail, it means frontline service automation is arriving faster than most corporate workforce plans account for — and localized political and regulatory responses will determine how quickly those deployments scale.
That's Sunday. The cracks in the narratives are where the real information lives — who's walking out, who's pushing back, who's building quietly while the headlines argue. Pay attention to the gaps between what companies say AI can do and what the people doing the work report. That's where next month's stories are hiding.