The Lyceum: AI Weekly — Mar 16, 2026
Week of March 16, 2026
The Big Picture
AI left the promise phase this week and entered the consequences phase. xAI is losing the people who built it while racing toward an IPO. BuzzFeed is dead because it bet AI could replace editorial judgment. Anthropic made the most important capability upgrade of the year free. And a Bloomberg survey revealed that most companies blaming AI for layoffs are just using it as cover. The theme: the gap between what AI actually does and what people claim it does is becoming the story.
This Week's Stories
xAI Is Coming Apart at the Seams — With an IPO on the Horizon
When a founder publicly admits his own startup "was not built right first time around," you're past spin territory.
Elon Musk triggered another wave of cuts at xAI this week, pushing out more co-founders amid his frustration with the company's underperforming coding division — all while preparing for an IPO that, following xAI's merger with SpaceX, could rank among the largest ever. According to the Financial Times, co-founder Guodong Zhang was relieved of his primary duties leading the Imagine team and told colleagues he was leaving after being blamed for coding product issues. Musk brought in "fixers" from SpaceX and Tesla to audit operations.
The departures leave the three-year-old company with only two of its original twelve co-founders — a stunning attrition rate for a company valued at $250 billion weeks ago. "We're currently behind in coding," Musk acknowledged at a conference last week. Staff have complained that the upheaval is destroying morale, and researchers continue to leave amid complaints of burnout from Musk's "extremely hardcore" demands or after receiving better offers from rivals.
The timing is brutal: a company shedding its technical brain trust while racing toward a historic public offering is a very specific kind of risk. Meanwhile, xAI continues spending billions on data center infrastructure around Memphis and just scored a permit in Mississippi for one of the region's largest power plants — natural gas turbines to feed its GPU clusters. The infrastructure buildout marches on even as the people who'd use it walk out the door.
Anthropic Just Made Your Entire Codebase Fit in One Prompt — for Free
For years, AI researchers have promised you'd be able to feed a model your entire document archive, codebase, or legal discovery and let it reason across the whole thing at once. This week, that stopped being expensive.
On March 13, Anthropic made its full one-million-token context window — "context window" meaning the amount of text a model can consider in a single conversation — generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing. No multiplier. A 900,000-token request costs the same per-token rate as a 9,000-token one. To put that in perspective: a million tokens is roughly the entire Harry Potter series, twice, with room to spare. Or a full corporate codebase. Or a year of financial filings.
The pricing matters as much as the capability. Competitor Gemini 3.1 Pro's pricing rises beyond 200,000 tokens; GPT-5.4's rises beyond 272,000. That makes Anthropic the only major provider where "give the AI everything" doesn't come with a surprise bill at month's end.
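To make the economics concrete, here is a minimal sketch of flat versus tiered per-token billing. The rates and the 200,000-token threshold below are illustrative placeholders, not any provider's actual prices — the point is only to show how a surcharge tier changes the bill for a very large request.

```python
# Hypothetical illustration: flat vs. tiered per-token input pricing.
# All rates are made-up placeholders, not real provider prices.

def flat_cost(tokens: int, rate_per_mtok: float) -> float:
    """Flat pricing: the same per-token rate regardless of request size."""
    return tokens / 1_000_000 * rate_per_mtok

def tiered_cost(tokens: int, base_rate: float, surcharge_rate: float,
                threshold: int = 200_000) -> float:
    """Tiered pricing: tokens beyond the threshold bill at a higher rate."""
    base = min(tokens, threshold) / 1_000_000 * base_rate
    extra = max(tokens - threshold, 0) / 1_000_000 * surcharge_rate
    return base + extra

request = 900_000  # one large "whole codebase" request
print(f"flat:   ${flat_cost(request, 3.00):.2f}")    # flat:   $2.70
print(f"tiered: ${tiered_cost(request, 3.00, 6.00):.2f}")  # tiered: $4.80
```

With these placeholder numbers, the same 900,000-token request costs nearly twice as much under the tiered scheme — which is exactly the "surprise bill" dynamic flat pricing removes.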
Two signals reinforce the move. Ramp, the corporate finance platform, published anonymized spend data showing Claude is already edging into OpenAI's enterprise territory, particularly for internal copilots and analysis tools — suggesting this is a market play to lock in high-volume customers, not just a product update. And Anthropic's own pricing announcement makes the corporate intent explicit.
The honest caveat: bigger context windows don't automatically mean smarter answers. Models can still lose track of things buried in the middle of enormous inputs. But when predictable token economics start winning procurement decisions, rivals have to respond on cost, not just capability.
BuzzFeed Is Done. The AI Content Bet Failed.
Three years ago, BuzzFeed CEO Jonah Peretti bet the company on AI-generated content, promising it would "replace the majority of static content" — announced one month after shuttering the Pulitzer Prize–winning BuzzFeed News. This week, the result came in: BuzzFeed is shutting down entirely.
The autopsy matters more than the death certificate. BuzzFeed's experiment was one of the first high-profile attempts to use AI to substitute for editorial quality rather than augment it — flooding the zone with cheap, AI-generated listicles hoping search traffic would follow. For a while, it seemed to work. Then Google's algorithm updates hit AI-generated content hard, traffic collapsed, and the audience that remained could tell the difference. The company reported a net loss of $57.3 million in 2025. Bloomberg's feature traces the editorial and commercial missteps in detail, showing how the pivot hollowed the newsroom and failed to protect brand trust.
The lesson isn't that AI can't help media companies — it's that AI-generated content without editorial judgment doesn't hold an audience. Publishers who've fared better used AI for production tasks (transcription, SEO tagging, image resizing) while protecting the human reporting and voice readers actually come for.
The timing is pointed: this collapse lands the same week a Bloomberg survey revealed hiring managers have been blaming layoffs on AI because "it plays better" with the public.
The Hiring Manager Survey That Reframes the AI Jobs Debate
Here is a number that should make you sit up straight.
A Bloomberg survey of 1,000 hiring managers, published March 15, 2026, found that 59% of respondents say they stress AI's role in layoffs to the public — even when the actual driver was something more mundane, like slowing revenue or a post-pandemic headcount correction. "AI did it" has become a convenient alibi for layoffs that would have happened anyway.
This doesn't mean AI-driven displacement isn't real — it clearly is in certain roles. But it means the headline numbers are almost certainly distorted. When companies loudly announce AI is replacing workers, it's worth asking: was this reorganization already planned? Is AI the cause, or the cover?
The murkiness cuts both ways. It may mean displacement is overstated in some quarters — but it also means workers losing jobs to genuine automation aren't being counted properly when executives attribute cuts to "strategy." The data we have on AI's workforce impact is noisier than anyone wants to admit, and this survey is the most direct evidence yet of why.
A second finding buried in the same survey got less attention: a significant share of managers said they understate AI's role when speaking to workers directly — because telling someone a machine took their job feels crueler than "restructuring." The result is a workforce that simultaneously over-attributes some losses to AI and has no idea it caused others.
Humanoid Robots Got Real Jobs and Real Price Tags
The demo era for humanoid robots is ending. The commerce era is beginning.
At Automation World 2026 in Seoul this week, humanoid and industrial robots from Unitree, Fourier Intelligence, AGIBOT, Leju Robotics, and Huawei moved well past staged demos. These machines navigated stairs, gripped delicate objects, and performed factory-floor tasks in uncontrolled environments — the conditions that have historically broken every impressive demo. Manufacturers are now publishing price sheets, with some models entering the $30,000–$50,000 range — still expensive, but approaching the cost of a single full-time worker over two to three years once you factor in benefits and turnover.
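The "two to three years" comparison can be sketched as a back-of-envelope breakeven calculation. Every figure below is an illustrative assumption — a mid-range sticker price from the article's range, plus a hypothetical factory-floor wage with benefits and turnover loading — not data from any manufacturer.

```python
# Back-of-envelope: how many years of fully loaded labor cost a humanoid
# robot's sticker price represents. All figures are illustrative assumptions.

def loaded_annual_cost(base_wage: float, benefits_rate: float = 0.30,
                       turnover_overhead: float = 3_000) -> float:
    """Wage plus benefits plus an annual slice of turnover/retraining cost."""
    return base_wage * (1 + benefits_rate) + turnover_overhead

def breakeven_years(robot_price: float, base_wage: float) -> float:
    """Years of avoided labor cost needed to cover the robot's price."""
    return robot_price / loaded_annual_cost(base_wage)

# A $40k mid-range unit against a hypothetical ~$10k factory-floor base wage:
print(f"{breakeven_years(40_000, 10_000):.1f} years")  # 2.5 years
```

The sketch ignores maintenance, downtime, and financing, so it flatters the robot — but it shows why a price tag in the tens of thousands, rather than hundreds, changes the procurement conversation.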
The research side is converging fast. A new preprint shows a humanoid robot learning to sustain tennis rallies after roughly five hours of human motion-capture data plus simulation. A separate UC Berkeley paper called HITTER demonstrated a humanoid playing table tennis with a 92.3% return rate — 106 consecutive shots — on an off-the-shelf Unitree G1 robot, using no specialized hardware. Two independent groups, same week, both showing general-purpose humanoid bodies learning fast-reflex physical skills from minimal data. The amount of data required to teach a robot a complex physical skill just dropped by an order of magnitude.
The China angle is significant and under-covered: AGIBOT and Leju are Chinese companies deploying in Chinese factories at scale, with government procurement support. The robotics race has a geography that the humanoid hype cycle — largely framed around Boston Dynamics and Figure — tends to obscure.
New Products & Launches
Perplexity Personal Computer — Perplexity launched a product that turns a dedicated Mac mini into a 24/7 AI agent wired into your local apps and cloud accounts. It orchestrates roughly 20 different AI models behind the scenes, can watch your inbox, manipulate files, and act on your behalf while you sleep, with kill switches and audit logs built in. At around $200/month plus the hardware, it's priced like a part-time assistant, not an app — and it's the first serious attempt to ship "always-on personal AI" as a supported consumer product.
OmniCoder-9B — A small team called Tesslate released an open-source coding model fine-tuned on 425,000 "agentic trajectories" — recordings of much larger frontier models reasoning through programming tasks — built on Alibaba's Qwen 3.5-9B architecture. Early community reports suggest it punches well above its size on agent-style coding benchmarks, reinforcing that not every useful developer tool requires renting cluster-scale compute.
Microsoft Copilot Cowork — Microsoft announced an evolution of its Copilot assistant designed to automate multi-step workflows across its 365 apps, built in collaboration with Anthropic. From a single request, it can pull financial data into a presentation and then schedule the follow-up meeting — moving the "AI agent" concept from standalone apps into the software most offices already use.
⚡ What Most People Missed
- Karpathy mapped every U.S. job for AI risk — then removed the repo. Andrej Karpathy published an interactive study scoring every U.S. occupation for AI exposure: 42% of jobs scored 7+ on a 10-point scale, representing 59.9 million workers and $3.7 trillion in wages. Higher education correlates with more exposure, not less. The repo was later removed — likely because the scores were LLM-generated (using GPT to score how replaceable jobs are by GPT is methodologically circular), and the viral spread stripped that caveat away. The website still works.
- "Sloppypasta" is now a word, and it's naming something real. A website called Stop Sloppypasta hit the Hacker News front page this week. The concept: verbatim LLM output copy-pasted at someone, unread and unrefined, is rude because it shifts the verification burden to the recipient. The first AI etiquette norm of the workplace era is forming in real time, arriving as a meme before it arrives as policy.
- ByteDance found the export-control loophole. The TikTok parent company is accessing roughly 36,000 Nvidia Blackwell GPUs in Malaysia through a cloud partner called Aolani — about $2.5 billion worth of hardware that technically complies with U.S. rules because the racks sit outside China. Nvidia confirmed no objections. As the Wall Street Journal reports, this is part of a broader playbook. Export controls are shifting from "who owns the chips" to "where the racks are bolted to the floor."
- AI's power problem became a political problem. Progressives in Congress and some state policymakers have proposed moratoria on new AI data centers, citing electricity and water consumption. States like Florida are advancing their own oversight measures. And landowners in the Midwest are organizing against the high-voltage transmission lines needed to connect new facilities — describing proposed 240-foot metal towers bisecting family farms. AI's growth is hitting hard physical limits that can slow things down as effectively as any chip shortage.
📅 What to Watch
- Nvidia GTC kicks off Monday (March 17) — If Nvidia announces a new inference chip priced below what Google and Amazon charge for their custom silicon, it reshapes the economics of running AI at scale for every company that doesn't have its own chip team. Preview here.
- If major cloud providers publicly back or oppose a data-center moratorium, it signals whether the industry sees local politics as a real brake on growth — or just noise to lobby away.
- The state AI legislation avalanche is accelerating — Colorado, Texas, Illinois, Washington, Virginia, and Utah are advancing AI-related proposals in their legislatures. If the White House publishes its expected list of "onerous" state measures, the federal preemption fight goes from theoretical to immediate.
- If the Tennessee grandmother wrongful-arrest case reaches formal litigation, it becomes the test case defining courtroom rules for AI-generated evidence — and could accelerate facial recognition bans in multiple states.
- If Perplexity announces enterprise adopters for Personal Computer, it means companies are ready to give always-on agents deep access to internal systems — a trust threshold the industry hasn't crossed yet.
The Closer
A grandmother in jail because an algorithm can't tell faces apart; a dead media company because an algorithm can't tell stories apart; a robotics lab where a humanoid learned tennis in an afternoon because, apparently, that algorithm works fine.
Somewhere, a hiring manager is telling a reporter that AI eliminated 200 jobs, then turning around and telling the remaining employees it was "strategic restructuring" — and both audiences believe him, which is the most human thing AI has produced all year.
Until next week.
If someone you know would get something out of this, send it their way.