The Lyceum: Macro & Markets Daily — Mar 13, 2026
Friday, March 13, 2026
The Big Picture
The economy grew at half the speed anyone expected, oil stayed above $100 for a second straight day, and Sam Altman told a room full of BlackRock clients that AI is breaking the labor-capital bargain and nobody has a fix. Stocks sold off across the board — the Dow and the S&P 500 each closed down 1.1%, and the Nasdaq fell 1.4% — and the message from every asset class was the same: AI is still a growth engine, but the neighborhood it's operating in just got a lot more dangerous.
Today's Stories
The Economy Grew at Half the Speed Anyone Expected — and That Was *Before* the Oil Shock
The Bureau of Economic Analysis revised Q4 2025 GDP sharply downward to 0.7% annualized — exactly half of the initial 1.4% reading and well below what Wall Street had penciled in. Layer on a core PCE inflation print (the Fed's preferred gauge, which strips out volatile food and energy prices) running at 3.1% year-over-year, and you have the textbook definition of a stagflation scare: growth stalling while prices refuse to cool.
The timing is brutal. Brent crude settled above $100 for a second consecutive session, with Strait of Hormuz headlines keeping a geopolitical risk premium baked into every barrel. The 10-year Treasury yield closed near 4.27% on the session, though desks saw it trade around 4.22% intraday — a sign that bond traders are wrestling with two contradictory impulses: recession fear pulling yields down, sticky inflation pushing them back up.
For the Fed, this is a policy trap. Growth says ease. Inflation says don't. And the FOMC meets Wednesday, March 18. Corporate AI R&D investment showed mid-single-digit resilience in the GDP details, but that alone won't offset consumer pullbacks if confidence keeps sliding — the preliminary University of Michigan Consumer Sentiment index for March came in at roughly 55.5, the lowest reading this year. The practical takeaway: you can still invest in AI-driven productivity, but your timing assumptions and macro hedges need to be considerably more conservative than they were a quarter ago.
Altman Says the Quiet Part Out Loud on AI and Labor
Speaking at a BlackRock Infrastructure Summit — not a TED Talk, not a podcast, but a room managing north of $10 trillion in assets — OpenAI CEO Sam Altman said that AI is "killing the balance" between labor and capital and admitted nobody has a policy answer. He framed capitalism itself as the problem, noting that centuries of economic organization built around managing scarcity now have to quickly learn the opposite. "If it's hard in many of our current jobs to outwork a GPU, then that changes," he said.
This isn't just rhetoric. A new March 2026 survey shows CEOs are making AI use mandatory for employees — tying it to performance reviews and promotions — even though many of those same executives report minimal hands-on use themselves. That mismatch creates shadow workflows where employees wire critical tasks through half-understood tools to satisfy policy, while managers still judge performance on legacy expectations. Meanwhile, a new generation of startups is deliberately avoiding large headcounts, preferring to invest capital into compute power — in India, entrepreneurs are attempting "zero person" startups that rely entirely on AI for software, legal work, and customer support.
With a leading AI figure framing the labor-displacement problem this starkly, regulators, unions, and tax policymakers may lean in harder. The online reaction was immediate and pointed, with observers warning that unchecked displacement could cascade into weaker consumer demand — a macroeconomic angle that connects directly to today's GDP revision.
The Federal-State AI Regulation War Just Escalated
If you were hoping for one clean national rulebook on AI, today was a setback. North Carolina Attorney General Jeff Jackson announced he's leading a bipartisan coalition of state AGs opposing federal efforts to preempt state AI consumer-protection laws. This follows a March 11 deadline when the Commerce Department was required to identify state AI laws the administration considers "burdensome" and the FTC had to issue a preemption policy statement. The legal machinery is now turning — and the states aren't folding.
The backstory matters: Congress recently removed a 10-year preemption provision from pending federal legislation, and 36 state attorneys general have warned federal lawmakers that federal preemption would have "disastrous consequences." Meanwhile, Europe is moving in the opposite direction — EU telecoms and digital ministers just agreed on a negotiating position to streamline AI Act compliance, cutting red tape and aligning oversight bodies. If Brussels turns compliance into repeatable infrastructure, vendors may build once for the EU and export that standard globally.
For companies, this means AI governance is about to look like data-privacy compliance — state by state, with differing disclosure, audit, and appeal requirements. Product, legal, and risk teams should plan for configurable policy layers, not a single federal standard.
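To make "configurable policy layers" concrete, here is a minimal, hypothetical sketch (all jurisdiction codes, policy keys, and values are invented for illustration, not drawn from any actual statute): a default ruleset is overlaid with per-jurisdiction requirements, so adding a new state rule is a data change rather than a code change.

```python
# Hypothetical policy layers keyed by jurisdiction. A "default" layer holds the
# baseline; jurisdiction entries hold only the overrides that differ from it.
POLICIES = {
    "default": {"disclosure": True, "audit_log": True, "human_appeal": False},
    "NC": {"human_appeal": True},                              # illustrative state overlay
    "EU": {"human_appeal": True, "risk_classification": True}, # illustrative EU overlay
}

def policy_for(jurisdiction: str) -> dict:
    """Merge the default layer with any jurisdiction-specific overrides."""
    merged = dict(POLICIES["default"])
    merged.update(POLICIES.get(jurisdiction, {}))
    return merged

print(policy_for("NC"))  # baseline plus the NC overlay
print(policy_for("TX"))  # no overlay defined, so the baseline applies
```

The design point is the shape, not the specific keys: a lookup-plus-merge layer lets product and legal teams ship one codebase while compliance requirements vary state by state.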
⚡ What Most People Missed
Meta is stripping encryption from Instagram DMs. Starting May 8, 2026, end-to-end encryption disappears from Instagram direct messages, with Meta citing low adoption and pointing users to WhatsApp. The unasked question: whether unencrypted DMs become training data for AI models. Meta hasn't addressed it.
The Pentagon is floating AI chatbots in the targeting loop. NBC News reported how LLMs could triage data and generate target packages for human operators, while Anthropic has filed suit challenging a contract clause requiring "any lawful use" of its models — a rare instance of an AI firm pushing back in court rather than quietly capitulating.
Americans overwhelmingly prefer banning AI to leaving it unregulated. A new March 2026 survey finds U.S. adults would, by roughly 4-to-1 margins, rather halt AI development entirely than continue without strong regulation — giving politicians bipartisan cover to be far more aggressive on AI rules than Silicon Valley expects.
Agent orchestration protocols are going mainstream before their security is mapped. Vendor blogs now treat the Model Context Protocol (MCP) as table-stakes infrastructure for plugging LLMs into enterprise systems, but recent preprints flag MCP-specific security and consistency risks that haven't been resolved — creating lock-in and concentrated attack surfaces simultaneously.
Third-party AI service fragility is a real operational risk. A Perplexity API quota failure this week — a mundane 401 error — is a reminder that enterprise AI pipelines chaining external models can stop cold when quotas, billing, or rate limits shift. Treat vendor quota governance as a first-order risk item, not an engineering footnote.
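That failure mode can be contained with a thin fallback layer in the pipeline. A minimal sketch, using only the standard library: the provider functions, the `QuotaError` type, and the `call_with_fallback` helper are all hypothetical stand-ins for real SDK calls that raise on a 401 or 429 response.

```python
import time

class QuotaError(Exception):
    """Stand-in for a provider rejecting a call on quota/auth grounds (HTTP 401/429)."""

def call_with_fallback(providers, prompt, retries=2, backoff=0.0):
    """Try each (name, callable) provider in order.

    Each callable takes the prompt and returns a response string, or raises
    QuotaError. On quota failure we retry with exponential backoff, then
    degrade to the next vendor instead of stopping the whole pipeline cold.
    """
    last_err = None
    for name, provider in providers:
        for attempt in range(retries + 1):
            try:
                return name, provider(prompt)
            except QuotaError as err:
                last_err = err
                if attempt < retries:
                    time.sleep(backoff * (2 ** attempt))  # backoff before retrying
        # quota still exhausted after retries: fall through to the next vendor
    raise RuntimeError(f"all providers exhausted: {last_err}")

# Simulated providers: the primary has hit its quota, like a 401 on a lapsed key.
def primary(prompt):
    raise QuotaError("401: quota exceeded")

def backup(prompt):
    return f"answer to {prompt!r}"

name, answer = call_with_fallback([("primary", primary), ("backup", backup)], "status?")
print(name, answer)
```

The point is organizational as much as technical: once fallback order, retry budgets, and backoff live in one place, "which vendor quota can take us down" becomes a reviewable configuration question instead of a 2 a.m. surprise.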
📅 What to Watch
- Wednesday, March 18 — FOMC rate decision (~2:00pm ET): If Powell signals the Fed can't ease despite a 0.7% Q4 GDP revision because inflation remains sticky, the yield curve could reprice and cloud providers and startups may delay or scale back compute-heavy AI capex plans, slowing deployment timelines.
- Wednesday, March 18 — February PPI (8:30am ET): With Brent above $100, this is the first hard look at how energy costs are feeding through to producers — a hotter-than-expected print would more directly compress margins for industrials and retailers and trigger earnings revisions in cyclical sectors.
- Wednesday, March 18 — Bank of Japan meets the same day as the Fed: A hawkish tilt from Tokyo while Powell is speaking could trigger yen appreciation and an unwind of carry trades, amplifying global bond volatility and pressuring exporter earnings.
- Tomorrow, 8:30am ET — Retail Sales for February: A miss would increase the odds of further downward revisions to Q1 GDP forecasts and force revisions to consumer-discretionary earnings estimates, making the Q4 revision look like a leading indicator rather than a lagging one.
- Continuous — Strait of Hormuz tanker traffic: Every transit or attack is a real-time oil-price signal; if traffic resumes, expect a material repricing lower; if attacks intensify, $110+ Brent becomes a distinct possibility that would force companies to rework input-cost models and hedging strategies.
The Closer
The economy growing at the speed of a parking garage elevator. Sam Altman telling the world's largest asset manager that GPUs are coming for your headcount. And 36 state attorneys general forming a regulatory Voltron while federal lawmakers watch from the bleachers.
Four-to-one, Americans would rather ban AI than let it run unsupervised — which means the most bullish person in the room is now the one arguing we should merely regulate it.
Stay sharp out there.
If someone you know is making AI bets without reading the macro fine print, forward this their way.