Power & Infrastructure Weekly — Mar 10, 2026
The Big Picture
AI data centers crashed into physical reality this week — the grid, the pipe, and the chiller all pushed back at once. OpenAI and Oracle shelved a flagship Texas expansion over money and power constraints, MISO fast-tracked billions in new transmission just to keep Midwest data center loads from blowing up reliability, and a new research paper put numbers to something water utilities have been feeling in their bones: hyperscale cooling can eat a city's entire spare water capacity. The through-line is now undeniable: big compute is a regulated infrastructure customer, not a clever load hiding behind "the cloud."
This Week's Stories
OpenAI and Oracle Hit Pause on Texas "Stargate" Expansion as Power, Financing Collide
If you wanted a case study in how fast AI ambitions can outrun grid and capital reality, the Abilene "Stargate" campus just volunteered. Oracle and OpenAI have scrapped plans to expand their flagship AI data center in Texas, shelving a major capacity boost at the already-under-construction site after negotiations over financing and OpenAI's evolving needs broke down. The broader Stargate initiative was pitched as a $500 billion, multi-gigawatt AI infrastructure build; this specific expansion would have pushed the Abilene campus well beyond its initial, roughly gigawatt-scale footprint.
But the more telling detail is why OpenAI walked. Reporting indicates OpenAI wants "liquid-ready" sites for next-gen GPUs with rack power densities in the 100–120 kW range, while Oracle's existing build strategy leaned on legacy air-cooled footprints. That mismatch — between the thermal architecture AI chips actually need and what's being built — favors developers offering direct-to-chip liquid readiness from day one. For grid planners and IPPs, the signal is clear: even the most deep-pocketed AI buyers can't will new capacity into existence on a 24-month timeline. Watch what OpenAI does next with alternative sites — if they pivot toward regions with firmer transmission and clearer regulatory paths, that's your map for the next wave of hyperscale power demand. Additional reporting from CNBC.
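Why 100–120 kW racks force the liquid-cooling question is simple thermodynamics: moving that much heat with air alone requires implausible airflow through a single rack. A minimal sketch, using assumed values (a 15 K supply-to-return air temperature rise and standard air density, not figures from the reporting):

```python
# Back-of-envelope: airflow needed to remove rack heat with air alone.
# Assumed values (not from the article): 15 K air delta-T, standard air.
RHO_AIR = 1.2      # kg/m^3, air density
CP_AIR = 1005.0    # J/(kg*K), specific heat of air
DELTA_T = 15.0     # K, supply-to-return air temperature rise
M3S_TO_CFM = 2118.88

def required_airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) needed to carry rack_kw of heat at DELTA_T rise."""
    m3_per_s = rack_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)
    return m3_per_s * M3S_TO_CFM

for kw in (10, 30, 120):
    print(f"{kw:>4} kW rack -> {required_airflow_cfm(kw):,.0f} CFM")
```

Under these assumptions, a legacy 10 kW rack needs on the order of 1,200 CFM, while a 120 kW rack needs roughly 14,000 CFM, which is why direct-to-chip liquid becomes the default at those densities.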
PJM Says Data Centers Could Add 35 GW of Peak Load in Five Years — and the Co-Location Rate Fight Lands March 18
PJM — the multi-state grid operator covering 65 million people — released a forecast showing data centers as the primary driver behind a projected 58% increase in summer peak demand by 2046, with roughly 35.1 GW of new load between 2026 and 2031. That's essentially the growth story for the whole region, compressed into a five-year window.
Meanwhile, the regulatory machinery is grinding toward a decision that will shape how all that load actually connects. Responses to PJM's initial brief on co-location transmission service terms — the rules governing how data centers can sit next to power plants and share infrastructure — are due March 18. FERC is proposing two new transmission service types for co-located load, and whoever wins the rate design argument sets the template for co-location economics across the PJM footprint. Commissioner Chang's concurrence made the political constraint explicit: captive ratepayers — small businesses and residential customers — must not bear the cost of data center-driven buildouts. A bipartisan group of Senators led by Ed Markey echoed the point, urging state regulators to protect ratepayers from speculative load growth. If you're modeling co-location project economics, the March 18 filings are primary source material.
One operational bright spot: PJM has also become the first RTO to implement ambient-adjusted dynamic line ratings under FERC Order 881, which can boost transmission throughput 10–40% when the weather cooperates — potentially delaying some new builds by squeezing more capacity from existing wires.
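That 10–40% band can be sanity-checked with a simplified conductor heat balance, where allowable current scales with the square root of the conductor-to-ambient temperature gap. The sketch below is illustrative only: it ignores wind, solar gain, and radiation (real ratings follow IEEE 738), and the conductor limit and static-rating ambient are assumed values.

```python
import math

# Simplified ambient-adjusted line rating sketch (illustrative only).
# Real ratings follow IEEE 738; this keeps only convective cooling.
T_CONDUCTOR_MAX = 75.0   # deg C, assumed thermal limit of the conductor
T_AMBIENT_STATIC = 40.0  # deg C, worst-case ambient behind the static rating

def dynamic_uplift(t_ambient_actual: float) -> float:
    """Fractional ampacity gain vs. the static rating at a cooler ambient."""
    ratio = math.sqrt((T_CONDUCTOR_MAX - t_ambient_actual) /
                      (T_CONDUCTOR_MAX - T_AMBIENT_STATIC))
    return ratio - 1.0

for t in (35, 25, 10):
    print(f"ambient {t:>3} C -> +{dynamic_uplift(t):.0%} capacity")
```

A 10 °C day yields roughly a third more capacity under these assumptions, which lands squarely inside the 10–40% range PJM cites.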
MISO Fast-Tracks $8.8B of Transmission — and Data Centers Get a One-Third Slice
If you still think data centers are just "load growth," MISO's new transmission plan treats them more like a new industry. The Midcontinent Independent System Operator released a draft 2026 Transmission Expansion Plan totaling $8.8 billion in grid investment, with roughly $3.1 billion — about 35% of the 2026 plan — explicitly tied to rapid data center interconnections in the Midwest. Three substations must be built more than five years ahead of prior long-range timelines to handle hyperscale load, forcing MISO to open a third "variance" review.
That's a structural shift: these projects are moving from "nice to have for renewables" to "must-have to keep AI clusters online." The coming battery boom adds another layer — the EIA expects about 24 GW of utility-scale battery storage to be added in 2026 alone, concentrating interconnection demand and transformer procurement in the same supply chains and permitting windows. And transformers are the unglamorous bottleneck nobody can wish away: Edison Electric Institute pegs large power transformer demand exceeding supply by 30% as of 2026, with waits stretching to 120 weeks. No transformers, no grid expansion — period.
For developers and regulators, the next few months of stakeholder fights over who carries that $3.1 billion on their bills will tell you how friendly each state is really willing to be to data center growth.
⚡ Meta's 6.6 GW Nuclear Play Signals Hyperscalers Are Done Waiting for the Grid
Meta announced plans to unlock up to 6.6 GW of nuclear energy — including small modular reactor partnerships — to power AI infrastructure. This follows Amazon's reported $650 million purchase of a 960 MW data center campus connected directly to Talen Energy's 2.5 GW Susquehanna nuclear plant in Pennsylvania (World Nuclear News). Together, these moves signal a fundamental procurement shift: hyperscalers are moving from market-sourced power purchase agreements to firm, dispatchable supply contracts that reduce exposure to hourly renewable variability and transmission constraints.
The nuclear play makes strategic sense — always-on generation that doesn't depend on weather or time of day — but it also creates new policy friction. Siting, water use near plants, and workforce impacts will draw regulatory scrutiny. Congressional proposals exploring DOE cost-share mechanisms for next-generation reactor overruns suggest Washington recognizes that SMRs won't scale without a financial backstop. If DOE moves quickly on grants, Meta's vision becomes materially more achievable.
AI Data Centers Could Demand a "New York City" of Water Capacity by 2030
Your next data center power study probably needs a hydraulic model stapled to it. A new preprint — "Small Bottle, Big Pipe" — finds that if 2024 cooling practices persist, U.S. data centers could require 697–1,451 million gallons per day of new water capacity by 2030, comparable to New York City's roughly 1,000 MGD average supply. This isn't about annual volume — it's about peak withdrawal, the water system's ability to deliver on the hottest, driest days, which directly competes with residential and industrial demand.
The authors value required new water infrastructure at $10–58 billion by 2030, concentrated in a handful of host communities. They also model the tradeoff everyone hand-waves: switching from evaporative to dry cooling cuts water use but pushes more load into the power system during summer peaks, when grids are already stressed. The proposed solution — co-planning water and power, including "Pipe Neutral" frameworks where operators fund equivalent local water capacity — could start showing up in host-community agreements alongside renewable PPAs.
The regulatory response is already materializing. Arizona's water department circulated a draft proposal on March 6, 2026, tightening water-use permits for large industrial users in Phoenix, explicitly targeting evaporative cooling at data centers. Colorado legislators introduced dueling bills — one taxing data centers to fund grid upgrades with clawbacks if water efficiency exceeds thresholds, the other offering tax breaks for low-water cooling. And Saudi Arabia announced a desalination plant producing 1 million cubic meters per day with integrated direct potable reuse, targeting 2028 — a reminder that in arid markets, desal could be a data center enabler if the energy tradeoffs are managed.
Because this is a preprint, not yet peer-reviewed, treat the exact numbers with caution. But the framing is dead on: cooling water capacity is becoming a hard siting constraint, just like transformer lead times.
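To put the paper's MGD range in operational terms, here is a crude translation from compute load to cooling water demand, assuming a water usage effectiveness of 1.8 L/kWh for evaporative cooling. That WUE is an assumed, commonly cited ballpark, not a figure from the preprint, and real sites vary widely.

```python
# Crude load-to-water translation (assumed WUE, not from the preprint).
L_PER_GAL = 3.78541
WUE_L_PER_KWH = 1.8  # assumed evaporative-cooling WUE; site values vary widely

def cooling_water_mgd(load_gw: float, wue: float = WUE_L_PER_KWH) -> float:
    """Million gallons/day of cooling water for a 24/7 load of load_gw GW."""
    litres_per_day = load_gw * 1e6 * 24 * wue   # kW * hours * L/kWh
    return litres_per_day / L_PER_GAL / 1e6

print(f"1 GW campus: ~{cooling_water_mgd(1):.1f} MGD")
# Implied evaporatively cooled load behind the paper's 697-1,451 MGD range:
for mgd in (697, 1451):
    print(f"{mgd} MGD <-> ~{mgd / cooling_water_mgd(1):.0f} GW of load")
```

At that assumed WUE, a single gigawatt-scale campus draws roughly 11 MGD, and the paper's national range implies tens of gigawatts of evaporatively cooled load, which is consistent with the demand forecasts elsewhere in this issue.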
New Products & Launches
Panasonic Liquid Cooling Systems for AI Data Centers — Panasonic entered the data center liquid cooling market in Europe with coolant distribution units (CDUs) rated to 1,200 kW and free-cooling chillers tuned for 120 kW/rack densities. Early orders for 400–800 kW CDUs are already visible in European colo parks, broadening CDU supply beyond specialist vendors and putting downward pressure on lead times.
Alfa Laval FreeWaterLoop — Swedish industrial heat exchanger giant Alfa Laval launched FreeWaterLoop, an external cooling system that harnesses natural water sources (rivers, lakes) for data center facility loops while minimizing net water consumption and returning water to its origin. When a 140-year-old industrial firm enters your market, the engineering problem has matured past the startup phase. Expect faster deployment in regions with cold surface water and established industrial permitting — but also new water withdrawal and thermal discharge permitting requirements.
Immersion Cooling Market Crosses $930M — A market analysis published March 10, 2026 pegs immersion cooling at $931 million in 2026, growing to nearly $5 billion by 2033, a 27.1% compound annual growth rate (CAGR). Single-phase immersion dominates revenue; two-phase is the fastest-growing segment. Hyperscalers are now specifying immersion readiness, fluid chemistry compliance, and CDU delivery timelines in initial RFPs — not as retrofits. The PFAS angle matters here too: rising demand for PFAS-free, bio-based dielectric fluids is creating both opportunity and regulatory headache as disclosure rules tighten.
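The analyst figures are internally consistent, for what it's worth. A quick compounding check, treating 2026 to 2033 as seven compounding years:

```python
def project(start_musd: float, cagr: float, years: int) -> float:
    """Compound a starting market size ($M) at cagr for the given years."""
    return start_musd * (1 + cagr) ** years

# $931M in 2026 at 27.1% CAGR over 2033 - 2026 = 7 years
v2033 = project(931, 0.271, 2033 - 2026)
print(f"Implied 2033 immersion cooling market: ~${v2033 / 1000:.1f}B")
```

The compounding lands at roughly $5.0 billion, matching the report's 2033 headline.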
⚡ What Most People Missed
- Germany's July 1 waste heat mandate is closer than most developers have admitted. Starting July 1, 2026, new data centers in Germany must demonstrate that at least 10% of their generated waste heat is utilized. Delivered heat costs are running 12–30 EUR/MWh as of 2026, undercutting gas boilers. But the 3:1 asset life mismatch between 30-year district heating networks and 7–10-year data center equipment cycles remains the single biggest structural barrier to scaling this model.
- FERC just ordered RTOs to break out "uplift" costs by cause and participant type — those side payments to generators outside main energy markets. This sounds esoteric, but it means no more hiding bad forecasting in a single mysterious line item. For big flexible loads doing demand response, this could expose who's driving redispatch costs — and make it easier for regulators to design large-load tariffs that don't cross-subsidize.
- The PFAS compliance risk is no longer primarily federal — it's a 50-state patchwork. While EPA's spring 2026 reissued rule will cover PFOA and PFOS at 4 ppt and drop four other chemicals, states are rushing to fill the void. North Carolina just opened public hearings on industrial PFAS monitoring rules with timelines that collide with planned AI campus energizations. Virginia's biosolids compromise — mandatory monthly PFOS/PFOA sampling starting January 2027 — is the template: monitor first, regulate harder later.
- The liquid cooling M&A wave tells you where the HVAC industry thinks the market is going. In 2025 alone: Eaton bought Boyd Thermal for $9.5 billion, Trane acquired Stellar Energy Digital, and Daikin purchased Chilldyne. When three industrial conglomerates pay premium multiples for liquid cooling IP in the same 12-month window, the transition from air-cooled to liquid-cooled infrastructure is no longer a question of if — it's a question of who owns the supply chain. Fewer independent CDU vendors means higher switching costs in the next procurement cycle.
- SEIA projects U.S. battery deployments could reach 70 GWh in 2026 — a jump that implies major purchases of battery thermal management, controls, and site cooling systems, hitting the same supply chains data centers rely on.
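The German waste-heat economics in the first bullet are easy to sanity-check. A sketch for a hypothetical 10 MW facility, assuming essentially all IT power exits as heat and full utilization of the mandated 10% (both simplifying assumptions; real reuse fractions and load factors will be lower):

```python
# Rough revenue from Germany's 10% waste-heat reuse rule for a
# hypothetical 10 MW facility. Illustrative assumptions throughout.
IT_LOAD_MW = 10.0
REUSE_FRACTION = 0.10          # mandated minimum from July 1, 2026
HOURS_PER_YEAR = 8760
PRICE_EUR_PER_MWH = (12, 30)   # delivered-heat price range cited above

heat_mw = IT_LOAD_MW * REUSE_FRACTION  # nearly all IT power exits as heat
for price in PRICE_EUR_PER_MWH:
    revenue = heat_mw * HOURS_PER_YEAR * price
    print(f"at {price} EUR/MWh: ~{revenue:,.0f} EUR/yr of delivered heat")
```

Roughly 100,000 to 260,000 EUR a year per 10 MW is real money but not transformative, which supports the bullet's point: the binding constraint is the asset-life mismatch with district heating networks, not the heat economics themselves.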
📅 What to Watch
- If MISO's March 12 board discussions shift cost allocation for the $3.1B of data center-driven transmission to host-state ratepayers, it will set the political ceiling for how much grid pain states will absorb for AI growth — and could trigger legislative backlash in Indiana and Iowa that forces expedited hearings or compensation mechanisms.
- If OpenAI announces alternative capacity deals after the Abilene pause, the choice of partner and region will reveal whether power delivery, financing, or control over liquid-cooled GPU deployments is the binding constraint for hyperscalers in the near term.
- If a major hyperscaler publicly adopts a "Pipe Neutral" water-capacity commitment in a U.S. data center deal, it marks the moment water infrastructure gets explicitly priced into cloud contracts — changing host-community negotiations, permitting conditions, and the structure of vendor warranties.
- If early EU national implementation measures under F-Gas Regulation 2024/573 accelerate A2L refrigerant timelines, expect a ripple of chiller and VRF product refreshes — and potential refrigerant price swings — that could force North American projects to re-evaluate procurement and commissioning schedules by 2027.
- If the D.C. Circuit decides to sever rather than consolidate the AWWA v. EPA PFAS challenges, utilities could face contradictory compliance timelines across circuits — making capital planning nearly impossible for multi-state systems.
The infrastructure stack doesn't care about your product roadmap. It cares about transformers, pipe capacity, and whether someone filed the permit. Build accordingly.