Tech Policy & Regulation Weekly — Mar 12, 2026
Week of March 12, 2026
The Big Picture
The era of soft AI guidance is over. This week the FTC mapped its century-old consumer protection statute directly onto algorithmic discrimination and AI marketing hype, BIS expanded semiconductor export controls with new classification numbers and broader foreign-product reach, and the EU moved its AI content-labeling code one draft closer to binding obligation — with an August deadline that is now uncomfortably close. The common thread: regulators across every sector are converting principles into plumbing, and the compliance work that matters most right now is the boring kind — mapping supply chains, documenting model decisions, and building audit trails that will survive scrutiny from agencies that have stopped asking nicely.
This Week's Stories
FTC Draws a Line: "Unfair" AI Is Now an Enforcement Target Under Section 5
If you were hoping federal AI policy would remain safely aspirational, the Federal Trade Commission just closed that window. On March 11, 2026, the FTC issued a Policy Statement explaining exactly how it will use Section 5 of the FTC Act — its core "unfair or deceptive acts or practices" authority — against AI systems that discriminate, deceive, or overpromise. The statement maps familiar FTC concepts (deception, unfairness, dark patterns) onto concrete AI use cases: algorithmic discrimination in lending and hiring, AI-generated scams, and the exaggerated "AI-powered" marketing claims the agency has been picking off in settlements since last year.
This isn't a new statute. It's something potentially more consequential: an enforcement roadmap built on authority the FTC already has. The statement stresses expectations for testing, documentation, and ongoing monitoring — not a one-time compliance check, but a continuous obligation. The agency also announced heightened antitrust scrutiny of mergers involving AI technologies, signaling that even smaller acquisitions in nascent AI markets will get longer looks for data-consolidation and model-entrenchment risks. For companies marketing AI products or acquiring AI startups, the practical takeaway is blunt: the FTC now has a published playbook for how it plans to come after you, and enforcement is expected to ramp through 2027.
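What does "testing and ongoing monitoring" look like in practice for the discrimination piece? One common screening heuristic, not anything the FTC statement prescribes, is the four-fifths rule from EEOC guidance: compare each group's selection rate to the best-performing group's rate and flag ratios below 0.8 for deeper review. A minimal sketch in Python, with made-up group labels and outcomes:

    from collections import defaultdict

    def selection_rates(decisions):
        # decisions: iterable of (group, selected) pairs, e.g. loan approvals or interview invites
        totals, selected = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            selected[group] += int(ok)
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_ratios(decisions):
        # Ratio of each group's selection rate to the highest group's rate.
        # The EEOC four-fifths rule treats ratios below 0.8 as a flag for review.
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Toy data: hypothetical outcomes from an automated screening model.
    sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
              + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    print(adverse_impact_ratios(sample))  # group_b comes out near 0.62 -> flag for review

A ratio below 0.8 isn't a legal finding of discrimination; it's the kind of documented, repeatable check that turns "we monitor our model" into something you can actually show an investigator.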
BIS Tightens Semiconductor Export Controls — Again — With Six Notable Changes
Export control compliance teams should cancel their weekend plans. The Bureau of Industry and Security published an interim final rule this month that expands the Foreign Direct Product rule (which extends U.S. jurisdiction to chips made abroad using American technology), adds new Export Control Classification Numbers for cutting-edge chip technology, and tightens restrictions on software and equipment shipments to certain destinations. The rule was issued on a national-security basis with an expedited effective period and limited notice-and-comment, which means the clock is already running.
Separately, BIS dropped four additional updates layering onto the AI Diffusion Rule architecture, further tightening controls on advanced computing integrated circuits, model weights, and technology transfer governance. A detailed analysis from Vinson & Elkins walks through the expanded catch-all provisions and license requirements.
The updates hit hardest for hyperscalers with non-U.S. data centers, fabless chip designers using third-country foundries, and anyone holding Validated End-User designations. China also tightened export rules for rare earth elements and critical materials (gallium, germanium) used in semiconductor production — constraining an already stressed upstream supply chain. The practical result: every cross-border chip and cloud contract should be treated as at-risk until export counsel signs off.
Commerce and FTC Hit Their Deadline on State AI Law Preemption — Now What?
March 11, 2026 was the deadline, set by executive order, for the Commerce Department and FTC to publish evaluations identifying which state AI laws conflict with federal policy and merit legal challenge. The deliverables are designed to function as a hit list for the Administration's AI Litigation Task Force, which the Attorney General has been directed to use for court challenges to state laws.
Here's the critical nuance: neither document can actually invalidate state law. Only Congress or the courts can do that. What the reports do is identify targets and legal theories — the map that DOJ litigation teams will use to decide where to force fights. For companies with compliance programs built around Colorado's algorithmic-bias rules, California's automated-decision frameworks, or Illinois's AI-in-hiring requirements, the risk isn't that obligations vanish overnight. It's that a patchwork of state rules gets replaced by a more centralized federal model with fewer private rights of action but more agency leverage. The practical advice hasn't changed: maintain 50-state readiness, but prioritize controls that genuinely improve AI quality and fairness — those investments travel regardless of which jurisdiction's rules survive.
Meanwhile, Utah's legislature sent nine AI-related bills to the governor's desk before its session ended March 6, 2026, covering everything from AI in schools to deepfake protections to healthcare applications. This isn't a blue-state phenomenon; state AI lawmaking is playing out everywhere, squarely in the tension between states' efforts to protect their citizens and the White House's push for a unified national standard.
Brussels Publishes Second Draft of Its AI Content-Labeling Code — August Deadline Now Uncomfortably Close
The EU's AI Office published the second draft of its Code of Practice on marking and labeling AI-generated content on March 5, 2026. The code specifies how providers must disclose AI-generated text, images, audio, and video to end users under the AI Act's general-purpose AI obligations. New in this draft: clearer ties to Article 50 (the synthetic content rule), emphasis on open standards for interoperability, a proposal for a recognizable EU-wide icon to label AI content, and practical guidance on watermark resilience across transformations.
The code is technically voluntary at this stage, but it's a strong preview of enforceable expectations once the relevant obligations become fully applicable on August 2, 2026. Comments are due March 30, 2026; the final code is targeted for early June. For companies surfacing generated content to EU users, treat this as the operational checklist you'll be measured against. The EU Parliament's briefing on the companion "digital omnibus" package confirms that the enforcement architecture — registration, sector regulator roles, interlocks with the EU AI Office — is being locked in through what looks like housekeeping legislation but is actually where compliance costs get defined.
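For engineering teams wondering what "machine-readable disclosure" means at the file level, here is a minimal, hypothetical sketch in Python using Pillow: it stamps a generated image with plain-text provenance fields in PNG metadata. The field names and values are illustrative, not anything from the draft code, and this kind of metadata is exactly the fragile marking the watermark-resilience guidance is meant to go beyond, since a re-encode or screenshot strips it entirely.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Stand-in for an AI-generated image; in practice this comes from your model.
    img = Image.new("RGB", (256, 256), color=(30, 30, 30))

    # Attach plain-text provenance labels as PNG tEXt chunks.
    # Field names here are illustrative, not the EU draft's specification.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical model identifier
    meta.add_text("disclosure", "This image was generated by an AI system.")
    img.save("generated_labeled.png", pnginfo=meta)

    # Downstream check: read the labels back out before surfacing the file to users.
    labeled = Image.open("generated_labeled.png")
    print(labeled.text.get("ai_generated"), labeled.text.get("disclosure"))

Robust compliance will likely pair visible labels (the proposed EU-wide icon), embedded metadata, and watermarks built to survive cropping, compression, and format shifts, which is why the draft leans so heavily on open standards for interoperability.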
A Federal Judge Just Ruled That Your AI Prompts Aren't Privileged
In a matter of first impression, U.S. District Judge Jed Rakoff of the Southern District of New York ruled that exchanges with a publicly available generative AI platform in connection with pending litigation are not protected by attorney-client privilege or work product doctrine. The court reasoned that the communications did not involve an attorney-client relationship, were not confidential, were not made for the purpose of obtaining legal advice, and did not reflect an attorney's trial strategy.
Read that again. Every AI-assisted legal analysis your in-house team has done using consumer-grade tools — ChatGPT, Claude, Gemini via browser — in connection with active or anticipated litigation may be discoverable. The ruling's implications extend to state attorney general investigations and regulatory inquiries. This is a district court ruling, not circuit precedent, and it will certainly be appealed — but as a first-impression decision from Judge Rakoff, it carries significant persuasive weight. Organizations using enterprise deployments with contractual data isolation may be able to distinguish their situation, but that argument hasn't been tested yet. The immediate action item: document the difference between your consumer and enterprise AI access, and update your litigation-hold protocols to address AI-generated work product.
⚡ What Most People Missed
The UK must publish two AI/copyright reports by March 18 under the Data (Use and Access) Act 2025. A December 2025 progress report noted that a majority of consultation respondents supported requiring licenses in all cases for AI training data — which, if adopted, would be industry-reshaping. Every AI company training on content that touches UK rightsholders should be watching closely; the outcome sets the table for legislation the government has already committed to.
Ireland just showed its AI Act enforcement hand. A March 4, 2026 briefing note from Ireland's Department of Enterprise sketches out who will supervise high-risk AI systems, how penalties and inspections will work, and how Dublin will participate in EU-wide coordination. The signal: Ireland intends to be a serious cop on AI, not just a friendly HQ jurisdiction — continuing the GDPR pattern that caught many U.S. platforms off guard.
Grammarly got sued for a theory nobody saw coming. A class action filed March 11, 2026 alleges Grammarly used the names and identities of Stephen King, Neil deGrasse Tyson, and hundreds of other writers in its "Expert Review" AI feature without consent. This isn't a training-data copyright case; it's a right-of-publicity claim, opening a new legal front that shifts scrutiny from how models are trained to how they're presented to users.
Italy's telecom regulator is stress-testing AI explainability requirements ahead of broader EU deadlines. AGCOM's consultation on algorithmic transparency for digital service providers has comments due March 20, 2026; if its approach sticks, other national regulators will copy the language quickly.
📅 What to Watch
- If the UK's March 18 copyright reports recommend mandatory licensing for AI training data, expect immediate relocation and licensing strategy reviews at every major AI lab — and a transatlantic divergence from the U.S. fair-use approach that could fragment model development.
- If FERC acts on PJM's capacity collar and interconnection reform filings, it will reveal whether the Commission is willing to bend market rules to accommodate AI-driven load growth — or whether Big Tech's "Ratepayer Protection Pledge" was political theater.
- If the EU AI Act Code of Practice feedback window (closing March 30, 2026) draws substantive platform comments backing open watermarking standards, that consensus will effectively lock in technical requirements months before the August compliance date — making non-participation a de facto opt-out of shaping the rules you'll live under.
- If early discovery motions in Mobley v. Workday demand access to model training logs and bias-testing documentation, they'll set the practical floor for what algorithmic transparency courts will require — a precedent that would ripple from HR tech into every vertical using automated decision-making.
- If BIS publishes entity list or Foreign Direct Product (FDP) expansions this week, it could trigger immediate export-license exposure for chip and cloud vendors who assumed their current classifications were stable.
A week where the FTC published an enforcement manual for suing AI companies, a federal judge told lawyers their ChatGPT sessions are fair game in discovery, and Big Tech signed a pinky-promise not to crash the electrical grid. Somewhere in Italy, a telecom regulator is quietly writing explainability rules that will be copy-pasted across Europe before anyone notices — which is, come to think of it, exactly how the GDPR happened too.
Until next week.