The Lyceum: AI Daily — Mar 15, 2026
Sunday, March 15, 2026
The Big Picture
The game industry just published its most damning survey in years, BuzzFeed's three-year AI-content experiment ended in a going-concern warning, and Paul Conyngham in Sydney used ChatGPT and AlphaFold to shrink his dog's cancer tumor by 75% over two months. Today's theme: AI is simultaneously the excuse companies use to cut people, the tool that hollows out the work those people used to do, and — in the right hands — something genuinely miraculous. Holding all three of those truths at once is the job now.
Today's Stories
The Game Industry's AI Reckoning Lands at GDC — And the Numbers Are Ugly
The GDC State of the Game Industry report, released in San Francisco on March 15, 2026, and covered by Bloomberg's Jason Schreier, delivers data worse than the vibes suggested. Among more than 2,300 professionals surveyed: 28% have been laid off in the past two years (33% among U.S. developers), 52% believe generative AI is actively harming the industry, and — the number that should stop you cold — 87% of educators say their students won't find jobs after graduating.
The attitude split inside studios is revealing. 36% of professionals use AI tools daily, mostly for research and brainstorming. But 64% of artists reject AI outright; so do 59% of programmers. Management and marketing adopt it fastest. The people making the things and the people managing the things are living in different industries now.
Gaming matters here because it's the canary: creative work, technical work, and a brutal labor market compressed into one sector. When nearly nine in ten educators can't tell students the degree will lead to a job, that's not a warning — it's a verdict.
BuzzFeed Is Done. The Three-Year AI Autopsy Begins.
In January 2023, BuzzFeed CEO Jonah Peretti bet the company on AI-generated content, promising it would "replace the majority of static content" — one month after shuttering the Pulitzer Prize–winning BuzzFeed News division. Three years later: a $57.3 million net loss for 2025, a formal "substantial doubt" going-concern warning to investors, and a Reddit thread with 750+ upvotes captioned "the readers know there's no one home."
The fascinating wrinkle: on March 13, BuzzFeed launched Branch Office, a spinoff building AI-powered social apps. Peretti's new line: "Software is the new content." Whether that's genuine evolution or a lifeboat is debatable. What isn't debatable is the lesson: AI as a replacement for having a point of view kills the thing it's supposed to scale. Publishers who used AI to do more interesting things faster are fine. Publishers who used it to do more boring things cheaper are filing going-concern notices.
A DIY Cancer Vaccine Shrank a Dog's Tumor by 75% — Built with ChatGPT and AlphaFold
Sydney tech entrepreneur Paul Conyngham — no biomedical degree — used ChatGPT to plan DNA sequencing for his dying dog Rosie, Google DeepMind's AlphaFold to predict the resulting protein structures, and labs at the University of New South Wales and the University of Queensland to produce a custom mRNA vaccine. Cost: roughly $3,000. Timeline: two months. Result: Rosie's mast cell tumor shrank by 75%. Scientists involved call it the first personalized cancer vaccine ever designed for a dog.
The honest framing: ChatGPT was a research assistant, not a scientist. AlphaFold predicted structures; humans made the critical decisions. But what AI compressed was the pipeline — institutional resources and months of manual work collapsed into something one determined person could initiate. Scientists involved described the effort as "gobsmacking" and asked why similar pipelines aren't being rolled out to humans. That question — and the regulatory vacuum around DIY biotech it implies — is where this story goes next.
Hiring Managers Admit They Blame AI for Layoffs Because "It Plays Better"
A Bloomberg survey of 1,000 hiring managers published March 15, 2026, contains a number that reframes the entire AI-and-jobs discourse. 59% of respondents say they stress AI's role in layoffs or hiring freezes "because it plays better" — while only 9% say AI has fully replaced roles.
Read that gap again. The majority of managers are overstating AI's role in cutting jobs — presenting it as a simple efficiency story for shareholders, boards, and press cycles. Companies cite "AI efficiency" to justify headcount reductions amid cost pressure and post-pandemic overcorrection. Workers believe AI took their jobs. Investors reward the framing. The narrative compounds.
Nine percent is not zero. But the distance between "AI replaced 9% of roles" and "we told everyone AI replaced them" is where much of the real damage to worker confidence is happening right now. If you're trying to figure out whether your job is actually at risk, this is the most useful number published this month.
Iran Claims Underwater Drone Strike on Tankers — and AI Is in the Targeting Conversation
Two tankers — one US-owned, one Greek-owned — are ablaze in Iraqi waters after Iran claimed responsibility for an underwater drone strike. Lloyd's List is reporting the incident live. Whether this specific strike was AI-guided isn't confirmed by independent sources; Iran's claim should be treated as exactly that. But the country has been open about integrating machine-vision and autonomous guidance into its naval assets, and the Pentagon confirmed days ago that AI is helping plan strikes in the broader conflict.
The immediate consequence is commercial: Lloyd's of London war-risk premiums for the region were already elevated. A confirmed autonomous drone strike on commercial vessels in the world's most critical energy chokepoint will move those numbers. The shipping industry is now in the position aviation faced with GPS spoofing — you can't see the threat coming until it's already on top of you.
⚡ What Most People Missed
- ChatGPT can now draft your emails and create Google Docs directly. OpenAI quietly updated Google and Microsoft integrations to support write actions — drafting emails, creating spreadsheets, scheduling meetings from the chat window. Actions are off by default and require workspace admin approval. ChatGPT stops being a tab you copy-paste from and becomes a participant in your workflow.
- Hume AI open-sourced a speech model that reportedly produced zero hallucinations in testing. TADA, released under the MIT license, processes text and audio in sync. The zero-hallucination claim is self-reported and needs independent verification, but if it holds, this matters enormously for voice AI in healthcare, legal, and any domain where audio accuracy is non-negotiable.
- The community has Qwen3.5-397B running at 282 tokens per second on consumer hardware — up from 55. Developers on r/LocalLLaMA pushed Alibaba's 397-billion-parameter open model to genuinely fast inference using four consumer Blackwell GPUs. If these configs spread, frontier-class AI becomes a serious self-hosted option for organizations that can't send data to a cloud.
- arXiv — the preprint server where virtually all AI research appears first — is separating from Cornell University. With Simons Foundation support and a CEO search at roughly $300,000/year, this is infrastructure professionalization. arXiv's governance decisions affect every researcher and developer tracking the field.
- A new spec language for talking to LLMs is climbing Hacker News. CodeSpeak, from Kotlin creator Andrey Breslav, sits between English (too ambiguous) and code (too rigid) to give AI agents formal, reliable instructions. Think SQL for databases, but for model execution. Early, but the source is credible and the problem is real.
📅 What to Watch
- If Nvidia announces agentic server architectures at GTC tomorrow (March 16), it means the company is betting that orchestrating agents — not just training models — is the next hardware bottleneck worth owning.
- If the IMO or Lloyd's issues updated war-risk guidance on autonomous drone threats in the next 48 hours, commercial shipping will formally treat AI-guided weapons as a new category of maritime risk, with cascading effects on energy prices globally.
- If enterprise admins enable ChatGPT's new write-actions at scale this month, it becomes the first real test of whether an AI assistant can displace workflow tools like Notion or Asana — not by being better, but by already being open in a tab.
- If hospital systems publish hard before/after metrics from autonomous delivery pilots following HIMSS, it means AI in healthcare logistics is crossing from innovation theater to line-item operations — and the same stack will spread to airports and campuses fast.
- If regulators in Australia or elsewhere issue guidance on DIY AI-designed biologics in response to the Rosie vaccine story, it signals that the "garage biotech" era has officially arrived as a policy problem.
The Closer
A dying dog in Sydney getting a custom cancer vaccine from a laptop; 87% of game-industry educators in the GDC survey telling students to find another career; and 1,000 hiring managers admitting to Bloomberg that they blame AI for layoffs because the story "plays better" than the truth. The most honest number in AI today is 9% — the share of roles AI has actually replaced — and the most dangerous is 59%, the share of managers who present otherwise. Somewhere between those two figures, a spec language for talking to machines is quietly climbing Hacker News, which feels about right for a species that can't even talk straight to each other.
Forward this to the person in your life who keeps asking "should I be worried about AI?" — they deserve better than a 59% answer.