Anthropic is Suing the Government
Episode Description
Anthropic sues the government, AI finds 22 Firefox bugs in two weeks, Astro 6 drops, plus Cloudflare, shadcn, Cursor, and Figma updates.
Show Notes
Anthropic is suing the US government after being labeled a "supply chain risk," AI found 22 high-severity Firefox bugs in just two weeks, Astro 6 dropped with a completely rebuilt dev server, and there's major tooling news across Cloudflare, shadcn, Cursor, Figma, and more.
This week in web development and AI news: the Anthropic lawsuit against the Trump administration, Claude's automated security scanning on the Firefox codebase, OpenAI acquiring Promptfoo, Apple Music AI transparency tags, Astro 6's new Vite Environment API and Cloudflare adapter, shadcn CLI v4 with AI agent skills, Cloudflare's new Browser Rendering crawl endpoint, Cursor plugins, Figma MCP updates, Claude Code releases, Next.js 16.2 canary progress, Railway, and Dokploy updates.
Transcript
What's up, everyone? Welcome to Next in Dev, a weekly overview of all the news I could find in the modern web dev industry. This week, Anthropic is suing the US government, AI just found a fifth of Firefox's annual high-severity bugs in two weeks, OpenAI is buying another company, Astro 6 just dropped, and there's a ton of tooling news across Cloudflare, shadcn, Cursor, Figma, and more. Let's dive in.
The biggest story this week is a lawsuit—it's always a lawsuit these days. Anthropic filed suit against the Trump administration, the Department of War, and 16 other government agencies after being designated a "supply chain risk" by Defense Secretary Pete Hegseth.
If you don't already know, Anthropic has had tools deployed in classified government work since 2024. Their contracts always included restrictions, specifically around lethal autonomous weapons and mass surveillance of Americans. Hegseth demanded that those restrictions be removed entirely. Anthropic tried to negotiate a compromise, and while those talks were still happening, Trump publicly called the company "left-wing nut jobs" and directed all agencies to stop using Anthropic tools. Hegseth followed up by labeling them a supply chain risk, which also prohibits any government contractor from using Claude.
Anthropic isn't seeking money. They want the court to declare the directive unconstitutional and reverse the supply chain designation. They say hundreds of millions of dollars in contracts (both government and private sector) are now in jeopardy.
Nearly 40 employees from Google and OpenAI filed a court brief supporting Anthropic's position on limiting dangerous AI uses. These are people from direct competitors publicly backing Anthropic's stance. Meanwhile, OpenAI's Sam Altman admitted to rushing through a new DOD contract to fill the gap—how convenient. Microsoft, Google, and Amazon said they'll keep using Claude outside of defense work. Legal experts expect this could go all the way to the Supreme Court.
Every AI company with government ambitions is watching this. The real question it raises is whether any AI company can maintain usage restrictions once the government decides it wants unrestricted access. Well, any of them except OpenAI, I guess, since Sam Altman signed without any regard for standards.
Sticking with Anthropic, they partnered with Mozilla to sic Claude Opus 4.6 on the Firefox codebase as an automated bug sniffer. Over two weeks, it found 22 vulnerabilities, 14 of which Mozilla classified as high-severity. That's nearly a fifth of all high-severity Firefox bugs fixed in the entire year of 2025. The first vulnerability was found within twenty minutes.
The team scanned nearly 6,000 C++ files and submitted 112 unique reports total. Most fixes have already shipped in Firefox 148.
Anthropic also tested whether Claude could exploit the bugs it found. After several hundred attempts costing about $4,000 in API credits, it only succeeded twice, and only with security features like sandboxing disabled. So Claude is dramatically better at finding vulnerabilities than exploiting them. That's good news for defenders right now. But Anthropic itself warned that the gap is unlikely to last long. Time will tell if that's fear-mongering or just thinking ahead.
If a two-week automated scan can find this many high-severity bugs in one of the most heavily audited codebases on the internet, think about what that means for the average npm package or WordPress plugin nobody's seriously auditing. There's usually at least one WordPress vulnerability disclosed every day. The defensive applications are genuinely exciting. The offensive implications are genuinely concerning.
Anthropic also launched The Anthropic Institute this week, a research organization led by co-founder Jack Clark, focused on studying societal challenges from powerful AI. It consolidates their Frontier Red Team, Societal Impacts, and Economic Research groups under one roof, with new efforts around forecasting AI progress and understanding AI's interaction with the legal system.
Notable founding hires include one from Google DeepMind, leading work on AI and the rule of law, an economist from UVA, and a researcher who previously worked on AI's economic impacts at OpenAI. Anthropic's also opening a DC policy office this spring.
The cynic in me is worried that a company-funded institute studying the impacts of its own technology has obvious conflicts of interest. The more charitable side of me thinks that the people closest to the capabilities are best positioned to flag what's coming. Both can be true.
OpenAI is acquiring Promptfoo, the open-source AI security and evaluation platform used by a quarter of Fortune 500 companies. Promptfoo handles red-teaming, prompt injection detection, data leak testing, and compliance monitoring. The technology is being integrated into OpenAI Frontier, their enterprise platform for AI agents.
Promptfoo has about 130,000 monthly active users and an 11-person team. Deal terms weren't disclosed.
The "we'll keep it open source and multi-provider" promise is the part to watch. Promptfoo's value has always been that it works with any model: Claude, Gemini, open-source, whatever. OpenAI has every incentive to say the right things now and gradually tilt the best features toward their own ecosystem over time. If you rely on Promptfoo for non-OpenAI workflows, track this closely. I don't have a lot of trust in OpenAI to do the right thing currently.
Now let's move to a broader AI transparency conversation. Apple Music announced Transparency Tags this month, a metadata framework that lets labels and distributors flag whether a track, composition, artwork, or music video involved AI. The tags are optional for now, with Apple saying they'll eventually become mandatory.
This matters because the scale of AI-generated music is already staggering. Deezer reported it's receiving over 60,000 fully AI-generated tracks per day. This is up from 10,000 when it first deployed detection tools in early 2025. Synthetic content now makes up roughly 39% of all music delivered to the platform daily. And here's the stat that should bother you: Deezer found that up to 85% of streams on AI-generated tracks were fraudulent in 2025, used to cheat royalty payouts, not reflect real listeners.
This came into focus last fall when an AI-generated country song called "Walk My Walk" hit number one on Billboard's Country Digital Song Sales chart under a fictional artist with a fake cowboy persona. Over 2 million monthly Spotify listeners. No disclosure.
Apple's asking the people uploading synthetic content to voluntarily label it. That's more like a suggestion box than a defense. When 85% of streams on AI music are fraudulent, you're dealing with a monetization exploit that voluntary tagging will never solve. The platforms that actually invest in detection infrastructure, like Deezer, are the ones taking this seriously. Everyone else is publishing suggestions that let them say they tried.
Astro 6 finally dropped. The most important change is under the hood. The dev server has been completely rebuilt using Vite's Environment API, so it now runs your actual production runtime during development. For Cloudflare users, this is huge. The rebuilt Cloudflare adapter runs `workerd` at every stage (dev, prerendering, and production), so no more "works in dev, breaks in prod" with bindings like KV, D1, and R2.
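To make the bindings point concrete, here's a minimal sketch of an Astro API route reading from a KV binding. This is illustrative only: the binding name `MY_KV`, the stored key, and the small `KVNamespace` stand-in interface are assumptions, not the adapter's actual type definitions.

```typescript
// Hypothetical Astro endpoint (would live at src/pages/api/greeting.ts
// in a real Astro 6 + Cloudflare project). The binding name MY_KV and
// the KVNamespace stand-in below are assumptions for illustration.

// Minimal stand-in for the KV binding surface the adapter exposes.
interface KVNamespace {
  get(key: string): Promise<string | null>;
}

interface CloudflareLocals {
  runtime: { env: { MY_KV: KVNamespace } };
}

export async function GET({ locals }: { locals: CloudflareLocals }) {
  // With the workerd-backed dev server, this same code path runs in
  // dev, prerendering, and production, so binding behavior matches.
  const value = await locals.runtime.env.MY_KV.get("greeting");
  return new Response(value ?? "no greeting stored", {
    status: value ? 200 : 404,
  });
}
```

The point of running `workerd` everywhere is that this handler behaves identically in `astro dev` and in production, instead of hitting a Node shim locally and the real KV store only after deploy.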
Beyond that: a built-in Fonts API that handles downloading, caching, and self-hosting from config, a stable Content Security Policy API that auto-hashes scripts and styles, and Live Content Collections that fetch CMS content at request time without rebuilds. Infrastructure upgrades include Vite 7, Zod 4, and a Node 22 minimum.
There are three experimental features to watch: a Rust compiler replacing the original Go-based one, already faster and more reliable in some cases; queued rendering showing up to 2x improvements; and a platform-agnostic route caching API with automatic invalidation tied to content collections. The Rust compiler move mirrors the industry trend, and if it becomes the default later in the 6.x cycle, Astro's build performance gets significantly stronger.
Cloudflare's Browser Rendering API added a /crawl endpoint in open beta. One API call, submit a URL, and Cloudflare discovers pages via sitemaps and links, renders them in a headless browser, and returns content as HTML, Markdown, or structured JSON. You get controls for crawl depth, page limits, incremental crawling, and a static mode that skips rendering for faster crawls. It honors robots.txt and it's available on both Free and Paid plans.
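As a rough sketch of what a call like that might look like: the URL shape, field names, and option spellings below are assumptions based on the announcement, not the official API schema, so check Cloudflare's docs before using this.

```typescript
// Hypothetical request builder for the new /crawl endpoint. Every
// field name here is a guess at the controls the announcement
// describes (depth, page limits, static mode, output format).
interface CrawlRequest {
  url: string;
  depth?: number; // crawl depth control
  maxPages?: number; // page limit control
  static?: boolean; // skip rendering for faster crawls
  format?: "html" | "markdown" | "json";
}

// Pure helper so the payload shape is easy to inspect and test.
export function buildCrawlRequest(
  url: string,
  opts: Omit<CrawlRequest, "url"> = {},
): CrawlRequest {
  return { url, format: "markdown", depth: 1, ...opts };
}

// Illustrative call (needs a real account ID and API token; the path
// segment "browser-rendering/crawl" is assumed):
// await fetch(
//   `https://api.cloudflare.com/client/v4/accounts/${accountId}/browser-rendering/crawl`,
//   {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${apiToken}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(
//       buildCrawlRequest("https://example.com", { maxPages: 10 }),
//     ),
//   },
// );
```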
shadcn/cli version 4 seeks to make the CLI the interface layer between your design system and your AI coding agent. The headline feature is shadcn/skills, which gives agents like Claude, Codex, and v0 structured context about your components and registry. The new preset flag puts your entire design config into a single short code you can share, put into prompts, or use to set up projects instantly.
You also get dry-run, diff, and view flags for inspecting changes before they write. They added a new initialize template flag for setup across Next.js, Vite, TanStack Start, and more. Lastly, they're maintaining support for both Radix and Base UI primitives.
Cursor added over 30 new plugins from Atlassian, Datadog, GitLab, and more. Each plugin bundles MCP servers with agent-specific skills, a combination Cursor says users report is much more powerful than MCP servers alone. Most plugins work with Cursor's cloud agents and can be triggered automatically through the Automations feature.
Three Figma updates worth mentioning this week. A new MCP server lets GitHub Copilot users push AI-generated UI directly onto the Figma canvas as editable frames and pull design context back into code. This matches what Figma has already done with Claude Code and Codex. Next up, Figma Slots hit open beta, letting you add dynamic content to component instances without detaching, which has been a long-standing pain point for design system maintainers. And Figma Community now includes Apps alongside plugins and widgets in a unified Extensions hub.
Now for a few quick hits. Claude Code pushed five releases this week. Versions 2.1.70 through 2.1.74. Highlights include a new /loop command for recurring prompts, default Opus model updated to 4.6 on Bedrock and Vertex, a new /context command that identifies memory bloat and suggests optimizations, and a fix for a memory leak causing unbounded RSS growth on the Node.js path. Plus a massive batch of stability fixes across voice mode, plugin reliability, MCP OAuth, RTL text rendering, and lengthy sessions.
Next.js 16.2.0 is churning through canary releases 81 through 93 this week. Mostly Turbopack persistence and stability work, plus an experimental lightningCssFeatures config, cachedNavigations flag, and continued iteration on the Instant Navs devtools. At this point, I think we'll see Next.js 17 announced before 16.2.
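If you want to poke at those canary flags, a config sketch might look like the following. Only the flag names come from the canary notes; the option shapes (and whether `lightningCssFeatures` takes an object at all) are assumptions, and experimental flags can change or vanish between canaries.

```typescript
// next.config.ts — hypothetical sketch. Flag names are from the
// canary release notes; the value shapes below are guesses.
const nextConfig = {
  experimental: {
    // opt into the experimental navigation cache
    cachedNavigations: true,
    // fine-grained Lightning CSS feature toggles (shape assumed)
    lightningCssFeatures: {},
  },
};

export default nextConfig;
```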
Railway launched domain purchasing directly in the platform, refreshed the project dashboard UI, and added an AI Agent Panel that's aware of your services and deployments.
Dokploy v0.28.4 through v0.28.6 added whitelabeling support, GitHub labeled action deployments, and a steady stream of backup and Docker fixes.
That's it for this week. A lot going on: everything from constitutional law to Rust compilers.
