Podcast/Episode 16

OpenAI Killed Sora

11m 7s · Podcast

Episode Description

OpenAI killed Sora the day after publishing its safety blog, Cursor got caught hiding Composer 2's origins, and a judge questioned the Pentagon's Anthropic ban.

Show Notes

OpenAI published a safety blog for Sora, then killed the entire product the next day, rug-pulling Disney's $1B partnership in the process. Plus, Cursor got caught hiding that Composer 2 is built on an open source Chinese model, a federal judge called out the Pentagon's Anthropic ban, GitHub is opting you into AI training data, and Next.js finally addressed its biggest criticism with the new Adapter API.

Transcript

What's up, everyone? Welcome to Next in Dev, a weekly overview of all the news I could find in the modern web dev industry. This week was rough for transparency in the AI industry. OpenAI published a safety blog post for Sora and then killed the entire product a day later. Cursor launched a "frontier-level" coding model and "forgot" to mention it was built on someone else's work. And a federal judge told the Pentagon its Anthropic ban looks less like national security and more like punishment.

OPENAI / SORA

Earlier this week, OpenAI published a detailed blog post about Sora safety, including consent-based likeness controls, metadata on every video, layered content filters, teen protections, and more. It read like a company settling in for the long haul with its AI video platform. The next day, OpenAI killed Sora. The standalone app, the API, and the website are all shutting down. Publishing a safety blog post the day before you kill the product is peak OpenAI.

According to Reuters, Disney teams were actively collaborating with OpenAI on a Sora project Monday evening. Thirty minutes after that meeting wrapped, Disney found out Sora was being canceled. One person involved called it "a big rug-pull." The $1 billion deal Disney announced just three months ago, which licensed over 200 characters and allowed Disney to take a stake in OpenAI, never actually closed. No money ever changed hands, and Disney is now exiting the partnership entirely. Some Sora staffers were themselves blindsided.

Why are they doing this? Well, downloads dropped 45% from their November peak. Total in-app revenue amounted to roughly $2.1 million, which—let's be honest—is a rounding error for a $730 billion company. OpenAI says the compute needs to go elsewhere, and internally they're consolidating around a ChatGPT "super-app" strategy. Multiple outlets explicitly cite Anthropic's Claude Code gaining developer traction as a competitive pressure pushing OpenAI toward enterprise and coding tools. OpenAI is conceding ground in creative tools to fight on Anthropic's turf, and that's a significant pivot for a company that spent two years positioning AI video as a flagship capability.

My opinion is that we'll see more practical uses of AI take center stage because of this. Generating video and audio is fun, but there's not much practical use for it. Focusing on enterprise functions and coding is a much more valuable use of AI, so the shift makes sense. OpenAI's foray into AI-generated "creativity" has definitely left it playing catch-up, though.

Of course, OpenAI isn't the only company this week with a gap between what it said publicly and what was actually happening behind the scenes.

CURSOR / COMPOSER 2

Cursor released Composer 2 last week, positioning it as their own "frontier-level" coding model. It posted strong benchmarks and is priced aggressively at 50 cents per million input tokens. The blog post credits their "first continued pretraining run" and reinforcement learning on long-running coding tasks.

Within hours of launch, a developer intercepted Cursor's API traffic and found the model identifier in plain text: surprise! It was Kimi all along. Composer 2 is built on Kimi K2.5, an open-source model from Moonshot AI, a Chinese startup backed by Alibaba. Cursor confirmed it the same day, and a co-founder acknowledged it was "a miss" not to disclose. Moonshot later confirmed the arrangement was an authorized commercial partnership through Fireworks AI.

Building on open-source models isn't scandalous. It's how a lot of AI development works now. But saying "our first continued pretraining run" is technically accurate and strategically misleading at the same time. The engineering Cursor did on top of Kimi is real, and that makes the lack of transparency even more confusing. If your added value is strong enough to stand on its own, why obscure the foundation?

It gets thornier when you look at licensing. Kimi K2.5's modified MIT license requires prominent attribution once a product passes $20 million in monthly revenue. Cursor reportedly clears $160 million a month. Their interface showed "Composer 2" with no mention of Kimi. And the headline benchmark they lead with? CursorBench...their own proprietary benchmark. They include third-party results too, but grading your own homework on a model already under scrutiny over its origins is a pattern worth paying attention to. Attribution in the open-source AI ecosystem needs to be a norm, not an afterthought. It should not be something so easily missed.

Cursor isn't the only one learning this week that how you talk about your AI matters as much as the AI itself.

ANTHROPIC VS. THE PENTAGON, PART ??

A federal judge in San Francisco said this week that the Pentagon's ban on Anthropic looks like an attempt to punish the company for publicly criticizing the government.

If you're not aware, Anthropic's CEO said in February that Claude wouldn't be used for autonomous weapons or mass surveillance of American citizens. The Trump administration wanted Claude available for "all lawful purposes." When they couldn't agree, the Pentagon designated Anthropic a "supply chain risk" and Trump ordered all agencies to stop using Anthropic products.

At a hearing on Anthropic's request for a preliminary injunction, Judge Rita Lin called the government's actions "troubling," saying they "don't really seem to be tailored to the stated national security concern." If the worry is operational integrity, the Pentagon could just stop using Claude. Instead, Defense Secretary Hegseth announced that any contractor doing business with the military must cut all ties with Anthropic. One brief described it as "attempted corporate murder." Lin didn't go that far, but said "it looks like an attempt to cripple Anthropic."

The government's lawyer conceded during the hearing that the designation doesn't actually stop military contractors from using Anthropic on non-military work. This walks back Hegseth's broader statements and should assuage the fears of some enterprises that were on the fence about whether they could use Claude at all.

This case matters well beyond this one company. The legal question is whether the government can repurpose supply chain risk designations against a domestic company for setting terms on its own product. The outcome sets a precedent for every AI company about what happens when you say no to the Pentagon. A ruling is expected within days.

GITHUB COPILOT DATA POLICY

Taking a page out of Vercel's playbook, GitHub announced that starting April 24th, interaction data from Copilot Free, Pro, and Pro+ users will be used to train AI models unless you opt out. That includes your inputs, outputs, code snippets, surrounding context, file names, repository structure, and your feedback on suggestions. Business and Enterprise users are not affected.

If you previously opted out of the product improvement data collection setting, your preference carries over. But if you never touched that setting, you're now opted in by default. GitHub says incorporating real-world interaction data from Microsoft employees has already improved acceptance rates, and broadening that to all users will make models better at catching bugs and understanding workflows.

Your next step? Probably to go to your GitHub settings under "Privacy" and check your Copilot data preferences before April 24. After that date, opting out only prevents future use. It won't retroactively remove data already collected. Given that this comes just days after Vercel's own opt-in-by-default terms of service update for free-tier users, there's a clear pattern forming: if you're on a free or low-cost tier, your usage data is the product and will be used to feed AI.

NEXT.JS ADAPTER API

Next.js just addressed the single biggest criticism it's faced for years: platform lock-in. The Adapter API, which went stable in 16.2, gives any hosting provider a typed, versioned contract they can build against to deploy Next.js apps at full fidelity. Streaming, Server Components, Partial Prerendering, middleware, caching, and more are now formally documented and available through a public API.

You can bet that Cloudflare forking Next pushed Vercel over the edge on this one.

Vercel's own adapter uses this same public contract, with no private hooks or special integration path. It's open source. Any platform that passes the shared test suite can become a "verified adapter" hosted under the Next.js GitHub organization. Adapters for Netlify, Cloudflare, and AWS through OpenNext are in active development. A Bun adapter already exists as a reference implementation.

This happened through direct collaboration with OpenNext, Netlify, Cloudflare, AWS Amplify, and Google Cloud, formalized through a new Ecosystem Working Group with public meeting notes. The blog post includes quotes from engineers at Netlify and Cloudflare who were part of the process.

For years the argument against Next.js was that Vercel controlled both the framework and the only first-class deployment platform. This release doesn't completely resolve that tension because Vercel still has a head start on optimization and will always ship adapter support for new features first, but it fundamentally changes the relationship between Next.js and the rest of the hosting ecosystem. If you've been hesitant about Next.js because of the Vercel dependency, this is worth a serious second look.
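To make the "typed, versioned contract" idea concrete, here's a minimal sketch of what a hosting adapter could look like. This is purely illustrative: the names here (DeployAdapter, RouteOutput, deployRoute) are hypothetical and are not the actual Next.js Adapter API surface, which is defined in the framework's own documentation. The point is just the shape of the idea: the framework describes its build output, and each host maps that output onto its own primitives.

```typescript
// Hypothetical sketch of an adapter contract; names are illustrative,
// not the real Next.js Adapter API.
interface RouteOutput {
  path: string;
  kind: "static" | "dynamic" | "middleware";
}

interface DeployAdapter {
  name: string;
  // Map one piece of the framework's build output onto the host's primitives.
  deployRoute(route: RouteOutput): string;
}

// A toy adapter that "deploys" routes by describing where each would run.
const toyAdapter: DeployAdapter = {
  name: "toy-host",
  deployRoute(route: RouteOutput): string {
    switch (route.kind) {
      case "static":
        return `${route.path} -> CDN`;
      case "dynamic":
        return `${route.path} -> serverless function`;
      case "middleware":
        return `${route.path} -> edge runtime`;
    }
  },
};

const plan: string[] = [
  { path: "/", kind: "static" as const },
  { path: "/api/posts", kind: "dynamic" as const },
].map((r) => toyAdapter.deployRoute(r));

console.log(plan.join("\n"));
```

Because the contract is a public, versioned interface rather than private hooks, any host that implements it (and passes the shared test suite) can deploy the same build output Vercel does.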

FIGMA

Figma opened its canvas to AI agents this week through a new use_figma tool on their MCP server. Agents can now create and modify design assets using your existing components, variables, and tokens. This allows agents to read your library first so output reflects your design system instead of looking generic. The standout addition is "skills," markdown files that define how agents should work in Figma, including conventions, sequencing, and what good looks like. Anyone can author one without writing code. It's free during beta while Figma figures out pricing, and it works with Claude Code, Codex, Cursor, and Copilot. The skills system means the more you invest in your design system, the more value you get from agent workflows, but the pricing model is the thing to watch.

Also this week

Now for a few rapid-fire updates.

Payload v3.80.0 added multi-tenant slug uniqueness control plus fixes for Drizzle pagination, Postgres near queries, SQLite scheduled publish, and Lexical RTL support.

WebStorm 2026.1 enabled the service-powered TypeScript engine by default and added multi-agent AI chat via the Agent Client Protocol.

TanStack DB v0.6 added SQLite-backed persistence across browser, React Native, Electron, Tauri, and Durable Objects, plus hierarchical data projection with includes. TanStack also removed all third-party ads from tanstack.com.

Claude Code pushed versions 2.1.79 through 2.1.84 with transcript search, --bare mode for scripting, rate limit display, and PowerShell tool preview.

Railway redesigned its dashboard, added DNS management and raw queries for all database types, and rolled out Claude-powered deploy diagnostics.

What did I miss? There's so much happening in modern web dev that I know I missed something. Let me know by leaving a comment wherever you're watching or listening, or by joining my Discord server and subscribing to the Next in Dev newsletter at nlvcodes.com.

Thanks for watching or listening. See you in the next video.

Tags

web dev, ai, transparency