👨‍🚀 Code Red at OpenAI

PLUS: Video editing via text—this new model delivers


Hello AInauten,

Welcome to the latest edition of your favorite newsletter!

Kling can not only speak, but also think. OpenAI declares a red alert and takes drastic measures. And we have some videos you’ve got to see. Three topics, one common thread: where AI stands today.

Here's what we have in store for you today:

  • 🏆 Kling's new video editor - Nano Banana for clips with sound?

  • 🚨 Code Red at OpenAI: Why Sam is sounding the alarm

  • 🎲 Video tips: AI between laboratory breakthrough and bullshit roulette

Let's go!

200+ AI Side Hustles to Start Right Now

AI isn't just changing business—it's creating entirely new income opportunities. The Hustle's guide features 200+ ways to make money with AI, from beginner-friendly gigs to advanced ventures. Each comes with realistic income projections and resource requirements. Join 1.5M professionals getting daily insights on emerging tech and business opportunities.

🏆 Kling's new video editor—Nano Banana for clips with sound?

Google's Veo 3 has had native audio since May. OpenAI's Sora 2 since September. And Kling? It has been producing silent films for months. That is about to change.

On December 1, Kling O1, the first video model with "reasoning", was released, followed by Kling 2.6 with native audio (initially capped at 10 seconds). These are two separate models: O1 can edit but has no audio, while 2.6 handles audio but has no editing features. Together, they are strong and on par with Veo 3 and Sora 2.

O1: The video model that "thinks" – combining generation and editing

Kling O1 has another ingenious feature: instead of blindly predicting pixels, it first builds up an understanding of the scene. This is called chain-of-thought reasoning, or, as we like to say: the AI does its homework first.

The clever part: you can simply type "remove the passers-by," "turn day into night," or "change the outfit." No masks, no keyframes, no additional tools required. Plus, you can use up to 7 reference images for consistent characters, set start/end frames, and import camera movements from other videos.

It is ideal for:

  • Filmmaking & post-production: Add shots, ensure consistent characters/props, remove people, swap backgrounds, or stylistically unify shots.

  • Advertising & fashion: Upload product, model, and background images and generate high-quality product videos, lookbooks, or social ads in seconds.

  • Content & B-roll: Create YouTube B-roll, intros, social clips, or explainer videos without a traditional setup—directly from photos, reference clips, and text.

What works in practice

We tested it. The reasoning basically works: movements appear more natural, and the physics are mostly plausible. But there are still some teething problems.

In a picture with two people, O1 interpreted both as the same person; its "understanding of the world" still needs some work. Actions sometimes look cartoonish, and faces come out smooth as glass.

And O1 is not alone: Kling 2.6 also flops spectacularly in some areas, such as animations!

Our take: Promising in many respects—but be patient.

The real differentiator is the O1 reasoning engine, which neither Google nor OpenAI offers. O1's text-based editing is a real step forward. Character consistency is better than with other models, but not yet perfect, so expect to need several attempts here too.

Will this make a difference in practice? That remains to be seen. But the idea that a video model first "thinks" instead of just interpolating patterns sounds like the right approach.

The fact that audio is only generated after the video is created is not a real problem. The real catch: audio only works in English and Chinese! The lack of German support is a damper for DACH creators. And a maximum of 10 seconds is rather meh compared to the 60 seconds offered by Sora 2...

How much does it cost? For just a few dollars ($6.99 for Standard, $25.99 for Pro), you get a full-featured video tool with audio and reasoning. Kling even offers 66 credits/day (3-4 short clips) for free.

You can test it at app.klingai.com—the free tier is sufficient for initial experiments.

P.S.: We are eagerly awaiting the merger of O1 and 2.6. That would then be the real Nano Banana.

🚨 Code Red at OpenAI: Why Sam is sounding the alarm

Do you remember when Google declared "Code Red" three years ago when ChatGPT was launched and even brought co-founder Sergey Brin back to Mountain View? Plot twist: Now Sam Altman is doing exactly the same thing. Irony sometimes writes the best scripts.

What happened?

In an internal memo, Altman focused all resources on ChatGPT. Other projects? Put on hold. Even the planned introduction of advertising was postponed.

The trigger: Gemini 3 has surpassed ChatGPT in important benchmarks and has become a media darling with Nano Banana, NotebookLM, and Antigravity.

And it's not just the sentiment that favors Gemini; the hard numbers back it up: in "Humanity's Last Exam", a test of PhD-level reasoning, Gemini achieved 37.5%, while OpenAI only managed 31.6%.

Strong in B2C, weak in B2B – the uncomfortable truth about market shares

ChatGPT dominates among us ordinary people in the B2C environment: over 60% market share, 800 million weekly users. But in the enterprise sector? Anthropic has a 40% market share in AI spending—OpenAI only 29%.

And Google? It still trails with a 13.5% overall market share, but 63% of its Gemini usage comes from enterprise customers, and thanks to the Google Cloud Platform, Google is ideally positioned there. Google CEO Sundar Pichai recently announced that by Q3 2025, Google had already closed more billion-dollar deals than in the previous two years combined.

The $207 billion problem – where will the money come from?

Now things are getting wild, because OpenAI has a massive financing problem, and its finances read like an economic thriller without a happy ending...

  • 2024: $5 billion loss on ~$4 billion revenue

  • 2025: Projected $9 billion loss on $13 billion in revenue

  • By 2030: HSBC estimates a financing gap of $207 billion …

OpenAI thus spends $1.69 for every dollar it earns... Even OpenAI does not expect to turn a profit until 2029 at the earliest. And the fixed infrastructure costs are brutal: $792 billion for cloud and AI infrastructure by 2030, with $620 billion for data center rentals alone.
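Where does the $1.69 come from? Here's a quick back-of-the-envelope sketch in Python, assuming spend is simply revenue plus loss (using the projected 2025 figures above; real accounting is of course messier):

    # Rough burn-rate math from the projected 2025 figures above
    revenue = 13_000_000_000  # projected 2025 revenue in USD
    loss = 9_000_000_000      # projected 2025 loss in USD

    spend = revenue + loss    # roughly $22 billion in total spending
    burn = spend / revenue    # dollars spent per dollar earned

    print(f"${burn:.2f} spent for every $1 earned")  # -> $1.69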

The funny thing is that the $100 billion deal with Nvidia has still not been signed. Analysts are talking about "circular financing"—Nvidia invests in OpenAI, OpenAI buys Nvidia chips. It sounds like a perpetual motion machine that will eventually come to a halt.

So OpenAI needs to find other ways: in March 2025, it already raised $40 billion at a valuation of $300 billion. In parallel, there is the Stargate project: $500 billion for US data centers with SoftBank and Oracle. That sounds like a safety net, but the money is going into infrastructure, not into the coffers.

So does OpenAI have enough cash in the bank? Hardly, which is why an IPO is conceivable for 2026/2027, possibly at a valuation of up to a trillion dollars.

Circular financing explained... everyone with everyone, across the board

Our take: The crown is wobbling, but it won't fall off.

OpenAI is not finished. 800 million users, a strong brand, and a savvy dealmaker at the helm—that's what counts. But Anthropic is conquering the enterprise market, and Google has deep pockets and massive distribution.

That's why OpenAI needs to keep the hype going (GPT 5.2, codenamed Garlic, is expected soon and reportedly beats Gemini 3 and Opus 4.5 in internal tests). Unlike Google, OpenAI doesn't have a profitable core business.

OpenAI is therefore putting all its eggs in one basket: building the best consumer product, growing fast enough to justify the astronomical costs, and securing the next cash injection with a mega IPO in 2027.

It's a race against time: against the competition, and against OpenAI's own burn rate. The question for us is: how long will we keep paying $20/month for ChatGPT when Gemini is included in the Google Suite?

🎲 Video tips: AI between laboratory breakthrough and bullshit roulette

Finally, here are two video recommendations that couldn't be more different—but both are definitely worth watching.

Germany's ZDF Magazin Royale (turn on English subtitles) takes on AI: chatbots that deny the Holocaust, destroy relationships, and produce bullshit. It's remarkable that a satirical show, of all things, tackles the media-literacy angle that many tech blogs elegantly sidestep. Essential viewing for anyone who wants to see the dark side of AI and get a sense of how Germany views it.

In contrast, there is the documentary gem about Google's DeepMind, "The Thinking Game." It shows from a developer's perspective how hard the road to real scientific breakthroughs is. No marketing fluff, just honest insights—well worth watching!

You made it to the end—thanks for reading! We’ll be back soon with even more updates.

Reto & Fabian from the AInauts

P.S.: Follow us on social media - it motivates us to keep going 😁!
X, LinkedIn, Facebook, Insta, YouTube, TikTok

Your feedback means the world to us. We read every comment and message—just hit reply and tell us what you think!

🌠 Please rate this issue:

Your feedback is our rocket fuel - to the moon and beyond!
