Midjourney enters the AI video arena

PLUS: MiniMax's new 1M-token model and an AI that never stops learning

Good morning, AI enthusiast.

AI image leader Midjourney has officially stepped into the video creation space. Its first model is already impressing creators with stunning, cinematic results derived from a single image prompt.

The release has surprised many in the AI community who thought the company was falling behind in the video race. By leveraging its powerful image synthesis foundation, is Midjourney proving that aesthetic quality is the key to standing out in a crowded market?

In today’s AI recap:

  • Midjourney's cinematic new AI video model

  • How we recreated Cal AI with n8n and Lovable (watch here)

  • MiniMax's new 1M-token model

  • Scale AI's Meta deal sparks client exodus

  • An AI that never stops learning

Midjourney Enters the AI Video Arena

The Recap: AI image giant Midjourney has released its first video model, an image-to-video tool that is already drawing praise from creators for its stunning, cinematic results. The V1 launch immediately re-establishes Midjourney as a major player in the generative media space.

Unpacked:

  • The model currently functions as an image-to-video-only tool, generating four 5-second, 480p clips that can be extended to a maximum of 20 seconds.

  • Creators can direct motion with 'low' and 'high' settings or use a --raw parameter for more precise control, helping the model uniquely animate specific artistic styles.

  • Many in the community felt Midjourney was falling behind in the AI video race, but the quality of this first release has surprised users and exceeded expectations.

Bottom line: Midjourney is leveraging its powerful image synthesis foundation to create a video tool that prioritizes aesthetic quality and artistic control. This impressive debut proves that a focus on stylistic consistency can be a powerful differentiator in the crowded AI video market.

The Million-Token Mind

The Recap: MiniMax just released MiniMax-M1, a powerful new open-weight model with a huge 1-million-token context window and a novel "lightning attention" architecture.

Unpacked:

  • It uses a hybrid Mixture-of-Experts (MoE) design and a "lightning attention" mechanism, enabling it to process information while using 25% of the computing power of rivals like DeepSeek R1.

  • The model's 1-million-token context window is eight times larger than DeepSeek R1's, allowing it to process and reason over extremely long documents or codebases.

  • Its reinforcement learning run was completed in just three weeks at a cost of only $534,700, showcasing a new level of training efficiency.

Bottom line: This release gives developers a powerful, open-weight alternative for tackling complex tasks that require deep contextual understanding. Models like this push the boundaries of AI, making it possible to analyze entire code repositories or comprehensive legal documents in a single pass.
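MiniMax describes lightning attention as a linear-attention mechanism, which is how the model can scale to such long contexts on a fraction of the compute. The exact architecture isn't detailed here, but the core trick behind linear attention in general can be sketched in a few lines: instead of building the full n x n attention matrix, you carry a small running state, so cost grows linearly with sequence length. This is a generic illustration, not MiniMax's actual implementation:

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal attention with the softmax removed (linear attention).

    Rather than materializing the full n x n score matrix, we keep a
    running (d x d_v) state S = sum_j outer(K_j, V_j) and a running
    normalizer z = sum_j K_j, so each step is O(d * d_v) and total cost
    is linear in sequence length. Q and K are assumed non-negative
    (e.g. after a positive feature map), so the normalizer stays positive.
    """
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))  # running sum of key-value outer products
    z = np.zeros(d)                # running sum of keys (normalizer)
    out = np.zeros_like(V)
    for i in range(n):
        S += np.outer(K[i], V[i])
        z += K[i]
        out[i] = (Q[i] @ S) / (Q[i] @ z + 1e-9)
    return out
```

For non-negative Q and K this produces exactly the same output as the quadratic "scores, mask, normalize" formulation, which is why the 1M-token window becomes computationally feasible.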

AI Training

The Recap: In this video, I’ll show you how to build a fully functional Cal AI-style app with no coding required. We’ll use Lovable to generate the frontend from a single prompt, then wire it up with n8n as the backend and the OpenAI Vision API. By the end, you’ll have a working app that takes a photo of your meal and instantly returns a calorie and nutrition breakdown, all powered by AI.

P.S. We also launched a free community for AI builders looking to master the art and science of building AI automations. Come join us!

The Scale AI Shakeup

The Recap: Meta's massive $14.3 billion investment in data-labeling firm Scale AI is causing major industry ripples, as key clients are now halting projects over conflict-of-interest fears.

Unpacked:

  • Following the deal, AI rivals including Google (its largest customer), OpenAI, and xAI are reportedly halting or winding down their projects with the data provider.

  • In response, Scale AI's new interim CEO insists the company remains independent and is not winding down its operations or changing its course.

  • The core issue for clients is the fear that Meta, a direct competitor, could gain insights into their proprietary data and strategic AI roadmaps.

Bottom line: This shakeup underscores the strategic importance of data supply chains in the ultra-competitive AI landscape. The fallout creates a major opening for Scale's rivals and may push more AI labs to bring data operations in-house to ensure neutrality.

Where AI Experts Share Their Best Work

Join our Free AI Automation Community

Join our free community, AI Automation Mastery, where entrepreneurs, AI builders, and AI agency owners share templates, solve problems together, and learn from each other's wins (and mistakes).

What makes our community different:

  • Real peer support from people building actual AI businesses

  • Full access to our library of battle-tested n8n automation templates


  • Direct collaboration and problem-solving with AI experts when you get stuck

Dive into our course materials, collaborate with experienced builders, and turn automation challenges into shared wins. Join here (completely free).

An AI That Never Stops Learning

The Recap: Researchers at MIT have developed a new method called SEAL that allows AI models to learn continuously from new information, a significant step toward solving the long-standing problem of 'catastrophic forgetting' in AI.

Unpacked:

  • SEAL works by prompting an AI to generate its own synthetic training data based on new inputs and then using that data to update its internal parameters.

  • The approach was successfully tested on versions of well-known open-source models, including Alibaba's Qwen.

  • Despite its progress, the method is computationally intensive and doesn't yet fully solve catastrophic forgetting, where the model loses old knowledge as it learns new things.

Bottom line: This research could lead to AI assistants that adapt to your personal preferences and stay current without constant, large-scale retraining. It opens a promising new path for creating AI that can evolve over time, much like humans do.
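SEAL's actual method has an LLM write its own "self-edits" (synthetic text about new information) and then fine-tune on them. As a deliberately tiny analogy — everything below, including the function name, is hypothetical and far simpler than the real system — here is a model that manufactures synthetic variants of one new example and gradient-steps its own weights on that generated set:

```python
import numpy as np

def self_edit_update(w, x_new, y_new, n_synthetic=20, lr=0.01, steps=50):
    """Toy sketch of a SEAL-style update (hypothetical simplification).

    1) Turn a single new example into many synthetic variants (here,
       small input perturbations stand in for model-generated text).
    2) Update the model's own parameters by training on that synthetic
       set, so the new information is absorbed into the weights.
    """
    rng = np.random.default_rng(0)
    # Step 1: generate a synthetic training set around the new example.
    X = x_new + 0.01 * rng.standard_normal((n_synthetic, x_new.size))
    y = np.full(n_synthetic, y_new)
    # Step 2: gradient descent on the synthetic data (linear model, MSE).
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n_synthetic
        w = w - lr * grad
    return w
```

The point of the sketch is the two-phase loop — generate data from the new input, then update parameters on it — which is also where the real method's costs show up: every piece of new information triggers its own round of training, and nothing here prevents the update from degrading older knowledge (the catastrophic-forgetting problem the researchers note is still open).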

The Shortlist

Amazon warned its corporate employees that "efficiency gains from using AI" are expected to reduce the company's workforce over the next few years.

OpenAI discovered hidden features inside its models that correspond to different "personas," including a toxic one, which can be turned up or down by adjusting the model's internal representations.

Deezer reported that up to 70% of streams for AI-generated music on its platform are fraudulent, with bots used to generate fake listens and claim royalty payments.

Cloudflare open-sourced 'use-mcp', a new React library that allows developers to connect any React application to a remote Model Context Protocol (MCP) server in just three lines of code.

What did you think of today's email?

Before you go, we’d love to know what you thought of today's newsletter. We read every single message to help improve The Recap experience.


Signing off,

David, Lucas, Mitchell — The Recap editorial team