Google supercharges Gemini 2.5 Pro

PLUS: OpenAI's $3B coding buy, ServiceNow's new AI assistant, and an open video model

Good morning, AI enthusiast.

Google is making waves ahead of its I/O conference, delivering a significant upgrade to Gemini 2.5 Pro. This enhanced model brings notable improvements to its coding prowess, video understanding, and overall performance for the developer community.

With these enhancements, developers gain a more powerful tool for crafting sophisticated, interactive applications. Does this rapid advancement signal a new era for Google's AI, one that sets fresh benchmarks for multimodal understanding and complex task automation?

In today’s AI recap:

  • Google supercharges Gemini 2.5 Pro

  • OpenAI's $3B Windsurf acquisition

  • ServiceNow and NVIDIA’s new enterprise AI assistant

  • Lightricks unveils open-source LTX-Video 13B

Google Supercharges Gemini 2.5 Pro

The Recap: Google has rolled out a major update to Gemini 2.5 Pro ahead of its I/O conference, significantly boosting its coding capabilities, video understanding, and overall performance for developers.

Unpacked:

  • The model now excels in coding, topping the WebDev Arena leaderboard by 147 Elo points and showing substantial gains in code generation benchmarks like LiveCodeBench v5 (75.6%).

  • It demonstrates state-of-the-art performance in video understanding, scoring 84.8% on the VideoMME benchmark and enabling new workflows such as the interactive video-to-learning-app example.

  • Developers can access this enhanced Gemini 2.5 Pro (gemini-2.5-pro-preview-05-06) via the Gemini API in Google AI Studio and Vertex AI, benefiting from reduced function calling errors at the same price.
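
If you want to try the preview, here is a minimal sketch of calling it through the google-genai Python SDK. It assumes an API key from Google AI Studio exported as GEMINI_API_KEY, and the prompt is just a placeholder aimed at the model's improved web-dev coding.

```python
# Minimal sketch: calling the updated preview via the google-genai SDK
# (pip install google-genai). Assumes an API key from Google AI Studio
# is exported as GEMINI_API_KEY.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Placeholder prompt to exercise the improved coding capabilities.
response = client.models.generate_content(
    model="gemini-2.5-pro-preview-05-06",
    contents="Write a single-file HTML page with a canvas bouncing-ball animation.",
)
print(response.text)
```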

Bottom line: This Gemini 2.5 Pro upgrade provides developers with a more potent tool for building sophisticated, interactive applications, particularly in web development and multimodal AI. It signals Google's commitment to rapidly advancing its AI offerings, empowering professionals to automate more complex tasks and push creative boundaries.

OpenAI To Acquire Windsurf in $3B Deal

The Recap: OpenAI has reportedly agreed to acquire Windsurf, an AI-assisted coding tool formerly known as Codeium, for approximately $3 billion. The deal, said to be OpenAI's largest to date, signals a significant expansion into developer tools.

Unpacked:

  • Windsurf, formerly known as Codeium, builds AI-assisted coding tools, including its own AI-native editor, that help developers write code more efficiently.

  • The deal would be OpenAI's largest acquisition to date, signaling a strategic move to bolster its offerings in the AI coding space.

  • Both OpenAI and Windsurf have declined to comment officially on the agreement.

Bottom line: This $3 billion acquisition highlights OpenAI's ambition to be a major player not just in foundational models but also in specialized AI tools for professionals. For developers, this could mean more powerful, integrated coding assistants are on the horizon, potentially reshaping how software is built.

ServiceNow and NVIDIA Launch Enterprise AI Assistant

The Recap: ServiceNow and NVIDIA unveiled a new AI assistant, Apriel Nemotron 15B, at ServiceNow's Knowledge 2025. This specialized model is designed to boost enterprise productivity by efficiently handling reasoning for IT, HR, and customer service tasks.

Unpacked:

  • Apriel Nemotron 15B is a compact 15-billion-parameter model focused on enterprise-grade reasoning, making it faster and more cost-efficient than larger, general-purpose LLMs (a loading sketch follows this list).

  • It was built with NVIDIA NeMo, the NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow domain-specific data, and was trained on NVIDIA DGX Cloud hosted on AWS.

  • The collaboration introduces a new data flywheel architecture, integrating ServiceNow's Workflow Data Fabric with NeMo microservices to continuously refine AI performance using enterprise data securely.

  • An early deployment with AstraZeneca is projected to give 90,000 hours back to employees by helping them resolve issues and make decisions with greater speed and precision.
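
For teams that want to experiment with the model directly, the sketch below loads a checkpoint with Hugging Face transformers. The repo id is an assumption (check ServiceNow-AI's Hugging Face page for the actual Apriel Nemotron 15B release and license), and the IT-support prompt is only an illustration.

```python
# Minimal sketch: loading a 15B causal LM with Hugging Face transformers.
# The repo id below is an assumption; verify the actual Apriel Nemotron 15B
# checkpoint name and license before use. Requires `accelerate` for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative IT-support style prompt.
messages = [
    {
        "role": "user",
        "content": "An employee cannot reach the VPN after a password reset. Suggest troubleshooting steps.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```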

Bottom line: This partnership signifies a strategic move towards more intelligent, evolving AI systems within enterprises. For businesses, this means the potential for faster resolutions, increased productivity, and more responsive AI agents that scale with their needs, directly impacting your operational efficiency.

Lightricks Opens Up Video Generation

The Recap: Lightricks has released LTX-Video 13B, its most powerful open-source video generation model yet, a 13-billion-parameter model optimized for local GPU execution and enhanced creative control. This model aims to set a new standard for speed and quality in AI video.

Unpacked:

  • LTX-Video 13B leverages its 13 billion parameters and a novel "multiscale rendering" technique, which analyzes scenes at multiple resolutions simultaneously to deliver smoother motion and sharper visuals.

  • The model boasts speeds up to 30x faster than comparable alternatives, enabling rapid iteration and efficient production-scale video generation directly on your hardware.

  • Creators gain enhanced artistic direction with features like multi-keyframe conditioning and precise camera control, allowing for detailed frame-by-frame scene manipulation.

  • Lightricks offers full access, releasing the model as open source with its code and weights available, encouraging the community to build upon and explore its capabilities.
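
Since the weights are open, the snippet below is a minimal sketch of text-to-video generation with the diffusers LTXPipeline. The Lightricks/LTX-Video repo id points at the previously released open weights, so treat the exact 13B checkpoint path, along with the resolution and frame count, as assumptions to verify on Lightricks' Hugging Face page.

```python
# Minimal sketch: text-to-video with the diffusers LTX-Video pipeline.
# The repo id and generation settings are assumptions; check Lightricks'
# Hugging Face page for the 13B checkpoint and recommended parameters.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(
    prompt="A slow dolly shot through a neon-lit, rainy street at night",
    negative_prompt="blurry, low quality, distorted",
    width=704,
    height=480,
    num_frames=121,
    num_inference_steps=40,
).frames[0]

export_to_video(video, "ltx_clip.mp4", fps=24)
```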

Bottom line: LTX-Video 13B's release signifies a major step in making high-quality video generation more accessible and customizable for developers and creators. By providing this powerful tool as open source, Lightricks empowers professionals to experiment and integrate advanced video AI into their workflows, potentially accelerating innovation across various applications.

The Shortlist

OpenAI detailed in a technical report that its newer o3 and o4-mini reasoning models are hallucinating more frequently than earlier versions, with rates as high as 79% on some benchmarks, and the reasons are still under investigation.

Fast Company reports that "prompt engineering" as a standalone job is rapidly becoming obsolete, evolving into an expected skill rather than a specialized role, as AI itself gets better at prompt generation.

Plexe-AI launched a new system allowing users to build machine learning models by describing them in plain language, utilizing an automated agentic approach and supporting various LLM providers like OpenAI, Anthropic, and local models via Ollama.

What did you think of today's email?

Before you go we’d love to know what you thought of today's newsletter. We read every single message to help improve The Recap experience.

Signing off,

David, Lucas, Mitchell — The Recap editorial team