
Google Releases Gemma 4 — Most Capable Open Models Yet, Under Apache 2.0

Michael Ouroumis · 2 min read

Google DeepMind has released Gemma 4, a family of four open-weight models that represent a major step forward for the open AI ecosystem — and a strategic shift in how Google distributes its frontier research.

Four Models, One Architecture

The release spans the full compute spectrum. At the bottom: Effective-2B (E2B) and Effective-4B (E4B) models purpose-built for on-device inference on phones, tablets, and edge hardware. At the top: a 26B mixture-of-experts (MoE) model and a 31B dense model aimed at cloud and data center workloads.

All four are derived from the same research that produced Gemini 3, Google's proprietary frontier model. The 31B Dense variant currently sits at #3 on the Arena AI text leaderboard among open-weight models, with the 26B MoE variant at #6.

Perhaps most significantly, the entire family ships under Apache 2.0 — a fully permissive license with no usage restrictions. Previous Gemma releases carried more restrictive terms that limited commercial deployment.

Multimodal by Default

Every Gemma 4 model processes video and images natively, supporting variable resolutions and excelling at visual reasoning tasks including OCR, chart interpretation, and document understanding. The edge-targeted E2B and E4B models add native audio input for speech recognition and understanding — a first for Google's open model line.

All variants support over 140 languages out of the box, a reflection of Gemini 3's multilingual training corpus.

Context windows range from 128K tokens for the edge models to 256K for the 26B and 31B variants — long enough to process entire codebases, lengthy documents, or extended video sequences in a single pass.
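To put those window sizes in perspective, here is a minimal sketch of checking whether a document fits in a given context window. The four-characters-per-token ratio is a common rule of thumb for English text, not a property of Gemma 4's actual tokenizer.

```python
# Rough check of whether a document fits in a model's context window.
# The 4-chars-per-token ratio is a rule of thumb, not Gemma 4's tokenizer.

def fits_in_context(text: str, context_tokens: int, chars_per_token: float = 4.0) -> bool:
    """Estimate token count from character length and compare to the window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_tokens

# A 256K-token window covers roughly a million characters of English prose.
doc = "x" * 900_000
print(fits_in_context(doc, 256_000))   # True  (~225K estimated tokens)
print(fits_in_context(doc, 128_000))   # False (exceeds the edge models' 128K window)
```

Under this heuristic, the 256K windows on the 26B and 31B variants comfortably hold a document the edge models' 128K windows cannot.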

Built for Agentic Workflows

Google explicitly designed Gemma 4 for the agentic AI workflows that have become the dominant deployment pattern in 2026. The models include native support for structured tool calling, multi-step planning, and autonomous task execution.
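Structured tool calling generally works by having the model emit a machine-parseable call that the application executes on its behalf. The schema and JSON format below are an illustrative sketch of that loop, not Gemma 4's actual wire format.

```python
import json

# Hypothetical tool registry and schema -- names and format are illustrative,
# not taken from Gemma 4's documentation.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

TOOL_SCHEMA = [
    {
        "name": "get_weather",
        "description": "Return current weather for a city",
        "parameters": {"city": {"type": "string"}},
    }
]

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and execute the named function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A model trained for tool use would emit structured JSON like this:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Berlin"}}')
print(result)  # Sunny in Berlin
```

Multi-step planning layers a loop on top of this: the model sees each tool result, decides whether another call is needed, and stops when the task is done.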

Android developers get early access through the AICore Developer Preview, which integrates Gemma 4 directly into the Android runtime for on-device agent capabilities without cloud round-trips.

The Strategic Play

The Apache 2.0 licensing is the real headline. By making its most capable open models fully permissive, Google is positioning Gemma as the default foundation for commercial AI applications that need to avoid proprietary lock-in.

The timing is pointed. Meta's Llama 4 Maverick uses a custom license with usage restrictions. DeepSeek V4, while impressively cheap to train, is subject to Chinese export-control considerations that make some Western enterprises uneasy.

Google is betting that permissive licensing plus frontier-tier performance will make Gemma 4 the path of least resistance for enterprise adoption — and that widespread Gemma deployment will keep developers inside Google's broader cloud and tooling ecosystem.

The models are available now on Hugging Face, Google Cloud Vertex AI, and through the Kaggle platform.

