
Meta Is Quietly Routing Some AI Users Through Google Gemini While Avocado Falls Behind

Michael Ouroumis · 4 min read

Meta's AI division is in a more complicated position than the company's public silence suggests. The company pushed its flagship Avocado model to May or later after internal testing found it underperforming against Google's Gemini 3, OpenAI's GPT-5.4, and Anthropic's Claude. At the same time, it has been doing something that would have seemed unthinkable a year ago: routing some of its Meta AI users through Google's own models.

The details emerged from analysis of Meta's internal model selection infrastructure, published this week by TestingCatalog.

What the Testing Revealed

Meta's internal model selector — accessible through parts of the Meta AI interface — reveals several Avocado configurations currently in parallel evaluation.

The sheer number of release candidates in flight suggests Meta hasn't determined which configuration will ship — or in what order. It's the kind of parallelization you do when you're not sure your primary bet is going to hit the benchmark bar you need.

The Gemini A/B Test

The most striking finding is the Gemini routing. System prompt analysis and traffic patterns show that some requests within Meta AI are already being processed by Google's Gemini models rather than any version of Avocado or Llama. According to sources cited by TestingCatalog, Meta's AI leadership has held serious discussions about temporarily licensing Gemini technology to fill capability gaps while Avocado matures.

This is not a small thing. Meta has built one of the largest AI user bases in the world across Facebook, Instagram, and WhatsApp — hundreds of millions of active users who interact with Meta AI every day. If those users are getting Gemini responses without knowing it, Meta has effectively become a reseller of its competitor's product in its own ecosystem.

The arrangement makes sense from a short-term product perspective. Meta can't afford to have its AI products fall dramatically behind competitors while Avocado is being rebuilt. But it also carries reputational and strategic risks: it reveals just how far behind Avocado has fallen, and it creates a dependency on a company (Google) that is also a direct competitor.

The Delay That Triggered This

The backstory matters here. In early March, the New York Times reported that Meta had delayed Avocado's release to at least May after internal evaluations showed it couldn't match GPT-5.4, Gemini 3, or Claude on key benchmarks. Multiple sources described the internal testing as a significant disappointment, particularly given the resources Meta has poured into the project.

The benchmarks where Avocado struggled were notably not exotic edge cases. According to TestingCatalog's analysis of system prompts and capability probes, Avocado fell short on complex math reasoning problems that Gemini 3 and GPT-5.4 had already solved months earlier. That's a meaningful gap for a model that's supposed to power everything from Instagram DMs to Meta's enterprise tools.

The Open Source Question

Perhaps the most consequential aspect of Avocado's development is what it signals about Meta's future relationship with open source. For the past several years, Meta has been the most prominent open-source champion in frontier AI — releasing the Llama family under permissive licenses, providing researchers and smaller companies access to state-of-the-art weights. That stance had strategic logic: if strong models are freely available, the competitive advantage of closed-source providers diminishes.

Avocado is expected to be proprietary.

Under CEO Mark Zuckerberg's mandate to pursue superintelligence, Meta has shifted toward treating its model research as a strategic asset rather than a community resource. Sources say the company views the open-source approach as inconsistent with the level of capability and resource concentration that frontier AI development now requires.

For the broader AI ecosystem, that shift matters. The open-source AI community has relied heavily on Meta's Llama releases as a foundation for research and startups. If Avocado ships closed-source, that pipeline dries up at exactly the moment when the gap between open and closed models was beginning to close.

What Users Experience

For Meta AI's hundreds of millions of users, Avocado will eventually represent a meaningful step up from the current Llama-based experience — even if it doesn't match the very frontier of GPT-5.4 or Gemini 3. Better reasoning, more capable agents, and multimodal improvements are all visible in the variants currently under testing.

The question is whether Meta can get there before users and developers choose alternative AI products that are already available. In the attention economy, a few months matters.

Whether Meta quietly ships these improvements through a soft rollout or waits for a high-profile launch moment remains unclear. What's certain is that the company's AI timeline is messier, and more interesting, than its public silence suggests.


More in Models

xAI Launches Grok Voice Think Fast 1.0, Tops τ-Voice Bench and Powers Starlink Support

xAI's new voice model scored 67.3% on the τ-voice Bench — well ahead of Gemini 3.1 Flash Live and GPT Realtime — and is now powering Starlink's phone sales and support with a 70% autonomous resolution rate.

2 days ago · 2 min read
Tencent Drops Hy3 Preview: 295B Open-Source MoE Model Kicks DeepSeek Out of Yuanbao

Tencent has open-sourced Hy3 Preview, a 295B/21B-activated mixture-of-experts model built in under three months. The Yuanbao chatbot is switching its primary engine from DeepSeek to the new in-house model.

4 days ago · 2 min read
DeepSeek V4 Preview Lands: 1.6T-Parameter Open Model With 1M Context, Flash Pricing at $0.14/M

DeepSeek on April 24 released preview versions of V4-Pro and V4-Flash, an open-weight MoE family with a 1M-token context window and pricing that undercuts Western frontier labs.

4 days ago · 2 min read