In the competitive landscape of AI agents, where businesses are closing investment deals every day to build and expand their AI infrastructure and software, the companies leading the race appear to be OpenAI, Anthropic, Microsoft, NVIDIA, Google, and Amazon.
But despite the success of its family of large language models (LLMs), one big tech company seems to be struggling to stay relevant: Meta.
Meta’s AI strategy is currently split between openness, scale, and control. The company has postponed the launch of its next AI model, code-named ‘Avocado’, due to performance concerns, while opening the door to an industry-wide debate about open source and profitability.
Meta’s AI strategy: Meta AI, Llama and ‘Avocado’
The big tech company led by Mark Zuckerberg has been accelerating its push into artificial intelligence with the introduction of Meta AI.
Originally launched as a chatbot integrated into WhatsApp, Instagram, Facebook, and Messenger in September 2023, the product took a significant step forward in April 2025 with the debut of a dedicated standalone app, unveiled at Meta’s LlamaCon developer conference, bringing with it a Discover Feed, voice capabilities, and deeper personalisation features.
It was designed as a consumer-facing interface for generative AI, allowing users to generate content, hold conversations, and interact through a Discover Feed, while serving ads within Meta’s ecosystem.
Behind the scenes, Meta AI is powered by Llama, Meta’s family of LLMs. Llama was initially launched as a tool to help researchers and others without access to large amounts of infrastructure to study AI models, positioning itself as a way of democratising access within the industry.
Over time, Meta has released four generations of Llama models, evolving into an open-source multimodal AI system. Additionally, Meta launched a limited preview of the Llama API (Application Programming Interface) to enable developers to connect to and use its Llama models.
But beyond Llama and Meta AI, reports suggest that Meta has been working on its next generation of AI models: ‘Avocado’. Although there is no official statement about it, a Meta spokesperson told Reuters that the company has been working on this new frontier model, which differs from its predecessors.
While Llama is characterised by being open source, ‘Avocado’ would instead be proprietary, meaning outside developers would not be able to freely download its weights and related software components.
Avocado isn’t the main story; what it reveals about Meta is
Meta’s change of heart challenges the differentiation factor that Zuckerberg was so proud to embrace back in 2024, and which was the core idea behind the deployment of Llama: that open source would close the gap in AI development by allowing developers to improve the models and create smaller versions.
One year after that first memo, Mark Zuckerberg shared a second one in which he said he expected Meta to remain a leader in open source, but that, due to safety concerns, the company would be more careful about “what we choose to open source”, suggesting a reconsideration of his initial approach.
This decision can be read as Meta’s shift from a clear AI strategy to a reactive one. The first trigger was the rise of DeepSeek as a major competitor in the AI landscape: its R1 model, along with a family of smaller distilled variants built on Llama and Qwen architectures, demonstrated that open-source components could be leveraged to build highly competitive systems.
This represented a significant disadvantage for Meta, since its open-source models provided crucial leverage to a competitor. Additionally, a closed-source AI model offers economic relief for the massive investment Meta is making to build its AI capabilities.
For instance, in June 2025, Meta invested $14.3 billion in data-labelling company Scale AI in exchange for a 49% stake, bringing Scale AI’s founder Alexandr Wang on board to lead the newly formed Meta Superintelligence Labs, the division now tasked with developing ‘Avocado’.
Disruption: Meta under pressure
However, recent developments suggest that Meta’s AI strategy may be entering a phase of uncertainty. While the company initially positioned itself as a leader in open-source models with Llama, the rollout of the latest generation faced notable challenges.
The early reception of the new model was mixed, with some developers reporting underperformance compared with competing systems. Adoption was also weaker than for past models.
Additionally, the release of Llama 4’s flagship model ‘Behemoth’, which was expected to be a much larger “teacher model”, has been repeatedly postponed as engineers struggle to improve its capabilities.
The new frontier model ‘Avocado’ was expected to launch in March 2026, but also seems to be struggling to come to life. A person familiar with the matter told Reuters that the release had been postponed to May or June.
The reason behind the delay is that ‘Avocado’ was falling short of Google’s Gemini 2.5 and Gemini 3, as well as other competitors’ models, in internal tests for reasoning, coding, and writing, the sources said.
Moreover, people with knowledge of the matter also said that Meta’s leadership is apparently discussing temporarily licensing Gemini from Google to power ‘Avocado’ and other company AI products, although no decisions have been made.
Implications
Taken together, this chain of events suggests not isolated issues but broader questions about Meta’s long-term AI strategy. It seems that Meta is no longer executing a single, consistent strategy, but exploring multiple directions at once.
At the core of this shift in Meta’s AI strategy is the growing tension between openness and control. Llama’s success established Meta as a key player in the open-source AI ecosystem, enabling broad adoption and growth beyond the company.
The downside was the challenge of maintaining a competitive edge, since competitors such as DeepSeek took advantage of the open-source models to advance their own. The pivot to ‘Avocado’ suggests a strategic correction in that sense, but it weakens Meta’s differentiation as an open AI provider for everyone.
At the same time, the decision to move from open- to closed-source models offers a way to offset the costs of AI development driven by Mark Zuckerberg’s intense investment strategy to position Meta as a leader in the competitive AI landscape, such as the $600 billion committed to US AI infrastructure, data centres, energy projects, and workforce programmes by 2028.
That expenditure makes the execution challenges all the more concerning. The delays, the mixed reception of recent models, and the reports of ‘Avocado’ underperforming the competition indicate that Meta is falling behind in frontier AI development and needs to close that gap soon.
The most significant signal lies in the possibility of Meta becoming dependent on Google: temporarily licensing models such as Gemini would mean outsourcing the technology that sustains Meta’s AI products, marking a fundamental shift from building core capabilities to acting as a distribution layer.
The final question is no longer whether Meta can build competitive AI models, but whether it can define a consistent, coherent strategy for them. Without that clarity, even its strategic positioning may not be enough to secure a leading position in the AI race.