AI News

Meta MTIA Chips: Four New AI Processors Target Recommendation Systems and Inference

Meta announced four new MTIA chips designed to power AI features and content ranking systems, marking the social media giant's continued push into custom AI hardware development.

LLMBase Editorial · Updated March 11, 2026 · 2 min read

MTIA 300 Already in Production for Content Ranking

The MTIA 300 is currently in production and will primarily handle training of the algorithms that rank and recommend content for hundreds of millions of daily users across Meta's social media platforms. This marks the first deployment of Meta's latest generation of custom silicon in live production workloads.

Meta developed the chips in partnership with Broadcom, building on the open-source RISC-V architecture, with Taiwan Semiconductor Manufacturing Company (TSMC) handling fabrication. The rapid development timeline—unusual for both the semiconductor industry and social media companies—reflects Meta's strategy of iterating quickly as AI workloads evolve.

Three Inference Chips Planned Through 2027

The remaining three processors target inference workloads, running trained AI models rather than training them. The MTIA 400, which Meta claims delivers performance "competitive with leading commercial products," has completed testing and should reach data centers soon.

The MTIA 450 will feature double the high-bandwidth memory of the MTIA 400 and is scheduled for early 2027. The MTIA 500, expected in late 2027, will include additional memory capacity and "innovations in low-precision data" for more efficient AI inference operations.

Strategic Context and Market Positioning

Meta's accelerated chip development timeline addresses a key challenge in AI infrastructure: traditional semiconductor development cycles often lag behind rapidly evolving AI model requirements. YJ Song, Meta's vice president of engineering, emphasized the company's "iterative approach" using modular chiplets to incorporate the latest AI workload insights.

The announcement follows reports earlier this year that Meta had scaled back some in-house chip efforts that would compete directly with NVIDIA's high-end processors. However, the company continues major purchasing agreements with NVIDIA, AMD, and Google for additional computing capacity, indicating custom chips will supplement rather than replace third-party hardware.

Implications for Enterprise AI Infrastructure

For European enterprises evaluating AI infrastructure strategies, Meta's approach demonstrates both the potential and limitations of custom silicon development. While Meta can optimize chips for specific workloads like content recommendation and inference, the enormous costs and technical complexity mean most organizations will continue relying on commercial processors from established vendors.

The modular chiplet approach and rapid iteration cycles may influence how other tech companies design custom AI accelerators, particularly for inference workloads where power efficiency and cost optimization matter more than raw training performance. Meta's focus on the MTIA chips for production AI features shows how custom silicon can target specific operational requirements rather than general-purpose computing.

Wired reported on Meta's chip development strategy and partnership details.
