Zuckerberg Confirms: Llama-4 Trained with 100,000 GPUs and Even Better Open-Source!
Meta's Mark Zuckerberg confirms Llama-4 was trained with over 100,000 GPUs, pushing AI boundaries. Plus, new open-source models like Llama-3.2 for mobile devices.
Meta's co-founder and CEO, Mark Zuckerberg, sat down for an interview with Cleo Abram, a former journalist from Vox.
The interview covered Meta's latest groundbreaking products, including holographic AR glasses, developments in open-source large models, generative AI, and the much-anticipated Llama-4.
Zuckerberg confirmed that Llama-4 was trained using over 100,000 GPUs.
Meta currently has access to roughly 600,000 GPUs, and devoting more than 100,000 of them to a single training run underscores Llama-4's status as a flagship product; scaling up compute is how Meta aims to keep pushing AI's boundaries.
Here’s the breakdown of the content:
Part 1 features clips of Zuckerberg discussing Llama-4.
Part 2 is the full 47-minute interview, along with a brief summary; skip directly to the video if you prefer.
Part 3 covers Meta's latest open-source lightweight multimodal model, Llama-3.2, designed to deliver strong performance on mobile devices such as phones and tablets.