Microsoft open-sources Phi-4, which surpasses GPT-4o and is available for commercial use
Microsoft has open-sourced phi-4, the strongest small-parameter model, which outperforms GPT-4o on key benchmarks. With 14B parameters, phi-4 excels at math and reasoning tasks and is now available under the MIT license for commercial use.
Today, Microsoft Research open-sourced phi-4, currently the strongest small-parameter model.
First showcased on December 12th last year, phi-4 has only 14 billion parameters yet performs exceptionally well, surpassing OpenAI's GPT-4o and top open-source models such as Qwen2.5-14B and Llama-3.3-70B on the GPQA and MATH benchmarks.
On American Mathematics Competitions (AMC) problems, phi-4 scored 91.8, outshining Gemini Pro 1.5, GPT-4o, Claude 3.5 Sonnet, Qwen 2.5, and other well-known open- and closed-source models. Its overall performance even rivals that of Llama-3.1-405B, a model with 405 billion parameters.
Many had hoped Microsoft would open-source this powerful small-parameter model, and some even uploaded unofficial copies of the phi-4 weights to Hugging Face. Now it has been officially open-sourced under the MIT license, which permits commercial use.
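With the weights publicly released on Hugging Face, the model can be loaded with the standard transformers workflow. The following is a minimal sketch, assuming the repository id `microsoft/phi-4` and generic `AutoModelForCausalLM` loading; consult the official model card for the exact identifier and recommended settings.

```python
# Minimal sketch of loading the open-sourced phi-4 weights with Hugging Face transformers.
# The repo id "microsoft/phi-4" is an assumption; verify it against the official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 14B parameters; a GPU with sufficient memory is needed
    device_map="auto",
)

prompt = "Solve: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```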