Ollama's Epic Upgrade: One-Click to Run 45K Huggingface and hf-mirror GGUF Models
Easily run 45K Huggingface GGUF models with Ollama's latest upgrade. Simple commands, no more manual downloads or Modelfile setups!
I believe many people, like me, first got into AI through Ollama. Ollama stands out because it uses the GGML/GGUF family of formats (quantized, 'lightweight' versions of large language models that run at lower precision and adapt easily to regular hardware), and because it is as simple to use as Docker: a few commands let you pull a model, run it, and even customize it with a Modelfile.
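For anyone new to Ollama, the everyday workflow looks roughly like the sketch below. The model name `llama3.2`, the custom model name, and the system prompt are placeholders chosen for illustration:

```bash
# Pull a model from Ollama's own library and start chatting with it
ollama pull llama3.2
ollama run llama3.2

# Customize a model with a Modelfile, much like a Dockerfile
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM """You are a concise assistant that answers in short bullet points."""
EOF

# Build the customized model and run it
ollama create my-assistant -f Modelfile
ollama run my-assistant
```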
In the past, when Hugging Face wasn't supported, we had to manually download GGUF files and then build a custom model from a Modelfile, which was quite inconvenient.
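Concretely, the old routine looked something like this. The URL placeholders and file names are examples only; you would substitute the actual repository and GGUF file you wanted:

```bash
# Old workflow: download a GGUF file by hand from a Hugging Face repo
wget https://huggingface.co/<user>/<repo>/resolve/main/model.Q4_K_M.gguf

# Point a Modelfile at the local file
cat > Modelfile <<'EOF'
FROM ./model.Q4_K_M.gguf
EOF

# Register it with Ollama before you could finally run it
ollama create my-local-model -f Modelfile
ollama run my-local-model
```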
Now it's finally here: Ollama supports GGUF models hosted on Hugging Face directly. With roughly 45,000 GGUF repositories at your disposal, you are no longer limited to Ollama's own model library, and you no longer need to write a Modelfile at all. That is a big leap forward.
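The new usage is a single command of the form `ollama run hf.co/{username}/{repository}`, with an optional quantization tag. The specific repository below (one of bartowski's Llama 3.2 GGUF builds) is just a popular example, and the hf-mirror.com line is an assumption about mirror support that you should verify on your own network:

```bash
# New workflow: run a Hugging Face GGUF repo directly, no download or Modelfile needed
ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF

# Optionally pin a specific quantization as the tag
ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q4_K_M

# If huggingface.co is slow or unreachable, hf-mirror.com may work as a drop-in host
# (assumption, not officially documented -- verify before relying on it)
ollama run hf-mirror.com/bartowski/Llama-3.2-3B-Instruct-GGUF
```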