For most users who wanted to run LLMs locally, Ollama solved the UX problem.
One command and you're running models, with the ROCm drivers handled without you even knowing.
If llama.cpp provides the same UX, they've failed terribly at communicating it, starting with the name. Llama.cpp: that's a C++ library! Ollama is the wrapper. That's the mental model. I don't want to build my own program! I just want to have fun :-P
Llama.cpp now ships a GUI by default. It previously lacked one. Times have changed.
Having read the above article, I just gave llama.cpp a shot. It is as easy as the author says now, though definitely not documented quite as well. My quickstart:
brew install llama.cpp
llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF --port 8000
Go to localhost:8000 for the Web UI. On Linux it accelerates correctly on my AMD GPU, which Ollama failed to do, though of course everyone's mileage seems to vary on this.
While that might be true, for as long as its name is “.cpp”, people are going to think it’s a C++ library and avoid it.
This is the first I'm learning that it isn't just a C++ library.
In fact, the first line of the Wikipedia article is:
> llama.cpp is an open source software library
It would make sense to just make the GUI a separate project, they could call it llama.gui.
Frankly, I think the CLI UX and documentation are still much better for Ollama.
It makes a bunch of decisions for you so you don't have to think much to get a model up and running.
But if Ollama is much slower, that's cutting into your fun, and you'll have more fun with a faster GUI.
No mention of the fact that Ollama is about 1000x easier to use. Llama.cpp is a great project, but it's also one of the least user friendly pieces of software I've used. I don't think anyone in the project cares about normal users.
I started with Ollama, and it was great. But I moved to llama.cpp to have more up-to-date fixes. I still use Ollama to pull and list my models because it's so easy. I then built my own set of scripts to populate a separate cache directory of hardlinks so llama-swap can load the GGUFs into llama.cpp.
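The core of it is roughly this; a sketch assuming Ollama's default store layout under ~/.ollama/models (a manifest JSON pointing at sha256-named blobs), with the model name and cache dir just as examples:

#!/bin/sh
# Hardlink an Ollama-downloaded GGUF into a cache dir for llama-swap/llama.cpp.
MODEL=llama3.2
TAG=latest
CACHE="$HOME/llama-cache"
mkdir -p "$CACHE"
MANIFEST="$HOME/.ollama/models/manifests/registry.ollama.ai/library/$MODEL/$TAG"
# The weights are the manifest layer whose mediaType is the "image.model" one.
DIGEST=$(jq -r '.layers[] | select(.mediaType == "application/vnd.ollama.image.model") | .digest' "$MANIFEST")
# Blobs are stored as sha256-<hex>, i.e. the digest with ':' replaced by '-'.
ln -f "$HOME/.ollama/models/blobs/$(printf %s "$DIGEST" | tr : -)" "$CACHE/$MODEL.gguf"

Hardlinks mean no duplicated disk space, and llama.cpp gets a sanely named .gguf to load.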
Exactly. The blog post states that the alternatives listed are similarly intuitive. They are not. If you just need a chat app, then sure, there’s plenty of options. But if you want an OpenAI compatible API with model management, accessibility breaks down fast.
I’m open to suggestions, but the alternatives outlined in the blog post ain’t it.
The reported alternatives seem pretty user-friendly to me:
> LM Studio gives you a GUI if that’s what you want. It uses llama.cpp under the hood, exposes all the knobs, and supports any GGUF model without lock-in.
> Jan (https://www.jan.ai/) is another open-source desktop app with a clean chat interface and local-first design.
> Msty (https://msty.ai/) offers a polished GUI with multi-model support and built-in RAG. koboldcpp is another option with a web UI and extensive configuration options.
API-wise: LM Studio offers REST, OpenAI-compatible, and other APIs.
> No mention of the fact that Ollama is about 1000x easier to use.
Easier than what?
I came across LM Studio (mentioned in the post) about 3 years ago, before I even knew what Ollama was. It was far better even then.
I spent like 2 hours trying to get Vulkan acceleration working with Ollama, no luck (half the models are not supported and crash it). With llama.cpp, the Podman container starts and works in 5 minutes.
LM Studio is 1000x easier to use than ollama btw
I got tired of repeating the same points and having to dig up sources every time, so here's the timeline (as I know it) in one place with sources.
Thanks for writing this. I hope people here will actually read it and not assume it's some unfounded hit piece. I was involved a little bit in llama.cpp and knew most of what you wrote, and it's just disgusting how the Ollama founders behaved! For people looking for alternatives, I would also recommend llamafile, a single-file executable for any OS that includes your chosen model: https://github.com/mozilla-ai/llamafile?tab=readme-ov-file
It's truly open source, backed by Mozilla, openly uses llama.cpp, and was created by wizard Justine Tunney of Cosmopolitan Libc fame.
Really nice. I wasn't aware of any of this.
Great writing, thanks for the summary and timeline.
Thanks, did not know any of this.
Alas people want convenience and don’t care about this sort of stuff.
Do they still not let you change the default model folder? You had to go through a whole song and dance to manually register a model via a pointless Dockerfile wannabe, which then seemed to copy the original model into their hash storage (again, with no way to change where that storage lived).
At the time I dropped it for LMStudio, which to be fair was not fully open source either, but at least exposed the model folder and integrated with HF rather than a proprietary model garden for no good reason.
This also annoyed me a lot. I was running it before upgrading the SSD storage and I wanted to compare with LM Studio. Figured it would be good to have both interfaces use the same models downloaded from HF.
Had to go down the same rabbit hole of finding where things are and how they're sorted/separated/etc. It was unnecessarily painful.
The performance issues are crazy. Thanks for sharing this
The CLI is great locally, but the architecture fights you in production. Putting a stateful daemon that manages its own blob storage inside a container is a classic anti-pattern. I ended up moving to a proper stateless binary like llama-server for k8s.
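For anyone curious, a minimal sketch of the stateless setup, using the upstream server image from the llama.cpp docs with the model mounted read-only (the model filename is just an example):

docker run --rm -p 8080:8080 \
  -v /srv/models:/models:ro \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/qwen2.5-7b-instruct-q4_k_m.gguf \
  --host 0.0.0.0 --port 8080

All state lives in the mounted volume, so the container can be replaced or scaled like any other stateless workload.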
drop ollama in the bin, no one needs it.
I noticed the performance issues too. I started using Jan recently and tried running the same model via llama.cpp vs local ollama, and the llama.cpp one was noticeably faster.
Not sure why VLC doesn't do that.
It's a joke... but also not really? I mean, VLC is "just" an interface to play videos. Videos are content files one "interacts" with, mostly play/pause and a few other functions like seeking. Because there are different video formats, VLC relies on codecs to decode the videos, basically delegating the "hard" part to codecs.
Now... what's the difference here? A model is a codec, the interactions are sending text/images/etc to it, and the output is text/images/etc. It's not even radically bigger in size, as videos can be huge, like models.
I'm confused as to why this isn't a solved problem, especially (and yes, I'm being a bit sarcastic here, can't help myself) in a time where "AI" supposedly made all the smart, wise developers who rely on it 10x or even 1000x more productive.
Weird.
What problem is it that you are confused isn't solved?
I think the codec analogy is neat but isn't the codec here llama.cpp, and the models are content files? Then the equivalent of VLC are things like LMStudio etc. which use llama.cpp to let you run models locally?
I'd guess one reason we haven't solved the "codec" layer is that there doesn't seem to be a standard that open model trainers have converged on yet?
> Ollama is a Y Combinator-backed (W21) startup, founded by engineers who previously built a Docker GUI that was acquired by Docker Inc. The playbook is familiar: wrap an existing open-source project in a user-friendly interface, build a user base, raise money, then figure out monetization.
The progression follows the pattern cleanly:
1. Launch on open source, build on llama.cpp, gain community trust
2. Minimize attribution, make the product look self-sufficient to investors
3. Create lock-in: a proprietary model registry format, hashed filenames that don't work with other tools
4. Launch closed-source components: the GUI app
5. Add cloud services: the monetization vector
I think the biggest advantage of Ollama for me is the ability to "hotswap" models for different tasks instead of restarting the server with a different model, combined with the simple "ollama pull". In other words, it has been quite convenient.
Prompted by this post I searched a bit, and it seems llama.cpp recently got router support [1], so I need to have a look at that.
My main use for this is a Discord bot where I have different models for different features, like replying to messages with images/video or pure text, and non-reply generation of sentiment and image descriptions. These all perform best with different models, and it has been very convenient for the server to just swap models in and out on request.
[1] https://huggingface.co/blog/ggml-org/model-management-in-lla...
> the ability to "hotswap" models with different utility instead of restarting the server
The article mentions llama-swap does this
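Roughly, you give llama-swap a YAML config mapping model names to the llama-server command it should spawn, and it loads/unloads them per request. A minimal sketch (model names and paths are examples; check the llama-swap README for the exact schema and flags):

cat > config.yaml <<'EOF'
models:
  "qwen-7b":
    cmd: llama-server --port ${PORT} -m /models/qwen2.5-7b-instruct-q4_k_m.gguf
  "llama-8b":
    cmd: llama-server --port ${PORT} -m /models/llama-3.1-8b-instruct-q4_k_m.gguf
EOF
llama-swap --config config.yaml --listen :8080
# OpenAI-style requests then choose via the "model" field, and llama-swap
# starts/stops the matching llama-server instance on demand.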
Llama.cpp added the ability to load/switch models on demand with the max-models and model preset flags.
You can do that with llama-server
It feels like a bit of history is missing... If ollama was founded 3 years before llama.cpp was released, what engine did they use then? When did they transition?
I don't think that is the case. Llama.cpp appeared within weeks of Meta releasing LLaMA to select researchers (from whom it then made it out to the public). Three years before that, nobody knew the name "llama". I'm sure llama.cpp existed first.
They spent several years in stealth mode, but the initial release was built on llama.cpp.
Ollama v0.0.1 "Fast inference server written in Go, powered by llama.cpp" https://github.com/ollama/ollama/tree/v0.0.1
I see no mention of vLLM in the article.
I prefer Ollama over the suggested alternatives.
I will switch once we have good user experience on simple features.
A new model is released on HF or the Ollama registry? One `ollama pull` and it's available. It's underwhelming? `ollama rm`.
> This creates a recurring pattern on r/LocalLLaMA: new model launches, people try it through Ollama, it’s broken or slow or has botched chat templates, and the model gets blamed instead of the runtime.
Seems like maybe, at least some of the time, you're being underwhelmed by Ollama, not the model.
The better-performance point alone seems worth switching away for.
I follow the llama.cpp runtime improvements, and it's also true for this project. They may rush a bit less, but you still have to wait a few days after a model release to get a working runtime with most features.
Model authors are welcome to add support to llama.cpp before release, like IBM did for Granite 4: https://github.com/ggml-org/llama.cpp/pull/13550
You can pull directly from Hugging Face with llama.cpp, and it also has a decent web chat included.
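For example (the repo name is just an illustration; -hf fetches the GGUF and caches it locally, under ~/.cache/llama.cpp by default on Linux if I remember right):

llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF
# then open http://localhost:8080 for the built-in web chat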
Does it have a model registry with an API and hot swapping, or do you still have to use something like llama-swap, as suggested in the article? Or is it CLI-only?
You can now have multiple models served, with loading/unloading, using just the server binary.
https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...
Ollama is still pretty intuitive to use - I don't see why I'd stop.
I had no idea about all this, especially the performance and bugs. Thanks for informing me!
On a practical note, it fumbles connection handling so badly as to be unusable for downloading anything.
The missing attribution pattern is nasty.
I use Goose by Block.
seems pretty unrelated to the post?
also you might be the only person in the wild I've seen admit to this
Another scummy YCombinator project, one of many lately. Looks like no-one is left at the wheel, at least as long as the valuations (and hence money) keep coming in.
I find the style of writing incredibly annoying (it doesn't make the point, full of hyperbole) and the website has the standard slopsite black background and glowing CSS.