Ollama and Open WebUI
Most of us have low-resource computers, and getting heavy LLM models working on a laptop or desktop is not an easy task. So how will AI agents and LLM models work on such machines?
That has been my question so far. I have been trying to find models, and model aggregators, that can actually run on low computing resources.
So I found Ollama. You can download Ollama for Windows from ollama.com/download.
It's free, though the installer itself is a fairly large download (around 650MB), and the models on top of that run to roughly 4GB each.
Once it's installed, open up Windows Terminal. You can choose either the terminal or the web version; either one will get your work done with the AI.
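Before diving in, a quick way to confirm the install worked is to check the version (Ollama supports the usual flag for this):

ollama --version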
I personally prefer the command-line version for casual chat. It works with models like Llama and Mistral too.
Say you want to chat with Llama. In that case, you can run the command:
ollama run llama3.1
Just run this command and you are ready to go.
You can then ask the AI questions right there in the command prompt.
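A few other standard Ollama commands are handy for managing models (the model name here is just the same example from above):

ollama pull llama3.1   # download a model without starting a chat
ollama list            # see which models you have installed
ollama rm llama3.1     # remove a model to free up disk space

Inside the chat itself, typing /bye takes you back to the shell.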
Open WebUI requires you to install Docker, and then you can run a web front end for Ollama. That means you need a fair bit of resources and RAM even in this case, so it's not a great fit for low-end machines.
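If you still want to try it, the usual approach (per the Open WebUI docs; the exact flags may vary with your setup) is to run it as a Docker container, something like:

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

Then open http://localhost:3000 in your browser and connect it to your local Ollama.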
Either way, you now have the terminal-based version of Ollama running, and a local, or say native, AI prompt to use. :)
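One nice side effect: the local install also serves an HTTP API on port 11434, so you can script against it instead of typing into the prompt. A minimal sketch, assuming you already pulled llama3.1 (note that in PowerShell, curl is an alias for Invoke-WebRequest, so Git Bash or WSL handles the quoting here more easily):

curl http://localhost:11434/api/generate -d '{"model": "llama3.1", "prompt": "Why is the sky blue?", "stream": false}'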