
How to run Ollama on Windows using WSL

Published at 1/6/2025
Categories: linux, genai, ai, rag
Author: suryasekhar

Have you ever wanted to ask ChatGPT or Gemini something, but stopped short, worrying about your private data? What if you could run your own LLM locally instead? That is exactly what Ollama is here to do. Follow along to learn how to run Ollama on Windows using the Windows Subsystem for Linux (WSL).

For steps on macOS, please refer to https://medium.com/@suryasekhar/how-to-run-ollama-on-macos-040d731ca3d3

Ollama

1. Prerequisites

First, you need to have WSL installed on your system. To do that, execute:

wsl --install

This will prompt you to set a username and password for your new Linux subsystem. Once that is done, there are a few more things you need to install in the new system.
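If you want to sanity-check the setup first, WSL can list its installed distributions along with their state and version (an optional check, not part of the original steps):

# list installed distributions, their state, and WSL version
wsl --list --verbose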

If you already have Ubuntu installed in WSL, connect to it using:

wsl -d Ubuntu

Here's everything you need to do now:

Add Docker's official GPG key:

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add the repository to Apt sources:

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install Docker

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Run these commands one by one. They set up the required command-line tools and, most importantly, Docker, which will be useful later on.
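Before moving on, it is worth confirming that Docker actually works inside WSL. This quick check is an addition to the original steps; depending on your WSL configuration, you may need to start the daemon by hand first:

# start the Docker daemon if it is not already running
sudo service docker start

# run a throwaway test container to confirm Docker works end to end
sudo docker run --rm hello-world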

After this is done, let us go ahead and install Ollama.

2. Installing Ollama

Now that you have WSL installed and are logged in, you need to install Ollama. Head over to the Ollama website, copy the install command, and execute it in WSL.

Ollama for Linux

curl -fsSL https://ollama.com/install.sh | sh

Once it is successfully installed, head over to localhost:11434 in your browser to verify that Ollama is running. If it is, you should see something like this:

Ollama is running!
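You can run the same check from the terminal as well; the Ollama server answers plain HTTP on port 11434:

# the root endpoint replies with a short status message
curl http://localhost:11434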

3. Pull Models

Ollama ships with its own CLI, and you can verify that it installed correctly with ollama -v. Ollama supports a large number of LLMs; the complete library is at https://ollama.com/library, and you can use any of them.

Let's try 'llama3', the latest LLM from Meta. To install the model, we need to run:

ollama pull llama3

llama3 pull

After that is done, you can also pull 'llava', a multi-modal model that can understand images as well.

ollama pull llava
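To confirm that both models were downloaded, you can list everything Ollama has stored locally. And since llava is multi-modal, you can point it at an image by including the file path in the prompt (the path below is just a placeholder):

# list locally available models and their sizes
ollama list

# ask llava about an image by referencing its path in the prompt
ollama run llava "What is in this image? ./example.png"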

4. Start models and chat

After all the installations are done, it is time to start our local LLM and have a chat! To run 'llama3', execute:

ollama run llama3

Here you can ask it anything. For example:

llama3

Keep an eye on your machine's performance. If you are running on your own local resources, make sure not to stress them too much!
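Two handy extras while you are in the terminal: you can pass a prompt directly for a one-shot answer instead of an interactive chat, and typing /bye inside the session quits it. The example prompt here is just an illustration; both behaviors are standard Ollama CLI.

# one-shot prompt: prints the answer and returns to the shell
ollama run llama3 "Explain WSL in one sentence."

# inside the interactive session, type this to exit
/bye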

5. Installing and using Open-WebUI for easy GUI

We have the functionality, but chatting with an LLM from the command line is a bit clunky, no?

Let's fix that. For this, we need to run the Docker image for open-webui. Execute the following command:

sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
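The container can take a moment to start. If the UI does not come up right away, standard Docker commands (not part of the original walkthrough) let you check on it:

# confirm the container is up
sudo docker ps --filter name=open-webui

# follow the logs to watch startup progress
sudo docker logs -f open-webui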

After it is done, head over to localhost:8080, which is the default port for open-webui. You will be greeted by a login screen. Click on "Sign Up", enter your full name and email, and create a new password. Don't worry, these are not going anywhere; they are only stored locally. Once all this is done, here is the screen you are greeted with. Just like ChatGPT, isn't it? But better.

open-webui Interface

Select your model from up top, and get started!

That's all you need to know to run your own LLM locally. It's as simple as that. Have fun!
