How to Set Up a Local Ubuntu Server to Host Ollama Models with a WebUI

Published 12/14/2024 by korak997 · Categories: ubuntu, ai, ollama

Are you ready to set up a powerful local server to host Ollama models and interact with them via a sleek WebUI? This guide will take you through each step, from preparing your Ubuntu server to installing Ollama and integrating OpenWebUI for seamless interaction.

Whether you're a beginner or an experienced user, this guide aims to keep the process straightforward and trouble-free. Let's get started!

Installing Ubuntu Server on a PC

Before diving into the server setup, you need to install Ubuntu Server on your PC. Follow these steps to get started:

Step 1: Download Ubuntu Server ISO

  1. Visit the Ubuntu Server Download Page.
  2. Download the latest version of the Ubuntu Server ISO file.

Step 2: Create a Bootable USB Drive

Use tools like Rufus (Windows) or dd (Linux/Mac) to create a bootable USB drive:

  • For Rufus: Select the ISO file and your USB drive, then click "Start."
  • For dd on Linux/Mac:

     sudo dd if=/path/to/ubuntu-server.iso of=/dev/sdX bs=4M status=progress
    

    Replace /dev/sdX with the appropriate USB device; one way to identify it is sketched below.
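If you're not sure which device name the USB stick has, lsblk can help. A quick sketch (device names and sizes will differ on your machine):

   # List block devices with size, type, and mount point;
   # the USB stick is typically the small removable disk (e.g. /dev/sdb)
   lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

Double-check the device name before running dd, since dd overwrites the target device without confirmation.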

Step 3: Boot from USB and Install Ubuntu Server

  1. Insert the USB drive into the PC and restart it.
  2. Enter the BIOS/UEFI (usually by pressing DEL, F2, or F12 during startup).
  3. Set the USB drive as the primary boot device and save the changes.
  4. Follow the on-screen instructions to install Ubuntu Server.
  • Select your language, keyboard layout, and network configuration.
  • Partition the disk as needed (guided options work for most setups).
  • Set up a username, password, and hostname for the server.

Complete the installation and reboot the system. Remove the USB drive during the reboot.
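The rest of this guide can be done over SSH instead of at the console. If you did not enable the OpenSSH option during installation, a minimal setup looks like this (the username and IP address below are placeholders for your own values):

   # On the server: install the SSH daemon and start it at boot
   sudo apt install openssh-server -y
   sudo systemctl enable --now ssh

   # From your workstation: connect with the account created during install
   ssh your-username@192.168.1.50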


Setting Up Your Ubuntu Server

Step 1: Update and Install Essential Packages

To ensure your server is up-to-date and has the necessary tools, run the following commands:

sudo apt update && sudo apt upgrade -y
sudo apt install build-essential dkms linux-headers-$(uname -r) software-properties-common -y

Step 2: Add NVIDIA Repository and Install Drivers

If your server includes an NVIDIA GPU, follow these steps to install the appropriate drivers:

  • Add the NVIDIA PPA:
   sudo add-apt-repository ppa:graphics-drivers/ppa -y
   sudo apt update
  • Detect the recommended driver:
   ubuntu-drivers devices

Example output:

   driver   : nvidia-driver-560 - third-party non-free recommended
  • Install the recommended driver:
   sudo apt install nvidia-driver-560 -y
   sudo reboot
  • Verify the installation:
   nvidia-smi

This should display your GPU details and the installed driver version. If it doesn't, revisit the steps above.


Step 3: Configure NVIDIA GPU as Default

If your system has an integrated GPU, disable it to ensure NVIDIA is the default:

  • Identify GPUs:
   lspci | grep -i vga
  • Blacklist the integrated GPU driver:
   sudo nano /etc/modprobe.d/blacklist-integrated-gpu.conf

Add the following lines based on your GPU type:

For Intel:

   blacklist i915
   options i915 modeset=0

For AMD:

   blacklist amdgpu
   options amdgpu modeset=0
  • Update and reboot:
   sudo update-initramfs -u
   sudo reboot

Verify again with:

nvidia-smi
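To double-check that the NVIDIA driver, and not the blacklisted one, is now bound to the GPU, you can inspect the VGA devices with standard tools. A quick sketch:

   # Show each VGA device together with the kernel driver bound to it;
   # the NVIDIA card should report "Kernel driver in use: nvidia"
   lspci -nnk | grep -iA3 vga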

Installing and Setting Up Ollama

Step 1: Install Ollama

Download and install Ollama using the following command:

curl -fsSL https://ollama.com/install.sh | sh

Step 2: Add Models to Ollama

Ollama allows you to work with different models. For example, to add the llama3 model, run:

ollama pull llama3
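Before adding a WebUI, it's worth confirming that the model actually responds. You can chat from the terminal, or send a request to Ollama's local REST API on its default port 11434 (the prompt below is just an example):

   # Interactive chat in the terminal (type /bye to exit)
   ollama run llama3

   # One-off, non-streaming request against the local API
   curl http://127.0.0.1:11434/api/generate \
       -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'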

Setting Up OpenWebUI for Seamless Interaction

To enhance your experience with Ollama, integrate OpenWebUI, a user-friendly interface for interacting with models. The steps below assume Docker is already installed on the server; if it isn't, one common install path is sketched first.
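A minimal sketch using Ubuntu's packaged docker.io (Docker's own apt repository is an alternative if you want newer releases):

   # Install Docker from the Ubuntu repositories and start it at boot
   sudo apt install docker.io -y
   sudo systemctl enable --now docker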

  • Run the following Docker command to set up OpenWebUI:
   sudo docker run -d --network=host -v open-webui:/app/backend/data \
       -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
       --name open-webui --restart always \
       ghcr.io/open-webui/open-webui:main
  • This command sets up a containerized WebUI with:

    • Data persistence via the open-webui volume.
    • Ollama base URL configuration for model interaction.
  • Access the WebUI in a browser via your server's IP address; with host networking, OpenWebUI listens on port 8080 by default. A quick health check is sketched below.
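If the page doesn't load, a quick way to confirm the container came up (the server IP below is a placeholder):

   # Confirm the container is running; inspect the logs if it isn't
   sudo docker ps --filter name=open-webui
   sudo docker logs open-webui

   # The UI should answer on port 8080
   curl -I http://<server-ip>:8080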


Testing and Troubleshooting

Verify NVIDIA GPU Functionality

Use nvidia-smi to confirm the GPU is functioning properly. If you encounter errors like Command not found, revisit the driver installation process.

Common Errors and Fixes

Error: ERROR:root:aplay command not found

  • Fix: Install alsa-utils:
  sudo apt install alsa-utils -y

Error: udevadm hwdb is deprecated. Use systemd-hwdb instead.

  • Fix: Rebuild the hardware database with the replacement command, then update system packages:
  sudo systemd-hwdb update
  sudo apt update && sudo apt full-upgrade -y

Optional: CUDA Setup for Compute Workloads

For advanced compute tasks, install CUDA tools:

  • Install CUDA:
   sudo apt install nvidia-cuda-toolkit -y
  • Verify CUDA installation:
   nvcc --version
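With the driver and toolkit in place, you can also check that Ollama is actually offloading to the GPU. In recent Ollama releases, ollama ps reports whether a loaded model is running on GPU or CPU, and nvidia-smi should show an ollama process holding VRAM (the model name is just an example):

   # Load a model, then check which processor it is using (GPU vs CPU)
   ollama run llama3 "hello" >/dev/null
   ollama ps

   # The process list should include ollama with GPU memory allocated
   nvidia-smi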

Congratulations! You've set up a robust local Ubuntu server for hosting Ollama models and interacting with them via OpenWebUI. This setup is perfect for experimenting with AI models in a controlled, local environment.

If you encounter any issues, double-check the steps and consult the documentation. Enjoy exploring the possibilities of Ollama and OpenWebUI!
