Unloading a model from Ollama
Published at
9/2/2024
Categories
ollama
Author
tallesl
Unfortunately, restarting the service is the only way:
$ sudo systemctl restart ollama
Edit: never mind, thankfully you can now run ollama stop <model name> instead.
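For anyone landing here later, a rough sketch of the newer workflow (assuming a recent Ollama build that ships the ps and stop subcommands, and the default local API on localhost:11434; llama3.2 below is just a placeholder model name):

# List the models currently loaded in memory
$ ollama ps

# Unload a specific model without restarting the service
$ ollama stop llama3.2

# Or ask the API to unload it by setting keep_alive to 0 (per Ollama's FAQ)
$ curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": 0}'

Substitute whichever model you actually have loaded; the systemctl restart from above still works as a blunt fallback.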