
OLLAMA + LLAMA3 + RAG + Vector Database (Local, Open Source, Free)

Published at: 8/31/2024
Categories: milvus, rag, genai, ollama
Author: tspannhw

https://dzone.com/articles/multiple-vectors-and-advanced-search-data-model-design

https://github.com/tspannhw/AIM-NYCStreetCams/blob/main/MultipleVectorsAdvanced%20SearchDataModelDesign/streetcamsrag.ipynb

Utilizing Multiple Vectors and Advanced Search Data Model Design
Goal of this Application
In this application, we will build an advanced data model and use it for ingestion and a variety of search options.

1️⃣ Ingest Data Fields, Enrich Data With Lookups, and Format:
Learn to ingest data from sources including JSON and images, then format and transform it to optimize hybrid searches. This is done inside the streetcams.py application.
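
A minimal sketch of that ingest step, assuming the 511NY getcameras endpoint from the Resources section; the API key placeholder and JSON field names are assumptions, and the real streetcams.py may differ:

```python
import requests

# Sketch: pull camera metadata from the 511NY API (key and field names are assumptions).
API_KEY = "YOUR_511NY_KEY"
url = f"https://511ny.org/api/getcameras?key={API_KEY}&format=json"
cameras = requests.get(url, timeout=30).json()

records = []
for cam in cameras:
    records.append({
        "name": cam.get("Name", ""),
        "roadway": cam.get("RoadwayName", ""),
        "latitude": cam.get("Latitude"),
        "longitude": cam.get("Longitude"),
        "image_url": cam.get("Url", ""),  # link to the current camera frame
    })

print(f"Ingested {len(records)} cameras")
```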

2️⃣ Store Data into Milvus:
Learn to store data in Milvus, an efficient vector database designed for high-speed similarity search and AI applications. In this step we optimize the data model with scalar fields and multiple vector fields -- one for the text and one for the camera image. We do this in the streetcams.py application.
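
As a rough illustration of that data model, here is a hedged pymilvus sketch of a collection with scalar fields plus two vector fields; the field names and dimensions are assumptions, not the exact schema used in streetcams.py:

```python
from pymilvus import MilvusClient, DataType

client = MilvusClient(uri="http://localhost:19530")

# Scalar fields plus two vector fields: one for text, one for the camera image.
schema = MilvusClient.create_schema(auto_id=True, enable_dynamic_field=False)
schema.add_field("id", DataType.INT64, is_primary=True)
schema.add_field("name", DataType.VARCHAR, max_length=256)
schema.add_field("roadway", DataType.VARCHAR, max_length=256)
schema.add_field("image_url", DataType.VARCHAR, max_length=1024)
schema.add_field("text_vector", DataType.FLOAT_VECTOR, dim=384)   # e.g. sentence-transformer output
schema.add_field("image_vector", DataType.FLOAT_VECTOR, dim=512)  # e.g. CLIP image embedding

index_params = client.prepare_index_params()
index_params.add_index(field_name="text_vector", index_type="AUTOINDEX", metric_type="COSINE")
index_params.add_index(field_name="image_vector", index_type="AUTOINDEX", metric_type="COSINE")

client.create_collection("streetcams", schema=schema, index_params=index_params)
```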

3️⃣ Use Open Source Models for Data Queries in a Hybrid Multi-Modal, Multi-Vector Search:
Discover how to use scalars and multiple vectors to query data stored in Milvus and re-rank the final results in this notebook.
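
A hedged sketch of that multi-vector hybrid search using pymilvus AnnSearchRequest with Reciprocal Rank Fusion re-ranking; the query vectors and field names are placeholders that follow the hypothetical schema above:

```python
from pymilvus import connections, Collection, AnnSearchRequest, RRFRanker

connections.connect(uri="http://localhost:19530")
collection = Collection("streetcams")
collection.load()

# query_text_vector / query_image_vector are placeholders for embeddings of the user's query.
text_req = AnnSearchRequest(
    data=[query_text_vector], anns_field="text_vector",
    param={"metric_type": "COSINE"}, limit=5)
image_req = AnnSearchRequest(
    data=[query_image_vector], anns_field="image_vector",
    param={"metric_type": "COSINE"}, limit=5)

# Run both vector searches, then fuse and re-rank the hits with RRF.
results = collection.hybrid_search(
    reqs=[text_req, image_req], rerank=RRFRanker(),
    limit=5, output_fields=["name", "roadway", "image_url"])
```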

4️⃣ Display resulting text and images:
Build a quick output for validation and checking in this notebook.
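
For the validation output, a quick notebook-style sketch (field names again follow the hypothetical schema above):

```python
from IPython.display import Image, display

# Print the re-ranked hits and show the matching camera frames for a quick eyeball check.
for hits in results:
    for hit in hits:
        print(f"{hit.entity.get('name')} ({hit.entity.get('roadway')}) score={hit.distance:.4f}")
        display(Image(url=hit.entity.get("image_url"), width=300))
```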

5️⃣ Simple Retrieval-Augmented Generation (RAG) with LangChain:
Build a simple Python RAG application (streetcamrag.py) that uses Milvus and OLLAMA to answer questions about the current weather. While outputting to the screen, we also send the results to Slack formatted as Markdown.
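
A hedged sketch of what streetcamrag.py might look like with LangChain, Ollama, and a Slack incoming webhook; the model name, collection name, and webhook URL are placeholders rather than the author's exact configuration:

```python
import requests
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Milvus
from langchain.chains import RetrievalQA

# Embed queries with a local sentence-transformer and retrieve context from Milvus.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = Milvus(
    embedding_function=embeddings,
    collection_name="streetcamrag",  # placeholder collection holding the RAG text chunks
    connection_args={"uri": "http://localhost:19530"},
)

# Llama 3 served locally by Ollama.
llm = Ollama(model="llama3")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vector_store.as_retriever())

answer = qa.invoke({"query": "What is the current weather near the cameras?"})["result"]
print(answer)

# Also post the Markdown-formatted answer to Slack via an incoming webhook (URL is a placeholder).
requests.post("https://hooks.slack.com/services/XXX/YYY/ZZZ", json={"text": answer})
```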

🔍 Summary
By the end of this application, you’ll have a comprehensive understanding of using Milvus, ingesting semi-structured and unstructured data, and using open source models to build a robust and efficient data retrieval system. As future enhancements, we can use these results to build prompts for LLMs, Slack bots, streaming data to Kafka, and a street camera search engine.


Future Features
https://github.com/tspannhw/AIM-AirQuality/tree/main
Resources
https://511ny.org/developers/help/api/get-api-getcameras_key_format
https://zilliz.com/blog/building-multilingual-rag-milvus-langchain-openai
https://medium.com/@tspann/utilizing-multiple-vectors-and-advanced-search-data-model-design-for-city-data-705d68d8daf2
https://www.youtube.com/watch?v=HaRc0rsaMo0
https://dzone.com/articles/multiple-vectors-and-advanced-search-data-model-design