Working with multiple language models in Semantic Kernel

Published: 12/28/2024
Categories: ai, dotnet, semantickernel, openai
Author: stormhub

It is common to work with multiple large language models (LLMs) simultaneously, especially when running evaluations or tests. Semantic Kernel supports registering multiple chat completion and text embedding services, distinguished by a serviceId and a modelId.

Register 'serviceId' and 'modelId'

Suppose we have the following setup:

// Create the kernel builder and register multiple chat completion services,
// each with its own modelId and serviceId.
var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4-1106-Preview",
    endpoint: "https://resource-name.openai.azure.com",
    apiKey: "api-key",
    modelId: "gpt-4",
    serviceId: "azure:gpt-4");

builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o-2024-08-06",
    endpoint: "https://resource-name.openai.azure.com",
    apiKey: "api-key",
    modelId: "gpt-4o",
    serviceId: "azure:gpt-4o");

builder.AddOllamaChatCompletion(
    modelId: "phi3",
    endpoint: new Uri("http://localhost:11434"),
    serviceId: "local:phi3");

var kernel = builder.Build();

When executing kernel functions or prompts, the serviceId or modelId can be passed in through PromptExecutionSettings, as the following shows:

var promptExecutionSettings = new PromptExecutionSettings
{
    ServiceId = "local:phi3"
};
// or select by modelId instead:
// new PromptExecutionSettings
// {
//     ModelId = "gpt-4o"
// }
var result = await kernel.InvokePromptAsync(
    """
    Answer with the given fact:
    Sky is blue and violets are purple

    input:
    What color is sky?
    """, 
    new KernelArguments(promptExecutionSettings));

When a chat completion service is registered with a serviceId, Semantic Kernel also registers it as a keyed service in the dependency injection container. With the registrations above, the following works:

var chatCompletionService = kernel.Services
    .GetRequiredKeyedService<IChatCompletionService>("azure:gpt-4o");
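
Because the services live in the dependency injection container, you can also enumerate every registered chat completion service and inspect its metadata. A minimal sketch, assuming the registrations above and the GetModelId() extension from Microsoft.SemanticKernel.Services:

// Enumerate all registered chat completion services and print their metadata.
foreach (var service in kernel.GetAllServices<IChatCompletionService>())
{
    Console.WriteLine(service.GetModelId());

    // Attributes holds implementation-specific metadata,
    // e.g. the deployment name for Azure OpenAI services.
    foreach (var (key, value) in service.Attributes)
    {
        Console.WriteLine($"  {key}: {value}");
    }
}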

IAIService and IAIServiceSelector

All AI-related services, including chat completion and text embedding, implement the IAIService interface, which exposes a metadata property. This metadata contains attributes specific to the service implementation; for instance, the AzureOpenAIChatCompletionService includes the deployment name and model name.

The default IAIServiceSelector resolves services by serviceId first, and then by modelId, matching against the IAIService metadata. To gain full control over AI service selection, you can implement a custom IAIServiceSelector and register it as a service with Semantic Kernel, as sketched below.
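
Here is a minimal sketch of such a selector, assuming the registrations above; the "prefer the local phi3 model" policy is only an illustration, not the library's default behavior:

using System.Diagnostics.CodeAnalysis;
using System.Linq;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Services;

// Hypothetical selector that prefers the locally hosted phi3 model when it is
// registered, and otherwise falls back to the first service of the requested type.
public sealed class PreferLocalServiceSelector : IAIServiceSelector
{
    public bool TrySelectAIService<T>(
        Kernel kernel,
        KernelFunction function,
        KernelArguments arguments,
        [NotNullWhen(true)] out T? service,
        out PromptExecutionSettings? serviceSettings) where T : class, IAIService
    {
        var services = kernel.GetAllServices<T>().ToList();

        // GetModelId() reads the model id from the service metadata (Attributes).
        service = services.FirstOrDefault(s => s.GetModelId() == "phi3")
                  ?? services.FirstOrDefault();

        // Returning null here lets the caller's own PromptExecutionSettings apply.
        serviceSettings = null;
        return service is not null;
    }
}

The selector takes effect once it is registered with the kernel's service collection, for example with builder.Services.AddSingleton<IAIServiceSelector>(new PreferLocalServiceSelector()).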

Sample code here

Please feel free to reach out on Twitter: @roamingcode
