dev-resources.site

How to run any LLM model with Hugging-Face 🤗

Published at
5/17/2024
Categories
ai
huggingface
llm
genai
Author
rswijesena

Hugging Face 🤗 is a hub that hosts a huge share of the LLM models available today: https://huggingface.co/

If you go to the Models section of the hub, you will see thousands of models available to download or use as they are.

Let's walk through an example using google/flan-t5-large for text2text generation.

  1. Install the Python libraries below:

!pip install huggingface_hub
!pip install transformers
!pip install accelerate
!pip install bitsandbytes
!pip install langchain

  2. Get a Hugging Face API key: https://huggingface.co/settings/tokens

  3. You can now run the Python code below with your key:

from langchain import PromptTemplate, HuggingFaceHub, LLMChain
import os

# Authenticate with your Hugging Face API token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "<HUGGINGFACEKEY>"

# Prompt template with a single input variable
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Chain the prompt with the hosted flan-t5-large model
chain = LLMChain(
    prompt=prompt,
    llm=HuggingFaceHub(
        repo_id="google/flan-t5-large",
        model_kwargs={"temperature": 0.1, "max_length": 64},
    ),
)

chain.run("fruits")

Result from the model: Fruits is a footballer from the United States.
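As an alternative to the hosted API above, you can also run the same model locally with the transformers library installed in step 1. This is a minimal sketch, not the article's original method: no API key is needed, but the first call downloads the model weights (a few GB).

```python
# Sketch: run google/flan-t5-large locally with the transformers pipeline.
# No Hugging Face API key is required, but the first run downloads the
# model weights, which are several gigabytes.
from transformers import pipeline

# "text2text-generation" is the pipeline task for T5-style models
generator = pipeline("text2text-generation", model="google/flan-t5-large")

result = generator(
    "What is a good name for a company that makes fruits?",
    max_length=64,
)
print(result[0]["generated_text"])
```

This keeps inference on your own machine, which avoids the hosted API's rate limits at the cost of local compute and disk space.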