dev-resources.site
Deep Dive: Parameter-Efficient Model Adaptation with LoRA and Spectrum
Published at
11/29/2024
Categories
ai
datascience
llm
opensource
Author
Julien Simon
In this deep dive video, we zoom in on two popular techniques for parameter-efficient training: LoRA/QLoRA and Spectrum.
We discuss their mathematical foundations in detail, including Singular Value Decomposition (SVD). Then, we look at benchmarks on two popular Small Language Models, Mistral-7B and Llama-3.1-8B. We conclude that Spectrum is the better choice, both in training speed and model quality, and is even competitive with the accuracy of full fine-tuning.
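To give a feel for the math the video covers, here is a minimal numpy sketch of the core LoRA idea (illustrative only, not the video's code): instead of updating a full weight matrix W, LoRA freezes W and learns a low-rank update ΔW = B·A with rank r much smaller than the matrix dimensions, which is where the SVD intuition about low-rank structure comes in. The dimensions and rank below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4          # example dimensions; rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor (r x d_in)
B = np.zeros((d_out, r))                    # trainable factor, zero-initialized so
                                            # the adapted model starts identical to W

x = rng.standard_normal(d_in)
y = (W + B @ A) @ x                         # adapted forward pass: W x + B(Ax)

# Parameter savings: a full update trains d_out*d_in values,
# the low-rank update trains only r*(d_out + d_in).
full_params = d_out * d_in                  # 8192
lora_params = r * (d_out + d_in)            # 768
print(full_params, lora_params)
```

Because B starts at zero, ΔW is zero at initialization and the adapted model reproduces the frozen one exactly; training then only updates the small A and B factors, roughly a 10x parameter reduction in this toy configuration.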