Virtualization on Debian with virsh&QEMU&KVM — Installation of virtualization tools and first VM creation

Published at 1/12/2025
Author: dev-charodeyka
Categories: debian, kvm, virtualmachine, linux



In this article, I will not cover the basics of virtualization—what it is and when you might need it. This article is for those who are more or less familiar with the concept, but don’t know how to get started with it on Debian.


Here’s the road map for this article:

➀ Virtualization and self-hosting
➁ Virtualization as a tool for resource-constrained application development
➂ Virtualization on Debian: how it works

  • ➂.➀ CPU virtualization support
  • ➂.➁ Hypervisor

➃ KVM & QEMU
➄ Libvirt
➅ Validation of virtualization tools installation

  • ➅.➀ Important! qemu:///system vs qemu:///session

➆ Creation of first VM:

  • ➆.➀ OS image file
  • ➆.➁ Preparing storage
  • ➆.➂ VM creation with virt-install

➇ Let the Networking begin!

  • ➇.➀ Userspace (SLIRP or passt) connection
  • ➇.➁ NAT forwarding (aka "virtual networks")

First, I will introduce my use case - why I need virtual machines on my personal PC.

Long story short, I’m an "on-premise" girl (read: "not very pro with cloud infrastructures"). I have experience dealing with on-prem infrastructure, and now I need to deploy my personal project—a web app—SOMEWHERE, so it sees the real world and the real world sees it. And no, this isn’t a static website; it has a backend and a database. And hypothetically, some components will need horizontal scaling in the future.


➀ Virtualization and self-hosting

My personal PC isn’t bad at all in terms of specs—perfectly capable for development purposes and even, theoretically, for serving all the needs of my small app in production. However, hosting anything exposed to the web on a personal PC is out of the question. If your first thought is that the only obstacle is my PC needing to run 24/7, that is not it. Using a machine that holds personal data to host something exposed to the web is a VERY BAD IDEA. If you don’t understand why, you can check out this article.

If I create virtual machine(s) on my personal PC and configure the networking very carefully, does that solve the problem? NO! And here’s why. The main impediment to hosting anything from "home"—even if you bought a proper server for it—is your router and internet provider. Is this about internet speed? Nope. It’s about... your public IP address.

Let's say you move into a new house, and it doesn’t have Wi-Fi. So you contact the internet providers in your city, check the prices, select the most advantageous offer, and… sign the contract. If you’re just an average user and didn’t specify otherwise, the contract gives you “internet for your house” from the chosen provider.

A technician arrives at the scheduled appointment, brings some cables, and a plastic box—which is the router. They deal with the cables, connect them to the router, hand you a manual along with the Wi-Fi network name and password, and voilà! You can connect all your home devices and enjoy browsing the web.

When you just use the internet, you’re most likely never even aware of your public IP address. But chances are, it’s not static at all. It changes periodically, and this is done by your internet provider—because that’s how they often manage clients on “home” plans.

When it comes to hosting something, like a website, even if you buy a domain name like my-cool-site.it, how will people find it? Who will bind YOUR PC (where all the site’s code and dependencies reside) to that domain? The domain name of your web app needs to be resolved in such a way that the correct IP address behind it is revealed.

Theoretically, you don’t even need to buy a domain name; your site can work perfectly fine with just an IP address like https://12.34.56.78/home. But that’s not great if you want your site to be searchable on Google and not just accessible to people who already have the link.

If your internet provider changes your public IP address periodically, it’s like frequently moving houses. People trying to send you letters would still send them to your old address unless you keep updating them, and the letters would never reach you. The same logic applies to hosting with a dynamic IP. You could, of course, manually update everything and rebind the domain to your new public IP address, but that’s hardly convenient.

If you want a static public IP address, you should contact your internet provider and find out the conditions under which you can get one. It will probably come with an increased payment for internet service. Is pinning a fixed IP address to your Wi-Fi router technically hard, or does keeping it that way incur real "technical" costs that need covering? No. Nor is the extra charge really about the fact that needing a fixed public IP address hints at business use (so it’s about earning something, and why not charge you more). Well... it’s because a unique public IP address is a scarce resource! More precisely, unique public IPv4 addresses are in deficit. Internet Protocol version 4 (IPv4) forms the foundation of most global Internet traffic today. An IP address represented under IPv4 is composed of four numbers ranging from 0 to 255, separated by periods (.).

If you do the straightforward math - an IPv4 address has four numbers in total, and each number can take 256 possible values (0 to 255) - 256 * 256 * 256 * 256 = 4,294,967,296 total addresses.

So here, on the market for internet service, a basic economic rule comes into play: demand is growing with increasing digitalization around the world, but the supply is restricted by the very nature (mathematical) of the good (unique IPv4 address), so the prices for this good are increasing. In the next article, I will cover more details on IPv4, explain a bit about IPv6 (the solution for this IP deficit situationship), and also cover some interesting aspects of networking that are consequences of this IPv4 address deficit (NAT).

Plus, another obstacle for self-hosting is the router provided by your internet provider. Such routers often have very restrictive measures for incoming http/https traffic (it gets blocked), and those restrictions (thankfully) will impede any hosting attempts. I say thankfully, because if you truly do not understand how it all works, it is better that these restrictions, the firewall rules, stay up and protect you.

However, keep in mind that hosting on the same network you use for your personal devices is not a great idea if you are unable to set up all the security mechanisms, firewalls, and network configuration properly.

Summing this up: currently, it is not an option for me to "self-host".


➁ Virtualization as a tool for resource-constrained application development

So, virtualization is not a solution to my web app deployment problem. Then where can it be deployed? The cloud. I can choose a cloud provider, rent the instances that match my app's needs, configure them, and deploy my app. Simple, right? Well, not so fast—because every instance, every service, comes with a price. And those prices... For someone like me, who’s built a pretty powerful PC for around $1,000, seeing cloud pricing for "little server" instances can be a bit confusing. To give you an idea, you can explore pricing on AWS using their calculator. I’ll share some screenshots of EC2 instance pricing:

[Screenshots: AWS EC2 pricing calculator for small instances]

2 vCPUs, 4 GB of RAM, and storage for an additional cost. All yours for around $30 per month if you want to host something that has server-side operations.


Actually, 2 vCPUs, not CPUs. vCPU stands for virtual CPU, because they aren’t real physical CPUs—they’re virtualized. And EC2 instances are essentially just virtual machines.

Now that we are back to the word virtualization, let’s talk about my use case—my needs. When it comes to developing my app on my PC, even though I could install everything needed directly (since I’m always using the same stack as a developer), it’s far from optimal. Why clutter my PC with installations of stuff like Nginx and MongoDB, leaving them hanging around unnecessarily when the project is finished?

To keep everything tidy for development purposes, virtual machines hosted on my PC are a great solution. However, the real issue with developing directly on my PC is this: when I develop on a machine with 20 CPU cores of the latest generation, 64 GB of RAM, and 12 GB of GPU memory, how can I be sure that what I’ve developed will actually run on a small EC2 instance? Or more importantly, how can I evaluate the resource requirements for my app in general? (Let's leave code-based evaluation aside for now.)

This is where virtualization will really help me. I can evaluate my code’s performance right from the start by creating VMs with small resources attached and placing my app's components there!

Note for the Dockerists/Dockerphiles/Containerphiles

I can already foresee the "Gosh, just learn Docker—it’s easy! Developing on bare metal is dinosauric; containerization is the key!" argument. I have no doubt Docker can handle everything. In fact, I personally enjoy Docker Swarm quite a bit (can’t say the same for Kubernetes, though).

However, let’s not forget that Docker relies on virtualization technology under the hood. And as I mentioned earlier, EC2 instances are nothing more than virtual machines. So, when you spin up an EC2 instance, you’re essentially getting a VM—a virtual layer. Then, when you install Docker on top of that, you’re adding… yet another virtual layer! And all of this is happening on a modest machine with just a few CPUs and some RAM.

You know what happens when you pile on more and more virtualization layers? They take you farther and farther away from the bare-metal performance of the hardware.

And Kubernetes for small apps? That’s like using a bazooka to kill a fly. Sure, I know Docker apps can be deployed in various ways on AWS (not only on top of EC2), but that’s not the point. My small-scope web app doesn’t need any of the "perks" Docker can bring.

"with Docker, my app can run everywhere"—because it’s no longer tied to OS. But I don’t plan to run my app anywhere except on Debian. I know how my app component's VMs work; I will set them up myself and I will know exactly what’s there.
"Docker provides an isolated environment" Sure, but isolated from what? Separate VMs already provide plenty of isolation.

As for bundling and isolating the software of different components and managing version conflicts: if that is your primary need for Docker even in small projects... Naughty, naughty - did you give up on pure TypeScript/Python and rely on external libraries a lot? Not my case, by the way.

Why would one follow the containerization hype just because everyone else is doing it?

That said, I’m not completely throwing Docker out of my stack. But for me, dockerization is something I’ll consider only when everything else is ready.

Moreover, Docker/containerization is about as easy as virtualization/virtual machines. Different syntax, some different concepts, but the logic behind them is more or less the same. Docker is easy when it comes to setting everything up in a default way, but if you need something more advanced, you will most probably get very frustrated if you do not know anything about hardware virtualization and virtual machines. Docker is not rocket science at all for those who have some experience with virtual machines.

So let's start with virtualization on Debian!


➂ Virtualization on Debian: how it works

The virtualization process happens under the "instructions" of your PC's physical CPU, so it is important that your CPU supports it. Yes, virtual machines can access (if allowed to) various hardware components of your PC, but it is precisely the CPU that is responsible for isolating processes running on guest VMs from the host (your physical PC). If your CPU supports virtualization, it first needs to be enabled on your PC:

➂.➀ CPU virtualization support

To know if you have virtualization support enabled, you can check whether the relevant flag is present with grep. If the following command returns some text for your processor, you already have virtualization support enabled:
For Intel processors, execute grep vmx /proc/cpuinfo to check for Intel's Virtual Machine Extensions.
For AMD processors, execute grep svm /proc/cpuinfo to check for AMD's Secure Virtual Machine. (The Debian Administrator's Handbook: Virtualization)

In my case:

#this command counts the number of times 'vmx flags' is mentioned in /proc/cpuinfo. It equals 20, meaning that all my CPU cores support virtualization (I have 20 cores in total)
$ egrep -c '(vmx flags)' /proc/cpuinfo
20
#additional command
$ lscpu | grep Virtualization
Virtualization:       VT-x

If in your case the output of grep vmx /proc/cpuinfo is empty, but your CPU is quite modern and is supposed to support virtualization, you’ll have to enter the BIOS during boot and enable it there. The steps to follow in the BIOS are roughly as described in this guide. The interface of your BIOS depends on the brand of your motherboard, so if you are lost, you’ll need to search the web for instructions on how to enable virtualization on your PC.

All the CPU cores are ready to virtualize something! Who starts?

➂.➁ Hypervisor

A hypervisor! "Hypervisor" is a bit of a generic term:

A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. (Wikipedia)

There are different types of hypervisors. To simplify, they can be divided into two types (left and right parts of the scheme below):

[Diagram: Type 1 (bare-metal) vs Type 2 (hosted) hypervisors, with KVM shown in the middle]

The second type of hypervisor might be familiar to you if you’ve ever used VirtualBox. It runs on top of Windows, just like any other Windows app. On the other hand, Type 1 hypervisors run directly on bare metal. They often have their own OS, specifically tuned for virtualization purposes, and they are often used in enterprise settings.

I included Proxmox in the Type 1 category, because it comes as a Debian-based OS: to use it, you’ll need to replace your current desktop Debian with Proxmox OS. By the way, Proxmox is pretty great—easy to use and functionality-rich. I use this hypervisor for work, and I find it awesome. But it is not fully technically correct to call Proxmox a Type 1 hypervisor, as it is based on KVM & QEMU.

Let's get to KVM, which is schematized in the middle of the image above. Is it a hypervisor? Well... yes and no. The term "hypervisor" is generic, so you could call it that. But technically, KVM is a Linux kernel module. You don’t have to build it yourself—it comes shipped with the Linux kernel that is the core part of your Debian, just like other kernel modules (for example, drivers).

In this article, I’ll be using KVM to set up virtualization tools on my PC. A popular alternative to KVM on Debian is Xen. Xen is truly a Type 1 hypervisor, even though it can also run alongside a Debian OS for personal use.

Xen is a “paravirtualization” solution. It introduces a thin abstraction layer, called a “hypervisor”, between the hardware and the upper systems; this acts as a referee that controls access to hardware from the virtual machines. (The Debian Administrator's Handbook: Virtualization)

As Xen runs between the hardware and the upper systems it qualifies as a Type 1 hypervisor. VMware ESXi is another example of a Type 1 hypervisor.

I’ll be using a KVM-based virtualization setup instead of Xen—just a personal preference.


➃ KVM & QEMU

But what exactly is KVM, besides being a kernel module?

The Kernel Virtual Machine, or KVM, is a full virtualization solution for Linux on x86 (64-bit included) and ARM hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. (Debian Wiki: KVM)

KVM provides most of the infrastructure that can be used by a virtualizer, but it is not a virtualizer by itself. Actual control for the virtualization is handled by a QEMU-based application.
Unlike other virtualization systems, KVM was merged into the Linux kernel right from the start. Its developers chose to take advantage of the processor instruction sets dedicated to virtualization (Intel-VT and AMD-V), which keeps KVM lightweight, elegant and not resource-hungry. The counterpart, of course, is that KVM doesn't work on any computer but only on those with appropriate processors.
Unlike such tools as VirtualBox, KVM itself doesn't include any user-interface for creating and managing virtual machines. (The Debian Administrator's Handbook: Virtualization)

Earlier, I showed how to check whether virtualization is enabled on your PC; the following commands will show whether the KVM kernel module is present and can be used:

# checking for presence of KVM kernel modules
$ lsmod | grep kvm
kvm_intel             327680  0
kvm                   983040  1 kvm_intel
#Additional check with cpu-checker package:
$ sudo apt install cpu-checker 
$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

As the Debian handbook quote above states, KVM goes alongside QEMU for the virtualization process.

QEMU (stands for Quick Emulator) is a generic and open source machine emulator and virtualizer.
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.
When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, 64-bit POWER, S390, 32-bit and 64-bit ARM, and MIPS guests. (QEMU Wiki)

  • Is it possible to virtualize using only KVM? THEORETICALLY yes, but KVM has neither a GUI nor a CLI, so one would have to write code in C against /dev/kvm in order to virtualize something, and KVM alone will not set up the virtual CPUs, RAM, and emulated devices for you—that is the job of the userspace part.

  • Can you use QEMU without KVM? Yes. QEMU alone can emulate a full system with its built-in binary translator, the Tiny Code Generator (TCG). This is pure emulation (= compute-intensive), and the overall performance of a fully emulated system can be slow. Thus, using QEMU without an accelerator is inefficient and generally best for experimental purposes (e.g. if your CPU has architecture A, but you’re curious about exploring how it all works on CPU architecture B). To accelerate guests that run the same architecture as the host, QEMU uses accelerators, and KVM is one of them. However, QEMU can also use alternative accelerators like Xen (QEMU: Virtualisation Accelerators). A quick illustration follows the list.
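
To make the accelerator idea a bit more concrete, here is a minimal sketch (my addition; the disk image path and RAM size are placeholders, not taken from this article):

#the same guest disk image, first purely emulated (TCG), then KVM-accelerated
$ qemu-system-x86_64 -accel tcg -m 2048 -drive file=disk.qcow2,format=qcow2
$ qemu-system-x86_64 -accel kvm -m 2048 -drive file=disk.qcow2,format=qcow2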

QEMU is not a software package that comes pre-installed on Debian, so you’ll need to install it manually. And here’s where confusion can arise. If you Google around, you’ll most likely find something like sudo apt install qemu-kvm virt-manager bridge-utils for Debian-based systems. At first glance, this seems fine—you actually need QEMU to work with KVM. But here’s the tricky part: qemu-kvm isn’t even a real package. It’s a virtual package, which actually points to something else:

[Screenshot: the qemu-kvm virtual package resolving to qemu-system-x86]

In my case, that’s fine, because I plan to have all my guests use this architecture. But if you want to use a different architecture for your guest VMs, qemu-kvm will bring in redundant packages. There are other packages that install QEMU besides qemu-system-x86, like qemu-system-arm, qemu-system-misc, qemu-system-ppc and qemu-system, which bring in the dependencies to virtualize/emulate various architectures with QEMU.
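
If you want to check for yourself what qemu-kvm resolves to on your system, the standard apt tooling can show it (output omitted here; nothing below is specific to this article):

#shows the candidate version and which real package provides 'qemu-kvm'
$ apt-cache policy qemu-kvm
$ apt-cache showpkg qemu-kvm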

I will install QEMU in this way:

$ sudo apt install qemu-system-x86

To shape your choice a bit, according to QEMU documentation on virtualization with KVM:

QEMU can make use of KVM when running a target architecture that is the same as the host architecture. For instance, when running qemu-system-x86 on an x86 compatible processor, you can take advantage of the KVM acceleration — giving you benefit for your host and your guest system (QEMU: features KVM)

Technically, if you try to create a guest with a CPU architecture different from your host machine’s CPU, KVM won’t be used, because, remember, QEMU can fully emulate machines (I haven’t tested this myself, though).
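
A minimal check, once you have a guest defined (for example, the VM created later in this article): the domain type in the libvirt XML tells you whether KVM acceleration is actually in use.

#'kvm' means hardware-accelerated, 'qemu' means pure emulation (TCG)
$ virsh dumpxml deb-nginx | grep "domain type"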

QEMU is installed, KVM is ready to virtualize, so everything is technically set up. However, the QEMU CLI syntax is far from simple and pretty particular. I would prefer to use a syntax that is more familiar to me. And this is where Libvirt will help me.


➄ Libvirt

Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management.
A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors. No need to learn the hypervisor specific tools! (Libvirt FAQ)

Libvirt is a bundle of software that includes an API library, a daemon (libvirtd), and a command line utility (virsh).

Libvirt tools for management of virtual machines are 'virsh', 'virt-manager', and 'virt-install', which are all built around libvirt functionality.

[Diagram: libvirt and its management tools (virsh, virt-manager, virt-install)]

  • virt-manager is a tool for creating and managing VMs entirely through a graphical user interface (GUI) <-- can be a viable option in the beginning.
  • virt-install is a CLI tool that enables the creation and management of VMs via commands and parameters. If the created VMs are supposed to have a display and graphical sessions, they can be accessed with virt-viewer (for a display).
  • virsh is a command-line utility that can do a lot of stuff - starting from very simple tasks like VM creation up to advanced virtualization. virsh works tightly with XML configuration files, which can be used to configure domains, virtual machine specs, networks, etc. virsh also gives you the option to connect to existing VMs remotely via SSH <-- I will be using this tool.

I install libvirt with the following command:

$ sudo apt install libvirt-daemon-system
$ systemctl status libvirtd
● libvirtd.service - libvirt legacy monolithic daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-01-10 22:39:29 CET; 2min 17s ago
#if not enabled in your case:
# $ sudo systemctl enable libvirtd

In this way, I will have libvirt-clients installed as well, as it is a dependency of this package.

Everything needed should now be installed. First, I want to validate that everything is OK, and then I can proceed with the first VM creation.


➅ Validation of installed virtualization tools

$ virt-host-validate
  QEMU: Checking for hardware virtualization  : PASS
  QEMU: Checking if device '/dev/kvm' exists  : PASS
  QEMU: Checking if device '/dev/kvm' is accessible : PASS
  ...
$ virsh version
Compiled against library: libvirt 10.10.0
Using library: libvirt 10.10.0
Using API: QEMU 10.10.0
Running hypervisor: QEMU 9.2.0

#command to check which guest machines you can emulate with the QEMU features you have installed:
$ virsh capabilities
...
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'> <---
      <wordsize>32</wordsize> <---
      <emulator>/usr/bin/qemu-system-i386</emulator>
      ...
      <domain type='qemu'/> <---
      <domain type='kvm'/>  <---
    </arch>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'> <---
      <wordsize>64</wordsize> <---
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      ...
      <domain type='qemu'/> <---
      <domain type='kvm'/>  <---
    </arch>
  </guest>

➅.➀ Important! qemu:///system vs qemu:///session

Here is the command I want you to pay attention to:

$ virsh uri
qemu:///session

$ sudo virsh uri
qemu:///system

As you can see, there’s a difference between running virsh with sudo and without it. When you run virsh with sudo, it connects to the system libvirtd service, the one launched by systemd. This libvirtd runs as root, so it has access to all host resources. The daemon config is in /etc/libvirt; VM logs and other bits are stored in /var/lib/libvirt.
On the contrary, if you run virsh without sudo, it connects to qemu:///session, which is a session libvirtd service running as your user; the daemon is auto-launched if it's not already running. libvirt and all VMs run as that user, and all config, logs and disk images are stored in the user's $HOME directory. This means each user has their own qemu:///session VMs, separate from all other users. Details are taken from here.
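
As an optional convenience (my addition, not something the rest of this article relies on), you can let an unprivileged virsh talk to the system daemon by default. On Debian, membership in the libvirt group is what normally grants a user access to qemu:///system without sudo:

#add yourself to the libvirt group (created by libvirt-daemon-system), then log out and back in
$ sudo usermod -aG libvirt $USER
#make qemu:///system the default URI for this shell session
$ export LIBVIRT_DEFAULT_URI=qemu:///system
$ virsh uri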

If, for some reason, your output of virsh uri is empty, you can connect manually. And if you mix up session and system, you’ll be informed about it in the output.

$ virsh
virsh # connect qemu:///system
==== AUTHENTICATING FOR org.libvirt.unix.manage ====
System policy prevents management of local virtualized systems

#The correct way if you want to connect to user-space session:
virsh # connect qemu:///session
#The correct way if you want to connect to the system-wide daemon:
$ sudo virsh
virsh # connect qemu:///system

THIS INFO IS VERY IMPORTANT:

With qemu:///session, libvirtd and VMs run as your unprivileged user. This integrates better with desktop use cases since permissions aren't an issue, no root password is required, and each user has their own separate pool of VMs.
However because nothing in the chain is privileged, any VM setup tasks that need host admin privileges aren't an option. Unfortunately this includes most general purpose networking options.
The default qemu network mode when running unprivileged is usermode networking (or SLIRP). This is an IP stack implemented in userspace. This has many drawbacks: the VM can not easily be accessed by the outside world, the VM can talk to the outside world but only over a limited number of networking protocols, and it's very slow. (Source)

If this quote does not tell you much, here is the key takeaway: qemu:///session integrates better with desktop use cases, but any VM setup task that needs host admin privileges is not an option, and that includes most general-purpose networking options. In practice, VMs under qemu:///session are limited to the userspace (SLIRP) network.


➆ Creation of first VM

NB! For demonstration purposes, I will first create the VM under qemu:///session. This way, I will be able to demonstrate the constraints of the created VM. Then, I will show you how to recreate the VM under qemu:///system.

➆.➀ OS image file

To create my first virtual machine, which will, of course, run Debian Stable (Bookworm), I need an .iso file. I’ll go for the minimal netinstall image to keep the system tidy and install only the tools I need later.

$ cd #to teleport to $HOME directory
$ mkdir -p .local/share/libvirt/images/
$ cd .local/share/libvirt/images/
$ wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.9.0-amd64-netinst.iso
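
Optionally, it is worth verifying the downloaded image; the checksum file lives in the same directory on the Debian CD image server (this verification step is my addition, not something the article requires):

$ wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS
$ sha512sum -c SHA512SUMS --ignore-missing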

➆.➁ Preparing storage

Then, I want to specify the storage device I plan to use for all my virtual machines (as they will share portions of it). Since I manage all storage devices on my PC using LVM, the first step is to create a new logical volume using the available free space in my existing logical volume group.

$ sudo vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  MY-vg       1   5   0 wz--n- <372.53g <129.02g

How will my virtual machines use the storage space I plan to create for them? Each machine will have one or more virtual disks, and these virtual disks are essentially "disk images". Since everything is a file on Linux, these disk images are files, and they can have different formats. The .qcow format is a disk image format used by QEMU. Its updated version, .qcow2, is better optimized than the original .qcow. I could also create disk images for VMs in the .raw format; however, .qcow2 is generally more space-efficient and supports snapshots and compression.

So the task is the following: I need to create a .qcow2 disk image for the VM I am about to create, and I want to use the available space in my logical volume group.

There are two options:

  • I can create a fairly large new logical volume to provide space for multiple virtual machines. In this case, I need to place a file system on top of the new logical volume, mount it, and then create .qcow2 disk images on it. Why? Because a logical volume without a file system is a single, contiguous block of storage. A single .qcow2 can occupy the entire block device, but there’s no mechanism to store multiple files on the same device unless a file system is present.

  • The second option is to create separate logical volumes sized to the needs of each virtual machine, with each logical volume fully allocated to a single .qcow2 image. By the way, you can also use physical partitions for your VMs; logical volumes are just my preferred method of managing storage space. However, even if each .qcow2 image sits on its "personal" logical volume, this doesn’t mean you can expand the .qcow2 image simply by expanding the logical volume. No, not at all. If my VM runs out of space, I’ll need to attach a new "virtual disk": create an additional logical volume, create a new .qcow2 image on it... This quickly becomes a mess.

So, I prefer the first option: a single logical volume as a kind of storage pool for all my virtual machines disk images.

If your understanding of LVM terminology is a bit wobbly and you still confuse volume groups with logical volumes, I recommend reading this article.

#I create a new logical volume named 'virt-machines' inside the existing volume group
$ sudo lvcreate -L 100G -n virt-machines MY-vg
# I create a filesystem on top of it
$ sudo mkfs.ext4 /dev/MY-vg/virt-machines
# I create a mount point for it
$ sudo mkdir -p /mnt/virt-machines
# I mount it
$ sudo mount /dev/MY-vg/virt-machines /mnt/virt-machines
# I add an automount-on-boot option by modifying /etc/fstab
$ sudo vim.tiny /etc/fstab
# I add this line
/dev/mapper/MY--vg-virt--machines /mnt/virt-machines ext4 defaults 0 0
# to validate the syntax:
$ sudo mount -a
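
A quick sanity check that the new volume is mounted where expected (standard tools, nothing article-specific):

$ findmnt /mnt/virt-machines
$ df -h /mnt/virt-machines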

Now, I can create a .qcow2 disk image in this directory. Since I plan to create and run VMs in my user space, I’ve given ownership of this directory to my user to avoid any permission issues later (the command is shown after the next block).

# I create a .qcow2 disk image
$ sudo qemu-img create -f qcow2 /mnt/virt-machines/deb-nginx.qcow2 10G
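
The ownership change mentioned above could look something like this (a sketch; adjust the user and group to your own setup):

#give my unprivileged user ownership of the images directory and everything in it
$ sudo chown -R "$USER":"$USER" /mnt/virt-machines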

I have an .iso file from which the VM will boot (it will be attached as a virtual CD-ROM), and I have a virtual disk where the new system will be installed. Now, I just need to create a VM and allocate CPU cores and RAM to it. For the first VM creation, I will use virt-install instead of virsh to demonstrate the logic, and then proceed with XML configuration explanations. The virt-install CLI is part of the virtinst package and is not included in the libvirt-clients package, so it needs to be installed separately.

➆.➂ VM creation with virt-install

I will be using the default options of the virt-install command, with two exceptions: --graphics none and --extra-args='console=ttyS0'. My VMs don’t need any graphical interface, as they will not have display servers; I will access them via the console. Debian offers not only a graphical installer but also a terminal user interface (TUI) installer, which will guide you through the installation process.

$ sudo apt install virtinst
$ virt-install \
  --connect qemu:///session \
  --name deb-nginx \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/mnt/virt-machines/deb-nginx.qcow2,size=10 \
  --location $HOME/.local/share/libvirt/images/debian-12.9.0-amd64-netinst.iso \
  --os-variant debian12 \
  --graphics none \
  --extra-args='console=ttyS0'

If you encounter an error:

Traceback (most recent call last):
  File "/usr/bin/virt-install", line 6, in <module>
    from virtinst import virtinstall
  File "/usr/share/virt-manager/virtinst/__init__.py", line 8, in <module>
    import gi
ModuleNotFoundError: No module named 'gi'

Check that you are not currently in an active Python environment. I use Anaconda, so the base conda environment is always activated; I just deactivate it with the conda deactivate command before executing virt-install.
You will see this:

[Screenshot: virt-install console output]

And then the Debian installation (TUI only) should pop up.

[Screenshot: the Debian TUI installer]

After the installation is finished and the VM has rebooted, I close the active console and reconnect with virsh.

$ virsh --connect qemu:///session console deb-nginx

[Screenshot: virsh console session attached to the VM]

Actually, everything is ready and the VM is usable, so technically I can try to SSH into it...


➇ Let the Networking begin!

To SSH into the VM, I need to know the private IP address it was assigned (and I believe it was assigned one, as I expect some default network to have been configured and the VM to have joined it during the creation process via QEMU). I will leave the details about how SSH works from a networking perspective for now and will cover them in the next article of this virtualization series.

To find out the IP of the created VM:

root@deb-nginx:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu qdisc noqueue state UNKNOWN 
....
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether MA:CA:DD:RE:SS:VM brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp1s0
       valid_lft 77570sec preferred_lft 77570sec
    inet6 XXXXXXXXXXXXXX/64 scope site dynamic mngtmpaddr
       valid_lft 86291sec preferred_lft 14291sec
    inet6 XXXXXXXXXXXXXXXX/64 scope link
       valid_lft forever preferred_lft forever

The network interface is enp1s0, and the IPv4 address is 10.0.2.15. So, let's try ssh!

#from Host machine!
$ ssh 10.0.2.15

Nothing!

➇.➀ Userspace (SLIRP or passt) connection

However, as I mentioned earlier, qemu:///session VMs are primarily intended for desktop use, such as trying out a new distro. The network used under qemu:///session is somewhat primitive and restrictive—it does not allow incoming connections to the VMs and cannot really be modified. For instance, you cannot configure more sophisticated network settings, because that requires sudo privileges to create network components like bridges, change their state, etc.

I can check which network is configured for this VM and, in general, review the full configuration of the created VM. When I used virt-install, I simply passed some options during the creation process to specify how I wanted my VM, and those parameters were translated into a configuration file. This file is much more detailed and "technical" than the option list I provided when creating the VM: my requirements were thoroughly translated into technical specifications, the necessary hardware was allocated, and other components were configured so my VM could work. The configuration format used by virsh for almost everything is XML.

$ virsh dumpxml deb-nginx

<domain type='kvm' id='3'>
  <name>deb-nginx</name>
  <uuid>0057aa74-392a-4b4b-ac89-8557d7b9312d</uuid>
  ...
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu> <--interesting
  <os>
    <type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
  </os>
  <features>
   ...
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/> <--interesting
   ...
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' discard='unmap'/>
      <source file='/mnt/virt-machines/deb-nginx.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu'/>
      <target dev='sda' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    ....
--------------> HERE IT IS, NETWORK INTERFACE <-------------------
    <interface type='user'>
      <mac address='MA:CA:DD:RE:SS:VM'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    ....
 ---> O! Mouse and keyboard: <---------
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    ....
</domain>

The created VM has a network interface, only one. But in the XML configuration file I see that this interface has type "user" (<interface type='user'>), whereas for a libvirt virtual network the type would be "network".

I can check for existing alternatives (other libvirt networks). Under qemu:///session, as expected, there is nothing:

$ virsh net-list --all

 Name   State   Autostart   Persistent
----------------------------------------

However, it seems that this userspace networking can now be configured quite extensively, because newer versions of libvirt have introduced more advanced features and options:

Since 9.0.0 an alternate backend implementation of the user interface type can be selected by setting the interface's subelement type attribute to passt. In this case, the passt transport (https://passt.top) is used. Similar to SLIRP, passt has an internal DHCP server that provides a requesting guest with one ipv4 and one ipv6 address; it then uses userspace proxies and a separate network namespace to provide outgoing UDP/TCP/ICMP sessions, and optionally redirect incoming traffic destined for the host toward the guest instead. (Libvirt: Userspace connection)

However, configuration of userspace connection is beyond the scope of this article (and the next article on networking as well). For my use case, I don’t actually need port forwarding—I need something different.

➇.➁ NAT forwarding (aka "virtual networks")

However, qemu:///system has one default network (NAT):

 $ sudo virsh net-list --all

 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   no          yes
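
If you are curious what this 'default' network actually is, you can dump its definition too. On a stock installation it is a NAT-ed 192.168.122.0/24 network behind a bridge called virbr0 (UUID and MAC omitted below; minor details may differ on your system):

$ sudo virsh net-dumpxml default
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

Since Autostart is 'no' in the listing above, you can also make this network come up on boot with sudo virsh net-autostart default.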

So, I just recreate the VM under qemu:///system, and by default it will use this 'default' network.

First, I have to destroy and undefine the VM in qemu:///session.

$ virsh destroy deb-nginx
$ virsh undefine deb-nginx
#(optionally, recreate the disk image)
$ sudo rm /mnt/virt-machines/deb-nginx.qcow2
$ sudo qemu-img create -f qcow2 /mnt/virt-machines/deb-nginx.qcow2 10G
#IMPORTANT! Start the default network if it is not started yet
$ sudo virsh net-start default

$ sudo virt-install \
  --connect qemu:///system \
  --name deb-nginx \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/mnt/virt-machines/deb-nginx.qcow2,size=10 \
  --location /var/lib/libvirt/images/debian-12.9.iso \
  --os-variant debian12 --graphics none \
  --extra-args='console=ttyS0'

Please note where I placed the ISO file. If you place it inside /etc/libvirt/ in some folder, as might seem like the right place, you could encounter a weird and misleading error, such as:
error: internal error cannot load AppArmor profile 'libvirt-9cb01efc-ed3b-ff8e-4de5-7227d311dd15'.

If you put the ISO file somewhere under your $HOME directory, you might see a warning like this:
WARNING /home/..../debian-12.8.iso may not be accessible by the hypervisor. You will need to grant the 'libvirt-qemu' user search permissions for the following directories: ['/home/...', '/home/....local', '/home/.../.local/share'].

Both errors are related to the fact that the VM creation process cannot access the ISO file.
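
If you do want to keep the ISO under $HOME, one possible way to satisfy that warning (my addition; it uses ACLs to grant the libvirt-qemu user search permission on the path components, as the warning suggests, and requires the acl package):

#grant search (x) permission on each directory leading to the images folder
$ sudo setfacl -m u:libvirt-qemu:x /home/$USER /home/$USER/.local /home/$USER/.local/share /home/$USER/.local/share/libvirt /home/$USER/.local/share/libvirt/images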

Meanwhile, I proceed with the new installation via the TUI. I set up LVM on this VM and put /var on a separate logical volume, because this VM is meant for NGINX, and NGINX can be very talkative in its logs, especially if configured badly. If you do not know how to do this, refer to this article. I also install the SSH server, so I can ssh into this VM from the host.

NB if you have ufw up! During installation, Debian should auto-configure the network. If it fails and you see this:

[Screenshot: the Debian installer reporting that network autoconfiguration failed]

...disable ufw temporarily for the duration of the installation, then enable it again afterward. It's not the best solution; the better approach is to adjust the ufw rules so they don't block the DHCP requests and DNS resolution that libvirt uses to configure the VM's network through the default NAT setup.
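
For example, something along these lines should let the default NAT network do its job (an assumption on my side: the bridge name virbr0 comes from libvirt's default network, so adapt the rules to your own firewall policy):

#allow traffic on the libvirt bridge so that DHCP and DNS from the guests are not blocked
$ sudo ufw allow in on virbr0
$ sudo ufw allow out on virbr0
$ sudo ufw reload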


When the installation is completed, I reopen the console with virsh and log into the freshly created VM. First, I check connectivity, discover the IP address, and try to ssh from the host:

$ sudo virsh console deb-nginx
# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=15.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=112 time=17.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=112 time=16.3 ms
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP>
.....
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether XXXXXXXXXXXXXXXX brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.125/24 brd 192.168.122.255 scope global dynamic enp1s0
       valid_lft 3217sec preferred_lft 3217sec
    inet6 XXXXXXXXXXXXXXXX scope link
       valid_lft forever preferred_lft forever

So, the IP is 192.168.122.125.
NB! If you try to execute ssh 192.168.122.125 from the host without specifying a user, the login will most likely fail: ssh defaults to your current username (or to root, if you run it with sudo), and root login via ssh is disabled by default on Debian.

I do:

$ ssh <user>@192.168.122.125
-> yes
<user>@192.168.122.125's password:
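
By the way, the guest's IP address can also be discovered from the host, without opening the console at all (standard virsh subcommands; output omitted):

#list the DHCP leases handed out on the 'default' network, or query the domain directly
$ sudo virsh net-dhcp-leases default
$ sudo virsh domifaddr deb-nginx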

So, what can I do with this VM from the network standpoint:

  • I can access the Internet (e.g., run apt update and apt upgrade).
  • I can SSH into this VM from the host machine.

What I cannot do:

  • I have a laptop connected to the same home network as my PC (via WiFi). Can I SSH into this VM from the laptop? No. This is why:
$ ip route
default ...
.....
.....
.....
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

The network 192.168.122.0/24 is accessible only via the virtual bridge virbr0.


You may have many questions and few answers about network configuration, but this article is already quite long, so I’ll move the networking setups to the second part of this series.
