Virtualization (Part 1)
LINUXLPIC1-101
2/19/2026


Difference between Type 1 virtualization and Type 2
This is a fundamental concept for the "Virtualization" section of the LPIC-1. The difference essentially comes down to what the hypervisor sits on top of.
In the world of virtualization, think of the Hardware (CPU, RAM, Disk) as the foundation of a building. The "Hypervisor" is the manager that decides how that foundation is shared.
1. Type 1 Hypervisor (The "Bare Metal" Manager)
A Type 1 hypervisor is installed directly on the physical hardware. There is no Windows or Linux OS underneath it. The hypervisor is the operating system, but its only job is to run Virtual Machines (VMs).
Efficiency: Extremely high. There is no "middleman" consuming RAM or CPU.
Stability: Very high. If a VM crashes, the others are unaffected. Because there is no general-purpose OS underneath to attack, the whole system is also more secure.
Examples: VMware ESXi, Microsoft Hyper-V (the server version), and Xen.
LPIC-1 Context: You will often hear this called "Bare Metal."
2. Type 2 Hypervisor (The "Hosted" Manager)
A Type 2 hypervisor runs as an application inside a normal operating system (like the CentOS or Ubuntu you are currently using).
The Chain: Hardware → Operating System (Host) → Hypervisor → Virtual Machines (Guests).
Efficiency: Lower. The "Host" OS (Windows/Linux) takes up resources for itself (GUI, background tasks, etc.) before the VMs even start.
Convenience: Very high. You can browse the web on your host while running a Linux VM in a window.
Examples: Oracle VirtualBox, VMware Workstation, and Virtual Machine Manager (virt-manager, a graphical front end for QEMU/KVM).
3. The hybrid: KVM
Virtual Machine Manager uses KVM (Kernel-based Virtual Machine). KVM is a bit of a "hybrid." Because it is built directly into the Linux Kernel, it transforms Linux into a Type 1 hypervisor, but because you are still running a full Linux desktop/OS, it feels like a Type 2.
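A quick way to see whether your own Linux box can act as a KVM host is to check for the CPU virtualization extensions and the `/dev/kvm` device node. This is a minimal sketch using standard tools (`grep` on `/proc/cpuinfo`); it assumes a Linux system but is safe to run anywhere:

```shell
# vmx = Intel VT-x, svm = AMD-V; empty output means no hardware support
hw="$(grep -m1 -Eo 'vmx|svm' /proc/cpuinfo || true)"

# /dev/kvm exists only when the kvm kernel module is loaded and usable
if [ -e /dev/kvm ]; then kvm=yes; else kvm=no; fi

echo "CPU extension: ${hw:-none}  /dev/kvm present: ${kvm}"
```

If both checks pass, Virtual Machine Manager can create hardware-accelerated KVM guests; otherwise it falls back to slow, fully emulated QEMU.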
Summary Mnemonic
Type 1: 1 layer between Hardware and VM (The Hypervisor).
Type 2: 2 layers between Hardware and VM (The OS + The Hypervisor).
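You can also ask a running system which side of this picture it is on. On systemd-based distributions, `systemd-detect-virt` reports the hypervisor a guest is running under ("kvm", "oracle" for VirtualBox, "vmware", "microsoft" for Hyper-V, "xen") or "none" on bare metal. A small sketch with a fallback in case the tool is missing:

```shell
# "none" = bare metal; any other value names the hypervisor underneath.
virt="$(systemd-detect-virt 2>/dev/null || echo unknown)"
echo "Virtualization: ${virt}"
```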
What are paravirtualized (PV) drivers?
In the context of the LPIC-1, paravirtualization (PV) is the "middle ground" between a slow, faked system and a fast, native one.
1. The Problem: Full Virtualization
In "Full Virtualization," the VM has no idea it’s a VM. It thinks it is talking to real, physical hardware.
The issue: every time the VM wants to access its disk or send a network packet, the hypervisor has to trap that request and emulate the behavior of real hardware. This trap-and-emulate cycle creates a lot of overhead and slows things down.
2. The Solution: Paravirtualization (PV)
With Paravirtualization, the Guest OS (your VM) is "aware" that it is running on a hypervisor.
The Trick: Instead of the VM trying to "pretend" it's talking to real hardware, it uses special drivers (PV Drivers) that act like a direct "express lane" to the hypervisor.
The Result: The VM sends high-level commands directly to the hypervisor, skipping the expensive hardware emulation. It is significantly faster, especially for Disk I/O and Networking.
3. PV Drivers in the Real World: VirtIO
If you are using Virtual Machine Manager (KVM), you have likely seen the term VirtIO.
VirtIO is the industry standard for PV drivers in Linux.
When you set your VM's Network Card or Disk Bus to "VirtIO" in the settings, you are telling the VM to use Paravirtualized drivers.
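On a KVM host, those settings end up in the guest's libvirt XML definition (the output of `virsh dumpxml <guest>`). The snippet below is a hardcoded, illustrative fragment of what a VirtIO disk and NIC look like in that XML; it is not taken from a real guest:

```shell
# Sample of the XML `virsh dumpxml <guest>` prints when VirtIO is configured:
xml='<disk type="file" device="disk">
  <target dev="vda" bus="virtio"/>
</disk>
<interface type="network">
  <model type="virtio"/>
</interface>'

# Count the VirtIO devices in the snippet: disk bus + NIC model = 2
count="$(printf '%s\n' "$xml" | grep -c 'virtio')"
echo "virtio devices: $count"
```

On a real host you would pipe `virsh dumpxml <guest>` through the same `grep` to confirm the guest is configured for paravirtualized I/O.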
4. How to see if they are actually in use
Check your disk name:
Run lsblk.
If your disk appears as vda (rather than sda or hda), the guest is using the VirtIO block driver.
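The naming convention behind that check can be sketched as a simple lookup: the device-name prefix reveals which disk bus the guest kernel detected. The `disk=vda` value below is an example; on a real system you would take it from `lsblk -dn -o NAME`:

```shell
disk=vda   # example value; on a real system: disk="$(lsblk -dn -o NAME | head -1)"

# Prefix -> bus the guest kernel is talking to:
case "$disk" in
  vd*) bus="virtio (paravirtualized)" ;;
  sd*) bus="emulated SATA/SCSI" ;;
  hd*) bus="emulated IDE" ;;
  *)   bus="unknown" ;;
esac
echo "$disk -> $bus"

# The loaded kernel modules (virtio_blk, virtio_net, ...) also confirm
# that PV drivers are active:
lsmod 2>/dev/null | grep -E '^virtio' || echo "no virtio modules loaded"
```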
Summary
PV Drivers = "Aware" drivers that talk directly to the hypervisor.
VirtIO = The specific name for these drivers in KVM.
Benefit = Much better performance for disks and networks.
If you want to know more about modules, check the following entry: Modules.
To know more about Virtualization, go to Part 2.
Contact
hello@unixtips.eu
© 2025. All rights reserved.