If you’ve arrived at this blog post I’ll assume you’re familiar with what KVM is, but for the benefit of those who are unaware, or are just interested in reading more, here’s a bit of background. Kernel-based Virtual Machine (KVM) is a kernel module originally developed by an Israeli organisation called Qumranet to provide native virtualisation technology for Linux-based platforms, essentially turning the kernel into a Type-1 hypervisor. It has since been ported to multiple platforms and architectures beyond 32/64-bit x86, and was first adopted into the upstream Linux kernel in 2.6.20 (back in 2007).
Typically KVM is designed to run on top of a bare-metal Linux machine with a CPU that supports virtualisation extensions, i.e. Intel VMX or AMD SVM. This allows a physical machine to run multiple virtual machines on top (using associated components such as libvirt and qemu), but there’s a neat technology known as ‘nested KVM’: KVM support within a KVM-based guest, in other words a hypervisor running inside a guest. You may ask ‘why do we need this?’. Well, in my position I often run into situations where I have to carry out product demonstrations or debug hypervisor environments, and having another layer of virtualisation abstraction with nested KVM is great, especially when on the train or on a plane!
There are, of course, some performance penalties in doing this, but for debugging, or for spinning up a test environment with technologies such as Red Hat Enterprise Virtualisation or VMware on a single machine, it’s quite a nice solution. So let’s look at how to enable it. By default it’s usually disabled, at least on my Fedora 16 machine (replace ‘intel’ with ‘amd’ throughout if you have an AMD-based processor)…
$ cat /sys/module/kvm_intel/parameters/nested
N
To enable it, we need to make sure the architecture-specific KVM module is loaded with the nested option. There are a few ways to do this; the first is simply to update your boot loader to specify the nested option, so that it persists across reboots and kernel upgrades. Assuming you’re using Fedora with GRUB2, (as root) update the ‘/etc/default/grub‘ file and append ‘kvm-intel.nested=1‘ to the ‘GRUB_CMDLINE_LINUX‘ line. Mine is shown below for reference; remember to replace ‘intel’ with ‘amd’ if required.
# cat /etc/default/grub | grep CMDLINE
GRUB_CMDLINE_LINUX="rd.lvm.lv=vol0/swapVol rd.md=0 rd.dm=0 KEYTABLE=us quiet rd.lvm.lv=vol0/rootVol rhgb rd.luks=0 SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 kvm-intel.nested=1"
Once this is specified, you’ll need to rebuild your GRUB configuration files so that the command line arguments you just specified are loaded on the next reboot. Note that the GRUB2 configuration location may be different on a non-Fedora machine…
# grub2-mkconfig -o /boot/grub2/grub.cfg
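As an aside, recent Fedora releases ship the grubby tool, which can append a kernel argument to every installed boot entry in one step. This is just a sketch of that alternative, assuming grubby is available on your system; check its man page for your release:

```shell
#!/bin/sh
# Alternative to editing /etc/default/grub by hand: append the option
# to every installed kernel's boot entry (assumes the grubby tool is
# available, as on recent Fedora releases).
if command -v grubby >/dev/null 2>&1; then
    sudo grubby --update-kernel=ALL --args="kvm-intel.nested=1" \
        || echo "grubby failed; fall back to editing /etc/default/grub"
else
    echo "grubby not installed; edit /etc/default/grub instead"
fi
```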
Alternatively, (thanks to Dominic Cleal for the suggestion) this can be modified using the modprobe configuration files, making things slightly easier-
$ echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
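With the modprobe approach you can usually avoid a reboot entirely by reloading the module in place. This is a hedged sketch of my own: it is only safe when no virtual machines are running, and the module name here assumes an Intel host (use kvm_amd on AMD):

```shell
#!/bin/sh
# Reload kvm_intel so the new modprobe option takes effect without a
# full reboot. Only safe when no virtual machines are running.
if grep -q '^kvm_intel' /proc/modules 2>/dev/null; then
    sudo modprobe -r kvm_intel && sudo modprobe kvm_intel \
        || echo "reload failed; a reboot will also apply the option"
else
    echo "kvm_intel not currently loaded; the option applies on next load"
fi
```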
I would now recommend rebooting your machine; once it’s back up, re-run the earlier command and you should see that the change to the module has taken effect.
$ cat /sys/module/kvm_intel/parameters/nested
Y
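Since the module name differs between vendors, a small convenience script like the following (my own sketch, not required for the setup) reports the nested setting for whichever KVM module is loaded:

```shell
#!/bin/sh
# Report the 'nested' parameter for whichever KVM vendor module is
# present (kvm_intel on Intel hosts, kvm_amd on AMD hosts).
# Prints nothing if neither module is loaded.
for mod in kvm_intel kvm_amd; do
    param="/sys/module/$mod/parameters/nested"
    if [ -r "$param" ]; then
        printf '%s nested: %s\n' "$mod" "$(cat "$param")"
    fi
done
```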
And that’s it, you’ve successfully enabled nested KVM. Next, when you create new virtual machines, e.g. with virt-manager, you’ll need to ‘require’ vmx or svm to be presented to the virtual machine; that way the guests can make use of the underlying nested-KVM features that have been enabled. This can also be done by directly modifying the libvirt XML definition of a given virtual machine; an example from one of my VMs is shown below.
<cpu match='exact'>
  <model>Westmere</model>
  <feature policy='require' name='vmx'/>
</cpu>
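Once the guest is up, it’s worth confirming from inside it that the flag was actually exposed. A quick check along these lines (run inside the guest, not on the host) tells you whether the ‘require’ policy took effect:

```shell
#!/bin/sh
# Run inside the guest: the vmx (Intel) or svm (AMD) CPU flag should
# now appear in /proc/cpuinfo if the feature was passed through.
if grep -qwE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "virtualisation extensions visible in guest"
else
    echo "no vmx/svm flag found; check the libvirt CPU definition"
fi
```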
Any questions please feel free to get in touch, I’d be happy to help out.
Further reading: https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt