KVM

KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module. KVM originally supported x86 and x86_64 architectures and has been ported to S/390, PowerPC, IA-64, and ARM (since Linux 3.9).

Using KVM, one can run multiple virtual machines running unmodified GNU/Linux, Windows, or any other operating system. (See Guest Support Status for more information.) Each virtual machine has private virtualized hardware: a network card, disk, graphics card, etc.

Differences among KVM, Xen, VMware, and QEMU can be found at the KVM FAQ.

Checking support for KVM

Hardware support

KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:

$ lscpu

Your processor supports virtualization only if the output contains a Virtualization: line naming the extension (VT-x or AMD-V).
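For example, on an Intel CPU with VT-x the check can be narrowed down like this (the output shown is illustrative):

$ lscpu | grep Virtualization
Virtualization:        VT-x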

You can also run:

$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.

Kernel support

You can check if necessary modules (kvm and one of kvm_amd, kvm_intel) are available in your kernel with the following command (assuming your kernel is built with CONFIG_IKCONFIG_PROC):

$ zgrep CONFIG_KVM /proc/config.gz
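On a kernel that ships KVM as modules, the output should include lines like the following (=y instead of =m would mean built in; the exact set of CONFIG_KVM_* symbols varies by kernel version):

CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m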
Note: Arch Linux kernels provide the appropriate kernel modules to support KVM.

Loading kernel modules

You need to load the kvm module and one of kvm_intel or kvm_amd, depending on the manufacturer of the VM host's CPU. See Kernel modules#Loading and Kernel modules#Manual module handling for information about loading kernel modules.
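For example, on an Intel host (the kvm module is loaded automatically as a dependency of kvm_intel):

# modprobe kvm_intel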

If modprobing kvm_intel or kvm_amd fails but modprobing kvm succeeds (and lscpu claims that hardware acceleration is supported), check your BIOS settings. Some vendors, especially laptop vendors, disable these processor extensions by default. To determine whether there is no hardware support at all, or whether the extensions are merely disabled in the BIOS, look at the output of dmesg after the failed modprobe.
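For example, when the extensions are disabled in the BIOS, the kernel typically logs a message along these lines (exact wording varies by kernel version):

$ dmesg | grep -i kvm
kvm: disabled by bios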

Note: Newer versions of udev should load these modules automatically, so manual intervention is not required.

How to use KVM

See the main article: QEMU.
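In short, KVM acceleration is requested by passing -enable-kvm to QEMU; a minimal sketch (the disk image path is illustrative):

$ qemu-system-x86_64 -enable-kvm -m 1024 -hda <disk_image>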

Tips and tricks

Note: See QEMU#Tips and tricks and QEMU#Troubleshooting for general tips and tricks.

Nested virtualization

This article or section needs expansion.

Reason: Is it possible also with kvm_amd?

On the host, enable the nested feature for kvm_intel:

# modprobe -r kvm_intel
# modprobe kvm_intel nested=1

To make it permanent (see Kernel modules#Setting module options):

/etc/modprobe.d/modprobe.conf
options kvm_intel nested=1

Verify that the feature is activated:

$ systool -m kvm_intel -v | grep nested
    nested              = "Y"
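systool is provided by the sysfsutils package; alternatively, read the module parameter directly from sysfs:

$ cat /sys/module/kvm_intel/parameters/nested
Y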

Run the guest VM with the following command:

$ qemu-system-x86_64 -enable-kvm -cpu host

Boot the VM and check that the vmx (or, on AMD, svm) flag is present:

$ grep -E "(vmx|svm)" /proc/cpuinfo

Poor Man's Networking

This article or section is a candidate for merging with QEMU.

Notes: This section is not KVM-specific; it is generally applicable to all QEMU VMs.

Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.

The basic steps are as follows:

  • Set up an SSH server in the host OS
  • (optional) Create a designated user for the tunneling (e.g. tunneluser)
  • Install SSH in the VM
  • Set up authentication

See: SSH for the setup of SSH, especially SSH#Forwarding_Other_Ports.

When using the default user network stack, the host is reachable at address 10.0.2.2.
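For example, once the host's SSH server is running, you can verify from inside the guest that the tunnel user can log in (user name as created above):

$ ssh tunneluser@10.0.2.2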

This article or section needs language, wiki syntax or style improvements.

Reason: Usage of /etc/rc.local is discouraged. This should be a proper systemd service file.

If everything works and you can SSH into the host, simply add something like the following to your /etc/rc.local:

# Local SSH Server
echo "Starting SSH tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -R 2213:127.0.0.1:22 -f
# Random remote port (e.g. from another VM)
echo "Starting random tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -L 2345:127.0.0.1:2345 -f

In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.

This is a fairly basic strategy for networking with VMs, but it is robust and should be quite sufficient most of the time.
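As the note above says, /etc/rc.local is discouraged; on a systemd-based guest the same tunnel could be run as a service instead. A minimal sketch matching the reverse tunnel above (the unit name, user, and ports are illustrative):

/etc/systemd/system/ssh-tunnel.service
[Unit]
Description=Reverse SSH tunnel to the VM's SSH server
After=network.target

[Service]
User=vmuser
ExecStart=/usr/bin/ssh -N -R 2213:127.0.0.1:22 tunneluser@10.0.2.2
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable ssh-tunnel.service. Note that -f is dropped: systemd expects the process to stay in the foreground.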

The factual accuracy of this article or section is disputed.

Reason: Isn't this option enough? I think it should have the same effect: -redir tcp:2222:10.0.2.15:22 (it redirects port 2222 from the host to 10.0.2.15:22, where 10.0.2.15 is the guest's IP address).
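For reference, the redirect suggested above would be used like this (a sketch using QEMU's legacy user-mode networking options; the port numbers are illustrative):

$ qemu-system-x86_64 -enable-kvm -net nic -net user -redir tcp:2222::22 <disk_image>

The guest's SSH server is then reachable from the host with ssh -p 2222 localhost.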

Enabling huge pages

The factual accuracy of this article or section is disputed.

Reason: With systemd, hugetlbfs is mounted on /dev/hugepages by default, but with mode 0755 and root's uid and gid.

This article or section is a candidate for merging with QEMU.

Notes: qemu-kvm no longer exists as all of its features have been merged into qemu. After the above issue is cleared, I suggest merging this section into QEMU.

You may also want to enable hugepages to improve the performance of your virtual machine. With an up-to-date Arch Linux and a running KVM you probably already have everything you need. Check whether the directory /dev/hugepages exists; if not, create it. Next, we need the right permissions to use this directory.
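Check that the kvm group exists and that you are a member of it; this should be the case if you already have a running virtual machine:

$ getent group kvm
kvm:x:78:USERNAMES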

Add to your /etc/fstab:

hugetlbfs       /dev/hugepages  hugetlbfs       mode=1770,gid=78        0 0

Of course the gid must match that of the kvm group. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. Make sure /dev/hugepages is mounted properly:

# umount /dev/hugepages
# mount /dev/hugepages
$ mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)

Now you can calculate how many hugepages you need. Check how large your hugepages are:

$ cat /proc/meminfo | grep Hugepagesize

Normally that should be 2048 kB ≙ 2 MB. Let's say you want to run your virtual machine with 1024 MB. 1024 / 2 = 512. Add a few extra so we can round this up to 550. Now tell your machine how many hugepages you want:

# echo 550 > /proc/sys/vm/nr_hugepages

If you had enough free memory you should see:

$ cat /proc/meminfo | grep HugePages_Total
HugePages_Total:     550

If the number is smaller, close some applications or start your virtual machine with less memory (number_of_pages x 2):

$ qemu-system-x86_64 -enable-kvm -m 1024 -mem-path /dev/hugepages -hda <disk_image> [...]

Note the -mem-path parameter: it makes QEMU allocate the guest's memory from the hugetlbfs mount.

Now you can check, while your virtual machine is running, how many pages are used:

$ cat /proc/meminfo | grep HugePages
HugePages_Total:     550
HugePages_Free:       48
HugePages_Rsvd:        6
HugePages_Surp:        0

Now that everything seems to work, you can enable hugepages by default if you like. Add to your /etc/sysctl.d/40-hugepage.conf:

vm.nr_hugepages = 550
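To apply the new value immediately, without rebooting:

# sysctl -p /etc/sysctl.d/40-hugepage.conf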

See also:

  • https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
  • http://wiki.debian.org/Hugepages
  • http://www.linux-kvm.com/content/get-performance-boost-backing-your-kvm-guest-hugetlbfs

See also

  • KVM Howto: http://www.linux-kvm.org/page/HOWTO
  • KVM FAQ: http://www.linux-kvm.org/page/FAQ#General_KVM_information