.\" -*- mode: troff; coding: utf-8 -*- .\" Automatically generated by Podwrapper::Man 1.52.0 (Pod::Simple 3.43) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" \*(C` and \*(C' are quotes in nroff, nothing in troff, for use with C<>. .ie n \{\ . ds C` "" . ds C' "" 'br\} .el\{\ . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "guestfs-performance 1" .TH guestfs-performance 1 2024-01-05 libguestfs-1.52.0 "Virtualization Support" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH NAME guestfs\-performance \- engineering libguestfs for greatest performance .SH DESCRIPTION .IX Header "DESCRIPTION" This page documents how to get the greatest performance out of libguestfs, especially when you expect to use libguestfs to manipulate thousands of virtual machines or disk images. .PP Three main areas are covered. Libguestfs runs an appliance (a small Linux distribution) inside qemu/KVM. The first two areas are: minimizing the time taken to start this appliance, and the number of times the appliance has to be started. The third area is shortening the time taken for inspection of VMs. .SH "BASELINE MEASUREMENTS" .IX Header "BASELINE MEASUREMENTS" Before making changes to how you use libguestfs, take baseline measurements. .SS "Baseline: Starting the appliance" .IX Subsection "Baseline: Starting the appliance" On an unloaded machine, time how long it takes to start up the appliance: .PP .Vb 1 \& time guestfish \-a /dev/null run .Ve .PP Run this command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case. .PP \&\fISide note for developers:\fR There is a program called \&\fIboot-benchmark\fR in https://github.com/libguestfs/libguestfs\-analysis\-tools which does the same thing, but performs multiple runs and prints the mean and standard deviation. .PP \fIExplanation\fR .IX Subsection "Explanation" .PP The guestfish command above starts up the libguestfs appliance on a null disk, and then immediately shuts it down. The first time you run the command, it will create an appliance and cache it (usually under \&\fI/var/tmp/.guestfs\-*\fR). Subsequent runs should reuse the cached appliance. .PP \fIExpected results\fR .IX Subsection "Expected results" .PP You should expect to be getting times under 6 seconds. If the times you see on an unloaded machine are above this, then see the section "TROUBLESHOOTING POOR PERFORMANCE" below. 
.SS "Baseline: Performing inspection of a guest" .IX Subsection "Baseline: Performing inspection of a guest" For this test you will need an unloaded machine and at least one real guest or disk image. If you are planning to use libguestfs against only X guests (eg. X = Windows), then using an X guest here would be most appropriate. If you are planning to run libguestfs against a mix of guests, then use a mix of guests for testing here. .PP Time how long it takes to perform inspection and mount the disks of the guest. Use the first command if you will be using disk images, and the second command if you will be using libvirt. .PP .Vb 1 \& time guestfish \-\-ro \-a disk.img \-i exit \& \& time guestfish \-\-ro \-d GuestName \-i exit .Ve .PP Run the command several times in a row and discard the first few runs, so that you are measuring a typical "hot cache" case. .PP \fIExplanation\fR .IX Subsection "Explanation" .PP This command starts up the libguestfs appliance on the named disk image or libvirt guest, performs libguestfs inspection on it (see "INSPECTION" in \fBguestfs\fR\|(3)), mounts the guest’s disks, then discards all these results and shuts down. .PP The first time you run the command, it will create an appliance and cache it (usually under \fI/var/tmp/.guestfs\-*\fR). Subsequent runs should reuse the cached appliance. .PP \fIExpected results\fR .IX Subsection "Expected results" .PP You should expect times which are ≤ 5 seconds greater than measured in the first baseline test above. (For example, if the first baseline test ran in 5 seconds, then this test should run in ≤ 10 seconds). .SH "UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED" .IX Header "UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED" The first time you use libguestfs, it will build and cache an appliance. This is usually in \fI/var/tmp/.guestfs\-*\fR, unless you have set \f(CW$TMPDIR\fR or \f(CW$LIBGUESTFS_CACHEDIR\fR in which case it will be under that temporary directory. .PP For more information about how the appliance is constructed, see "SUPERMIN APPLIANCES" in \fBsupermin\fR\|(1). .PP Every time libguestfs runs it will check that no host files used by the appliance have changed. If any have, then the appliance is rebuilt. This usually happens when a package is installed or updated on the host (eg. using programs like \f(CW\*(C`yum\*(C'\fR or \f(CW\*(C`apt\-get\*(C'\fR). The reason for reconstructing the appliance is security: the new program that has been installed might contain a security fix, and so we want to include the fixed program in the appliance automatically. .PP These are the performance implications: .IP \(bu 4 The process of building (or rebuilding) the cached appliance is slow, and you can avoid this happening by using a fixed appliance (see below). .IP \(bu 4 If not using a fixed appliance, be aware that updating software on the host will cause a one time rebuild of the appliance. .IP \(bu 4 \&\fI/var/tmp\fR (or \f(CW$TMPDIR\fR, \f(CW$LIBGUESTFS_CACHEDIR\fR) should be on a fast disk, and have plenty of space for the appliance. .SH "USING A FIXED APPLIANCE" .IX Header "USING A FIXED APPLIANCE" To fully control when the appliance is built, you can build a fixed appliance. This appliance should be stored on a fast local disk. 
.SH "USING A FIXED APPLIANCE"
.IX Header "USING A FIXED APPLIANCE"
To fully control when the appliance is built, you can build a fixed
appliance.  This appliance should be stored on a fast local disk.
.PP
To build the appliance, run the command:
.PP
.Vb 1
\& libguestfs\-make\-fixed\-appliance <directory>
.Ve
.PP
replacing \f(CW\*(C`<directory>\*(C'\fR with the name of a directory
where the appliance will be stored (normally you would name a
subdirectory, for example: \fI/usr/local/lib/guestfs/appliance\fR or
\&\fI/dev/shm/appliance\fR).
.PP
Then set \f(CW$LIBGUESTFS_PATH\fR (and ensure this environment
variable is set in your libguestfs program), or modify your program
so it calls \f(CW\*(C`guestfs_set_path\*(C'\fR.  For example:
.PP
.Vb 1
\& export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
.Ve
.PP
Now you can run libguestfs programs, virt tools, guestfish etc. as
normal.  The programs will use your fixed appliance, and will never
build, rebuild, or cache their own appliance.
.PP
(For detailed information on this subject, see:
\&\fBlibguestfs\-make\-fixed\-appliance\fR\|(1)).
.SS "Performance of the fixed appliance"
.IX Subsection "Performance of the fixed appliance"
In our testing we did not find that using a fixed appliance gave any
measurable performance benefit, even when the appliance was located
in memory (ie. on \fI/dev/shm\fR).  However there are two points to
consider:
.IP 1. 4
Using a fixed appliance stops libguestfs from ever rebuilding the
appliance, meaning that libguestfs will have more predictable
start-up times.
.IP 2. 4
The appliance is loaded on demand.  A simple test such as:
.Sp
.Vb 1
\& time guestfish \-a /dev/null run
.Ve
.Sp
does not load very much of the appliance.  A real libguestfs program
using complicated API calls would demand-load a lot more of the
appliance.  Being able to store the appliance in a specified location
makes the performance more predictable.
.SH "REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED"
.IX Header "REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED"
By far the most effective, though not always the simplest, way to get
good performance is to ensure that the appliance is launched the
minimum number of times.  This will probably involve changing your
libguestfs application.
.PP
Try to call \f(CW\*(C`guestfs_launch\*(C'\fR at most once per target
virtual machine or disk image.
.PP
Instead of using a separate instance of \fBguestfish\fR\|(1) to make
a series of changes to the same guest, use a single instance of
guestfish and/or use the guestfish \fI\-\-listen\fR option (see the
sketch at the end of this section).
.PP
Consider writing your program as a daemon which holds a guest open
while making a series of changes.  Or marshal all the operations you
want to perform before opening the guest.
.PP
You can also try adding disks from multiple guests to a single
appliance.  Before trying this, note the following points:
.IP 1. 4
Adding multiple guests to one appliance is a security problem because
it may allow one guest to interfere with the disks of another guest.
Only do it if you trust all the guests, or if you can group guests by
trust.
.IP 2. 4
There is a hard limit to the number of disks you can add to a single
appliance.  Call "guestfs_max_disks" in \fBguestfs\fR\|(3) to get this
limit.  For further information see "LIMITS" in \fBguestfs\fR\|(3).
.IP 3. 4
Using libguestfs this way is complicated.  Disks can have unexpected
interactions: for example, if two guests use the same UUID for a
filesystem (because they were cloned), or have volume groups with the
same name (but see \f(CW\*(C`guestfs_lvm_set_filter\*(C'\fR).
.PP
\&\fBvirt\-df\fR\|(1) adds multiple disks by default, so the source
code for this program would be a good place to start.
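.PP
As an example of the \fI\-\-listen\fR approach mentioned above, the
following shell fragment makes several changes to a guest using a
single appliance launch.  This is only a sketch: the disk image name,
mount point and file name are illustrative.
.PP
.Vb 9
\& # Start a background guestfish instance; this sets $GUESTFISH_PID.
\& eval "$(guestfish \-\-listen)"
\&
\& # All of these commands reuse the same appliance.
\& guestfish \-\-remote add\-drive disk.img
\& guestfish \-\-remote run
\& guestfish \-\-remote mount /dev/sda1 /
\& guestfish \-\-remote touch /example\-file
\& guestfish \-\-remote exit
.Ve
.PP
See \fBguestfish\fR\|(1) for the details of the remote control
mechanism.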
.SH "SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs" .IX Header "SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs" The main advice is obvious: Do not perform inspection (which is expensive) unless you need the results. .PP If you previously performed inspection on the guest, then it may be safe to cache and reuse the results from last time. .PP Some disks don’t need to be inspected at all: for example, if you are creating a disk image, or if the disk image is not a VM, or if the disk image has a known layout. .PP Even when basic inspection (\f(CW\*(C`guestfs_inspect_os\*(C'\fR) is required, auxiliary inspection operations may be avoided: .IP \(bu 4 Mounting disks is only necessary to get further filesystem information. .IP \(bu 4 Listing applications (\f(CW\*(C`guestfs_inspect_list_applications\*(C'\fR) is an expensive operation on Linux, but almost free on Windows. .IP \(bu 4 Generating a guest icon (\f(CW\*(C`guestfs_inspect_get_icon\*(C'\fR) is cheap on Linux but expensive on Windows. .SH "PARALLEL APPLIANCES" .IX Header "PARALLEL APPLIANCES" Libguestfs appliances are mostly I/O bound and you can launch multiple appliances in parallel. Provided there is enough free memory, there should be little difference in launching 1 appliance vs N appliances in parallel. .PP On a 2\-core (4\-thread) laptop with 16 GB of RAM, using the (not especially realistic) test Perl script below, the following plot shows excellent scalability when running between 1 and 20 appliances in parallel: .PP .Vb 10 \& 12 ++\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-++ \& + + + + + + + + + + * \& | | \& | * | \& 11 ++ ++ \& | | \& | | \& | * * | \& 10 ++ ++ \& | * | \& | | \& s | | \& 9 ++ ++ \& e | | \& | * | \& c | | \& 8 ++ * ++ \& o | * | \& | | \& n 7 ++ ++ \& | * | \& d | * | \& | | \& s 6 ++ ++ \& | * * | \& | * | \& | | \& 5 ++ ++ \& | | \& | * | \& | * * | \& 4 ++ ++ \& | | \& | | \& + * * * + + + + + + + + \& 3 ++\-*\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-++ \& 0 2 4 6 8 10 12 14 16 18 20 \& number of parallel appliances .Ve .PP It is possible to run many more than 20 appliances in parallel, but if you are using the libvirt backend then you should be aware that out of the box libvirt limits the number of client connections to 20. .PP The simple Perl script below was used to collect the data for the plot above, but there is much more information on this subject, including more advanced test scripts and graphs, available in the following blog postings: .PP http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-1/ http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-2/ http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-3/ http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-4/ .PP .Vb 1 \& #!/usr/bin/env perl \& \& use strict; \& use threads; \& use warnings; \& use Sys::Guestfs; \& use Time::HiRes qw(time); \& \& sub test { \& my $g = Sys::Guestfs\->new; \& $g\->add_drive_ro ("/dev/null"); \& $g\->launch (); \& \& # You could add some work for libguestfs to do here. \& \& $g\->close (); \& } \& \& # Get everything into cache. 
\& test (); test (); test (); \& \& for my $nr_threads (1..20) { \& my $start_t = time (); \& my @threads; \& foreach (1..$nr_threads) { \& push @threads, threads\->create (\e&test) \& } \& foreach (@threads) { \& $_\->join (); \& if (my $err = $_\->error ()) { \& die "launch failed with $nr_threads threads: $err" \& } \& } \& my $end_t = time (); \& printf ("%d %.2f\en", $nr_threads, $end_t \- $start_t); \& } .Ve .SH "TROUBLESHOOTING POOR PERFORMANCE" .IX Header "TROUBLESHOOTING POOR PERFORMANCE" .SS "Ensure hardware virtualization is available" .IX Subsection "Ensure hardware virtualization is available" Use \fI/proc/cpuinfo\fR to ensure that hardware virtualization is available. Note that you may need to enable it in your BIOS. .PP Hardware virt is not usually available inside VMs, and libguestfs will run slowly inside another virtual machine whatever you do. Nested virtualization does not work well in our experience, and is certainly no substitute for running libguestfs on baremetal. .SS "Ensure KVM is available" .IX Subsection "Ensure KVM is available" Ensure that KVM is enabled and available to the user that will run libguestfs. It should be safe to set 0666 permissions on \fI/dev/kvm\fR and most distributions now do this. .SS "Processors to avoid" .IX Subsection "Processors to avoid" Avoid processors that don’t have hardware virtualization, and some processors which are simply very slow (AMD Geode being a great example). .SS "Xen dom0" .IX Subsection "Xen dom0" In Xen, dom0 is a virtual machine, and so hardware virtualization is not available. .SS "Use libguestfs ≥ 1.34 and qemu ≥ 2.7" .IX Subsection "Use libguestfs ≥ 1.34 and qemu ≥ 2.7" During the libguestfs 1.33 development cycle, we spent a large amount of time concentrating on boot performance, and added some patches to libguestfs, qemu and Linux which in some cases can reduce boot times to well under 1 second. You may therefore get much better performance by moving to the versions of libguestfs or qemu mentioned in the heading. .SH "DETAILED ANALYSIS" .IX Header "DETAILED ANALYSIS" .SS "Boot analysis" .IX Subsection "Boot analysis" In https://github.com/libguestfs/libguestfs\-analysis\-tools is a program called \f(CW\*(C`boot\-analysis\*(C'\fR. This program is able to produce a very detailed breakdown of the boot steps (eg. qemu, BIOS, kernel, libguestfs init script), and can measure how long it takes to perform each step. .SS "Detailed timings using ts" .IX Subsection "Detailed timings using ts" Use the \fBts\fR\|(1) command (from moreutils) to show detailed timings: .PP .Vb 10 \& $ guestfish \-a /dev/null run \-v |& ts \-i \*(Aq%.s\*(Aq \& 0.000022 libguestfs: launch: program=guestfish \& 0.000134 libguestfs: launch: version=1.29.31fedora=23,release=2.fc23,libvirt \& 0.000044 libguestfs: launch: backend registered: unix \& 0.000035 libguestfs: launch: backend registered: uml \& 0.000035 libguestfs: launch: backend registered: libvirt \& 0.000032 libguestfs: launch: backend registered: direct \& 0.000030 libguestfs: launch: backend=libvirt \& 0.000031 libguestfs: launch: tmpdir=/tmp/libguestfsw18rBQ \& 0.000029 libguestfs: launch: umask=0002 \& 0.000031 libguestfs: launch: euid=1000 \& 0.000030 libguestfs: libvirt version = 1002012 (1.2.12) \& [etc] .Ve .PP The timestamps are seconds (incrementally since the previous line). .SS "Detailed debugging using gdb" .IX Subsection "Detailed debugging using gdb" You can attach to the appliance BIOS/kernel using gdb. 
If you know what you're doing, this can be a useful way to diagnose
boot regressions.
.PP
Firstly, you have to change qemu so it runs with the
\f(CW\*(C`\-S\*(C'\fR and \f(CW\*(C`\-s\*(C'\fR options.  These options
cause qemu to pause at boot and allow you to attach a debugger.  Read
\fBqemu\fR\|(1) for further information.  Libguestfs invokes qemu
several times (to scan the help output and so on) and you only want
the final invocation of qemu to use these options, so use a qemu
wrapper script like this:
.PP
.Vb 12
\& #!/bin/bash \-
\&
\& # Set this to point to the real qemu binary.
\& qemu=/usr/bin/qemu\-kvm
\&
\& if [ "$1" != "\-global" ]; then
\&     # Scanning help output etc.
\&     exec $qemu "$@"
\& else
\&     # Really running qemu.
\&     exec $qemu \-S \-s "$@"
\& fi
.Ve
.PP
Now run guestfish or another libguestfs tool with the qemu wrapper
(see "QEMU WRAPPERS" in \fBguestfs\fR\|(3) to understand what this is
doing):
.PP
.Vb 1
\& LIBGUESTFS_HV=/path/to/qemu\-wrapper guestfish \-a /dev/null \-v run
.Ve
.PP
This should pause just after qemu launches.  In another window,
attach to qemu using gdb:
.PP
.Vb 7
\& $ gdb
\& (gdb) set architecture i8086
\& The target architecture is assumed to be i8086
\& (gdb) target remote :1234
\& Remote debugging using :1234
\& 0x0000fff0 in ?? ()
\& (gdb) cont
.Ve
.PP
At this point you can use standard gdb techniques, eg. hitting
\f(CW\*(C`^C\*(C'\fR to interrupt the boot and \f(CW\*(C`bt\*(C'\fR to
get a stack trace, setting breakpoints, etc.  Note that when you are
past the BIOS and into the Linux kernel, you'll want to change the
architecture back to 32 or 64 bit.
.SH "PERFORMANCE REGRESSIONS IN OTHER PROGRAMS"
.IX Header "PERFORMANCE REGRESSIONS IN OTHER PROGRAMS"
Sometimes performance regressions happen in other programs (eg. qemu,
the kernel) that cause problems for libguestfs.
.PP
In https://github.com/libguestfs/libguestfs\-analysis\-tools
\&\fIboot\-benchmark/boot\-benchmark\-range.pl\fR is a script which can
be used to benchmark libguestfs across a range of git commits in
another project to find out if any commit is causing a slowdown (or
speedup).
.PP
To find out how to use this script, consult the manual:
.PP
.Vb 1
\& ./boot\-benchmark/boot\-benchmark\-range.pl \-\-man
.Ve
.SH "SEE ALSO"
.IX Header "SEE ALSO"
\&\fBsupermin\fR\|(1),
\&\fBguestfish\fR\|(1),
\&\fBguestfs\fR\|(3),
\&\fBguestfs\-examples\fR\|(3),
\&\fBguestfs\-internals\fR\|(1),
\&\fBlibguestfs\-make\-fixed\-appliance\fR\|(1),
\&\fBstap\fR\|(1),
\&\fBqemu\fR\|(1),
\&\fBgdb\fR\|(1),
http://libguestfs.org/.
.SH AUTHORS
.IX Header "AUTHORS"
Richard W.M. Jones (\f(CW\*(C`rjones at redhat dot com\*(C'\fR)
.SH COPYRIGHT
.IX Header "COPYRIGHT"
Copyright (C) 2012\-2023 Red Hat Inc.
.SH LICENSE
.IX Header "LICENSE"
This library is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
.PP
This library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
Lesser General Public License for more details.
.PP You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110\-1301 USA .SH BUGS .IX Header "BUGS" To get a list of bugs against libguestfs, use this link: https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools .PP To report a new bug against libguestfs, use this link: https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools .PP When reporting a bug, please supply: .IP \(bu 4 The version of libguestfs. .IP \(bu 4 Where you got libguestfs (eg. which Linux distro, compiled from source, etc) .IP \(bu 4 Describe the bug accurately and give a way to reproduce it. .IP \(bu 4 Run \fBlibguestfs\-test\-tool\fR\|(1) and paste the \fBcomplete, unedited\fR output into the bug report.