Mastering KVM Virtualization

Introducing virt-manager

The virt-manager application is a Python-based desktop user interface for managing virtual machines through libvirt. It primarily targets KVM VMs, but also manages Xen and LXC (Linux containers) among others. virt-manager displays a summary view of running VMs, supplying their performance and resource utilization statistics. Using the virt-manager graphical interface, one can easily create new VMs, monitor them, and make configuration changes when required. An embedded VNC and SPICE client viewer presents a full graphical console to the VM.

As we mentioned in Chapter 3, Setting Up Standalone KVM Virtualization, virtual machines need CPU, memory, storage, and networking resources from the host. In this chapter, we will explain the basic configuration of the KVM host and how to create virtual machines using virt-manager.

Let's start the Virtual Machine Manager by executing the virt-manager command, or by pressing Alt + F2 and typing virt-manager in the run dialog that appears.

If you are not the root user, you will be prompted for the root password before continuing. Here the password authentication is handled by the polkit framework. polkit is an authorization API intended to be used by privileged programs (for example, system daemons) offering services to unprivileged programs.

If you wish to allow certain groups or users to access virt-manager without providing root credentials, a polkit rule needs to be created. The rule file has to be created in the /etc/polkit-1/rules.d directory.

For example, if you want all the users in the wheel group to have direct access to virt-manager without entering the root password, create the /etc/polkit-1/rules.d/70-libvirtd.rules file with the following content:

polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.local && subject.active &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});

Save and close the file. The polkit daemon monitors its rules.d directories for changed content and automatically reloads the rules if changes are detected, so you don't need to restart any service with systemctl. If you've done it right, users in the wheel group should now be able to launch virt-manager without entering a password. To add a user to the wheel group, run:

# usermod -a -G wheel <username>
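
Note that a change in group membership only takes effect on a new login session. A quick way to confirm that the user now belongs to the wheel group (the username below is just a placeholder):

# id <username>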

If you examine the polkit rule carefully, you will notice that it checks whether the user is in the wheel group and has a local, active session. If so, the result for the org.libvirt.unix.manage action is YES, allowing the action. The result could also be configured as one of the following (a variant using AUTH_SELF is sketched after this list):

  • NO: Reject the access request (return polkit.Result.NO;)
  • AUTH_SELF: Request the user's own password (return polkit.Result.AUTH_SELF;)
  • AUTH_ADMIN: Request the password of an administrator on the system (return polkit.Result.AUTH_ADMIN;)
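
For instance, here is a sketch of the same rule modified to ask for the user's own password instead of granting access outright; only the returned result changes:

polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.local && subject.active &&
        subject.isInGroup("wheel")) {
        // Ask the user for their own password before allowing the action
        return polkit.Result.AUTH_SELF;
    }
});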

Once virt-manager is opened, go to Edit | Connection Details to access the options to configure network and storage:

The Overview tab will give basic information on the libvirt connection URI, CPU, and memory usage pattern of the host system. Virtual Networks and Storage will present the details of the network and storage pools that can be used by the virtual machines. The Network Interfaces tab will give details of the host network and will offer options to configure them. We will cover this in more detail in Chapter 5, Network and Storage.
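
Much of the information shown on the Overview tab can also be obtained from the command line; for example, the following virsh commands report the connection URI and basic host hardware details:

# virsh uri
# virsh nodeinfo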

The Virtual Networks tab

The Virtual Networks tab allows us to configure various types of virtual network and monitor their status:

Using the Virtual Networks tab you will be able to configure the following types of virtual network:

  • NATed
  • Routed
  • Isolated

NATed virtual network

A NAT-based virtual network provides outbound network connectivity to the virtual machines. That means the VMs can communicate with the outside network based on the network connectivity available on the host but none of the outside entities will be able to communicate with the VMs. In this setup, the virtual machines and host should be able to communicate with each other through the bridge interface configured on the host.

Routed virtual network

A routed virtual network allows the connection of virtual machines directly to the physical network. Here VMs will send out packets to the outside network based on the routing rules set on the hypervisor.
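
For reference, a minimal libvirt definition for a routed network might look like the following sketch; the network name, the forwarding device eth0, and the address range are placeholders chosen for illustration:

<network>
  <name>routed-net</name>
  <forward mode='route' dev='eth0'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.2' end='192.168.200.254'/>
    </dhcp>
  </ip>
</network>

Such a definition could be loaded with virsh net-define and then activated with virsh net-start.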

Isolated virtual network

As the name implies, this provides a private network between the hypervisor and the virtual machines.
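
An isolated network is defined in the same way, except that the <forward> element is omitted entirely, so traffic never leaves the host; a minimal sketch (again with placeholder name and addresses) would be:

<network>
  <name>isolated-net</name>
  <bridge name='virbr2' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>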

We will cover each network configuration in detail in the next chapter (as well as other network implementations used in production environments) with practical examples. In this chapter, we will be concentrating on the default virtual network, which uses NAT. Once you understand how default networks work, it is very easy to understand other network topologies.

Use virsh net-list --all to list the virtual networks. The --all flag lists both active and inactive virtual networks; if it is not specified, only active virtual networks will be listed:

# virsh net-list --all
 Name      State    Autostart   Persistent
----------------------------------------------------------
 default   active   yes         yes

Default network

As mentioned earlier, the default network is a NAT-based virtual network. It allows virtual machines to communicate with the outside networks irrespective of the active network interface (Ethernet, wireless, VPN, and so on) available on the hypervisor. It also provides a private network with IP and a DHCP server so that the VMs will get their IP addresses automatically.

Check the details provided about the default network in the previous screenshot:

  • default is the Name of the virtual network. This is provided when you create a virtual network.
  • Device represents the name of the bridge created on the host. The bridge interface is the main component for creating virtual networks. We will cover bridges in greater depth in a later chapter.
  • State represents the state of the virtual network. It can be active or inactive.
  • Autostart shows whether the virtual network should be started when you activate the libvirtd service.
  • IPv4 Configuration provides the details of the private network, the DHCP range that will be provided to the VMs, and the forwarding mode. The forwarding mode can be NAT or isolated.

You can stop the default network using the red "stop sign" button and start again using the PLAY button. The + button is used for creating new virtual networks, which we will cover in the next chapter. The x button is used for deleting virtual networks.

You can see the same details using the virsh command:

# virsh net-info default
Name: default
UUID: ba551355-0556-4d32-87b4-653f4a74e09f
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr0

-----

# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>ba551355-0556-4d32-87b4-653f4a74e09f</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:d1:56:2e'/>
  <ip address='192.168.124.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.124.2' end='192.168.124.254'/>
    </dhcp>
  </ip>
</network>

Some of the basic commands that will get you started with the default network are as follows:

  • Virtual network configuration files are stored in /etc/libvirt/qemu/networks/ as XML files. For the default network it is /etc/libvirt/qemu/networks/default.xml.
  • The virsh command net-destroy will stop a virtual network, and net-start will start it again (see also the net-autostart example after this list). Do not issue these commands while virtual machines are actively using the virtual network; doing so will break network connectivity for those virtual machines.
  • # virsh net-destroy default: The default network is destroyed.
  • # virsh net-start default: The default network is started.
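
In addition to net-start and net-destroy, virsh net-autostart controls whether a virtual network is brought up automatically with the libvirtd service; for example:

# virsh net-autostart default
# virsh net-autostart default --disable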

Storage tab

This tab allows you to configure various types of storage pool and monitor their status. The following screenshot shows the Storage tab:

The Storage tab provides details of the storage pools available. A storage pool is just a store for saved virtual machine disk images.

At the time of writing, libvirt supports creating storage pools from the different types of source shown in the following screenshot; directory and LVM are the most commonly used. We will look into this in greater detail in the next chapter:

Default storage pool: default is the name of the file-based storage pool that libvirt creates to store virtual machine image files. The location of this storage pool is /var/lib/libvirt/images.
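
The same pool information is available from virsh; the following commands list the pools and show the details (including capacity and target path) of the default pool:

# virsh pool-list --all
# virsh pool-info default
# virsh pool-dumpxml default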