wiki:Internal/VMHostSetup

Updates as of 06/19/2017

We are running a four-node VM cluster with shared storage; the nodes are named orbvm1 through orbvm4. In addition, the machine nodevm1 hosts node VMs.

The OS is Ubuntu 16.04, and VMs are managed with virt-manager and virsh, which control libvirt, itself a wrapper around KVM.

Networking is handled by Open vSwitch, to avoid complications with Linux bridging.

References: https://www.netflask.net/transparent-vlan-tagging-libvirt-ovs/
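
To sanity-check a host in the current setup, you can ask libvirt and Open vSwitch for their view of the world; a minimal sketch (standard virsh/ovs-vsctl calls, run as root on one of the orbvm hosts):

virsh list --all        # domains known to libvirt on this host
ovs-vsctl show          # bridges and the ports/taps attached to them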

Building a VM host

External3 was rebuilt into a VM host.

Base OS: Ubuntu 11.04
Bridge: Open vSwitch
Emulator: KVM
Packages: openvswitch (via debs), qemu-kvm

Building the Host

It was built using the following steps:

  1. Install Ubuntu 11.04 from CD (with stock kernel): Had to switch to version 11.04 for kernel compatibility with Open vSwitch. The running kernel version is:
    root@external3:~# uname -r
    2.6.38-8-server
    root@external3:~# uname -a
    Linux external3 2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    
  2. Run kvm-ok. You might first need to run
    apt-get install cpu-checker
    
    Check for a message like this one:
    INFO: /dev/kvm does not exist
    HINT:   sudo modprobe kvm_intel
    INFO: Your CPU supports KVM extensions
    INFO: KVM (vmx) is disabled by your BIOS
    HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
          and then hard poweroff/poweron your system
    KVM acceleration can NOT be used
    
    If KVM is disabled, you will need to enter the BIOS and enable virtualization (VT).
  3. Install the openvswitch packages. Do not use the Ubuntu repositories, since they install the wrong versions of the package; instead, download the packages that match your kernel version from here. I downloaded:
    openvswitch-datapath-module-2.6.38-8-server_1.2.2.10448_amd64.deb
    openvswitch-common_1.2.2.10448_amd64.deb
    openvswitch-switch_1.2.2.10448_amd64.deb
    
    and then installed them in that order with "dpkg -i". The install will recommend a restart.
    NOTE: The package openvswitch-brcompat_1.2.2.10448_amd64.deb was left out because we are not using bridge compatibility.
  4. Once these are installed and the system freshly restarted, you can query the module.
    root@external3:~# ovs-vsctl show
    d03e1847-34f4-4129-8821-63fff3403553
    ovs_version: "1.2.2.10448"
    
    lsmod should also show the running openvswitch_mod.
  5. The readme referenced here recommends installing the UML utilities; I didn't need them, but I installed them anyway.
    apt-get install uml-utilities
    
  6. After these components were installed, I added a bridge and gave it an address (a consolidated sanity check is sketched after this list):
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 eth0
    ifconfig eth0 up
    dhclient br0
    
  7. Make the ovs-ifup and ovs-ifdown scripts as referenced here. Make sure to chmod +x them.
    /etc/ovs-ifup
    --------------------------------------------------------------------
    #!/bin/sh
    
    switch='br0'
    /sbin/ifconfig $1 0.0.0.0 up
    ovs-vsctl add-port ${switch} $1
    --------------------------------------------------------------------
    
    /etc/ovs-ifdown
    --------------------------------------------------------------------
    #!/bin/sh
    
    switch='br0'
    /sbin/ifconfig $1 0.0.0.0 down
    ovs-vsctl del-port ${switch} $1
    --------------------------------------------------------------------
    
  8. Now we're ready to install the KVM packages documented here, all except bridge-utils:
    sudo apt-get install qemu-kvm
    
  9. Next we will build a VM using the command-line tools; see "Building the client OS" below.
  10. If you get the error
    kvm - ....
    kvm: pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"
    
    Install the package kvm-pxe as referenced here.
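
A consolidated sanity check after the host build looks roughly like this (a sketch; module and interface names assume the setup above):

lsmod | grep -E 'kvm|openvswitch'   # kvm/kvm_intel and openvswitch_mod should be loaded
ovs-vsctl list-ports br0            # eth0 should be listed as a port on the bridge
ifconfig br0                        # br0 should hold the DHCP-assigned address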

Adjustments for multiple Bridges/NICs

Our current setup on external3 requires a separate bridge (vswitch) for each interface, because some VMs need to be in 10.50 (DMZ) and others need 172.16 (network), while others still need both. ovs-vsctl will happily build multiple bridges; however, a few tweaks had to be made to the host for the system to work properly. For some reason, the original /etc/network/interfaces config breaks if you use more than one bridge, even if you only run DHCP on one of them. After some experimentation, this is the working interfaces file:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo eth1 br1
iface lo inet loopback
iface eth1 inet manual
        up ifconfig eth1 up
        down ifconfig eth1 down
iface br1 inet dhcp
        up ifconfig eth0 up
        up ifconfig br0 up
        down ifconfig eth0 down
        down ifconfig br0 down

There are other working configurations, but this one is functional enough: it brings all the interfaces up and then attempts DHCP on br1. The running ovs-vsctl output should look like:

d03e1847-34f4-4129-8821-63fff3403553
    Bridge "br1"
        Port "eth1"
            Interface "eth1"
        Port "tap3"
            Interface "tap3"
        Port "tap1"
            Interface "tap1"
        Port "br1"
            Interface "br1"
                type: internal
    Bridge "br0"
        Port "eth0"
            Interface "eth0"
        Port "tap2"
            Interface "tap2"
        Port "br0"
            Interface "br0"
                type: internal
        Port "tap0"
            Interface "tap0"
    ovs_version: "1.2.2.10448"
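
For reference, the second bridge can be built with the same ovs-vsctl calls as the first; a minimal sketch (assuming eth1 is the physical interface that belongs on br1):

ovs-vsctl add-br br1
ovs-vsctl add-port br1 eth1
ifconfig eth1 up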

Note that the taps are distributed according to which vlan/subnet they are supposed to belong to. I also switched from e1000 emulation to virtio, because it is supposed to perform better. The big discovery with this setup was in how to invoke kvm. Originally I just replicated the -net… flags and started the VM. This works, but it bonds the two interfaces together: packets from one virtual interface show up on both bridges and go out of both cards. The reason is a missing vlan keyword in the -net flags. This vlan keyword has nothing to do with VLAN tagging of packets egressing the virtual interfaces; it is purely about the internal representation of the interfaces and the internal switching that qemu/kvm does. Specifying different vlan values for the different interfaces and their respective taps fixed the bonding problem, and packets were then only present on the proper bridge. This is documented here; see the attached PDF KVM-MULTI-NETWORK.pdf (since that site is somewhat flaky).

I also had to modify the scripting infrastructure to reflect this change. Instead of a single pair of ovs-ifup/ovs-ifdown scripts, there are now two pairs (ovs-ifup-br0/ovs-ifdown-br0 and ovs-ifup-br1/ovs-ifdown-br1), one for each bridge. There are also now two scripts for starting VMs, depending on whether you want one or two interfaces.

Single-interface command string
kvm -daemonize -vnc :$2 -m 2048 -smp 2 -net nic,model=virtio,macaddr=00:11:22:EE:EE:E$2 -net tap,script=/etc/ovs-ifup-br$3,downscript=/etc/ovs-ifdown-br$3 -drive file=$1

Note: the third argument specifies the bridge to join.
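
For example, to boot an existing image on VNC display :4 attached to br1 (an illustrative invocation; the script name start_vm_1nic and the image path are hypothetical):

./start_vm_1nic /root/test_vm.img 4 1    # hypothetical name; $1=image, $2=vnc display & mac digit, $3=bridge number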

Two-interface command string
kvm -daemonize -vnc :$2 -m 2048 -smp 2 -net nic,model=virtio,vlan=0,macaddr=00:11:22:EE:EE:E$2 -net tap,vlan=0,script=/etc/ovs-ifup-br0,downscript=/etc/ovs-ifdown-br0 -net nic,model=virtio,vlan=1,macaddr=00:11:22:FF:FF:F$2 -net tap,vlan=1,script=/etc/ovs-ifup-br1,downscript=/etc/ovs-ifdown-br1 -drive file=$1

Note the added vlan keywords. The ovs-ifup-brX scripts are the same as the original, except for the switch=… keyword; see the sketch below.
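
For reference, the br1 variant of the ifup script would look like this (a sketch, assuming it mirrors the original ovs-ifup with only the switch= line changed):

/etc/ovs-ifup-br1
--------------------------------------------------------------------
#!/bin/sh

switch='br1'
/sbin/ifconfig $1 0.0.0.0 up
ovs-vsctl add-port ${switch} $1
--------------------------------------------------------------------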


Building the client OS

This process is pieced together from a collection of references listed at the bottom. The crux of it is that instead of using libvirt-based tools (e.g. virsh/virt-manager), we use the qemu/kvm tools directly. A virtual machine really consists of only two components: a running process spawned via the kvm executable (an alias for qemu), and a disk image that holds the state of the VM. To get the virtual machine up we need to build the disk and then start the process. Once the process is spawned, a VNC session will begin listening on the proper port (usually 5900). You can connect a VNC client to this port and thus get "console" access to the VM.
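
For example, once the kvm process is up you can confirm the listener and attach a VNC client (any VNC client works; vncviewer is just one option):

netstat -tln | grep 5900     # the kvm process should be listening on tcp 5900 for display :0
vncviewer external3:0        # connect a vnc client; display :0 = tcp port 5900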

  1. Building the disk: We use the kvm-img tool to build the disk image. There are many formats, but we will use the qcow2 type since it supports snapshots.
    kvm-img create -f qcow2 filename size
    
  2. Spawning the process: This can be as simple as:
     kvm filename
    
    However, we'll want to add a few parameters to get the machine into a usable state.
    kvm -daemonize -vnc :0 -m 2048 -smp 2 -net nic,model=e1000,macaddr=00:11:22:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive file=/root/test_mac.img -cdrom /root/ubuntu-11.04-server-amd64.iso -boot order=dc
    
    The parameters are:
    • -vnc - which VNC display to listen on; this argument can be specified in several different ways.
    • -m - memory, in MB
    • -smp - number of CPUs
    • -net nic,model=e1000,… - specify the model and MAC of the first interface. More interfaces can be added, but the flags will differ. NOTE: the model flag is required, otherwise it defaults to a 10/100 NIC with the corresponding degradation in performance.
    • -net tap,… - specify how the other end of the NIC gets connected. In this case we use the vswitch startup scripts.
    • -drive - the disk image to attach (there are other ways to specify this, including -hda, etc.)
    • -cdrom - the ISO to use as a CD-ROM (alternatively you could use /dev/cdrom)
    • -boot order=dc - boot parameters such as boot order, splash screen, etc. If omitted, it defaults to booting from the first disk.
  3. Once this is done, you can point your VNC client (locally, or remotely if you specified the parameters correctly) at the specified port and answer the prompts to perform the installation. Remember to press F4 at the start screen and choose the minimal VM install option. You will get a machine with a "virtual" kernel.
    native@clientvm:~$ uname -r
    2.6.38-8-virtual
    
  4. After the OS is installed, it will try to reboot and fail. At this point you can "shut down" the machine by kill -9'ing the kvm process.
  5. Next "remove" the cdrom and start the vm again. It should boot appropriately. Note the missing -boot param.
    kvm -daemonize -vnc :0 -m 2048 -smp 2 -net nic,model=e1000,macaddr=00:11:22:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive file=/root/test_mac.img
    
  6. On external3 I've made two scripts to start VMs. Their contents are very simple:
    start_vm:
    kvm -daemonize -vnc :$2 -m 2048 -smp 2 -net nic,model=e1000,macaddr=00:11:22:EE:EE:E$2 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive file=$1 
    install_vm:
    kvm -daemonize -vnc :0 -m 2048 -smp 2 -net nic,model=e1000,macaddr=00:11:22:EE:EE:E0 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom $2 -boot order=dc -drive file=$1
    
    install_vm takes two parameters: the name of the disk image to install onto (the image created earlier via kvm-img) and the ISO to mount. It uses the default VNC display :0 (TCP port 5900) and mounts the ISO as a CD-ROM. The VNC display number is also the last digit of the MAC address. start_vm takes two arguments: the first is again the image name, the second is which VNC display to use (1-9); it follows the same last-digit MAC convention. A full example invocation is sketched after this list.
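
Putting steps 1-6 together, a full install-then-run cycle looks roughly like this (the image name and size and the VNC display number are illustrative):

kvm-img create -f qcow2 /root/test_vm.img 20G
./install_vm /root/test_vm.img /root/ubuntu-11.04-server-amd64.iso   # installer on vnc :0
# ...finish the install over vnc, then kill -9 the kvm process...
./start_vm /root/test_vm.img 3                                        # boot the installed VM on vnc :3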

In case you are using regular Linux bridging

In WINLAB we used simple bridging instead of Open vSwitch. Many of the steps remain the same, but there are a few adjustments, mostly to the installed packages and the ifup/ifdown scripts.

  1. The package installed was
    apt-get install bridge-utils
    
  2. /etc/network/interfaces was reconfigured to reflect the bridged interface:
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    auto eth0 
    iface eth0 inet dhcp
    
    auto  eth1
    iface eth1 inet manual
    
    auto br0
    iface br0 inet static
    	address 192.168.200.67
    	network 192.168.200.0
    	netmask 255.255.252.0
    	bridge_ports eth1
    	bridge_stp off
    	bridge_fd 0
    	bridge_maxwait 0
    
  3. The ifup and ifdown scripts were replaced with the brctl equivalents:
    /etc/ifup-br0
    --------------------------------------------------------------------
    #!/bin/sh
    
    switch='br0'
    /sbin/ifconfig $1 0.0.0.0 up
    /sbin/brctl addif ${switch} $1
    --------------------------------------------------------------------
    
    /etc/ifdown-br0
    --------------------------------------------------------------------
    #!/bin/sh
    
    switch='br0'
    /sbin/ifconfig $1 0.0.0.0 down
    /sbin/brctl delif ${switch} $1
    --------------------------------------------------------------------
    
  4. The kvm startup script was modified to use these scripts instead of their OVS counterparts:
    vmhost2:/root# more start_vm 
    kvm -daemonize -vnc :$2 -m 2048 -smp 2 -net nic,model=virtio,macaddr=00:11:22:EE:EE:E$2 -net tap,script=/etc/ifup-br0,downscript=/etc/ifdown-br0 -drive file=$1
    
    Where $1 is the image name, and $2 is both the last digit of the MAC address and the matching VNC display number. A quick verification is sketched below.
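
As with the OVS setup, you can verify the bridge after a VM starts; a short sketch (interface and address values as configured above):

brctl show br0     # should list eth1 plus a tap interface for each running VM
ifconfig br0       # should show the static 192.168.200.67 address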

References:
