Configuration
-------------
Make sure the UQT_VM_TOOLS environment variable is set to something like:
export UQT_VM_TOOLS="$HOME/bzr-pulls/ubuntu-qa-tools/vm-tools"

System specific configuration is in $HOME/.uqt-vm-tools.conf. Eg, have
something like:

# list of all active releases (included devel)
vm_release_list="dapper gutsy hardy intrepid jaunty"

# used by vm-repo
vm_repo_url="http://192.168.122.1/debs/testing"

# vm-tools specific settings
vm_path="/srv/vms/kvm"
vm_mirror="http://192.168.122.1/ubuntu"
vm_security_mirror=""		# set if you want to use a local mirror for security
vm_root_size="4096"             # 4096 is a good default
vm_swap_size="1024"             # 1024 is good with 384 RAM
vm_memory="384"                 # 384 is needed for desktops, 256 for servers
vm_ssh_key=""                   # defaults to $HOME/.ssh/id_rsa.pub
vm_connect="qemu:///system"
vm_flavor=""                    # blank for default, set to override (eg 'rt')
vm_archs="i386 amd64"           # architectures to use when using '-p PREFIX'
                                # with some commands
vm_extra_packages="screen ubuntu-desktop"       # list of packages to also
                                                # install via postinstall.sh

# vm-iso specific settings (also uses vm-tools settings)
vm_iso_ndisks="1"		# number of disks
vm_iso_vcpus="1"		# number of virtual CPUs
vm_iso_ostype="linux"
vm_iso_osvariant="ubuntuIntrepid"
vm_iso_fully_allocate="yes"	# fully allocate the disk image (usually faster)

# vm-new locale
vm_locale="en_US.UTF-8"

# vm-new keyboard layout
vm_setkeyboard="false"
#  Settings for Feisty+, and Dapper's xorg
vm_xkbmodel="pc105"
vm_xkblayout="ca"
vm_xkbvariant=""
vm_xkboptions="lv3:ralt_switch"
# Settings for Dapper's console
vm_keymap="cf"


VMBUILDER
---------
NOTE: vm-builder 0.9 (included in intrepid) does not support future Ubuntu
releases (see bug #294305). As such, to create 'karmic' on a jaunty host,
you must:

1. Update /usr/share/pyshared/VMBuilder/plugins/ubuntu/distro.py to have the
new release included in 'suites' and to default to the new release.

2. Create /usr/share/pyshared/VMBuilder/plugins/ubuntu/<release>.py. Eg:
cp /usr/share/pyshared/VMBuilder/plugins/ubuntu/jaunty.py /usr/share/pyshared/VMBuilder/plugins/ubuntu/karmic.py

Then update karmic.py to inherit from Jaunty. Eg:

  from VMBuilder.plugins.ubuntu.jaunty import Jaunty

  class Karmic(Jaunty):
      pass

3. sudo rm /usr/lib/python2.6/dist-packages/VMBuilder/plugins/ubuntu/distro.pyc

4. ln -s /usr/share/pyshared/VMBuilder/plugins/ubuntu/<release>.py /usr/lib/python2.6/dist-packages/VMBuilder/plugins/ubuntu/<release>.py

5. update debootstrap to have the new release in
/usr/share/debootstrap/scripts/. Usually this is a symlink to the previous
release.
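
The steps above can be sketched as a script. The scratch directory and the
stand-in jaunty.py below are for illustration only so the sketch is safe to
run; substitute the real /usr paths (and sudo) to do it for real:

```shell
# Sketch of steps 1-5 for a hypothetical 'karmic' plugin. A scratch
# directory stands in for the real /usr paths.
scratch=$(mktemp -d)
pyshared="$scratch/pyshared/VMBuilder/plugins/ubuntu"      # really /usr/share/pyshared/...
distpkg="$scratch/dist-packages/VMBuilder/plugins/ubuntu"  # really /usr/lib/python2.6/...
mkdir -p "$pyshared" "$distpkg"
echo 'class Jaunty: pass' > "$pyshared/jaunty.py"          # stand-in plugin

release=karmic
# step 2: create the new plugin from the previous release's
cp "$pyshared/jaunty.py" "$pyshared/$release.py"
# (step 1: also edit distro.py so 'suites' includes the new release, and
#  edit $release.py so the class inherits from Jaunty)
# step 3: remove the stale compiled copy of distro.py
rm -f "$distpkg/distro.pyc"
# step 4: symlink the new plugin into dist-packages
ln -sf "$pyshared/$release.py" "$distpkg/$release.py"
# step 5: the debootstrap script is usually a symlink to the previous release
mkdir -p "$scratch/debootstrap/scripts"                    # really /usr/share/debootstrap/scripts
touch "$scratch/debootstrap/scripts/jaunty"
ln -sf jaunty "$scratch/debootstrap/scripts/$release"
```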


Managing Virtual Machines for the Security Team
-----------------------------------------------
The security team should have at least one virtual machine per release and
one for the development release. The recommended method is to use kvm with
libvirt. Here are the packages needed (as of intrepid):

$ sudo apt-get install ubuntu-virt-mgmt ubuntu-virt-server

virsh (cli) and virt-manager (gui) are the most often used tools. See
https://help.ubuntu.com/community/KVM for more information.

kvm on 64-bit hardware will allow 32-bit OS guests, so if running a 64-bit
host OS you can have both i386 and amd64 installs. It's often useful to spend
some time getting a pristine image exactly how you want it, then clone/snapshot
off of it for development work and/or testing.

There are various ways to accomplish this. One is the newer snapshots method,
see 'Snapshots' (below) for details.

Alternatively, you can have a set of clean VMs and another set that are cloned.
Eg, you might have the following virtual machines:
clean-dapper-i386
clean-dapper-amd64
clean-gutsy-i386
clean-gutsy-amd64
clean-hardy-i386
clean-hardy-amd64
clean-intrepid-i386
clean-intrepid-amd64
clean-jaunty-i386
clean-jaunty-amd64


Then clone the above (eg with virt-clone or vm-clone (see below)) and have:
sec-dapper-i386
sec-dapper-amd64
sec-gutsy-i386
sec-gutsy-amd64
sec-hardy-i386
sec-hardy-amd64
sec-intrepid-i386
sec-intrepid-amd64
sec-jaunty-i386
sec-jaunty-amd64

The 'clean' machines should only ever be accessed for updates or fine-tuning
while the 'sec' machines can be used, updated, mangled, discarded and
recreated as needed.


Here is one way to achieve that:
. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    vm-new $i i386 clean
    vm-new $i amd64 clean
done

Be sure to run 'sudo /postinstall.sh' in all of the above by logging in with
your username and password 'ubuntu'. Get them exactly the way you want them
(eg, install ubuntu-desktop, disable tracker, disable the screensaver, etc),
and then they can be used for cloning.


If you also set up your host's resolv.conf to have:
nameserver 192.168.122.1
nameserver ...
nameserver ...

Then you will be able to ssh into the machines with:
$ ssh sec-intrepid-i386.

(notice the required '.' due to a dnsmasq bug).


If you use dnsmasq as above, you can clone these with:
. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    vm-clone clean-${i}-i386 sec-${i}-i386
    vm-clone clean-${i}-amd64 sec-${i}-amd64
done

This creates 'sec-<release>-<arch>' machines that can be updated, tested etc
and keeps a pristine copy of the virtual machines in 'clean-<release>-<arch>'.
The vm-clone command is simply a wrapper for virt-clone which also updates
the following files:
/etc/hostname
/etc/dhcp3/dhclient.conf
/etc/hosts

vm-clone is kind of brittle because it currently uses ssh commands, so if
something goes wrong, just make sure the above files get updated.
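
As a sketch of what has to happen inside the guest, the fixup is just a
search-and-replace of the old name in those three files. This is demonstrated
below on local stand-in copies (not the guest's real /etc):

```shell
# Demonstrate the rename on stand-in copies of the guest's files; in the
# real guest these are /etc/hostname, /etc/hosts and /etc/dhcp3/dhclient.conf.
old=clean-jaunty-i386
new=sec-jaunty-i386
etc=$(mktemp -d)
echo "$old" > "$etc/hostname"
printf '127.0.1.1\t%s\n' "$old" > "$etc/hosts"
printf 'send host-name "%s";\n' "$old" > "$etc/dhclient.conf"

# replace every occurrence of the old name with the new one
for f in hostname hosts dhclient.conf; do
    sed -i "s/$old/$new/g" "$etc/$f"
done
```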


Finally, if using dnsmasq as above, you can also use vm-cmd to do batch
commands for the virtual machines, like so:
$ vm-cmd -p sec "sudo apt-get update && sudo apt-get -y upgrade"

vm-cmd uses 'vm_release_list' in $HOME/.uqt-vm-tools.conf and will
ssh in to all running sec-*-* machines and perform the specified command.
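
vm-cmd's behaviour amounts to the loop below. Here SSH is set to a dry-run
echo purely for illustration (vm-cmd itself really does ssh in):

```shell
# Rough sketch of vm-cmd's loop for '-p sec'. SSH is 'echo ssh' so this
# is a dry run; the real tool connects to each running guest.
vm_release_list="dapper gutsy hardy intrepid jaunty"
vm_archs="i386 amd64"
prefix=sec
cmd="sudo apt-get update && sudo apt-get -y upgrade"
SSH="echo ssh"
targets=""
for rel in $vm_release_list; do
    for arch in $vm_archs; do
        host="$prefix-$rel-$arch."      # trailing '.' for the dnsmasq bug
        targets="$targets $host"
        $SSH "$host" "$cmd"
    done
done
```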


Other Useful Commands
---------------------
vm-start
vm-stop
vm-remove
vm-modify
vm-new
vm-repo


Snapshots
---------
The basic idea is as follows:

1. the pristine image is in <path>/disk0.pristine.qcow2
2. the libvirt XML uses the disk at <path>/disk0.qcow2
3. when using 'vm-start -s ...', <path>/disk0.qcow2 is created using qemu-img
   as a snapshot of <path>/disk0.pristine.qcow2. If <path>/disk0.qcow2 already
   exists, it is discarded
4. 'vm-stop -u ...' will commit changes to any snapshots. 'vm-stop -f ...'
   will remove any existing snapshots ('-f' uses virsh destroy, which implies
   not caring about the contents). 'vm-stop ...' shuts down the machine without
   removing or committing existing snapshots.

Typical uses:
 * snapshot and discard
   $ vm-start -s foo
   ... do your stuff ...
   $ vm-stop -f foo

 * snapshot with persistence across stops
   $ vm-start -s foo
   ... do your stuff ...
   $ vm-stop foo		# no '-f' so snapshot is not removed
   $ vm-start foo		# notice no '-s', so existing snapshot is used
   ... do more stuff ...
   $ vm-stop foo
   ... do even more stuff ...
   $ vm-stop -f foo		# done with work, discard the snapshot with '-f'

Adjusting VMs to use snapshots:
$ vm-use-snapshots <vmname>

For a new VM:
$ vm-new ...
$ vm-use-snapshots <vmname>

* IMPORTANT *
Changes made in a snapshot will be lost if you use '-f' with vm-stop or
otherwise remove the snapshots. Also, because the libvirt XML references the
snapshot name and not the pristine image, these machines cannot be started
with virsh or virt-manager until the snapshot is created (because the disk
appears to be missing).

To use snapshots with virt-manager, start the VM with a snapshot but without
the vnc viewer:
$ vm-start -v -s foo

Then access the already started VM from within virt-manager.

You can manually create the snapshot with qemu-img, like so:
$ qemu-img create -F qcow2 -b <pristine> -f qcow2 <snapshot>

And commit the changes with:
$ qemu-img commit <snapshot>
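
To confirm that a snapshot really points at the pristine image, 'qemu-img
info' reports the backing file. The scratch paths below are for illustration
(the script just reports and stops if qemu-img is not installed):

```shell
# Create a throwaway pristine image and a snapshot of it, then show the
# backing-file relationship. Needs qemu-img (from qemu-utils).
command -v qemu-img >/dev/null || { echo "qemu-img not installed"; exit 0; }
d=$(mktemp -d)
qemu-img create -f qcow2 "$d/disk0.pristine.qcow2" 100M
qemu-img create -f qcow2 -b "$d/disk0.pristine.qcow2" -F qcow2 "$d/disk0.qcow2"
qemu-img info "$d/disk0.qcow2"    # 'backing file' names the pristine image
```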

