
XenServer: HowTo Convert HVM to PV (RHEL/SuSE)

Oct 29
Author: admin | Category: Containers & Virtualization

In order to make paravirtualization work on a Linux distro, two things need to be done: first, we need to install a kernel on the guest that supports Xen; second, we need to change some parameters on the host system to let it know that we want the guest to boot in a paravirt environment. The following procedure can be applied to a fresh installation of an HVM system, but it can also be used to manually convert an already-working system to PV. To install the OS in HVM mode you will need to choose "other media", which allows XenServer to install the system from a CD-ROM.


Installing the kernel:
  • As a precondition for paravirtualization, a kernel that supports the Xen hypervisor needs to be installed on the guest. Both SLES10SP1 and RHEL5 ship Xen-enabled kernels as part of their virtualization packages, but when installing the SLES i386 version (as opposed to x86_64) you will find more than one Xen kernel. On a 32-bit guest you need to install the Xen kernel whose name ends in "pae"; if you are installing an x86_64 version there is only one Xen kernel, and that is the one you need. PAE, which stands for Physical Address Extension, basically adds the addressing support needed for a regular 32-bit guest OS to run under the 64-bit XenServer 4.0.1 hypervisor (for more info see http://en.wikipedia.org/wiki/Physical_Address_Extension). RHEL has only one Xen kernel, and that kernel is fine.
It is important to make a small note to ourselves of the kernel's name and where it is installed. For example, on RHEL the /boot directory sits on a partition separate from root, so the kernel is referenced as /vmlinuz-<version>xen; on SLES the kernel is still in /boot, but /boot lives on the root partition, so the reference is /boot/vmlinuz-<version>xen. The best way to verify this is to check the grub configuration file; we will need the name and full path of the kernel and initrd later.
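To illustrate, here is a minimal sketch of pulling those two paths out of a grub menu.lst. The sample file content is made up (the kernel version and device names are assumptions, SLES-style); on a real guest you would read /boot/grub/menu.lst itself instead of the /tmp copy:

```shell
# Hypothetical menu.lst content for illustration only; on the guest,
# read the real /boot/grub/menu.lst instead.
cat > /tmp/menu.lst <<'EOF'
title SUSE Linux Enterprise Server 10 SP1 (Xen)
    root (hd0,1)
    kernel /boot/vmlinuz-2.6.16.46-0.12-xenpae root=/dev/hda2
    initrd /boot/initrd-2.6.16.46-0.12-xenpae
EOF

# Grab the first kernel/initrd paths -- these are exactly the values
# we will later pass to pygrub via PV-bootloader-args.
KERNEL=$(awk '$1 == "kernel" {print $2; exit}' /tmp/menu.lst)
INITRD=$(awk '$1 == "initrd" {print $2; exit}' /tmp/menu.lst)
echo "kernel: $KERNEL"
echo "initrd: $INITRD"
```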

note: Other distros or releases might not have a kernel that supports the Xen hypervisor; in that case you might need to compile one yourself (haven't done it myself yet... that's probably a different howto :)

Tweaking parameters on Dom0
 

  • Once the guest is installed, we need to tweak the Xen database so it boots in paravirt mode and not in HVM. Log in to the XenServer command line (ssh or local console).
note: paravirt mode uses a customized pygrub to boot the Xen-enabled kernel; HVM uses a customized qemu to bootstrap the boot partition and proceed with a normal boot operation.

Run this command to list the installed VMs:
  1. xe vm-list

Copy the uuid of the VM that you have just installed and run the command:
  1. xe vm-param-list uuid=<vm uuid>

This command outputs all the parameters available for the VM, including the one we are about to change.
The main parameter is "HVM-boot-policy"; right now you should have HVM-boot-policy="BIOS order". We need to empty this parameter, which is how the Xen engine knows to use a bootloader and not qemu:
  1. xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""

note: by setting the parameter HVM-boot-policy back to "BIOS order" you can boot the guest OS back into HVM, which is very useful in case something doesn't work well in paravirt.

note: each parameter set with vm-param-set can be validated using the vm-param-get command, e.g.:
  1. xe vm-param-get uuid=<vm uuid> param-name=HVM-boot-policy
This returns the value of the HVM-boot-policy parameter. In general it is a good idea to validate each parameter you modify.

Now we need to tell the Xen engine which bootloader to use by setting the parameter "PV-bootloader=pygrub" (it should be empty right now) with this command:
  1. xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub

We also need to tell the bootloader which kernel/initrd to load (use the note we made earlier of the full paths to the kernel/initrd) by setting the parameter "PV-bootloader-args" with this command:
  1. xe vm-param-set uuid=<vm uuid> PV-bootloader-args="--kernel <full path to xen kernel> --ramdisk <full path to xen initrd>"

note: there are also the parameters "PV-kernel" and "PV-ramdisk", but they do not work for some reason... :(

It is also possible to add some kernel options using the "PV-args" parameter, for example:
  1. xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS"
note: this causes the kernel to use serial0 as the console, but you need to make sure you add "console" to the file /etc/securetty
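That /etc/securetty change can be sketched like this. The sketch works on a throwaway copy in /tmp so nothing real is touched; on the guest you would edit /etc/securetty itself:

```shell
# Work on a throwaway copy for illustration; on the real guest,
# edit /etc/securetty directly.
cp /etc/securetty /tmp/securetty 2>/dev/null || : > /tmp/securetty

# Append "console" only if it is not already listed, so the edit is idempotent.
grep -qx 'console' /tmp/securetty || echo 'console' >> /tmp/securetty
```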

Another parameter that needs to be tweaked before we can boot is on the virtual disk of the VM, or to be more accurate on the virtual block device (VBD) of the VM, so first we need to get the correlating uuids.

Run this command to get the list of devices attached to the VM (HD/CD):
  1. xe vm-disk-list uuid=<vm uuid>

You should see the virtual disk (tagged as VDI) of the VM there.
note: if you are not sure which HD you want to change, go back to XenCenter, switch to the Storage tab of the installed VM, change the "name" of the HD and press Apply. Run the command above again and you should see the new name there.

The parameter we want to change is not on the VDI but on the VBD of the disk. To get the correlating VBD uuid, copy the uuid (note: this is not the same uuid as the VM uuid! this is the uuid of the virtual disk/VDI) and run this command:
  1. xe vdi-param-list uuid=<virtual disk/VDI uuid>

Now copy the value of the "vbd-uuids" parameter; that is the uuid of the virtual block device correlating to the virtual disk.
run the command:
  1. xe vbd-param-list uuid=<Virtual Block Device/VBD uuid>

The parameter that needs to be changed is "bootable", and it needs to be set to true, with the command:
  1. xe vbd-param-set uuid=<Virtual Block Device/VBD uuid> bootable=true
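The whole VM → VDI → VBD chase above can be sketched as one host-side sequence. This is only an illustration, not a verified recipe: it assumes the xe CLI's --minimal flag (bare, comma-separated values), a single-disk VM whose name-label "myvm" is a placeholder, and it must be run on the XenServer host itself:

```shell
# Sketch, assuming xe CLI with --minimal output and a single-disk VM
# named "myvm" (placeholder). Run on the XenServer host.
VM_UUID=$(xe vm-list name-label=myvm params=uuid --minimal)

# The VDI uuid of the VM's disk (vm-disk-list shows the same VDI).
VDI_UUID=$(xe vbd-list vm-uuid="$VM_UUID" type=Disk params=vdi-uuid --minimal)

# The VBD that connects this VDI to the VM carries the bootable flag.
VBD_UUID=$(xe vdi-param-get uuid="$VDI_UUID" param-name=vbd-uuids --minimal)

xe vbd-param-set uuid="$VBD_UUID" bootable=true

# Verify the change, per the vm-param-get advice earlier.
xe vbd-param-get uuid="$VBD_UUID" param-name=bootable
```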

OK, a few more tweaks to make it work: we need to change a few things on the guest OS. This part is specific to SLES10SP1 because of the way it handles boot, which usually differs from one distro to another, so here is what I have found so far:
On SLES10SP1 the root device is written inside the initrd, which causes a problem because the name of the root device is different under PV. So before booting we need to set the PV-args parameter to include root=/dev/xvda2 (assuming a default configuration), which would look something like this:
  1. xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS root=/dev/xvda2"
We also need to change /etc/fstab.
RHEL doesn't have this problem because it uses LVM; logical volumes are detected automatically on the given hard drives, and LVM names don't change.
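The /etc/fstab change can be sketched as below. The sample fstab lines are made up (a default SLES-style layout on /dev/hda is an assumption), and the work is done on a copy in /tmp so you can see the effect before touching the real file:

```shell
# Hypothetical fstab fragment for illustration; on the guest,
# edit /etc/fstab itself.
cat > /tmp/fstab <<'EOF'
/dev/hda2  /      reiserfs  acl,user_xattr  1 1
/dev/hda1  swap   swap      defaults        0 0
EOF

# Under PV the IDE disk /dev/hdaN is seen as the Xen virtual disk /dev/xvdaN.
sed -i 's|/dev/hda|/dev/xvda|g' /tmp/fstab
cat /tmp/fstab
```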

Of course, now would be a good time to install the Xen tools on the guest. I haven't explored exactly what they are for (I think they are mostly for monitoring); anyway, I guess it's better with them than without.

If you would also like to enable the "Switch to X console" button in XenCenter, you will need to edit the gdm configuration. The location of the file differs between SLES and RHEL: it is /etc/opt/gnome/gdm/gdm.conf on SLES and /etc/gdm/custom.conf on RHEL. I could only make it work on RHEL (not the same version of Xvnc and differences in the parameters; I haven't sorted it out yet). Anyway, here is what you need to do to make RHEL work:
Open the file in your favorite editor (vi) and look for the [servers] section.
Under it add the line:
0=VNC
Then at the end of the file add the following lines:
[server-VNC]
name=VNC
command=/usr/bin/Xvnc -geometry 800x600 -PasswordFile /etc/vncpass BlacklistTimeout=0
flexible=true

Of course we need to make sure that the Xvnc package is installed, and finally create a password file using the command:

  1. vncpasswd /etc/vncpass

The geometry and the location of the password file are configurable, as you can see.

That should be it. Now cross your fingers and reboot the system.

 
 
Appendix:

Steps:

  1. yum install kernel-xen

    This installed: 2.6.18-194.32.1.el5xen

  2. Edited /boot/grub/menu.lst and changed my specs to match:

    title CentOS (2.6.18-194.32.1.el5xen)    
    root (hd0,0)
    kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
    initrd /initrd-2.6.18-194.32.1.el5xen.img
    

    Then I changed my xenserver parameters to match:

    xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
    xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
    xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub 
    xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true
