XenServer: HowTo Convert HVM to PV (RHEL/SuSE)
In order to make paravirtualization work on a Linux distro, two things need to be done: first, we need to install a kernel that supports Xen on the guest; second, we need to change some parameters on the host system to let it know we want the guest to boot in a paravirtualized environment. The following procedure can be applied to a fresh installation of an HVM system, but it can also be used to manually P2V an already working system. In order to install the OS in HVM mode you will need to choose "other media", which allows XenServer to install the system from a CD-ROM.
Installing the kernel:
- as a precondition for paravirtualization, a kernel that supports the Xen hypervisor needs to be installed on the guest. both SLES10SP1 and RHEL5 have Xen-enabled kernels as part of their virtualization packages, but when we install the SLES i386 version (as opposed to x86_64) we'll find more than one xen kernel. on a 32-bit guest we need the xen kernel whose name ends with "pae" (if you are installing an x86_64 version there is only one xen kernel, and that is the one we need). "PAE", which stands for Physical Address Extension, is what lets the regular 32-bit guest OS run under the 64-bit XenServer 4.0.1 hypervisor (for more info: http://en.wikipedia.org/wiki/Physical_Address_Extension). RHEL has only one xen kernel and that kernel is fine.
note: other distros or releases might not have a kernel that supports the Xen hypervisor; in that case you might need to compile one yourself (haven't done it myself yet... that is probably a different HowTo).
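As a quick sanity check on the guest, you can list which kernel flavors are installed and which one is currently running. This is only a sketch, assuming an RPM-based guest (RHEL/SLES); package names vary by distro:

```shell
# Sketch: confirm a Xen-enabled kernel is installed and/or running.
# Assumes an RPM-based guest (RHEL/SLES); package names vary by distro.
if command -v rpm >/dev/null 2>&1; then
  rpm -qa 'kernel*' | sort    # look for kernel-xen / kernel-xenpae here
fi
uname -r                      # a PV-ready kernel name ends in "xen" (or "xenpae" on 32-bit SLES)
```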
Tweaking parameters on Dom0
- once the guest is installed, we need to tweak the xen database in order to boot it in paravirt mode and not in HVM. log in to the xen server command line (ssh or local console).
run this command to list the installed VMs:
- xe vm-list
copy the uuid of the VM that you have just installed and run the command:
- xe vm-param-list uuid=<vm uuid>
this command will output all the parameters available for the VM, including the ones we are about to change.
the main parameter is "HVM-boot-policy"; right now you should have HVM-boot-policy="BIOS order". we need to set this parameter to empty, which is how the xen engine knows to use a bootloader and not qemu:
- xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
note: by setting the parameter HVM-boot-policy back to "BIOS order" you can boot the guest OS back in HVM mode, which is very useful in case something doesn't work well in paravirt.
note: each parameter that is set with vm-param-set can be validated by using the vm-param-get command, e.g.:
- xe vm-param-get uuid=<vm uuid> param-name=HVM-boot-policy
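If you want to script these lookups, a parameter's value can be pulled out of the xe output with awk. The sketch below uses a fabricated fragment of vm-param-list-style output (the real output is much longer); on a live host you would pipe the xe command itself instead:

```shell
# Hypothetical sketch: extract one parameter's value from "xe vm-param-list"
# style output. SAMPLE is fabricated; on a real host, pipe the xe command.
SAMPLE='uuid ( RO): 0e3c0bd4-aaaa-bbbb-cccc-000000000000
HVM-boot-policy ( RW): BIOS order
PV-bootloader ( RW): '

get_param() {
  # print whatever follows ": " on the line naming the given parameter
  printf '%s\n' "$SAMPLE" | awk -F': ' -v p="$1" '$0 ~ p" \\(" { print $2 }'
}

get_param HVM-boot-policy   # prints: BIOS order
```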
now we need to tell the xen engine which bootloader to use by setting the parameter "PV-bootloader" to "pygrub" (it should be empty right now) with this command:
- xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
we also need to tell the bootloader which kernel/initrd to load (use the full paths to the xen kernel and initrd that we noted earlier) by setting the parameter "PV-bootloader-args" with this command:
- xe vm-param-set uuid=<vm uuid> PV-bootloader-args="--kernel <full path to xen kernel> --ramdisk <full path to xen initrd>"
note: there are also the parameters "PV-kernel" and "PV-ramdisk", but they do not work for some reason...
it is also possible to add some kernel options using the parameter "PV-args", for example:
- xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS"
another parameter that needs to be tweaked before we can boot is on the virtual disk of the VM, or to be more accurate on the virtual block device (VBD) of the VM, so first we need to get the correlating uuids.
run this command to get the list of devices attached to the VM (HD/CD):
- xe vm-disk-list uuid=<vm uuid>
you should see the virtual disk (tagged as VDI) of the VM there.
note: in case you are not sure which one is the HD you want to change, go back to XenCenter, switch to the Storage tab of the installed VM, change the "name" of the HD and press Apply. run the command above again and you should see the new name there.
the parameter that we want to change is not on the VDI but on the VBD of the disk, so in order to get the correlating uuid for the VBD, copy the uuid (note: this is not the same uuid as the VM uuid! this is the uuid of the virtual disk/VDI) and run this command:
- xe vdi-param-list uuid=<virtual disk/VDI uuid>
now copy the "vbd-uuids" parameter's value; that is the uuid of the correlating virtual block device of the virtual disk.
run the command:
- xe vbd-param-list uuid=<Virtual Block Device/VBD uuid>
the parameter that needs to be changed is "bootable", and it needs to be set to true, with the command:
- xe vbd-param-set uuid=<Virtual Block Device/VBD uuid> bootable=true
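The whole VM to VDI to VBD lookup chain above can be scripted. In this sketch the xe command is stubbed out with a shell function returning fabricated uuids so the flow can be read and dry-run without a XenServer host; delete the stub to run it against a real host:

```shell
# Sketch of the vm -> vdi -> vbd uuid lookup chain. The xe function below
# is a stub with fabricated output; delete it to run against a real host.
xe() {
  case "$*" in
    vm-disk-list*)   echo 'uuid ( RO): vdi-5678' ;;
    vdi-param-list*) echo 'vbd-uuids ( SRO): vbd-9abc' ;;
    vbd-param-set*)  echo "would run: xe $*" ;;
  esac
}

VM_UUID=vm-1234   # placeholder: take this from "xe vm-list"
VDI_UUID=$(xe vm-disk-list uuid=$VM_UUID   | awk -F': ' '/uuid/ { print $2; exit }')
VBD_UUID=$(xe vdi-param-list uuid=$VDI_UUID | awk -F': ' '/vbd-uuids/ { print $2 }')
xe vbd-param-set uuid=$VBD_UUID bootable=true
```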
ok, a few more tweaks to make it work. we need to change a few things on the guest OS; this part is specific to SLES10SP1 because of the way it handles boot, which usually differs from one distro to another. here is what I have found so far:
on SLES10SP1 the root device name is written inside the initrd, which causes a problem because under Xen the root device has a different name. so before booting we need to set the parameter PV-args to include root=/dev/xvda2 (assuming a default configuration), so it would look something like this:
- xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS root=/dev/xvda2"
RHEL doesn't have this problem because it uses LVM; logical volumes are automatically detected on the given hard drives, and LVM names don't change.
of course, now would be a good time to install the Xen tools on the guest. I haven't explored exactly what they are for (I think they are mostly for monitoring), but I guess it's better with than without.
if you would also like to enable the "Switch to X console" button in XenCenter, you will need to edit the gdm configuration. the location of the file differs between SLES and RHEL: it is /etc/opt/gnome/gdm/gdm.conf on SLES and /etc/gdm/custom.conf on RHEL. I could only make it work on RHEL (not the same version of Xvnc and differences in the parameters; haven't sorted it out yet). anyway, here is what you need to do to make RHEL work:
open the file in your favorite editor (vi) and search for the [servers] section.
under it add the line:
0=VNC
then at the end of the file add the following lines:
[server-VNC]
name=VNC
command=/usr/bin/Xvnc -geometry 800x600 -PasswordFile /etc/vncpass -BlacklistTimeout=0
flexible=true
of course we need to make sure that the Xvnc package is installed, and finally create a password file using the command:
- vncpasswd /etc/vncpass
the geometry and the location of the password file are configurable, as you can see.
that should be it. now you should cross your fingers and reboot the system.
Steps:
- yum install kernel-xen
  This installed: 2.6.18-194.32.1.el5xen
- edited /boot/grub/menu.lst and changed my specs to match:
  title CentOS (2.6.18-194.32.1.el5xen)
          root (hd0,0)
          kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
          initrd /initrd-2.6.18-194.32.1.el5xen.img
Then I changed my xenserver parameters to match:
xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true
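The four conversion commands can be collected into one dry-run script. The xe stub below just echoes each command; delete it to execute for real, and note that the uuids are placeholders you would look up first:

```shell
# The four conversion commands as one script. The xe stub makes this a
# dry run that only prints each command; delete it to execute for real.
xe() { echo "xe $*"; }

VM_UUID="vm-uuid-here"    # placeholder: from "xe vm-list"
VBD_UUID="vbd-uuid-here"  # placeholder: from "xe vdi-param-list"
KERNEL=/vmlinuz-2.6.18-194.32.1.el5xen
INITRD=/initrd-2.6.18-194.32.1.el5xen.img

xe vm-param-set uuid=$VM_UUID HVM-boot-policy=""
xe vm-param-set uuid=$VM_UUID PV-bootloader=pygrub
xe vm-param-set uuid=$VM_UUID PV-bootloader-args="--kernel $KERNEL --ramdisk $INITRD"
xe vbd-param-set uuid=$VBD_UUID bootable=true
```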