Physical to Virtual with Proxmox VE 2.2

Bought my server a Christmas present, which finally arrived: a "new" 146GB 15K RPM SAS drive to eventually join the RAID 6 array. In the meantime, I've been wanting to convert my server into a virtual machine node for some time, and now I finally have a drive to do the conversion with. While the entire process isn't quite done yet, bits and pieces are coming together (finally), so I'm going to start documenting this before I forget. Keep in mind that I am using Debian Squeeze 6.0.6, so the commands and files may differ a bit from whatever distro you are using.

1. Making the backup

This was relatively easy. First, we make the partition on the device:

    fdisk /dev/sdb
    *n* to add new partition; I used default of full drive
    *a* to make it bootable
    *v* to verify the partition table
    *w* to write the partition table to device

Note: My device was on /dev/sdb; yours could be anywhere, so be sure to check your device before proceeding.
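
If you're not sure which disk is which, a quick sanity check before touching anything; the serial number in the example output below is made up, match whatever you see against the label on your drive:

    fdisk -l                    # list all disks and their partition tables
    ls -l /dev/disk/by-id/      # map stable model/serial IDs to sdX names
    # e.g. scsi-SSEAGATE_ST9146852SS_6XM4XXXX -> ../../sdb  (hypothetical serial)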

Then, we make a file system, mount it, and rsync our live data over:

    mkfs -t ext3 /dev/sdb1
    mkdir -p /mnt/sdb
    mount -t ext3 /dev/sdb1 /mnt/sdb
    rsync -aAXv / /mnt/sdb \
        --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/media/*,/lost+found}

Once this completes, you have a backup of your system. It is not bootable, and I chose not to make it bootable at this point, because it will need to be adjusted when we convert it to a virtual machine anyway.
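
Before moving on, it's worth a quick dry run to confirm the copy is complete. This is just a sanity check I'd suggest, not part of the original procedure:

    # Re-run rsync with -n (dry run); ideally it finds nothing left to transfer
    rsync -aAXvn / /mnt/sdb \
        --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/media/*,/lost+found}
    df -h / /mnt/sdb    # used space should be in the same ballpark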

2. Installing Proxmox VE and setting up new KVM

At this point I started to install Proxmox VE. The ISO I got was 2.2-24; by the time you read this, yours will most likely be newer, but the procedure should be relatively similar.

  1. Hook up your SpiderKVM, DRAC remote session, HP iLO, or whatever you use to get out-of-band access, as you will nuke the OS.
  2. Boot from the DVD, and you'll get a nice splash screen telling you to hit return to start the install. I made the mistake of doing exactly that, and it assigned 92GB to my pve-root partition. I have no idea how they came up with that number, but since I only have 146GB drives (6 of them in total, currently 5 in RAID 6, with the 6th acting as the backup sdb we used earlier), this is not acceptable. The correct way is to enter custom configuration parameters at that prompt before hitting return. In my case:

     linux ext4 maxroot=32 swapsize=32

     Here, linux tells the installer that what follows are OS configuration parameters (there are other nice cheat codes, such as setting the VGA mode or the memory used for installation, but I didn't use them), ext4 sets the filesystem to ext4, maxroot=32 caps the root partition at 32GB, and swapsize=32 sets the swap partition to 32GB.
  3. Follow on-screen instructions to complete the rest of the installation (WAY faster than I was expecting for an OS install).
  4. Point your browser to https://ipaddress:8006 to set up the VM, ignoring the certificate warning if you get one.
  5. I created a KVM guest sized similarly to my current usage (I'll scale it down later) and with no media attached; make sure to stop it if it tries to start itself. (This can also be done from the host's shell; see the sketch after this list.)
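
For reference, the same VM can be created from the host's shell with the qm tool. The values below (VMID 100, name, 4GB RAM, 2 cores, virtio NIC on vmbr0) are assumptions to illustrate the idea; check qm's man page on your PVE version, as option names have shifted between releases:

    # Hypothetical sizing -- adjust to taste; VMID 100 matches the paths used below
    qm create 100 --name oldserver --memory 4096 --sockets 1 --cores 2 \
        --net0 virtio,bridge=vmbr0 --ostype l26
    qm set 100 --onboot 0    # keep it from starting before the disk is ready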

3. Booting the Backup as a VM

Here's the time-consuming part -- mainly because dd is slow, and if you mess up like I did, it takes even more time to redo the process.

  1. SSH into your host node and go to the VM's image directory. In my case, this was /var/lib/vz/images/100/. Start copying the disk to a raw image:

     dd if=/dev/sdb bs=16M of=/var/lib/vz/images/100/olddisk.raw
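     # dd prints nothing while it runs; from another shell, GNU dd will
     # report its progress if you send it SIGUSR1:
     #     kill -USR1 $(pidof dd)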
    
  2. That command will not produce any output, and will take quite some time (in my case, it ran at around 60MB/s on a 146GB drive, so the better part of an hour). If your drive is bigger, or slower, it will take even longer. Once it is done, install kpartx and mount the raw image:

     modprobe dm_mod                  # make sure device-mapper is loaded
     apt-get install kpartx
     losetup /dev/loop0 /var/lib/vz/images/100/olddisk.raw
     kpartx -a -v /dev/loop0          # map the image's partitions to /dev/mapper
     mkdir -p /mnt/olddisk
     mount /dev/mapper/loop0p1 /mnt/olddisk
    
  3. Edit the networking to reflect the VM (I want to use 10.84.10.10 as its internal IP address, and NAT only specific ports to the VM). In my case, that means editing /mnt/olddisk/etc/network/interfaces as follows:

     auto lo
     iface lo inet loopback

     auto eth0
     iface eth0 inet static
         address 10.84.10.10
         netmask 255.255.255.0
         gateway 10.84.10.1
    
  4. Cleanly unmount everything, and get ready to boot into the OS using rescue mode to set up grub and the initramfs:

     umount /mnt/olddisk
     kpartx -d /dev/loop0
     losetup -d /dev/loop0
    
  5. Go back to Proxmox VE and map an ISO to your VM, preferably one matching the guest's OS. I dropped debian-live-6.0.6-amd64-rescue.iso into /var/lib/vz/template/iso/ and attached it via the UI.
  6. Start the VM and use the remote console to see what's happening. Reset the VM from that popup, since it has already blown past the boot prompt and can't boot properly, then hit F2 to get the boot menu and choose to boot from the CD/DVD. Go through the menus until you get to a command line. Chroot into your drive (in my case, chroot /target), get the UUIDs with blkid, and edit /etc/fstab to match the results you get (see the sketch after this list). When you're done with that, exit out, reinstall grub from the rescue disc menu, and go back into the console to update the initramfs and grub:

     update-initramfs -c -k 2.6.32-5-amd64
     update-grub
    
  7. Reboot, and the system should come up.
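
For step 6, the fstab edit looks roughly like this. The UUID below is made up for illustration; yours comes from blkid's output:

    # Inside the chroot: list filesystem UUIDs as the VM's kernel sees them
    blkid
    # e.g.: /dev/sda1: UUID="1a2b3c4d-0000-0000-0000-000000000000" TYPE="ext3"

    # Then make the root entry in /etc/fstab point at that UUID:
    # UUID=1a2b3c4d-0000-0000-0000-000000000000  /  ext3  errors=remount-ro  0  1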

4. Tying up the loose ends

Now this is where I am at... messing with private VLANs (still figuring this one out) and trying to get things set up so I can forward specific ports to this VM. So far, no success... Apache forwarding was relatively easy, but the other stuff (SSH, FTP, the control panel, the VirtualBox web services (a VM inside a VM, Inception style)) I haven't quite figured out yet. It's going to take a while longer to wrap my head around all of this and make sure things work.
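
For the port forwarding itself, the usual approach on the host is iptables DNAT plus masquerading. A minimal sketch, assuming the host's public interface is vmbr0, the VM sits behind an internal bridge at 10.84.10.10, and we forward SSH on an arbitrary alternate port (2222); I haven't verified this against my exact setup yet:

    # Let the kernel route between interfaces
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # Forward host port 2222 to the VM's SSH port
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 \
        -j DNAT --to-destination 10.84.10.10:22

    # Masquerade the VM's outbound traffic behind the host's address
    iptables -t nat -A POSTROUTING -s 10.84.10.0/24 -o vmbr0 -j MASQUERADE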

How is your setup done?
