So anyhow, just wanted to share a technique I used to rescue a failing machine. The machine involved was a Red Hat Enterprise Linux 4 machine that I wanted to migrate to virtualization for the simple reason that one of its drives had failed. Only 30GB of the first drive held actual data; most of the drive was empty.
So first things first, I created a blank virtual machine on the ESXi host and told VMware to create a drive big enough to hold all the data on the old RHEL4 machine. Then I attached that new virtual hard drive to a CentOS 6 virtual machine, exported it via tgtd / iSCSI, and connected to that target from the RHEL4 machine's iSCSI initiator. I'd checked /proc/partitions on the RHEL4 machine before telling the initiator to scan, so I knew the new disk when it showed up (something like /dev/sdc). On the RHEL4 machine I then dd'ed the first hundred blocks of its physical hard drive -- the MBR and partition table -- onto the iSCSI hard drive, did a 'sfdisk -R /dev/sdc' to make the kernel re-read the partition table on /dev/sdc, then copied the /boot partition (after unmounting it) byte for byte: 'dd if=/dev/sda1 of=/dev/sdc1'.
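For reference, the whole export-and-copy dance boils down to just a few commands. This is a from-memory sketch rather than a transcript: the target name and the /dev/sdb backing device are placeholders, and keep in mind that RHEL4 ships the old sfnet initiator configured through /etc/iscsi.conf, not open-iscsi's iscsiadm. On the CentOS 6 VM:
- service tgtd start
- tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-01.com.example:rescue
- tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
- tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
And on the RHEL4 machine, once the exported disk showed up as /dev/sdc:
- dd if=/dev/sda of=/dev/sdc bs=512 count=100
- sfdisk -R /dev/sdc
- umount /boot
- dd if=/dev/sda1 of=/dev/sdc1
Then I did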
- pvcreate /dev/sdc2
- vgcreate rootgroup /dev/sdc2
- lvcreate -n rootvol -L 16G rootgroup
- lvcreate -n swapvol -L 2G rootgroup
- lvcreate -n extravol -L 16G rootgroup
- vgscan
- lvscan
- mkfs -t ext3 /dev/mapper/rootgroup-rootvol
- mkswap /dev/mapper/rootgroup-swapvol
- mkfs -t ext3 /dev/mapper/rootgroup-extravol
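From there, the rest of the transfer is just copying the filesystems onto the new volumes. A minimal sketch of one way to do it, assuming the new root volume is mounted at /mnt/target (the mount point and the choice of rsync over cp -ax or dump/restore are arbitrary here):
- mkdir -p /mnt/target
- mount /dev/mapper/rootgroup-rootvol /mnt/target
- rsync -aHx / /mnt/target/
- umount /mnt/target
Repeat for extravol, and if the volume group or logical volume names differ from the originals, the copied /etc/fstab and the initrd will need touching up before the new virtual machine will boot.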
So how did this data transfer perform? Well, basically at the full speed of the source hard drive, which was a 500GB IDE hard drive.
Anyhow, having used the Linux iSCSI target daemon, tgtd, both here and extensively on other projects, let me just say that it sucks big-time compared to "real" targets. How does it suck? Let me count the ways:
- Online storage management simply doesn't exist with tgtd. You can't do *anything* to manage an iSCSI target that someone's already connected to; you can't even stop tgtd!
- For that matter, storage management, period, doesn't exist with tgtd. For example, you can't grow a target once you've created it by adding more backing store to an already-running iSCSI target; the target simply is what it is.
- tgtd gets into regular fights with the Linux kernel about who owns the block devices it's trying to export, which makes it basically useless for exporting block devices. If there's an md array or an LVM volume set on a block device, the Linux kernel will claim it long before tgtd gets ownership of it. Thing is, you don't have any control over what the initiator puts onto a block device, so you're kind of stuck: you have to manually stop the target, deactivate the RAID array and / or volume group, then manually start the target again to regain control of the physical device and export it (see the sketch after this list).
- tgtd has the most obscure failure mode I've ever encountered: if it can't do something it will still happily export the volume, just as a 0-length volume. WTF?!
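To make that ownership fight concrete, the manual dance looks roughly like this -- the md device and volume group names are placeholders, and per the first gripe above this only works if no initiator is connected yet:
- service tgtd stop
- mdadm --stop /dev/md0
- vgchange -an somegroup
- service tgtd start
- tgt-admin --execute
That last command re-creates the targets defined in /etc/tgt/targets.conf, and you get to repeat the whole dance any time the kernel claims the device out from under tgtd again.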
-ELG