Virtualization is something I am obsessed with, and we started using VMware ESXi 4 at work today. While reading up on ESXi 4, specifically how to create linked clones, I came across someone who said they were running ESXi in a virtual machine under VMware Workstation; as if! This caught my attention, and minutes later (after having spent countless hours over the last year or so trying to get ESXi 3.5 and 4.0 working on unsupported hardware at home) I had ESXi 4.0 running!
I tried to get it to talk to the iSCSI target on my local OpenFiler NAS, but that didn't work for some reason, so I used NFS instead. I am installing Ubuntu 8.10, but it's awfully slow, and this on an Intel Core i7 920 with 9GB of RAM, 4GB of which is allocated to the ESXi VM. I didn't expect much, but I didn't expect this. (Note: this was easily fixed; see my comment below.)
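For the record, an NFS datastore can also be added from the ESXi console (Tech Support Mode) with esxcfg-nas; a rough sketch, where the host address, export path, and datastore label are placeholders for my OpenFiler box, not the actual values:

```shell
# List any NAS datastores already mounted
esxcfg-nas -l

# Mount an NFS export as a datastore (hypothetical host/share/label)
esxcfg-nas -a -o 192.168.1.50 -s /mnt/vg0/nfs/share nfs-store
```

The same mount can of course be made through the vSphere Client under Configuration > Storage > Add Storage.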
The only caveat in getting it running was making sure that I selected “Other Linux 64-bit” for the ESXi VM, and editing its .vmx file to add the lines:
monitor.virtual_exec = "hardware"
monitor_control.restrict_backdoor = "true"
Without these, ESXi will not let you power on any VMs. It's also important that the virtual NIC be an e1000, likely because that's one of the few NICs ESX supports, but that was what I got by default with the guest OS setting above.
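In case Workstation doesn't pick the e1000 automatically, it can be forced in the same .vmx file (a sketch; ethernet0 assumes the first virtual NIC):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
```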
My intent, as mentioned earlier, is to play with linked clones. We have a project to create a set of semi-public workstations, and I want them to be thin clients (e.g. Thinstation) that connect to a set of VMs running XP, such that logging off or disconnecting reverts each VM to a snapshot. Straightforward enough, but a PITA for maintenance if each VM requires its own large, redundant disk image. Linking each VM to a single base image will use minimal disk space; plus, when we update that common disk image with new anti-virus definitions, Windows updates, etc., rolling out the changes will be as easy as replacing the linked disk. A discussion of how this will work is here, at the bottom of the discussion.
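The linked-clone trick on ESXi boils down to delta disks: snapshot the master so its base .vmdk stops changing, then give each clone a sparse delta disk whose descriptor points back at the shared parent. A sketch of what such a clone's descriptor might look like; every name, the extent size, and the CID values here are hypothetical, and in practice parentCID must match the parent disk's actual CID:

```
# clone1.vmdk (hypothetical) - delta descriptor referencing a shared base disk
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfsSparse"
parentFileNameHint="/vmfs/volumes/datastore1/master/master.vmdk"

# Extent description: size in 512-byte sectors must match the parent disk
RW 16777216 VMFSSPARSE "clone1-delta.vmdk"
```

Every clone then writes only its own changes into its delta file while reading unchanged blocks from the one shared parent, which is what keeps the per-VM disk footprint small.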