Tag Archive: ESX


That’s better!

After discovering last night that my newly acquired Broadcom 57810A Dual Port 10Gb PCIE Copper (RJ45) Ethernet NICs didn’t fit my Supermicro AS-2022TG-HIBQRF, I was pleased to find they do fit my Supermicro AS-1042G-LTF.

And best of all, absolutely no setup/configuration was necessary. I simply powered down the host, swapped the NIC and powered back up. The ports were automatically assigned the same as those on the NIC I removed.



Following on from: https://tickett.wordpress.com/2014/11/24/building-hosting-environment-part-1-hardware/

  • Configure IPMI (either use a static IP or set up a static DHCP lease)
  • Tweak the BIOS (ensure options are optimised for performance rather than to minimise noise, etc.)
  • Add DNS* entries for your IPMI and ESXi management interfaces
  • Install ESXi (I did everything without needing to plug in a monitor/keyboard; IPMI is a lifesaver)
  • Configure your management interfaces (use the IP addresses you previously configured in DNS, and the domain name you previously selected)

Now you can log in with the vSphere client and configure a few more items:

  • NTP (on the Configuration tab under Software, Time Configuration)
  • Add your datastore (I'm using NFS, so I had to add a VMkernel interface first)
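For reference, both of those steps can also be scripted from the ESXi shell. This is a sketch only; the port group name, IP addresses and share path below are illustrative, not from my setup:

```shell
# Create a port group and VMkernel interface for NFS traffic (illustrative names/IPs)
esxcfg-vswitch -A "NFS-VMkernel" vSwitch0
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "NFS-VMkernel"
# Mount the NFS export as a datastore
esxcli storage nfs add --host=192.168.10.20 --share=/volume1/vmstore --volume-name=vmstore
# Point NTP at a pool server and restart the daemon
echo "server 0.pool.ntp.org" >> /etc/ntp.conf
/etc/init.d/ntpd restart
```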

Until we have our vCenter server up and running we will stick to a single NIC.

*If you don’t yet have a device which provides DNS (e.g. your router), you can add entries to your hosts file for now.
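For example, hosts file entries along these lines (names and addresses purely illustrative) will tide you over until DNS is in place:

```
192.168.1.50   esx1-ipmi.ad.example.com   esx1-ipmi
192.168.1.51   esx1.ad.example.com        esx1
```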

*Choosing a domain name: I’ve always gone with something.local or something.home in the past, but suffered as a result. I did a little research and found some articles suggesting best practice is to use a subdomain of an internet-facing domain you own: http://www.mdmarra.com/2012/11/why-you-shouldnt-use-local-in-your.html. So, say you own microsoft.com, your internal domain name might be ad.microsoft.com. You can configure the NetBIOS name to be whatever you like; this will be used when you log on using NETBIOS\User rather than user@ad.microsoft.com.

New Lab / NAS

Far too long since the last post. Let’s hope this will be the start of them picking back up again!

I have been experiencing some performance issues and need a bit of a re-shuffle of the servers/network (my vCenter appliance has stopped working, SQL is being slow, etc.). I have some production stuff running and don’t want to take everything offline for long, so I decided to build a new environment and then migrate things across.

I won’t be changing much:

Old NAS; Synology DiskStation 1812+ w/
-4x 3TB WD Green in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-2x 256GB OCZ Vertex4 SSD in RAID0 : Virtual machine storage (NFS)
-2x1gbit LACP to switch
Old ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 16GB RAM running VMWare ESXi v5.1
Old switch; Linksys SRW2024W

New NAS; Synology DiskStation 1813+ w/
-3x 4TB WD Red in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-3/4?x 250GB Samsung EVO 840 SSD in RAID0? : Virtual machine storage (NFS/iSCSI?)
-3x1gbit LACP to switch dedicated to main data store
-1gbit to switch dedicated to VM storage
New ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 32GB RAM running VMWare ESXi v5.5
New switch; Cisco SG200-26 (separate VM storage traffic on its own VLAN/subnet)

You’ll notice a bunch of question marks around the new virtual machine storage volume. I’m currently debating which disk configuration and which storage protocol to use. I’ve always used NFS as it seems much simpler, but understood iSCSI to be the better option (especially with the Synology supporting VAAI hardware acceleration). Despite this, I’ve been reading that NFS seems to outperform iSCSI.

Additionally, if I go iSCSI I will try using 2x 1Gbit ports and enabling multipathing/round-robin. If I go down the NFS route I don’t think LACP will provide any benefit, as the IP hash from a single ESXi host to the single DiskStation will always use the same link.
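If iSCSI does win out, the round-robin path policy can be applied per LUN from the ESXi shell. A sketch (the naa identifier below is a placeholder for the Synology LUN’s actual device ID):

```shell
# Find the Synology LUN's naa identifier
esxcli storage nmp device list
# Set the path selection policy to round-robin (placeholder device ID)
esxcli storage nmp device set --device naa.60014050000000000000000000000000 --psp VMW_PSP_RR
# Optionally switch paths every IO rather than the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.60014050000000000000000000000000 --type=iops --iops=1
```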

I have four of the EVO SSDs, so am initially creating a 2-disk RAID0 volume using NFS and an identical volume using iSCSI. I can then run some like-for-like comparisons/benchmarks to determine which configuration to use going forward.
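For a crude first pass (ahead of anything more rigorous), a sequential dd run from inside a Linux guest on each datastore gives a quick like-for-like number. The 64MB size here is just for illustration; in practice the file should be several times larger than any cache in the path:

```shell
# Write test: force data to disk before dd reports a rate
dd if=/dev/zero of=./ddtest.bin bs=1M count=64 conv=fdatasync
# Read test
dd if=./ddtest.bin of=/dev/null bs=1M
# Tidy up
rm ./ddtest.bin
```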

I will provide an update shortly.

I powered up a new ESX host and enabled passthrough. This got me a bit further:

I thought it had hung here but patience paid off and the installer booted:

Damn- the keyboard/mouse didn’t work (the pointer was moving around but I couldn’t click anything). Again, patience paid off and eventually it automatically moved on to the next step:

No target disks were listed when I chose to "Reinstall OS X". I went in to Disk Utility to investigate:

Damn. I tried using both IDE and SCSI disks in ESX but neither appeared.

No doubt I’ll try some other ideas at a later date. I have a feeling I need to replace a .kext file on the install disc.

L

No success unfortunately- although I’m not overly fussed about getting it to work- just fancied a try.

ESX 5:

VirtualBox:

Again, getting stuck on "Still waiting for root device".

This happened both when mounting the ISO and when trying the client DVD drive. I can’t take ESX offline at the moment to reboot and enable USB passthrough.

If I get an opportunity to bring my other ESX host up sometime I’ll give it another go.

L

Servers Re-Racked

After running power out to the garage it was time to move the servers out and re-rack them. It all went pretty well…

Here’s the mess at the back of the rack before I stripped it out:

Once all the cabling was stripped out:

Testing the 2U cable dump panel for the power cables:

And covered up:

Seems to work nicely but I only have one at the moment and I need that for the network cables:

Power cables re-run using standard 1U cable management bar:

Network leads patched in (2 for each ESXi server, 1 for the WHS 2011 server, 2 for IPMI and 1 running back to the house- hopefully to be replaced shortly by a 2-fibre LAG):

And covered up:

Front of the rack (bit of a rubbish photo, but you can see none of the equipment has any front connections, also quite a lot of redundant gear):

L

Mac OS X Lion on ESXi 5

After countless failed attempts I’ve finally managed to get Mac OS X Lion running in ESXi 5.

I used Donk’s ESXi 5 Mac OS X Unlocker: http://www.insanelymac.com/forum/index.php?s=&showtopic=267296&view=findpost&p=1745191

Unfortunately, when you try to boot from the Lion installation DVD the virtual machine hangs on the Apple logo. Pressing F8, or configuring the VM to force entry into the BIOS on next boot, allows you to select the EFI Internal Shell.

And boot verbosely by issuing the command: boot -v

We now see the boot is hanging at the PCI configuration stage:

Issuing the command: boot -v npci=0x2000

Allows us to get past the PCI configuration step but now hangs looking for the installation media (still waiting for root device):

This appears to be because the IDE controller is unsupported. Attaching an external USB DVD-ROM and enabling pass-through:

Then attaching the USB controller to the VM:

And voila- the Lion installation begins!

The next obstacle came when trying to select the target disk for the installation. The virtual disk wasn’t listed and attempting to partition/format the disk resulted in an error: Unable to write to the last block of the device

I found a few suggestions: https://discussions.apple.com/thread/3226425?start=0&tstart=0

Launching Terminal and issuing: diskutil list
Allowed me to identify the disk: /dev/disk0
And issuing: diskutil eraseVolume jhfs+ "OSX" /dev/disk0
Which also failed, but back in Disk Utility I was now able to partition & format the disk ready for the installation!

Everything went smoothly from here. Installing VMware Tools v5.0.0: http://www.insanelymac.com/forum/index.php?showtopic=267339 also went without a hitch, but after rebooting I couldn’t log in (the password seemed to be rejected). I rebooted the VM in safe mode (hold the shift key during boot) and was able to log in.

Performance is pretty poor (I think this is due to the lack of graphics acceleration), so I went straight in and enabled remote management so I could use Apple Remote Desktop to administer the server.
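As an aside, Remote Management can also be enabled from the Terminal with Apple’s kickstart tool; the path below is the standard one on 10.7, and the flags shown are a sketch to be run as root:

```shell
# Enable ARD access for all users with full privileges (run with sudo)
/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
  -activate -configure -access -on -privs -all -restart -agent
```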

Good luck!

L

*EDIT* One important thing to note: the VM cannot be powered on from vCenter (error: "Failed to find a host for powering on the virtual machine. The following faults explain why the registered host is not compatible. The guest operating system ‘darwin10_64guest’ is not supported"). Simply logging directly into the host allows you to power on the VM.

I felt this deserved a separate post, as the issue could have driven me crazy or made me return the motherboard in error!

After building the first server for my new virtual lab I installed ESXi and started to deploy my first VM. Everything was going nicely until vSphere lost the connection to the server. I used the IPMI remote control to take a look at the console and apparently both network interfaces were disconnected. Maybe something went wrong somewhere…

Reboot… same problem: after 5 minutes or so the network connections drop.
Reinstall ESXi… same problem.
Re-crimp a couple of new network cables… same problem.

Finally I found this article, which correctly identifies Active State Power Management as the cause! Pop into the BIOS:

Advanced, Advanced Chipset Control and disable Active State Power Management

Voila! Everything’s now chugging along nicely (fingers crossed)

L

New Virtual Lab – Part 2

…continued from https://tickett.wordpress.com/2011/08/24/new-virtual-lab-part-1/

So- in came the first set of bits for new server #1 and I began piecing it together…

Issue #1- The motherboard doesn’t sit quite right on the spacers/chassis screws (because of the part of the CPU cooler which sits on the underside of the motherboard). Not really a problem; I just added a few washers (I suspect I might have found some slightly larger spacers too, if I’d looked hard enough).

Issue #2- My USB pen drive didn’t fit in the internal slot with the chassis together. Not to worry- I simply attached a header to the spare pins and plugged the USB stick into one of those ports, still inside the case.

Issue #3- When I powered up the machine it was pretty loud. I checked, and believe this to be because the Akasa cooler (AK-CCE-7107BS) only has a 3-pin header, so doesn’t support pulse-width modulation (PWM) and effectively runs at full speed all of the time! Fortunately the other cooler (Gelid Slim Silence iPlus Low Profile Intel CPU Cooler) had the correct 4-pin connector and, when hooked up, supported PWM and ran nice and quiet!

Issue #4- I intend to run the server “headless”, so one of the great features of the X8SIL-F motherboard is the on-board IPMI. Unfortunately, when I tried to connect with the default username/password “ADMIN”/“ADMIN”, access was denied. I downloaded a copy of the latest firmware from the Supermicro site and flashed it using:

dpupdate.exe -f SMT_SX_250.bin -r n

The -r n parameter tells it to reset to factory settings. Voila- I could now log in via the web interface or the Windows IPMI tool using the default credentials (“ADMIN”/“ADMIN”).

ESXi 4.1 installed like a charm, but I’ve had a little trouble trying to deploy my first virtual machine (the ESXi management network losing connectivity and/or the hypervisor crashing). I think this might be because I’m using old, knackered network cables! I will replace them and hopefully update tomorrow in Part 3.

The second of these servers is en route and should hopefully be delivered before the weekend.

My next question is what virtual machines should I configure?

I definitely need a vCenter server.
I definitely need at least one SQL server (possibly 2 or 3, as I’d like to experiment with replication).
I definitely need at least one web server (IIS).
I definitely want to get trixbox back up and running.
I am considering experimenting with pfSense, or possibly Untangle?
I also need a local DNS server, but think that might best sit on physical hardware, or I’ll get problems with the hypervisor being unable to query DNS before the VM has started.
My fileserver currently runs WHS 2011, so I would also like a WHS 2011 VM to test the “Drive Extender” replacements on (however, I realise I can’t really test performance here, so might have to give that a miss).
Also, I think that OS X Server might run on ESXi, and I’d quite like to have a proper (non-hacked) Time Machine backup store configured, so this might be the right route to go down…

L

Since I moved (about a year ago) I have been without a virtual lab- the old lab ran on 2x Dell PowerEdge 2950 III and 1x HP ProLiant DL385 G2, noisy and power-hungry! My intention was to run power to the garage so I could bring the lab back to life, but rising electricity costs and some interesting posts on Jason’s blog persuaded me to order some new, smaller, quieter and more green/efficient hardware:

£163 – Supermicro X8SIL-F (the motherboard Jason recommended and many seem to have reported success with)
£255 – Intel Xeon X3470 Quad-Core 2.93Ghz (Socket 1156)
£026 – Akasa Intel 1U 1156 CPU Cooler (AK-CCE-7107BS)
£123 – 4x 4GB Kingston 1333Mhz ECC (KVR1333D3E9S/4G or KVR1333D3E9SK2/8G)

I already had some spare 1U Supermicro chassis with power supplies (512L-200B) and USB pen drives lying around for the hypervisors.

I initially ordered one set of the above-mentioned kit from LambdaTek (I intended to get comfortable with it before ordering more). Unfortunately the motherboard was on back order, so I decided to look around for another supplier. I couldn’t find anywhere cheaper or with stock, so I picked two up from an international seller on eBay. While I was at it I grabbed another X3470 from Amazon (£260), another 16GB of RAM from Crescent Electronics (£107) and another cooler (this time the Gelid Slim Silence iPlus Low Profile Intel CPU Cooler from Cool&Quiet @ £20).

Part 2 to follow! Just a quick note about the Akasa cooler- I’ve deliberately put a line through it because you should NOT order it (it’s no good- I will get onto that in Part 2).

L
