Tag Archive: Supermicro

Further to https://tickett.wordpress.com/2014/12/04/doh-it-doesnt-fit/

I was unable to find a replacement heatsink which would allow the Broadcom NIC to fit. I did, however, manage to replace the standard heatsink screws with these (12mm M3). I bought a pack of 100 from RS for a mere £1.52 (RS stock no. 553-403);


You can see how much smaller they are than the stock screws;


Here it is fitted;


And the card fit;


Note that I added a bit of electrical tape to the end of the NIC to make 100% sure none of the components will short, and slotted a small sheet of paper between the card and the motherboard.

That’s better!

After discovering last night that my newly acquired Broadcom 57810A Dual Port 10Gb PCIE Copper (RJ45) Ethernet NICs didn’t fit my Supermicro AS-2022TG-HIBQRF, I was pleased to find they do fit my Supermicro AS-1042G-LTF.

And best of all, absolutely no setup/configuration was necessary. I simply powered down the host, replaced the NIC and powered back up. The ports were automatically assigned the same as those of the NIC I removed.

Screen Shot 2014-12-05 at 09.08.19

Doh! It doesn’t fit!

Looking for a cheaper alternative to the Intel X540-T2 (Dual Port 10Gb PCIE Copper (RJ45) Ethernet NIC) I purchased a Broadcom 57810A (Dual Port 10Gb PCIE Copper (RJ45) Ethernet NIC).

I eagerly opened up my server (Supermicro AS-2022TG-HIBQRF) but it doesn’t quite fit!

Screen Shot 2014-12-04 at 22.14.11

You can see the plastic RAM shroud and the CPU heatsink both just touching, but unfortunately the real problem is the heatsink screw (just below the back end of the card);

Screen Shot 2014-12-04 at 22.14.36

Very frustrating! If the screw sat where the 20-pin connector/holes are, I’d consider getting out the Dremel. You can clearly see some chips where I’d need to cut though!

I have had a look and can’t find any alternative heatsinks that might free up the space either!

Guess I’ll have to stick with the Intel X540-T2 for now (admittedly I don’t even know whether the Broadcom 57810A is compatible with the server and ESX yet), although I will be sure to try it in my Supermicro AS-1042G-LTF later (watch this space).

Here’s the X540-T2;

Screen Shot 2014-12-04 at 22.23.05

Following on from: https://tickett.wordpress.com/2014/11/24/building-hosting-environment-part-1-hardware/

  • Configure IPMI (either use a static IP or setup a static DHCP lease)
  • Tweak the bios (ensure options are optimised for performance rather than to minimise noise etc)
  • Add DNS* entries for your IPMI and ESX Management Interfaces
  • Install ESXi (I did everything without even plugging in a monitor or keyboard; IPMI is a lifesaver)
  • Configure your management interfaces (use the IP addresses you previously configured in DNS, and the domain name you previously selected)

Now you can login with the vSphere client and configure a few more items;

  • NTP (on the Configuration tab under Software, Time Configuration)
  • Add your datastore (I’m using NFS, so I had to add a VMKernel interface first)
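As a rough sketch, the VMKernel/NFS datastore step above can also be done from the ESXi shell rather than the vSphere client (ESXi 5.x syntax; the portgroup name, IP addresses and NFS share path here are placeholders, not my actual values):

```
# Add a portgroup + VMkernel interface for NFS traffic on the existing vSwitch0
esxcfg-vswitch -A "NFS Network" vSwitch0
esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 "NFS Network"

# Mount the NFS export from the NAS as a datastore
esxcli storage nfs add --host=10.0.0.10 --share=/volume1/datastore1 --volume-name=nfs-datastore1
```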

Until we have our vCenter server up and running we will stick to a single NIC.

*If you don’t yet have a device which provides DNS (router), you can add entries to your hosts file for now.
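For example, the hosts file entries (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts elsewhere) might look something like this; the names and addresses are purely illustrative, using an example.com placeholder domain rather than my real one:

```
10.0.0.20   esx1-ipmi.ad.example.com   esx1-ipmi
10.0.0.21   esx1.ad.example.com        esx1
```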

*Choosing a domain name: I’ve always gone with something.local or something.home in the past, but suffered as a result. I did a little research and found some articles suggesting best practice is to use a subdomain of an internet-facing domain you own: http://www.mdmarra.com/2012/11/why-you-shouldnt-use-local-in-your.html. So, say you own microsoft.com; your internal domain name might be ad.microsoft.com. You can configure the NETBIOS name to be whatever you like; this will be used when you log on using NETBIOS\User rather than user@ad.microsoft.com.

I hope to be installing some equipment in a local datacenter to offer some hosting services. First item, the hardware;

  • Ubiquiti Edgerouter Lite
  • Dell 8024 (24x 10GbE Switch)
  • Synology RS3614RPXS NAS (6x WD RED 3TB + 2x Samsung EVO 840 1TB + Intel X540-T2 10GbE NIC)
  • 2x Supermicro AS-2022TG-HIBQRF (each w/ four nodes w/ 64GB RAM & 2x Opteron 6176 + Intel X540-T2 10GbE NIC)

Initially I went for the Netgear ProSafe XS708E (8x 10GbE switch) paired with a Dell 24x 1GbE switch, but quickly found myself running out of 10GbE ports and was concerned about the lack of redundant power supplies.

Likewise, I had originally chosen the RS3614XS but felt the additional cost of the RP model (with redundant power supplies) was justified.

And finally the servers themselves: initially the Supermicro AS-1042G-LTF (single node with four sockets and a single power supply), but I then switched to the AS-2022TG-HIBQRF (four nodes, each with two sockets and shared redundant power supplies).

I’ve tried to avoid single points of failure at the component level (redundant power supplies etc.) but, without going overboard, couldn’t avoid them at the device level (redundant switches, NAS etc.).

Supplier-wise, I got the switch from http://www.etb-tech.com/ and the NAS from http://elow.co.uk/ (admittedly I had my doubts about both when first placing the orders, as the prices seemed a little cheap, but the service was incredible; both dispatched same day using next-day couriers). The rest came from eBuyer and local suppliers.

Each device is connected to the switch using 2x10GbE LAG/LACP ports (I may go more into the configuration of this later).
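I haven’t shown my exact switch config, but for reference a two-port LACP channel on a PowerConnect-style Dell CLI looks roughly like this; the port identifiers and group number are assumptions for illustration, so check the 8024 CLI reference, as the exact syntax varies between models and firmware releases:

```
console# configure
console(config)# interface range ethernet 1/xg1,1/xg2
console(config-if)# channel-group 1 mode auto
console(config-if)# exit
```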

It has been far too long since I blogged. Work has been keeping me busy 7 days a week but here’s a quick post about a recent challenge.

I have been working on several data migration projects and have been struggling in general with sluggish SQL query performance. Most of the research I conducted suggested the primary bottleneck is disk IO (rather than throughput or CPU speed), so I decided to try to find the fastest "reasonably priced" SSD to give me a boost. I found this list http://www.storagesearch.com/ssd-fastest.html and, after researching each of the items (for availability and cost), I chose the OCZ Revodrive3 X2 PCI-e SSD.

I ordered the SSD and installed it in one of my spare servers (Supermicro X8SIL-F, Intel X3470, 16GB Kingston RAM). Despite not being listed on the OCZ compatibility list http://www.ocztechnology.com/displaypage.php?name=revodrive_moboguide, the SSD was detected during POST. Again, despite being listed as only compatible with Windows 7, the SSD was detected in Windows Server 2008 R2 and, after installing the driver and initialising the disk in Server Manager, appeared in "My Computer".

Unfortunately, after installing SQL Server 2008 R2 and loading a few databases up, the machine mysteriously restarted and the drive was no longer listed in "My Computer" or Device Manager. After further reboots the SSD was no longer showing during POST (various green and blue LEDs were still lit, so it wasn’t completely dead). I tried the card in an old Dell 9150 I had kicking around- nothing. Then a spare HP MicroServer N36L- again, nothing.

Fairly convinced now that the SSD had died I called eBuyer up and started the RMA ball rolling along with placing an order for a new card (this was Thursday night and fortunately they were able to guarantee Saturday delivery). I also ordered a motherboard from the compatibility list- ASRock Z68 Extreme4 Gen3, an Intel i7 Quad-Core 2600 3.4GHz Sandy Bridge processor, and 16GB DDR3 RAM along with a no-name case all from Amazon (again, with the guarantee of Saturday delivery).

I put everything together and fired it up:

One of the first things I noticed was the different number of chips present on each of the SSD boards (compare this to the one at the top of the post). But regardless, the replacement was working and still going strong after 24hrs of solid use.

Keeping an eye on various counters in Windows I’ve seen the card reach over 1000MB/sec- impressive! And no longer does the disk seem to be the bottleneck; it now appears to be the CPU- doh! Unfortunately it looks like some queries only utilise a single processor core, so the CPU is actually only 12.5% utilised.

That’s all for now! It looks like I was just very unfortunate with the first card but the replacement is blindingly fast and a great price (especially when compared with the competition).


I felt this deserved a separate post as the issue could have driven me crazy/made me return the motherboard in error!!!

After building the first server for my new Virtual Lab I installed ESXi and started to deploy my first VM. Everything was going nicely until vSphere lost the connection to the server. I used the IPMI remote control to take a look at the console and apparently both network interfaces were disconnected? Maybe something went wrong somewhere…

Reboot… Same problem; after 5 minutes or so the network connections drop
Reinstall ESXi… Same problem…
Recrimp a couple of new network cables… Same problem…

Finally I found this article which quite correctly identifies Active State Power Management as the cause! Pop into the BIOS:

Advanced, Advanced Chipset Control and disable Active State Power Management

Voila! Everything’s now chugging along nicely (fingers crossed)


New Virtual Lab – Part 2

…continued from https://tickett.wordpress.com/2011/08/24/new-virtual-lab-part-1/

So- in came the first set of bits for new server #1 and I began piecing it together…

Issue #1- The motherboard doesn’t sit quite right on the spacers/chassis screws (because of the element of the CPU cooler which sits on the underside of the motherboard)- not really a problem, I just added a few washers (I expect I may have been able to find some slightly larger spacers too, if I’d looked hard enough).

Issue #2- My USB pen drive didn’t fit in the internal slot with the chassis together. Not to worry- I simply attached a header to the spare pins and plugged the USB stick into one of those ports, still inside the case.

Issue #3- When I powered up the machine it was pretty loud. I checked, and believe this to be because the Akasa cooler (AK-CCE-7107BS) only has a 3-pin header so doesn’t support pulse-width modulation (PWM) and effectively runs at full speed all of the time! Fortunately the other cooler (Gelid Slim Silence IPlus Low Profile Intel CPU Cooler) had the correct 4-pin connector and, when hooked up, supported PWM and ran nice and quiet!

Issue #4- I intend to run the server “headless”, so one of the great features of the X8SIL-F motherboard is the on-board IPMI. Unfortunately, when I tried to connect with the default username/password “ADMIN” / “ADMIN”, access was denied. I downloaded a copy of the latest firmware from the Supermicro site and flashed using:

dpupdate.exe -f SMT_SX_250.bin -r n

The -r n parameter tells it to reset to factory settings. Voila- I could now login via the web interface or the Windows IPMI tool using the default login credentials (“ADMIN” / “ADMIN”).

ESXi 4.1 installed like a charm, but I’ve had a little trouble trying to deploy my first Virtual Machine (the ESXi management network losing connectivity and/or the hypervisor crashing)- I think this might be because I’m using old, knackered network cables! I will replace them and hopefully update tomorrow in Part 3.

The 2nd of these servers is en route and should hopefully be delivered before the weekend.

My next question is: what virtual machines should I configure?

  • I definitely need a vCenter server
  • I definitely need at least one SQL server (possibly 2 or 3, as I’d like to experiment with replication)
  • I definitely need at least one web server (IIS)
  • I definitely want to get trixbox back up and running
  • I am considering experimenting with pfSense or possibly Untangle
  • I also need a local DNS server, but think that might best sit on physical hardware or I’ll get problems with the hypervisor being unable to query DNS before the VM has started
  • My fileserver currently runs WHS2011, so I would also like a WHS2011 VM to test the “Drive Extender” replacements on (however, I realise I can’t really test performance here, so might have to give that a miss)
  • Also, I think that OS X Server might run on ESXi- and I’d quite like to have a proper (non-hacked) Time Machine backup store configured, so this might be the right route to go down…


Since I moved (about a year ago) I have been without a virtual lab- the old lab ran on 2x Dell PowerEdge 2950 III & 1x HP ProLiant DL385 G2: noisy and power-hungry! My intention was to run power to the garage so I could bring the lab back to life, but rising electricity costs and some interesting posts on Jason’s blog persuaded me to order some new, smaller, quieter and more green/efficient hardware:

£163 – Supermicro X8SIL-F (the motherboard Jason recommended and many seem to have reported success with)
£255 – Intel Xeon X3470 Quad-Core 2.93GHz (Socket 1156)
£26 – Akasa Intel 1U 1156 CPU Cooler (AK-CCE-7107BS)
£123 – 4x 4GB Kingston 1333MHz ECC (KVR1333D3E9S/4G or KVR1333D3E9SK2/8G)

I already had some spare 1U Supermicro chassis with power supplies (512L-200B) and USB pen drives laying around for the hypervisors.

I initially ordered one set of the above-mentioned kit from LambdaTek (I intended to get comfortable with it before ordering more). Unfortunately the motherboard was on back order, so I decided to look around for another supplier. I couldn’t find anywhere cheaper or with stock, so I picked two up from an international seller on eBay. While I was at it I grabbed another X3470 from Amazon (£260), another 16GB of RAM from Crescent Electronics (£107) and another cooler (this time the Gelid Slim Silence IPlus Low Profile Intel CPU Cooler from Cool&Quiet @ £20).

Part 2 to follow! Just a quick note about the Akasa cooler- I’ve deliberately put a line through it because you should NOT order it (it’s no good- I will get onto that in part 2).

