Archive for April, 2014


Further to https://tickett.wordpress.com/2014/04/17/new-lab-nas/, I have now received my new hardware and had time to think and experiment.

My first experiment was iSCSI vs NFS. I created two 2-disk RAID0 volumes on the Synology DS1813+ (using the 4x Samsung EVO 840 SSDs), attached one to my new ESXi host via iSCSI and the other via NFS, and installed a clean Windows 7 VM on each volume. After installing Windows I did some file copies and made a few notes.
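For anyone wanting to reproduce this kind of figure, timing a large file copy and dividing the size by the elapsed time gives a rough MB/s number. A minimal Python sketch, with placeholder paths (not necessarily how the numbers below were captured):

```python
# Rough-and-ready copy throughput measurement; a sketch only.
# The source and destination paths are hypothetical placeholders.
import os
import shutil
import time

src = r"C:\test\bigfile.iso"   # large file on the source datastore
dst = r"E:\test\bigfile.iso"   # destination on the volume under test

size_mb = os.path.getsize(src) / (1024 * 1024)
start = time.perf_counter()
shutil.copyfile(src, dst)
elapsed = time.perf_counter() - start
print(f"{size_mb:.0f} MB in {elapsed:.1f}s = {size_mb / elapsed:.1f} MB/s")
```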

[Image: file copy throughput graph and results table, iSCSI vs NFS]

The graph above (along with the figures in the table) does a great job of showing the results: NFS came out over 20% faster. You can see a maximum read speed of 93 MB/s (only 71 MB/s for iSCSI) and a maximum write speed of 77 MB/s (only 62 MB/s for iSCSI).

I then ran CrystalDiskMark on the iSCSI VM:

[Image: CrystalDiskMark results, iSCSI VM]

And the NFS VM:

[Image: CrystalDiskMark results, NFS VM]

Here some of the results show less of an improvement, but others are more than doubled. All of the above ruled iSCSI out for me.

But I then got thinking about local storage. I only run one ESXi host these days (I have a spare host ready to swap in if I need to, but to save electricity I only run one at a time), so the benefit of networked/shared storage is almost non-existent (I will still be backing up snapshots to the NAS).

I couldn’t initially create a 2-disk RAID0 array because ESXi doesn’t support my motherboard’s onboard RAID, so I stuck with a single local disk (a Samsung EVO 840 SSD again), installed Windows 7 and ran the CrystalDiskMark benchmark:

[Image: CrystalDiskMark results, local single SSD]

I then found an old PCIe SSD (OCZ RevoDrive x3) and thought I’d give it a try, more out of interest than anything:

[Image: CrystalDiskMark results, PCIe SSD via passthrough]

Nice! Unfortunately the PCIe SSD isn’t directly supported in ESXi, so I had to create a normal VM and connect the SSD using passthrough. This essentially means it can only be connected to one VM (not a huge problem, as I’d want to connect it to my SQL VM), but the stability isn’t great either.

I picked up a cheap LSI SAS3041E RAID card from eBay and set about creating the local 2-disk RAID0 array. The results were very surprising:

[Image: CrystalDiskMark results, local RAID0 2x SSD]

These are all below the speeds seen using a single SSD (see the comparison table below).

I’m not sure whether this is because of the RAID card, the lack of TRIM support or some other obscure reason. I decided I’m happier running two separate SSDs anyway (I can split SQL databases and logs between the two disks for a performance boost; a rough sketch of that follows below), and if something goes wrong I will only have to restore half my VMs from the nightly backup.
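For the curious, splitting a database’s data and log files across the two SSDs is just a matter of pointing the files at different drives when creating the database. A minimal sketch via pyodbc, assuming SQL Server on Windows; the drive letters, paths, database name and connection string are all hypothetical placeholders:

```python
# Sketch only: create a database with its data file on one SSD and its
# transaction log on the other. Drive letters, paths, database name and
# connection string are placeholders, not my actual setup.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # CREATE DATABASE can't run inside a transaction
)
conn.execute(
    "CREATE DATABASE LabDB "
    "ON PRIMARY (NAME = LabDB_data, FILENAME = 'D:\\SQLData\\LabDB.mdf') "
    "LOG ON (NAME = LabDB_log, FILENAME = 'E:\\SQLLogs\\LabDB.ldf')"
)
```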

CrystalDiskMark results (all figures in MB/s):

                  iSCSI    NFS      Local single SSD   Passthrough PCIe SSD   Local RAID0 2x SSD
Seq Read          96.74    101.4    238.7              1371                   233.4
Seq Write         30.61    72.91    229.1              1089                   219.9
512K Read         74.77    74.48    228.3              1051                   210.1
512K Write        45.46    66.23    223.8              1010                   208.2
4K Read           4.138    5.213    22.92              30.36                  18.31
4K Write          4.337    4.781    52.73              68.65                  23.59
4K QD32 Read      6.575    8.661    212.8              281.1                  62.26
4K QD32 Write     5.582    8.791    199.7              240.9                  92.81

[Image: bar chart comparing all five configurations]

And another without the PCIe SSD to make it a little easier to compare:

[Image: bar chart comparing configurations, excluding the PCIe SSD]
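If you fancy re-plotting the comparison yourself, here’s a quick matplotlib sketch using the figures from the table above (PCIe SSD omitted, as in the second chart; labels and styling are my own):

```python
# Grouped bar chart of the CrystalDiskMark figures from the table above
# (PCIe SSD omitted to keep the scale readable). Purely illustrative.
import numpy as np
import matplotlib.pyplot as plt

tests = ["Seq Read", "Seq Write", "512K Read", "512K Write",
         "4K Read", "4K Write", "4K QD32 Read", "4K QD32 Write"]
results = {  # MB/s
    "iSCSI":              [96.74, 30.61, 74.77, 45.46, 4.138, 4.337, 6.575, 5.582],
    "NFS":                [101.4, 72.91, 74.48, 66.23, 5.213, 4.781, 8.661, 8.791],
    "Local single SSD":   [238.7, 229.1, 228.3, 223.8, 22.92, 52.73, 212.8, 199.7],
    "Local RAID0 2x SSD": [233.4, 219.9, 210.1, 208.2, 18.31, 23.59, 62.26, 92.81],
}

x = np.arange(len(tests))
width = 0.8 / len(results)
for i, (name, values) in enumerate(results.items()):
    plt.bar(x + i * width, values, width, label=name)
plt.xticks(x + width * (len(results) - 1) / 2, tests, rotation=45, ha="right")
plt.ylabel("MB/s")
plt.legend()
plt.tight_layout()
plt.show()
```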

So, in conclusion: I will be running two of the 250GB Samsung EVO 840 SSDs locally in the ESXi host. This will provide optimal performance and hugely reduce my dependence on the network and NAS (currently the VMs live on the NAS, and I can’t take it or the network down without powering everything down first; my pfSense software router resides in a VM too, so I lose internet connectivity!). I will continue to use Veeam to take nightly backups should anything go wrong with the ESXi host.

I hope to migrate everything over the weekend. Fingers crossed!


Solar Update

I posted some details back in October when I had my solar panels installed: https://tickett.wordpress.com/2013/10/25/solar-kwh-meters-new-fuse-box-flukso/

I thought I’d provide a little update to show how much I’m getting out of them.

[Image: generation graph for a single day]

This shows my best day so far.

[Image: daily generation summary]

This shows a daily summary for the last 40 days.

[Image: weekly generation summary]

And for the last 12 weeks.

To give you an idea, I’m on a feed-in tariff paying roughly 17.5p/kWh, so my best day of 23kWh earned me about £4. Not to mention that, based on the estimate that I consume 50% of what I generate (electricity I’d otherwise buy at 13.5p/kWh), that’s another £1.50 or so.
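The sums, as a quick Python sketch (rates and the 50% self-use estimate as above):

```python
# Back-of-envelope daily solar earnings for my best day so far.
generated_kwh = 23      # best day's generation
fit_rate = 0.175        # feed-in tariff, £/kWh (~17.5p)
import_rate = 0.135     # grid electricity price, £/kWh (~13.5p)
self_use = 0.50         # estimated fraction consumed on site

fit_income = generated_kwh * fit_rate                     # ~£4.03
avoided_import = generated_kwh * self_use * import_rate   # ~£1.55
print(f"FiT: £{fit_income:.2f}, avoided import: £{avoided_import:.2f}, "
      f"total: £{fit_income + avoided_import:.2f}")
```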

[Image: average daily generation, winter months]

And one last screen showing average daily generation through the winter months: a mere 50p/day :)

New Lab / NAS

Far too long since the last post. Let’s hope this will be the start of them picking back up again!

I have been experiencing some performance issues and need to have a bit of a re-shuffle of the servers/network (my vCenter appliance has stopped working, SQL is running slowly, etc.). I have some production stuff running and don’t want to take everything offline for long, so I decided to build a new environment and then migrate everything across.

I won’t be changing much:

Old NAS: Synology DiskStation 1812+ w/
- 4x 3TB WD Green in Synology Hybrid RAID (SHR): main data store for movies, PVR recordings, ISOs, photos etc. (CIFS & NFS)
- 2x 256GB OCZ Vertex 4 SSD in RAID0: virtual machine storage (NFS)
- 2x 1Gbit LACP to switch
Old ESXi host: SuperMicro X8SIL-F w/ Xeon X3470 & 16GB RAM running VMware ESXi v5.1
Old switch: Linksys SRW2024W

New NAS: Synology DiskStation 1813+ w/
- 3x 4TB WD Red in Synology Hybrid RAID (SHR): main data store for movies, PVR recordings, ISOs, photos etc. (CIFS & NFS)
- 3/4?x 250GB Samsung EVO 840 SSD in RAID0?: virtual machine storage (NFS/iSCSI?)
- 3x 1Gbit LACP to switch, dedicated to the main data store
- 1x 1Gbit to switch, dedicated to VM storage
New ESXi host: SuperMicro X8SIL-F w/ Xeon X3470 & 32GB RAM running VMware ESXi v5.5
New switch: Cisco SG200-26 (VM storage traffic separated on its own VLAN/subnet)

You’ll notice a bunch of question marks around the new virtual machine storage volume. I’m currently debating which disk configuration and which storage protocol to use. I’ve always used NFS as it seems much simpler, but understood iSCSI to be the better option (especially with the Synology supporting VAAI hardware acceleration). Despite this, I’ve been reading that NFS seems to outperform iSCSI.

Additionally, if I go iSCSI I will try using 2x 1Gbit ports and enabling multipathing / round-robin. If I go down the NFS route I don’t think LACP will provide any benefit, as the IP hash from a single ESXi host to the single DiskStation will always pick the same link (see the sketch below).
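To illustrate why I think that: an IP-hash policy maps each source/destination pair to one uplink, so the same pair always lands on the same link no matter how many links are in the LACP bond. A toy Python sketch (the hash here is illustrative, not ESXi’s exact algorithm, and the IPs are placeholders):

```python
# Illustrative only: an IP-hash style policy always sends traffic between
# a given source/destination pair down the same link, so a single ESXi
# host talking to a single NAS never uses more than one 1Gbit port.
import ipaddress

def pick_link(src: str, dst: str, num_links: int) -> int:
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % num_links

# Same pair, same answer, every time - regardless of load:
print(pick_link("10.0.0.10", "10.0.0.20", 3))
print(pick_link("10.0.0.10", "10.0.0.20", 3))
```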

I have four of the EVO SSDs, so am initially creating a 2-disk RAID0 volume using NFS and an identical volume using iSCSI. I can then run some like-for-like comparisons/benchmarks to determine which configuration to use going forward.

I will provide an update shortly.
