Further to https://tickett.wordpress.com/2014/04/17/new-lab-nas/, I have now received my new hardware and had time to think and experiment.

My first experiment was iSCSI vs NFS. I created 2x 2-disk RAID0 volumes on the Synology DS1813+ (using the 4x Samsung EVO 840 SSDs) and attached them to my new ESXi host, one via iSCSI and one via NFS. I then installed a clean Windows 7 VM on each volume. After installing Windows I did some file copies and made a few notes.

Image

The graph above (along with the figures in the table) does a great job of showing the results: NFS came out over 20% faster. You can see the max read speed recorded was 93 MB/s (only 71 MB/s for iSCSI) and the max write speed recorded was 77 MB/s (only 62 MB/s for iSCSI).
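As a quick sanity check on that figure, here's a throwaway Python sketch (nothing clever, just the peak copy speeds quoted above plugged into a percentage calculation):

```python
# Peak file-copy speeds noted above, in MB/s.
speeds = {
    "read":  {"iSCSI": 71, "NFS": 93},
    "write": {"iSCSI": 62, "NFS": 77},
}

for op in ("read", "write"):
    iscsi, nfs = speeds[op]["iSCSI"], speeds[op]["NFS"]
    gain = (nfs - iscsi) / float(iscsi) * 100
    print("NFS {0} is roughly {1:.0f}% faster than iSCSI".format(op, gain))

# Prints:
#   NFS read is roughly 31% faster than iSCSI
#   NFS write is roughly 24% faster than iSCSI
```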

I then ran CrystalDiskMark on the iSCSI VM:

Image

And the NFS VM:

Image

Here some of the results show less of an improvement, but others are more than doubled. All of the above ruled iSCSI out for me.

But I then got thinking about local storage. I only run one ESXi host these days (I have a spare host ready to swap out if I need to, but to save electricity I only run one at a time), so the benefit of networked/shared storage is almost non-existent (I will be backing up snapshots to the NAS).

I couldn’t initially create a 2-disk RAID0 array because ESXi doesn’t support my motherboard’s onboard RAID, so I stuck with a single local disk (the Samsung EVO 840 SSD again), installed Windows 7 and ran the CrystalDiskMark benchmark:

Image

I then found an old PCIe SSD (OCZ RevoDrive x3) and thought I’d give that a try, more out of interest than anything else:

Image

Nice! Unfortunately the PCIe SSD isn’t directly supported in ESXi, so I had to create a normal VM and connect the SSD using passthrough. This essentially means it can only be connected to one VM (which wouldn’t be a huge problem, as I’d want to connect it to my SQL VM), but the stability isn’t great either.

I picked up a cheap LSI SAS3041E RAID card from eBay and went about setting up the local 2-disk RAID0 array. The results were very surprising:

Image

These are all below the speeds seen using a single SSD. See the table below for an easy comparison:

I’m not sure whether this is because of the RAID card, the lack of TRIM support, or some other obscure reason. I decided I’m happier running 2 separate SSDs anyway (I can split the SQL databases & logs between the two disks to see a performance boost), and if something goes wrong I will only have to restore half my VMs from nightly backup.
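For what it's worth, the db/log split itself is trivial to set up. The snippet below is a minimal sketch (using pyodbc purely for illustration); the driver, server, database name, drive letters and file paths are all placeholders rather than my actual configuration, with D: and E: standing in for the two SSD-backed volumes:

```python
# Minimal sketch: create a SQL Server database with the data file on one
# SSD-backed volume and the log file on the other. Driver, server, database
# name and paths are placeholders for illustration only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes;",
    autocommit=True,  # CREATE DATABASE can't run inside a transaction
)

conn.execute(
    "CREATE DATABASE LabDB "
    "ON PRIMARY (NAME = LabDB_data, FILENAME = 'D:\\SQLData\\LabDB.mdf') "
    "LOG ON (NAME = LabDB_log, FILENAME = 'E:\\SQLLogs\\LabDB.ldf')"
)
conn.close()
```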

(all figures in MB/s)

                  iSCSI    NFS      Local single SSD    Passthrough PCIe SSD    Local RAID0 2x SSD
Seq Read          96.74    101.4    238.7               1371                    233.4
Seq Write         30.61    72.91    229.1               1089                    219.9
512K Read         74.77    74.48    228.3               1051                    210.1
512K Write        45.46    66.23    223.8               1010                    208.2
4K Read           4.138    5.213    22.92               30.36                   18.31
4K Write          4.337    4.781    52.73               68.65                   23.59
4K QD32 Read      6.575    8.661    212.8               281.1                   62.26
4K QD32 Write     5.582    8.791    199.7               240.9                   92.81
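To make the relative differences easier to eyeball, here's another small Python sketch that normalises a few of the rows against the iSCSI baseline (the figures are copied straight from the table above):

```python
# Normalise a few CrystalDiskMark rows against the iSCSI baseline.
# All figures are MB/s, copied from the table above.
results = {
    "Seq Read": [("iSCSI", 96.74), ("NFS", 101.4), ("Local SSD", 238.7),
                 ("PCIe SSD", 1371), ("RAID0 2x SSD", 233.4)],
    "Seq Write": [("iSCSI", 30.61), ("NFS", 72.91), ("Local SSD", 229.1),
                  ("PCIe SSD", 1089), ("RAID0 2x SSD", 219.9)],
    "4K QD32 Read": [("iSCSI", 6.575), ("NFS", 8.661), ("Local SSD", 212.8),
                     ("PCIe SSD", 281.1), ("RAID0 2x SSD", 62.26)],
    "4K QD32 Write": [("iSCSI", 5.582), ("NFS", 8.791), ("Local SSD", 199.7),
                      ("PCIe SSD", 240.9), ("RAID0 2x SSD", 92.81)],
}

for test, figures in results.items():
    baseline = figures[0][1]  # iSCSI
    line = ", ".join("{0} {1:.1f}x".format(name, value / baseline)
                     for name, value in figures)
    print("{0}: {1}".format(test, line))
```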

Image

And another without the PCIe SSD to make it a little easier to compare:

Image

So, in conclusion: I will be running two of the 250GB Samsung EVO 840 SSDs locally in the ESXi host. This will provide optimal performance and hugely reduce my dependence on the network and NAS (currently the VMs live on the NAS and I can’t take it or the network down without powering everything down first; my pfSense software router resides in a VM too, so I lose internet connectivity!). I will continue to use Veeam to take nightly backups should anything go wrong with the ESXi host.

I hope to migrate everything over the weekend. Fingers crossed!
