Tag Archive: NFS


New Lab / NAS

Far too long since the last post. Let’s hope this will be the start of them picking back up again!

I have been experiencing some performance issues and need to have a bit of a re-shuffle of the servers/network (my vCenter appliance has stopped working, SQL is being slow, etc.). I have some production stuff running and don't want to take everything offline for long, so I decided to build a new environment and then migrate everything across.

I won't be changing much:

Old NAS: Synology DiskStation 1812+ w/
-4x 3TB WD Green in Synology Hybrid RAID (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-2x 256GB OCZ Vertex4 SSD in RAID0 : Virtual machine storage (NFS)
-2x 1gbit LACP to switch
Old ESXi Host: SuperMicro X8SIL-F w/ Xeon X3470 & 16GB RAM running VMware ESXi v5.1
Old switch: Linksys SRW2024W

New NAS: Synology DiskStation 1813+ w/
-3x 4TB WD Red in Synology Hybrid RAID (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-3/4?x 250GB Samsung EVO 840 SSD in RAID0? : Virtual machine storage (NFS/iSCSI?)
-3x 1gbit LACP to switch, dedicated to main data store
-1x 1gbit to switch, dedicated to VM storage
New ESXi Host: SuperMicro X8SIL-F w/ Xeon X3470 & 32GB RAM running VMware ESXi v5.5
New switch: Cisco SG200-26 (separate VM storage traffic on its own VLAN/subnet)

You'll notice a bunch of question marks around the new virtual machine storage volume. I'm currently debating which disk configuration and which storage protocol to use. I've always used NFS as it seems much simpler, but I understood iSCSI to be the better option (especially with the Synology supporting VAAI hardware acceleration). Despite this, I've been reading that NFS often seems to outperform iSCSI.

Additionally, if I go iSCSI I will try using 2x 1gbit ports and enabling multipathing / round-robin (roughly as sketched below). If I go down the NFS route I don't think LACP will provide any benefit, as the IP hash from a single ESXi host to the single DiskStation will always resolve to the same link.
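For my own reference, this is roughly what the iSCSI multipathing setup looks like from the ESXi 5.5 shell. A minimal sketch only; the vmkernel port names (vmk1/vmk2), the software iSCSI adapter name (vmhba33) and the naa device identifier are placeholders that will differ on my host:

# bind two vmkernel ports (one per 1gbit NIC) to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# set the path selection policy on the Synology LUN to round-robin
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

# optionally switch paths every IO rather than the default of every 1000
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx -t iops -I 1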

I have 4 of the EVO SSDs, so am initially creating a 2-disk RAID0 volume using NFS and an identical volume using iSCSI. I can then run some like-for-like comparisons/benchmarks to determine which configuration to use going forward (a rough example below).
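As a very rough first pass (before anything more rigorous like IOMeter inside a guest), a large sequential write from the ESXi shell to each datastore gives a ballpark number. The datastore names here are made up, and dd from the host is sequential-only, so this is an indication rather than a verdict:

# ~4GB sequential write to each candidate datastore
time dd if=/dev/zero of=/vmfs/volumes/ssd-nfs/bench.tmp bs=1048576 count=4096
time dd if=/dev/zero of=/vmfs/volumes/ssd-iscsi/bench.tmp bs=1048576 count=4096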

I will provide an update shortly.

Some problems

Wireless network not operating at 802.11n 130 Mbps speed (only running at 802.11g 54 Mbps): I won't go into too much detail, but the resolution was to change the security from WPA to WPA2. The 802.11n spec doesn't allow the high-throughput rates with WPA/TKIP, so the access point falls back to legacy 802.11g rates (it seems crazy to me that the web interface doesn't make this incompatibility clear).

ESXi / NFS problems: My virtual machines kept dying and there were a lot of NFS events in vSphere ("Lost connection to server x mount point y mounted as z"). This must be related to the network maintenance over the last few weeks, but I'm not quite sure why. I ended up fitting the new HP NC360T dual-port gigabit NICs to both HP MicroServers and then migrating the datastore to FreeNAS. Since then *fingers crossed, touch wood etc* I've had no problems!
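If it recurs, a couple of quick checks from the ESXi shell should show whether it's the network path or the NFS service itself (the vmkernel interface name and NAS address below are examples, not my actual config):

# can the storage vmkernel port still reach the NAS?
vmkping -I vmk1 192.168.1.10

# what state does ESXi think the NFS datastores are in?
esxcli storage nfs list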

I want to talk a bit about LACP but that'll have to wait for another time.


I've decided to stick with my Rackable Systems box as my fileserver because it's relatively quiet and "green". I have two identical servers, so all I had to do was pop 4x 1.5TB drives in the new one (the old one had 4x 750GB) and migrate the data! The old server is running FreeNAS 7.2 and I loaded FreeNAS 8 RC2 onto the new one (for ZFS support).

I can briefly remember attempting the migration task in the past and finding it somewhat easier said than done!

First I tried from a Windows box, connecting to each server over Samba and simply dragging/dropping the files. The transfer goes from the old server to the Windows box and then back out to the new server, so I decided this was a no-go (performance was pretty poor and the samba process on both servers seemed to be eating the CPU, but at least I got a progress bar).

Next I decided that if I wanted the transfer to go directly from server to server I'd need to SSH into one of them and initiate the transfer from there. I mounted the old server on the new one (mkdir /mnt/oldnas && mount oldnas:/mnt/data /mnt/oldnas) and proceeded to copy a folder (cp -R /mnt/oldnas/test /mnt/data). Unfortunately I couldn't tell how fast the transfer was going, nor the progress it had made!

So googling suggested rsync (rsync -r --progress /mnt/oldnas/test /mnt/data), which provided speed and progress. Great! Or not… unfortunately performance was even worse than before, only this time rsync was eating the CPU!

Next google result… scp (scp -r /mnt/oldnas/test newnas:/mnt/data). With two local paths scp behaves like cp and doesn't display progress, so one path has to use the remote host:path form while the other stays on the NFS mount. Poor performance again! You guessed it… scp eating the CPU.

So… back to cp! This time I passed -v to get an update after every file completes (cp -R -n -v /mnt/oldnas/test /mnt/data). Excellent! Only 10% CPU utilization… but can I determine how fast the transfer is going without timing how long a file (or group of files) takes to transfer? After trying a bunch of useless commands I finally found systat -ifs, which shows current and peak network throughput. Perfect!
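For anyone wanting to replicate this, the working combination was simply the following, run from an SSH session on the new FreeNAS box (same hostnames and paths as above, with systat running in a second session):

# mount the old server's data share over NFS
mkdir /mnt/oldnas && mount oldnas:/mnt/data /mnt/oldnas

# recursive copy, skipping existing files, printing each file as it completes
cp -R -n -v /mnt/oldnas/test /mnt/data

# in another SSH session: current and peak throughput per interface
systat -ifs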

Another update shortly!

P.S. I found that systat -ifs doesn't work prior to FreeBSD 8 (not a problem for me, as I think FreeNAS 8 RC2 is based on FreeBSD 8.2), but netstat -Iem0 -w1 -h is mildly helpful if you are running FreeBSD < 8.
