
Oddly, it's quite hard to find Ubiquiti hardware in the UK, but I've previously sourced equipment from an eBay distributor (http://stores.ebay.co.uk/ubntshop/) and they were able to provide the best price for the ERL.

I fired the first EdgeRouter up and started getting to know the web UI. It didn't take long, but it seemed very basic; even for my relatively simple requirements I'd need to get to know the CLI. The official Ubiquiti EdgeMAX support forum (https://community.ubnt.com/t5/EdgeMAX/bd-p/EdgeMAX) was a great place to start.

An important thing to note is that the EdgeMAX operating system is based on Vyatta, so if you struggle to find an EdgeMAX-specific solution to a problem you may be able to find a Vyatta solution which will work on your Ubiquiti hardware.

IP Addresses

One of the first decisions was an IP addressing scheme. I decided to use 192.168.x.y;

  • where x represents the site (in increments of 10 to allow for future splitting) and
  • y will be the same at every site for the router & file server

The DHCP range will be from .1 to .199
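
To make that concrete (the interface name, shared-network name and the router's host octet below are placeholders; only the subnets and DHCP range come from the scheme above), site 1 would look something like;

set interfaces ethernet eth1 address 192.168.10.254/24
set service dhcp-server shared-network-name LAN subnet 192.168.10.0/24 default-router 192.168.10.254
set service dhcp-server shared-network-name LAN subnet 192.168.10.0/24 start 192.168.10.1 stop 192.168.10.199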

Firmware Upgrade

I should have done this a little earlier, but when configuring the system parameters I was reminded to check for a firmware upgrade and found the shipped unit was running a pretty outdated v1.1 (the current at the time of posting is v1.5). So I went ahead and upgraded.
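
For reference, the same can be done from the CLI by pointing the router at a downloaded image (the URL below is only a placeholder for whichever release you grab from Ubiquiti);

show version
add system image http://example.com/ER-e100.v1.5.0.tar
reboot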

Default Configuration (WAN + 2LAN)

The new firmware has a great wizard to get you started. I chose the WAN + 2LAN setup and was immediately up and running with the router providing internet connectivity to the LAN. However, at this point in time double NAT is occurring as the internet connection is provided by a BT HomeHub3 (which doesn’t support bridge mode).

ADSL Modem

To avoid the double NAT scenario it was necessary to purchase an ADSL modem. There don't appear to be many to choose from; I opted for the Draytek Vigor 120. Absolutely no configuration of the modem was required, I simply plugged it in and set the ERL WAN connection to use PPPoE with login credentials;

username: bthomehub@btbroadband.com
password: BT

…and voila!
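
For anyone preferring the CLI, the equivalent configuration is roughly the following (assuming the modem hangs off eth0, which will vary by setup);

configure
set interfaces ethernet eth0 pppoe 0 user-id bthomehub@btbroadband.com
set interfaces ethernet eth0 pppoe 0 password BT
commit
save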

VPN

Initially during testing I placed both EdgeRouters side by side, set static IP addresses (8.10.0.1 and 8.20.0.1) and connected them with an ethernet cable. Unfortunately, I was unable to get an IPSec tunnel established using the WebUI, but after looking at some sample configs on the forum I was able to get it working using the CLI.

I then had to modify 3 elements to get it working on-site (the resulting config is shown below);

  • the peer to use the dynamic hostname
  • the local-ip to use 0.0.0.0
  • the interface to use pppoe0
vpn {
    ipsec {
        auto-firewall-nat-exclude enable
        disable-uniqreqids
        esp-group FOO0 {
            compression disable
            lifetime 3600
            mode tunnel
            pfs enable
            proposal 1 {
                encryption aes128
                hash sha1
            }
        }
        ike-group FOO0 {
            dead-peer-detection {
                action restart
                interval 15
                timeout 30
            }
            lifetime 28800
            proposal 1 {
                dh-group 2
                encryption aes128
                hash sha1
            }
        }
        ipsec-interfaces {
            interface pppoe0
        }
        nat-traversal enable
        site-to-site {
            peer dynamic-hostname.com {
                authentication {
                    mode pre-shared-secret
                    pre-shared-secret secret
                }
                connection-type initiate
                default-esp-group FOO0
                ike-group FOO0
                local-ip 0.0.0.0
                tunnel 1 {
                    allow-nat-networks disable
                    allow-public-networks disable
                    esp-group FOO0
                    local {
                        subnet 192.168.20.0/24
                    }
                    remote {
                        subnet 192.168.10.0/24
                    }
                }
            }
        }
    }
}
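
In CLI terms, those three on-site tweaks map to something like the following (dynamic-hostname.com standing in for the real dynamic DNS name);

configure
# the peer node is keyed on its name, so it has to be recreated under the dynamic hostname
set vpn ipsec site-to-site peer dynamic-hostname.com local-ip 0.0.0.0
set vpn ipsec ipsec-interfaces interface pppoe0
commit
save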

This was working well, but if either internet connection dropped or a router was rebooted the VPN wouldn't automatically come back up. Supposedly dead-peer-detection should take care of this, but it doesn't appear to be working. I decided to create a simple workaround using a cron script;

#!/bin/bash
# restart the VPN if no IPSec security association is reported as "up"
run=/opt/vyatta/bin/vyatta-op-cmd-wrapper
if ! $run show vpn ipsec sa | grep -q "up"
then
    $run restart vpn
fi

The following commands create a cron job to run the script every 5 minutes;

set system task-scheduler task vpn_monitor executable path /config/scripts/vpn_monitor.sh
set system task-scheduler task vpn_monitor interval 5m

By placing the script in /config/scripts you ensure it remains after a firmware upgrade and is included in configuration backups.

Static-Host-Mapping

We want to block a few websites (namely Facebook) and, rather than overcomplicating things with url-filtering / squidGuard, we've simply set a few static host mappings;

set system static-host-mapping host-name facebook.com inet 127.0.0.1

We also set a static host mapping for the file server at the other site (as the DNS server on the local router doesn't have any knowledge of the hostnames/IP addresses serviced by the other site). Maybe at a later date I will try and find out whether I can forward DNS requests to the other site before going out to the internet.
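
If I do, the ERL's DNS forwarder is dnsmasq underneath, so (assuming the firmware exposes the dnsmasq options node; the domain and IP here are made up for illustration) something along these lines should work;

set service dns forwarding options "server=/site2.example.local/192.168.20.250"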

Backup

Every time I make a configuration change I download a config backup.

On one occasion the backup failed to download and the WebUI became unresponsive (rebooting the router fixed things, but the backup still wouldn't download). I later discovered this was due to the size of the /config folder after installing squidGuard and downloading the category database. As I wasn't going to be using this initially, I simply removed it.
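
As a fallback when the WebUI download misbehaves, the running config is just a text file on the router, so it can be pulled over SSH instead (the address below is a placeholder);

scp ubnt@192.168.10.254:/config/config.boot ./edgerouter-backup-$(date +%F).boot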

I was recently tasked with overhauling the "network" for a small, local not-for-profit. The company currently has 2 sites, with roughly a dozen desktops at each and half a dozen laptops which roam between the two.

The primary requirements were to provide;

  • networked file storage (preferably redundant)
  • centralised user management (single sign-on and access control)
  • site blocking/web filtering

If both sites had "reasonable" internet connections, I would have suggested a single server at the "central" location with a site-to-site VPN. Unfortunately the connections are ~3Mbit down, 0.3Mbit up (ADSL). This introduces a need for additional hardware (servers at every site) and a way of synchronising/replicating between the sites!
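
To put a rough number on that (a back-of-the-envelope figure, assuming the full upstream is available);

# pushing a single 100MB file over a 0.3Mbit/s uplink:
# 100MB ≈ 800Mbit, and 800 / 0.3 ≈ 2667 seconds, i.e. roughly 45 minutes
echo $(( 100 * 8 * 10 / 3 ))   # ≈ 2666 seconds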

As always, everything should be designed with scalability in mind, but sticking to a tight budget.

The File Servers

My first purchase was the file servers. Many years back I used to "roll my own" with something like an HP MicroServer and Windows Home Server (or possibly FreeNAS/OpenFiler), but I have since made the transition to a dedicated Synology appliance.

Whilst you lose some of the flexibility (being able to install any software on x86/x64 hardware like the MicroServer) you gain a huge amount of reliability and support by going with a dedicated appliance (not to mention the huge feature set and ability to run many additional applications on the Synology product line).

One of the only requirements for the file server was redundancy (so at least 2 bays to support RAID 1). Wanting to stick with Synology, I used their product comparison tool (https://www.synology.com/en-uk/products/compare) to make a shortlist and then, after looking at the prices, settled on the DiskStation DS214.

Although storage requirements were pretty small, I got a great deal on WD Green 3TB disks so bought 5 (2 for each site and 1 spare).

The Routers

Had this been a "home solution" I'd probably have opted for something like the Asus RT-AC66U flashed with one of the open source firmwares such as OpenWRT or DD-WRT. But, needing a "business solution", I needed something that was, above all, reliable (the potential sacrifice being ease of use).

On top of reliability, the primary feature requirement for the routers is site-to-site VPN. After some research I decided to give the Ubiquiti EdgeRouter Lite 3 a try. Frustratingly, the ADSL connection coming in at both sites is provided by a BT HomeHub3. The HH3 doesn't support bridge mode, and to avoid double NAT / further complications I decided to purchase 2 ADSL modems (there aren't many to choose from… I went for the Draytek Vigor 120).

Documentation

I previously posted about some SharePoint issues I've been tackling; this is the medium I've chosen for documenting and sharing the how-to guides, configuration details and process documents. I've yet to tackle it, but I may also use it for new user requests, password resets, support requests etc.

To be continued…

Similarly, I have already posted about getting OpenLDAP replication working; this was one tiny part of the project. I will be following up this post with a number of posts specifically tackling the implementation and configuration of the new solution.

Watch this space.

Hopefully the shortest post yet :)

I was previously using GeoScaling (http://www.geoscaling.com/) to provide DNS for my domain names, but they've had a few 24hr+ outages in the last year and this has caused havoc (mainly with my e-mail, as sending servers have been unable to resolve/deliver mail to my domain).

I anticipated it was time to cough up some cash for a professional, paid-for service with guaranteed uptime, but stumbled across another free option: CloudFlare (http://www.cloudflare.com). The switch was pretty seamless and (to the best of my knowledge) they've had no downtime since I migrated. They have a much larger infrastructure (presumably due to the paid-for services they also offer) and even the free service supports CDN-style caching if you wish to save your webserver's bandwidth.

Office 365 Hosted SharePoint

Hopefully a quick post…

My company is currently in the process of trying to document everything that's currently stored in our heads. Initially we were using our helpdesk/ticketing software, but decided that, in some instances, we would like to give our clients access to the documentation which relates to their organisation.

I use MediaWiki for some other information sharing, but from what I've read, it isn't really meant for this type of "role" driven access control and trying to use it in that way will ultimately end in failure. I don't like "documents" (Microsoft Word etc) so really wanted to stick with a "wiki" style solution. I recall using SharePoint on client sites historically and remember it handling this scenario pretty well; as we already have an Office 365 subscription it seemed a sensible avenue to explore.

Initial research had me concerned about the ability to share outside of our organisation (needing to purchase a license for every account that should be able to login), but it subsequently turns out you can either;

-Create users without actually assigning licenses
-Grant access to anyone using their e-mail address (it will need to be linked to a microsoft account, but there is no charge and many already are)

So we have set about creating the SharePoint sites and it's coming together really well, but one thing was bothering me… When we login we are presented with a list of "sites";

[Screenshot: the list of sites presented after login]

-“New Public Site”: http://tickett-public.sharepoint.com
-“Public Site”: http://tickett.sharepoint.com
-“Team Site”: http://tickett.sharepoint.com/TeamSite

If you clicked either of the first two links, hopefully you were redirected to http://tickett.net? But this wasn't easy, and I was pretty confused as to why I had two public URLs/sites and how I could edit them!

The "New Public Site" looked like;

[Screenshot: the "New Public Site"]

And the “Public Site” like;

[Screenshot: the "Public Site"]

A bit of googling and I found a reasonable explanation of why I have two sites… Microsoft went through an upgrade at some point in time and, to avoid breaking SharePoint sites, they kept all of the old ones and created new ones to sit alongside.

As I already have a website I decided I don’t really need either of these so ideally would just like to redirect visitors to my existing site for now.

After a lot of poking around I somehow managed to get to the "New Public Site" in "edit mode" and add a little JavaScript to redirect visitors to our existing site;

<script>window.location.href="http://tickett.net";</script>

After adding the code I was successfully redirected when I visited the site, but anyone not logged in was not. So… armed with a handful of questions I decided it was time to raise a support ticket. Very quickly the phone rang and a technician was on the case;

#1- How do I edit the “New Public Site”

It didn't take many minutes before I was informed that simply adding /_layouts/viewlsts.aspx to the end of the URL would take me to the "admin area" where I could manage the site. Easy… but surely there must be an easier way than typing the URL?

If you refer back to my earlier screenshot you’ll notice a “manage” link. Clicking this allows you to modify the links to the “New Public Site”, “Public Site” and “TeamSite”. Adding the suffix to the URL made sense so now when I login clicking on the site will take me to “edit mode” rather than “view”;

[Screenshot: the site links updated with the /_layouts/viewlsts.aspx suffix]

Well done Microsoft :)

#2- Why is the redirect only working for me?

Once #1 was solved and I was back in "edit mode", the Microsoft engineer was very quick to pick up on the fact that my change was in draft;

[Screenshot: the page edit showing the change still in draft]

Clicking the … (three dots / ellipsis) displays a menu; clicking the … in that menu brings out another menu which gives the "Publish a Major Version" option, and upon clicking this my change was live and everyone hitting the site was now getting redirected.

Well done Microsoft :)

#3- How do I edit the “Public Site”

So far Microsoft had done pretty well, but really struggled with this one. We have yet to find a way to edit the site via a web interface.

Eventually, they suggested trying SharePoint Designer. I've not used this before, but since installing it I have found it to be a pretty good alternative to the web UI. Unfortunately, when I tried to open the site I got stuck at the login stage; it appears that SharePoint Designer doesn't support federated login (my Office 365 logins are authenticated using my on-premise ADFS server). Doh!

But… there was hope… we "shared" the site through the web interface with my personal @gmail address (which is linked to a Microsoft account) and I was successfully able to login to SharePoint Designer- nearly there!

Next problem… the site doesn't appear to exist;

[Screenshot: SharePoint Designer not listing the site]

Determination and a lot more poking around eventually took us to a link on the front page “Edit site home page”;

[Screenshot: the "Edit site home page" link]

Which threw yet another error, "This page does not contain any regions that you have permission to edit.". But navigating back a few steps to "Website -> Web Pages" I was able to right click > Open With > Notepad;

[Screenshot: opening the page with Notepad from SharePoint Designer]

And add in my script;

[Screenshot: the redirect script added to the page source]

So far, so good.

Despite it being a little bit “trial and error”, with Microsoft’s help, we did get there in the end, and very soon after I first raised the support ticket- good job!

A few weeks on from my last post (http://tickett.wordpress.com/2014/08/14/synology-directory-openldap-replication/) I have found a few bugs, fixed a few more issues and hopefully have a fully working solution.

One of the issues with my previous post (that I'm not going to go into at the moment) was that I hadn't cross-compiled openssl and cyrus-sasl2, so my version of slapd didn't support either. I think I've now resolved this and you can download my latest slapd here: https://dl.dropboxusercontent.com/u/713/slapd

#1- I needed the slave to refer changes to the master

Documentation and discussion everywhere seems to suggest simply adding a line to the slave slapd.conf;

updateref ldap://192.168.10.250

Would ensure any changes were written to the master, but I couldn't get this working (even with debug enabled). The only error I could really find (from memory) was an err=49, which I believe refers to invalid credentials, but I'm unsure which credentials or how this is possible.

After further research, I found that there is an alternative OpenLDAP configuration referred to as N-Way Multi-Master. Rather than specifying a master and slave, both nodes are masters and changes are replicated both ways. This was relatively easy to set up and "just worked". Not to mention it's a better solution: previously it was possible the "master" server would be unreachable (if the site-to-site VPN was down) and changes would fail.

You will find config details for N-Way Multi-Master / MirrorMode in my next blog post.

#2- Unable to access shares after password change (from windows/pGina) with error “Element Not Found”

This was a real curve ball. Google sent me in completely the wrong direction, but I recalled a discussion about multiple passwords being stored in the LDAP database, which led me to wonder if the userPassword wasn’t the only field needing to be updated.

A colleague stumbled across the documentation for the pGina fork: http://mutonufoai.github.io/pgina/documentation/plugins/ldap.html which shows a rather more complete "Change Password" configuration for the LDAP plugin. Unfortunately pGina main doesn't support the DES or Timestamp methods, so we couldn't configure sambaLMPassword, shadowLastChange or sambaPwdLastSet, but adding sambaNTPassword (MD4) alongside userPassword (SHA1) seems to have done the trick.
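
For reference, the two hashes can be generated by hand along these lines (a sketch; on newer OpenSSL builds MD4 may need the legacy provider enabling);

# userPassword (SHA1) via OpenLDAP's slappasswd
slappasswd -h {SHA} -s 'newpassword'

# sambaNTPassword is the MD4 hash of the UTF-16LE encoded password, stored upper-case
printf '%s' 'newpassword' | iconv -t UTF-16LE | openssl dgst -md4 | awk '{print toupper($NF)}'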

#3- Data was replicating but the users could not login

I'm not sure exactly how I figured this one out. I think I had a vague recollection of reading a discussion about passwords not replicating because the default permissions do not allow them to be read from the database.

I added a line in slapd.conf above the existing ACL include;

include /usr/syno/etc/openldap/acls.conf
include /usr/syno/etc/openldap/slapd-acls.conf

The contents of which;

access to attrs=userPassword,sambaLMPassword,sambaNTPassword
     by dn.base="cn=replication,cn=users,dc=example,dc=com" write

Allow the password to be read from the database by the replication user.

This drove me close to insanity, but I got there eventually!

I found an old discussion on the Synology forum http://forum.synology.com/enu/viewtopic.php?f=183&t=55020 and was optimistic it’d be pretty simple. The thread talks about compiling a later version of OpenLDAP from source, but the version included (in DSM5.0) is later than that discussed;

file-20> slapd -VV
@(#) $OpenLDAP: slapd 2.4.34 (Feb 27 2014 03:17:07) $
root@build3:/source/openldap-2.4.x/servers/slapd

I tried configuring my provider and consumer using the example and referring to http://www.openldap.org/doc/admin23/syncrepl.html but wasn’t getting anywhere (after changing slapd.conf I would disable and re-enable the LDAP server through the web ui). I was getting an error “Permission denied. Please contact the server administrator.” and an entry in /var/log/messages;

file-20> tail /var/log/messages
Aug 14 21:51:59 file-20 ldap.cgi: ldap_server_default_add.c:146 add [admin] to [cn=Directory Operators,cn=groups,dc=example,dc=com] failed, ldap_insufficient_access (53)

Oddly, the slapd process continues to run but no replication is taking place. I believed the error might be because the admin account is locked in some way and won't allow any modification. I tried adding a filter;

filter="(!cn=admin)"

This prevented the error message popping up and the error in /var/log/messages but still no replication was taking place.

I imagine it would have been a trivial task on a standard Linux distribution but it seems OpenLDAP has been compiled in a manner which does not allow debug;

file-20> slapd -d 1
must compile with LDAP_DEBUG for debugging

So there’s no real feedback as to what is (or isn’t) working.

After blindly fumbling around for hours I decided to try and compile it myself so I could debug. This itself was a mammoth chore!

I wanted to stick with the same version currently running on DSM5.0 so started with the source for 2.4.34 from http://www.openldap.org/software/download/OpenLDAP/openldap-release/

In order to cross compile I followed the Synology 3rd-Party Package Developers guide: http://www.synology.com/en-uk/support/third_party_app_int. I had a spare Ubuntu machine I could use for compiling… I needed the DSM 5.0 toolchain from http://sourceforge.net/projects/dsgpl/files/DSM%205.0%20Tool%20Chains/ as I'm using the DS214, which apparently has a Marvell Armada XP processor. And extracted the archive;

tar zxpf gcc464_glibc215_hard_armada-GPL.tgz -C /usr/local/

Then Berkeley DB 5.1.25 from http://pkgs.fedoraproject.org/repo/pkgs/libdb/db-5.1.25.tar.gz/06656429bfc1abb6c0498eaeff70cd04/

tar xvf db-5.1.25.tar.gz
cd db-5.1.25
cd build_unix
export CC=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-gcc
export LD=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ld
export RANLIB=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ranlib
export CFLAGS="-I/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/include -mhard-float -mfpu=vfpv3-d16"
export LDFLAGS="-L/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/lib"
../dist/configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local 
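
The Berkeley DB build and install step isn't captured above; presumably it's just the usual;

make
sudo make install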

I also had to install;

sudo apt-get install lib32z1

Now I was able to configure OpenLDAP;

export LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.5.1/lib -R/usr/local/lib -R/usr/local/BerkeleyDB.5.1/lib"
export LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib
export LD_RUN_PATH=/usr/local/BerkeleyDB.5.1/lib
export CPPFLAGS="-I/usr/local/BerkeleyDB.5.1/include"
./configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local --with-yielding-select=no --enable-crypt

But when I tried to;

make depend
make

I received an error: "cross compile openldap error: undefined reference to `lutil_memcmp'". http://zhuqy.wordpress.com/2010/04/22/cross-compile-openldap-error-undefined-reference-to-lutil_memcmp/ put me straight: I just had to comment out a line in include/portable.h;

//#define NEED_MEMCMP_REPLACEMENT 1

make was now successful, so I moved my newly compiled slapd to the Synology DiskStation, chown'd & chmod'd it, and tested debug… we see an instant result;

file-20> chown root:root slapd.me
file-20> chmod 755 slapd.me
file-20> slapd.me -d 1
ldap_url_parse_ext(ldap://localhost/)
ldap_init: trying /usr/local/etc/openldap/ldap.conf
ldap_init: HOME env is /root
ldap_init: trying /root/ldaprc

Now I disabled the directory server in the web UI and instead ran my new version from the command line with debug 1;

./slapd.me -d 1 -f /usr/syno/etc/openldap/slapd.conf

It failed with an error referring to;

password-hash {CRYPT}

It turns out I had to recompile slapd with --enable-crypt. I copied the newly compiled slapd over, ran it again with -d 1, and now I could see it failing with an error relating to an invalid filter;

filter="(!cn=admin)"

So I removed this… Try again, now;

ldap_sasl_bind_s failed

I think that sent me in the wrong direction (I thought it was an ssl/tls/authentication issue) and I spent hours messing with certificates, unsupported tls configuration parameters etc, but got nowhere. Eventually I determined this error essentially means "can't connect", tried without ssl, and as if by magic everything sprang to life!

Here are the lines I added to the default slapd.conf on the provider;

index entryCSN eq
index entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10

And the consumer;

index entryCSN eq
index entryUUID eq

syncrepl rid=20
 provider=ldap://192.168.10.250
 type=refreshAndPersist
 interval=00:00:10:00
 searchbase="dc=example,dc=com"
 bindmethod=simple
 binddn="uid=admin,cn=users,dc=example,dc=com"
 credentials=password
 scope=sub
 retry="60 +"

If you want to download my compiled version of slapd you can find it here; https://www.dropbox.com/s/sfb06uo0leqxqq9/slapd

I hope this will help you!

Further to http://tickett.wordpress.com/2014/04/17/new-lab-nas/ I have now received my new hardware and had time to think and experiment.

My first experiment was iSCSI vs NFS. I created 2x 2-disk RAID0 volumes on the Synology DS1813+ (using the 4x Samsung EVO 840 SSDs) and attached them to my new ESXi host. I then installed a clean Windows 7 VM on each volume. After installing Windows I did some file copies and made a few notes.

[Graph: Windows file copy speeds, iSCSI vs NFS]

The graph above (along with the figures in the table) does a great job of showing the results. NFS came out over 20% faster. You can see the max read speed recorded was 93 MB/s (only 71 MB/s for iSCSI) and the max write speed recorded was 77 MB/s (only 62 MB/s for iSCSI).

I then ran Crystal Disk Mark on the iSCSI VM:

[Screenshot: Crystal Disk Mark results, iSCSI VM]

And the NFS VM:

[Screenshot: Crystal Disk Mark results, NFS VM]

Here some of the results show less of an improvement, but others are more than doubled. All of the above ruled iSCSI out for me.

But I then got thinking about local storage. I only run one ESXi host these days (I have a spare host ready to swap out if I need to, but to save electricity I only run one at a time), so the benefit of networked/shared storage is almost non-existent (I will be backing up snapshots to the NAS).

I couldn't initially create a 2-disk RAID0 array because ESXi doesn't support my motherboard's onboard RAID, so I stuck with a single local disk (the Samsung EVO 840 SSD again), installed Windows 7 and ran the Crystal Disk Mark benchmark:

[Screenshot: Crystal Disk Mark results, single local SSD]

I then found an old PCIe SSD (OCZ RevoDrive x3) and thought I'd give that a try, more out of interest:

[Screenshot: Crystal Disk Mark results, passthrough PCIe SSD]

Nice! Unfortunately the PCIe SSD isn’t directly supported in ESXi so I had to create a normal VM and connect the SSD using passthrough. This would essentially mean it could only be connected to one VM (which wouldn’t be a huge problem as I’d want to connect it to my SQL VM) but the stability isn’t great either.

I picked up a cheap LSI SAS3041E raid card from eBay and went about setting up the local 2 disk RAID0 array. The results were very surprising:

[Screenshot: Crystal Disk Mark results, local RAID0 2x SSD]

These are all below the speeds seen using a single SSD. See the below table to easily compare:

I'm not sure whether this is because of the RAID card, the lack of TRIM support or some other obscure reason. I decided I'm happier running 2 separate SSDs anyway (I can split the SQL db & logs between the two disks to see a performance boost) and if something goes wrong I will only have to restore half my VMs from the nightly backup.

(all figures in MB/s)
                iSCSI    NFS      Local Single SSD   Passthrough PCIe SSD   Local RAID0 2x SSD
Seq Read        96.74    101.4    238.7              1371                   233.4
Seq Write       30.61    72.91    229.1              1089                   219.9
512K Read       74.77    74.48    228.3              1051                   210.1
512K Write      45.46    66.23    223.8              1010                   208.2
4K Read         4.138    5.213    22.92              30.36                  18.31
4K Write        4.337    4.781    52.73              68.65                  23.59
4K QD32 Read    6.575    8.661    212.8              281.1                  62.26
4K QD32 Write   5.582    8.791    199.7              240.9                  92.81

[Chart: comparison of all five configurations]

And another without the PCIe SSD to make it a little easier to compare:

[Chart: comparison excluding the PCIe SSD]

So, in conclusion: I will be running 2 of the 250GB Samsung EVO 840 SSDs locally in the ESXi host. This will provide optimal performance and hugely reduce my dependence on the network and NAS (currently the VMs live on the NAS and I can't take it or the network down without powering everything down first; my pfSense software router resides in a VM too, so I lose internet connectivity!). I will continue to use Veeam to take nightly backups should anything go wrong with the ESXi host.

I hope to migrate everything over the weekend- fingers crossed.

Solar Update

I posted some details back in October when I had my solar panels installed http://tickett.wordpress.com/2013/10/25/solar-kwh-meters-new-fuse-box-flukso/

I thought I'd provide a little update to show how much I'm getting out of them;

[Graph]

This shows my best day so far.

[Graph]

This shows a daily summary for the last 40 days.

[Graph]

And for the last 12 weeks.

To give you an idea, I'm on a feed-in tariff paying roughly 17.5p/kWh, so my best day of 23kWh earnt me £4. Not to mention that, based on the estimation that I'm consuming 50% @ 13.5p, that's another £1.50.

[Graph]

And one last screen showing average daily generation through the winter months. Making a mere 50p/day :)

New Lab / NAS

Far too long since the last post. Let’s hope this will be the start of them picking back up again!

I have been experiencing some performance issues and need to have a bit of a re-shuffle of the servers/network (my vCenter appliance has stopped working, SQL is being slow etc). I have some production stuff running and don’t want to take everything offline for long so decided to build a new environment then migrate stuff.

I won't be changing much;

Old NAS; Synology DiskStation 1812+ w/
-4x 3TB WD Green in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-2x 256GB OCZ Vertex4 SSD in RAID0 : Virtual machine storage (NFS)
-2x1gbit LACP to switch
Old ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 16GB RAM running VMWare ESXi v5.1
Old switch; Linksys SRW2024W

New NAS; Synology DiskStation 1813+ w/
-3x 4TB WD Red in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-3/4?x 250GB Samsung EVO 840 SSD in RAID0? : Virtual machine storage (NFS/iSCSI?)
-3x1gbit LACP to switch dedicated to main data store
-1gbit to switch dedicated to VM storage
New ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 32GB RAM running VMWare ESXi v5.5
New switch; Cisco SG200-26 (separate VM storage traffic on its own VLAN/subnet)

You'll notice a bunch of question marks around the new virtual machine storage volume. I'm currently debating which disk configuration to use and which storage protocol. I've always used NFS as it seems much simpler, but understood iSCSI to be the better option (especially with the Synology supporting VAAI hardware acceleration). But despite this, I've been reading that NFS seems to outperform iSCSI.

Additionally, if I go iSCSI I will try using 2x 1gbit ports and enabling multipathing / round-robin. If I go down the NFS route I don't think LACP will provide any benefit, as the IP hash from a single ESXi host to the single DiskStation will always use the same link?
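
A quick sanity check of that: ESXi's "route based on IP hash" policy picks the uplink from a hash of the source and destination IPs (roughly an XOR of the low-order address bytes modulo the number of uplinks), so a single host/NAS address pair always lands on the same link. With made-up addresses;

# ESXi host 192.168.10.21 talking to the DiskStation at 192.168.10.250 over a 2-link bundle
echo $(( (21 ^ 250) % 2 ))   # same uplink index every time for this src/dst pair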

I have 4 of the EVO SSD so am initially creating a 2 disk RAID0 volume using NFS and an identical volume using iSCSI. I can then try running some like for like comparisons/benchmarks to determine which configuration to use going forward.

I will provide an update shortly.

Further 433Mhz RF Hacking

Further to http://tickett.wordpress.com/2012/06/27/more-433mhz-rf-hacking/

I noticed some referral traffic coming from Erland's blog: http://fedmow349.blogspot.co.uk/2013/05/hacking-433mhz-rf-link-home-automation.html

He used https://code.google.com/p/rc-switch/ on his Arduino to receive/interpret the 433Mhz traffic, and it works a treat for the cheapo PIRs, door/window sensors, smoke alarms etc.


Here is a sample output from one of the door sensors. It quite rightly identifies the state 0/1/F of each jumper:

Decimal: 1398131 (24Bit) Binary: 000101010101010101110011 Tri-State: 0FFFFFFFF101 PulseLength: 525 microseconds Protocol: 1
Raw data: 16344,540,1560,544,1564,544,1560,1588,508,540,1568,1584,516,540,1564,1588,512,528,1576,1580,524,536,1576,1576,524,528,1584,1580,524,528,1584,1576,528,524,1588,1576,540,1572,544,1572,544,520,1604,520,1604,1572,544,1568,552,

I'm in the process of firing up a new Raspberry Pi DomotiGa server. I will use a JeeNode connected via USB serial to receive 433Mhz traffic alongside 868Mhz JeeNode traffic. I guess I can probably get rid of my RFXCOM 433Mhz transceiver now?