
Hopefully the shortest post yet :)

I was previously using GeoScaling (http://www.geoscaling.com/) to provide DNS for my domain names, but they’ve had a few 24hr+ outages in the last year and this has caused havoc (mainly with my e-mail, as sending servers have been unable to resolve/deliver mail to my domain).

I anticipated it was time to cough up some cash for a professional, paid-for service with guaranteed uptime, but stumbled across another free option- CloudFlare (http://www.cloudflare.com). The switch was pretty seamless and (to the best of my knowledge) they’ve had no downtime since I migrated. They have a much larger infrastructure (presumably due to the paid-for services they also offer) and even the free tier supports CDN-style caching if you wish to save your webserver’s bandwidth.

Office 365 Hosted SharePoint

Hopefully a quick post…

My company is currently in the process of trying to document everything that’s currently stored in our heads. Initially we were using our helpdesk/ticketing software, but decided that in some instances we would like to give our clients access to the documentation which relates to their organisation.

I use MediaWiki for some other information sharing but, from what I’ve read, it isn’t really meant for this type of “role” driven access control, and trying to use it in that way will ultimately end in failure. I don’t like “documents” (Microsoft Word etc) so really wanted to stick with a “wiki” style solution. I recall using SharePoint on client sites historically and remember it handling this scenario pretty well- as we already have an Office 365 subscription, it seemed a sensible avenue to explore.

Initial research had me concerned about the ability to share outside of our organisation (needing to purchase a license for every account that should be able to login)- but it subsequently turns out you can either;

-Create users without actually assigning licenses
-Grant access to anyone using their e-mail address (it will need to be linked to a Microsoft account, but there is no charge and many already are)

So we have set about creating the SharePoint sites and it’s coming together really well, but one thing was bothering me… When we login we are presented with a list of “sites”;

[Screenshot: the list of sites presented at login]

-“New Public Site”: http://tickett-public.sharepoint.com
-“Public Site”: http://tickett.sharepoint.com
-“Team Site”: http://tickett.sharepoint.com/TeamSite

If you clicked either of the first two links, hopefully you were redirected to http://tickett.net? Achieving that wasn’t easy though, and I was pretty confused as to why I had two public URLs/sites and how I could edit them!

The “New Public Site” looked like;

[Screenshot: the new public site]

And the “Public Site” like;

[Screenshot: the old public site]

A bit of googling and I found a reasonable explanation of why I have two sites… Microsoft went through an upgrade at some point, and to avoid breaking existing SharePoint sites they kept all of the old ones and created new ones to sit alongside them.

As I already have a website I decided I don’t really need either of these, so ideally I would just like to redirect visitors to my existing site for now.

After a lot of poking around I somehow managed to get to the “New Public Site” in “edit mode” and add a little JavaScript to redirect visitors to our existing site;

<script>window.location.href="http://tickett.net";</script>

After adding the code I was successfully redirected when I visited the site, but anyone not logged in was not. So… armed with a handful of questions I decided it was time to raise a support ticket. Very quickly the phone rang and a technician was on the case;

#1- How do I edit the “New Public Site”?

It didn’t take many minutes before I was informed that simply adding /_layouts/viewlsts.aspx to the URL (as below) would take me to the “admin area” where I could manage the site. Easy… but surely there must be an easier way than typing the URL?
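http://tickett-public.sharepoint.com/_layouts/viewlsts.aspx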

If you refer back to my earlier screenshot you’ll notice a “manage” link. Clicking this allows you to modify the links to the “New Public Site”, “Public Site” and “Team Site”. Adding the suffix to the URL made sense, so now when I login, clicking on the site will take me to “edit mode” rather than “view”;

[Screenshot: the site links updated via “manage”]

Well done Microsoft :)

#2- Why is the redirect only working for me?

Once #1 was solved and I was back into “edit mode”, the Microsoft engineer was very quick to pick up on the fact that my change was still in draft;

[Screenshot: the page showing as a draft version]

Clicking the … (three dots / ellipsis) displays a menu; clicking the … within that menu brings out another menu with the “Publish a Major Version” option. Upon clicking this, my change was live and everyone hitting the site was being redirected.

Well done Microsoft :)

#3- How do I edit the “Public Site”?

So far Microsoft had done pretty well, but they really struggled with this one. We have yet to find a way to edit the site via a web interface.

Eventually, they suggested trying SharePoint Designer. I’d not used this before, but since installing it I’ve found it to be a pretty good alternative to the web UI. Unfortunately when I tried to open the site I got stuck at the login stage- it appears that SharePoint Designer doesn’t support federated login (my Office 365 logins are authenticated using my on-premise ADFS server). Doh!

But… there was hope… we “shared” the site through the web interface with my personal @gmail address (which is linked to a Microsoft account) and I was then successfully able to login to SharePoint Designer- nearly there!

Next problem… the site doesn’t appear to exist;

[Screenshot: SharePoint Designer showing no sites]

Determination and a lot more poking around eventually took us to a link on the front page “Edit site home page”;

[Screenshot: the “Edit site home page” link]

Which threw yet another error: “This page does not contain any regions that you have permission to edit.” But navigating back a few steps to “Website -> Web Pages” I was able to right click, open with, Notepad;

[Screenshot: Website -> Web Pages in SharePoint Designer]

And add in my script;

[Screenshot: the redirect script added to the page source]

So far, so good.

Despite it being a little bit “trial and error”, with Microsoft’s help we did get there in the end- and very soon after I first raised the support ticket. Good job!

A few weeks on from my last post (http://tickett.wordpress.com/2014/08/14/synology-directory-openldap-replication/) I have found a few bugs, fixed a few more issues, and hopefully now have a fully working solution.

One of the issues with my previous post (that I’m not going to go into at the moment) was that I hadn’t cross-compiled OpenSSL and cyrus-sasl2, so my version of slapd supported neither. I think I’ve now resolved this and you can download my latest slapd here: https://dl.dropboxusercontent.com/u/713/slapd
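For anyone attempting the same, this is roughly what the extra cross-compile steps look like (a sketch from memory; the paths match the toolchain described later in this post, and the OpenSSL target/flag assume a 1.0.x source tree):

# OpenSSL (from the openssl-1.0.x source directory; linux-armv4 is the generic 32-bit ARM target)
./Configure linux-armv4 shared --prefix=/usr/local --cross-compile-prefix=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-
make && make install

# cyrus-sasl (standard autoconf cross-compile, from the cyrus-sasl source directory)
./configure --host=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local
make && make install

slapd then needs re-configuring with --with-tls and --with-cyrus-sasl so it picks both libraries up.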

#1- I needed the slave to refer changes to the master

Documentation and discussion everywhere seem to suggest that simply adding a line to the slave’s slapd.conf;

updateref ldap://192.168.10.250

would ensure any changes were referred to the master, but I couldn’t get this working (even with debug enabled). The only error I could really find (from memory) was an err=49, which I believe refers to invalid credentials, but I’m unsure which credentials or how this is possible.

After further research, I found that there is an alternative OpenLDAP configuration referred to as N-Way Multi-Master. Rather than specifying a master and slave, both nodes are masters and changes are replicated both ways. This was relatively easy to set up and “just worked”. It’s also a better solution: previously, if the “master” server was unreachable (e.g. if the site-to-site VPN was down), changes would fail.

You will find full config details for N-Way Multi-Master / mirror mode in my next blog post.
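In the meantime, a minimal sketch of what it looks like (addresses and credentials here are illustrative; the second node gets serverID 2 and points its provider at the first):

serverID 1

syncrepl rid=1
 provider=ldap://192.168.10.251
 type=refreshAndPersist
 searchbase="dc=example,dc=com"
 bindmethod=simple
 binddn="uid=admin,cn=users,dc=example,dc=com"
 credentials=password
 retry="60 +"

mirrormode on

overlay syncprov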

#2- Unable to access shares after password change (from Windows/pGina) with error “Element Not Found”

This was a real curve ball. Google sent me in completely the wrong direction, but I recalled a discussion about multiple passwords being stored in the LDAP database, which led me to wonder whether userPassword was the only attribute needing to be updated.

A colleague stumbled across the documentation for the pGina fork (http://mutonufoai.github.io/pgina/documentation/plugins/ldap.html) which shows a rather more complete “Change Password” configuration for the LDAP plugin. Unfortunately mainline pGina doesn’t support the DES or Timestamp methods, so we couldn’t configure sambaLMPassword, shadowLastChange or sambaPwdLastSet, but adding sambaNTPassword (MD4) alongside userPassword (SHA1) seems to have done the trick.
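To illustrate, a user entry holding both hashes ends up looking something like this (a hand-built example rather than a dump from my directory; both values are hashes of the literal string “password”):

dn: uid=testuser,cn=users,dc=example,dc=com
userPassword: {SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=
sambaNTPassword: 8846F7EAEE8FB117AD06BDD830B7586C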

#3- Data was replicating but the users could not login

I’m not sure exactly how I figured this one out. I think I had a vague recollection of reading a discussion about passwords not replicating because the default permissions do not allow them to be read from the database.

I added a line to slapd.conf above the existing ACL include;

include /usr/syno/etc/openldap/acls.conf
include /usr/syno/etc/openldap/slapd-acls.conf

The contents of which;

access to attrs=userPassword,sambaLMPassword,sambaNTPassword
     by dn.base="cn=replication,cn=users,dc=example,dc=com" write

allow the password attributes to be read from the database by the replication user.
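One caveat, from my (possibly imperfect) understanding of OpenLDAP ACLs: processing stops at the first “access to” clause that matches the target, and every clause ends with an implicit “by * none”. Depending on what the stock Synology ACL file already grants, you may need the extra “by” lines spelled out so users can still bind and change their own passwords:

access to attrs=userPassword,sambaLMPassword,sambaNTPassword
     by dn.base="cn=replication,cn=users,dc=example,dc=com" write
     by self write
     by anonymous auth
     by * none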

This drove me close to insanity, but I got there eventually!

I found an old discussion on the Synology forum (http://forum.synology.com/enu/viewtopic.php?f=183&t=55020) and was optimistic it’d be pretty simple. The thread talks about compiling a later version of OpenLDAP from source, but the version included in DSM 5.0 is later than that discussed;

file-20> slapd -VV
@(#) $OpenLDAP: slapd 2.4.34 (Feb 27 2014 03:17:07) $
root@build3:/source/openldap-2.4.x/servers/slapd

I tried configuring my provider and consumer using the example, referring to http://www.openldap.org/doc/admin23/syncrepl.html, but wasn’t getting anywhere (after changing slapd.conf I would disable and re-enable the LDAP server through the web UI). I was getting an error “Permission denied. Please contact the server administrator.” and an entry in /var/log/messages;

file-20> tail /var/log/messages
Aug 14 21:51:59 file-20 ldap.cgi: ldap_server_default_add.c:146 add [admin] to [cn=Directory Operators,cn=groups,dc=example,dc=com] failed, ldap_insufficient_access (53)

Oddly the slapd process continues to run, but no replication takes place. I believed the error might be because the admin account is locked in some way and won’t allow any modification. I tried adding a filter;

filter="(!cn=admin)"

This prevented the error message popping up and the error in /var/log/messages, but still no replication was taking place.

I imagine this would have been a trivial task on a standard Linux distribution, but it seems OpenLDAP has been compiled in a manner which does not allow debugging;

file-20> slapd -d 1
must compile with LDAP_DEBUG for debugging

So there’s no real feedback as to what is (or isn’t) working.

After blindly fumbling around for hours I decided to try and compile it myself so I could debug. This itself was a mammoth chore!

I wanted to stick with the same version currently running on DSM 5.0, so I started with the source for 2.4.34 from http://www.openldap.org/software/download/OpenLDAP/openldap-release/

In order to cross compile I followed the Synology 3rd-Party Package Developers guide: http://www.synology.com/en-uk/support/third_party_app_int. I had a spare Ubuntu machine I could use for compiling… I needed the DSM 5.0 toolchain from http://sourceforge.net/projects/dsgpl/files/DSM%205.0%20Tool%20Chains/ as I’m using the DS214, which apparently has a Marvell Armada XP processor. I extracted the archive;

tar zxpf gcc464_glibc215_hard_armada-GPL.tgz -C /usr/local/

Then Berkeley DB 5.1.25 from http://pkgs.fedoraproject.org/repo/pkgs/libdb/db-5.1.25.tar.gz/06656429bfc1abb6c0498eaeff70cd04/

tar xvf db-5.1.25.tar.gz
cd db-5.1.25
cd build_unix
export CC=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-gcc
export LD=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ld
export RANLIB=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ranlib
export CFLAGS="-I/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/include -mhard-float -mfpu=vfpv3-d16"
export LDFLAGS="-L/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/lib"
../dist/configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local 
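Followed (from memory) by the usual build and install:

make
sudo make install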

I also had to install;

sudo apt-get install lib32z1

Now I was able to configure OpenLDAP;

export LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.5.1/lib -R/usr/local/lib -R/usr/local/BerkeleyDB.5.1/lib"
export LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib
export LD_RUN_PATH=/usr/local/BerkeleyDB.5.1/lib
export CPPFLAGS="-I/usr/local/BerkeleyDB.5.1/include"
./configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local --with-yielding-select=no --enable-crypt

But when I tried to;

make depend
make

I received an error (“undefined reference to `lutil_memcmp’”); http://zhuqy.wordpress.com/2010/04/22/cross-compile-openldap-error-undefined-reference-to-lutil_memcmp/ put me straight- I just had to comment out a line in include/portable.h;

//#define NEED_MEMCMP_REPLACEMENT 1

make was now successful, so I moved my newly compiled slapd to the Synology DiskStation, chown’d & chmod’d it, and tested debug… we see an instant result;

file-20> chown root:root slapd.me
file-20> chmod 755 slapd.me
file-20> slapd.me -d 1
ldap_url_parse_ext(ldap://localhost/)
ldap_init: trying /usr/local/etc/openldap/ldap.conf
ldap_init: HOME env is /root
ldap_init: trying /root/ldaprc

Now I disabled the directory server in the web UI and instead ran my new version from the command line with debug level 1;

./slapd.me -d 1 -f /usr/syno/etc/openldap/slapd.conf

It failed with an error referring to;

password-hash {CRYPT}

Turns out I had to recompile slapd with --enable-crypt. I copied the newly compiled slapd over, ran it again with -d 1, and now I could see it failing with an error relating to an invalid filter;

filter="(!cn=admin)"

So I removed this and tried again. Now;

ldap_sasl_bind_s failed

I think that sent me in the wrong direction (I thought it was an SSL/TLS/authentication issue) and I spent hours messing with certificates, unsupported TLS configuration parameters etc, but got nowhere. Eventually I determined this error essentially means “can’t connect”. I tried without SSL and as if by magic everything sprang to life!

Here are the lines I added to the default slapd.conf on the provider;

index entryCSN eq
index entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10

And the consumer;

index entryCSN eq
index entryUUID eq

syncrepl rid=20
 provider=ldap://192.168.10.250
 type=refreshAndPersist
 interval=00:00:10:00
 searchbase="dc=example,dc=com"
 bindmethod=simple
 binddn="uid=admin,cn=users,dc=example,dc=com"
 credentials=password
 scope=sub
 retry="60 +"
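A quick way to sanity-check that the two servers are actually in sync (standard OpenLDAP, nothing Synology-specific) is to compare the contextCSN on the base entry of each; run the same query against both hosts and check the values match (assuming anonymous reads are allowed- otherwise add -D/-w):

ldapsearch -x -H ldap://192.168.10.250 -b "dc=example,dc=com" -s base contextCSN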

If you want to download my compiled version of slapd you can find it here; https://www.dropbox.com/s/sfb06uo0leqxqq9/slapd

I hope this will help you!

Further to http://tickett.wordpress.com/2014/04/17/new-lab-nas/ I have now received my new hardware and had time to think and experiment.

My first experiment was iSCSI vs NFS. I created 2x 2-disk RAID0 volumes on the Synology DS1813+ (using the 4x Samsung EVO 840 SSDs) and attached them to my new ESXi host. I then installed a clean Windows 7 VM on each volume. After installing Windows I did some file copies and made a few notes.
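For reference, attaching an NFS volume to the host is a one-liner from the ESXi shell (the address, export path and datastore name here are illustrative):

esxcli storage nfs add --host 192.168.10.250 --share /volume1/vm-nfs --volume-name NFS-SSD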

[Graph: file copy speeds, iSCSI vs NFS]

The graph above (along with the figures in the table) does a great job of showing the results. NFS came out over 20% faster. You can see the max read speed recorded was 93 MB/s (only 71 MB/s for iSCSI) and the max write speed 77 MB/s (only 62 MB/s for iSCSI).

I then ran Crystal Disk Mark on the iSCSI VM:

[Screenshot: Crystal Disk Mark results, iSCSI VM]

And the NFS VM:

[Screenshot: Crystal Disk Mark results, NFS VM]

Here some of the results show less of an improvement, but others are more than doubled. All of the above ruled iSCSI out for me.

But I then got thinking about local storage. I only run one ESXi host these days (I have a spare host ready to swap out if I need to, but to save electricity I only run one at a time), so the benefit of networked/shared storage is almost non-existent (I will be backing up snapshots to the NAS).

I couldn’t initially create a 2-disk RAID0 array because ESXi doesn’t support my motherboard’s onboard RAID, so I stuck with a single local disk (a Samsung EVO 840 SSD again), installed Windows 7 and ran the Crystal Disk Mark benchmark:

[Screenshot: Crystal Disk Mark results, local single SSD]

I then found an old PCIe SSD (OCZ RevoDrive X3) and thought I’d give that a try, more out of interest:

[Screenshot: Crystal Disk Mark results, passthrough PCIe SSD]

Nice! Unfortunately the PCIe SSD isn’t directly supported in ESXi, so I had to create a normal VM and connect the SSD using passthrough. This essentially means it can only be connected to one VM (which wouldn’t be a huge problem, as I’d want to connect it to my SQL VM), but the stability isn’t great either.

I picked up a cheap LSI SAS3041E RAID card from eBay and went about setting up the local 2-disk RAID0 array. The results were very surprising:

[Screenshot: Crystal Disk Mark results, local RAID0 2x SSD]

These are all below the speeds seen using a single SSD. See the table below to compare:

I’m not sure whether this is because of the RAID card, the lack of TRIM support, or some other obscure reason. I decided I’m happier running 2 separate SSDs anyway (I can split the SQL databases & logs between the two disks to see a performance boost) and if something goes wrong I will only have to restore half my VMs from nightly backup.

All figures in MB/s:

               iSCSI    NFS      Local Single SSD   Passthrough PCIe SSD   Local RAID0 2x SSD
Seq Read       96.74    101.4    238.7              1371                   233.4
Seq Write      30.61    72.91    229.1              1089                   219.9
512K Read      74.77    74.48    228.3              1051                   210.1
512K Write     45.46    66.23    223.8              1010                   208.2
4K Read        4.138    5.213    22.92              30.36                  18.31
4K Write       4.337    4.781    52.73              68.65                  23.59
4K QD32 Read   6.575    8.661    212.8              281.1                  62.26
4K QD32 Write  5.582    8.791    199.7              240.9                  92.81

[Chart: benchmark comparison including the PCIe SSD]

And another without the PCIe SSD to make it a little easier to compare:

[Chart: benchmark comparison excluding the PCIe SSD]

So, in conclusion- I will be running 2 of the 250GB Samsung EVO 840 SSDs locally in the ESXi host. This will provide optimal performance and hugely reduce my dependence on the network and NAS (currently the VMs live on the NAS and I can’t take it or the network down without powering everything down first; my pfSense software router resides in a VM too, so I lose internet connectivity!). I will continue to use Veeam to take nightly backups should anything go wrong with the ESXi host.

I hope to migrate everything over the weekend- fingers crossed.

Solar Update

I posted some details back in October when I had my solar panels installed http://tickett.wordpress.com/2013/10/25/solar-kwh-meters-new-fuse-box-flukso/

I thought I’d provide a little update to show how much I’m getting out of them;

[Graph: generation on my best day so far]

This shows my best day so far.

[Graph: daily generation for the last 40 days]

This shows a daily summary for the last 40 days.

[Graph: weekly generation for the last 12 weeks]

And for the last 12 weeks.

To give you an idea, I’m on a feed-in tariff paying roughly 17.5p/kWh, so my best day’s 23kWh earnt me about £4 (23 × 17.5p ≈ £4.03). Not to mention that, based on the estimate that I consume 50% of what I generate, at 13.5p/kWh that’s another £1.50 or so saved (11.5 × 13.5p ≈ £1.55).

[Graph: average daily generation through the winter months]

And one last screen showing average daily generation through the winter months- making a mere 50p/day :)

New Lab / NAS

Far too long since the last post. Let’s hope this will be the start of them picking back up again!

I have been experiencing some performance issues and need to have a bit of a re-shuffle of the servers/network (my vCenter appliance has stopped working, SQL is being slow, etc). I have some production stuff running and don’t want to take everything offline for long, so I decided to build a new environment and then migrate everything across.

I won’t be changing much;

Old NAS; Synology DiskStation 1812+ w/
-4x 3TB WD Green in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-2x 256GB OCZ Vertex4 SSD in RAID0 : Virtual machine storage (NFS)
-2x1gbit LACP to switch
Old ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 16GB RAM running VMWare ESXi v5.1
Old switch; Linksys SRW2024W

New NAS; Synology DiskStation 1813+ w/
-3x 4TB WD Red in Synology Hybrid Raid (SHR) : Main data store for Movies, PVR Recordings, ISOs, Photos etc (CIFS & NFS)
-3/4?x 250GB Samsung EVO 840 SSD in RAID0? : Virtual machine storage (NFS/iSCSI?)
-3x1gbit LACP to switch dedicated to main data store
-1gbit to switch dedicated to VM storage
New ESXi Host; SuperMicro X8SIL-F w/ Xeon X3470 & 32GB RAM running VMWare ESXi v5.5
New switch; Cisco SG200-26 (separate VM storage traffic on its own VLAN/subnet)

You’ll notice a bunch of question marks around the new virtual machine storage volume. I’m currently debating which disk configuration to use and which storage protocol. I’ve always used NFS as it seems much simpler, but understood iSCSI to be the better option (especially with the Synology supporting VAAI hardware acceleration). Despite this, I’ve been reading that NFS seems to outperform iSCSI.

Additionally, if I go iSCSI I will try using 2x 1gbit ports and enabling multipathing / round-robin. If I go down the NFS route I don’t think LACP will provide any benefit, as the IP hash from a single ESXi host to the single DiskStation will always use the same link?
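If I do test iSCSI multipathing, my understanding is that round-robin can be set per device from the ESXi shell (the naa.* device identifier below is just a placeholder):

esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR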

I have 4 of the EVO SSDs, so am initially creating a 2-disk RAID0 volume using NFS and an identical volume using iSCSI. I can then run some like-for-like comparisons/benchmarks to determine which configuration to use going forward.

I will provide an update shortly.

Further 433Mhz RF Hacking

Further to http://tickett.wordpress.com/2012/06/27/more-433mhz-rf-hacking/

I noticed some referral traffic coming from Erland’s blog: http://fedmow349.blogspot.co.uk/2013/05/hacking-433mhz-rf-link-home-automation.html

He used https://code.google.com/p/rc-switch/ on his Arduino to receive/interpret the 433Mhz traffic, and it works a treat for the cheapo PIR sensors, door/window sensors, smoke alarms etc.
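The receiving side needs barely any code- something like this minimal sketch, based on the rc-switch examples (it assumes the receiver’s data pin is on digital pin 2 of a standard Arduino, i.e. interrupt 0):

#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
  Serial.begin(9600);
  mySwitch.enableReceive(0);  // interrupt 0 = digital pin 2 on an Uno
}

void loop() {
  if (mySwitch.available()) {
    // Print the decoded value and bit length sent by the sensor
    Serial.print("Received ");
    Serial.print(mySwitch.getReceivedValue());
    Serial.print(" / ");
    Serial.print(mySwitch.getReceivedBitlength());
    Serial.println(" bit");
    mySwitch.resetAvailable();
  }
}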


Here is a sample output from one of the door sensors. It quite rightly identifies the state 0/1/F of each jumper:

Decimal: 1398131 (24Bit) Binary: 000101010101010101110011 Tri-State: 0FFFFFFFF101 PulseLength: 525 microseconds Protocol: 1
Raw data: 16344,540,1560,544,1564,544,1560,1588,508,540,1568,1584,516,540,1564,1588,512,528,1576,1580,524,536,1576,1576,524,528,1584,1580,524,528,1584,1576,528,524,1588,1576,540,1572,544,1572,544,520,1604,520,1604,1572,544,1568,552,

I’m in the process of firing up a new Raspberry Pi DomotiGa server. I will use a JeeNode connected via USB serial to receive 433Mhz traffic alongside the 868Mhz JeeNode traffic. I guess I can probably get rid of my RFXCOM 433Mhz transceiver now?

I’ve booked a holiday to Thailand over Christmas and decided to buy a quadcopter so I could get some different photos & videos while I’m away. It’s great fun!

Here’s an aerial shot from a local park:

[Photo: aerial shot of a local park]

And a video of the first flight with the Zenmuse Gimbal installed:

And a few shots of the local tennis club:

[Photos: the local tennis club from the air]

And video:

Still working on getting a decent FPV setup and learning to be gentler with the controls (I may end up adjusting them in software to make them less sensitive).

Despite still having a handful of unfinished projects, I decided to take the plunge and have some solar panels installed. I found a local MCS accredited installer, MAH Solar Solutions, and in no time had the kit up and running. My house faces east/west, so I have 8 panels on each side.

[Photo: the panels on the roof]

The panels are Canadian Solar 250W- 16 in total, forming a 4kW system fed into an Aurora ONE/PVI 3.6 inverter in the loft.

[Photo: the inverter in the loft]

The solar install was the perfect opportunity to finally replace my ancient fuse-wire consumer unit, which got me thinking about ways I might be able to better monitor my energy consumption. I hoped to find “a magical fuse box”, but after hours, days, weeks of research the best solution I could come up with was DIN rail mount kWh meters.

Unfortunately no amount of searching could find a single image of the kWh meters in use, or a diagram of how to wire them into a consumer unit. Instinct told me to mount each meter alongside its breaker, but this would prevent me from using the busbar. Then I thought about mounting them upside down, but this would cause similar issues and mean running the neutral far too close to the busbar! The next idea was to mount 2 consumer units side by side, with the breakers in one and the meters in the other… then I found some information about twin/dual rail consumer units, which seemed like a winning idea.

MAH Solar Solutions were to be installing the new consumer unit and, despite having never seen or used the kWh meters before, they were more than happy to fit them. They came up with the idea of mounting the meters at the end of each block of breakers. With a big enough consumer unit this meant the busbar could still be used. I will try and add some diagrams and more images at a later date.

[Photo: the new consumer unit with kWh meters installed]

I’m still working out the best way to read so many pulse counters simultaneously and log/process the data. Currently http://openenergymonitor.org/emon/buildingblocks/12-input-pulse-counting looks like one of the most promising options, but I would need to start from scratch with a way to count, store, transmit etc (the counting part at least is simple- see the sketch below). So for the time being I settled on the Fluksometer, which can handle 4 pulse inputs- enough for now: mains, solar, oven & gas meter.
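A bare-bones Arduino sketch for the counting part (assuming the meter’s pulse output is wired to an interrupt-capable pin; storing and transmitting the readings is the harder problem):

volatile unsigned long pulseCount = 0;

void onPulse() {
  pulseCount++;  // one pulse = 1Wh on a typical 1000 imp/kWh meter
}

void setup() {
  Serial.begin(9600);
  attachInterrupt(0, onPulse, FALLING);  // interrupt 0 = digital pin 2
}

void loop() {
  Serial.println(pulseCount);  // report the running total once a second
  delay(1000);
}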

[Photo: the Fluksometer]

The Flukso service itself provides quite nice graphing- you can see here my mains consumption (blue), oven consumption (green), solar PV generation (red) and gas consumption (orange).

[Screenshot: Flukso consumption/generation graphs]

Which feeds nicely into pvoutput.org.

[Screenshot: the same data on pvoutput.org]

You can see the RJ11 cable coming out of the gas meter (to the Fluksometer). The purple cable runs from the RS485 port on the inverter in the loft. I am waiting on a USB-to-RS485 adapter so I can start pulling some detailed data using Aurora Monitor or similar.

[Photo: the gas meter with pulse cable, and the RS485 run from the inverter]

I understand the Fluksometer has an onboard RFM12B configured on the 868Mhz band to understand communication from JeeLabs devices- which is ideal, as I meter my water using JeeNodes.

[Photo: the JeeNode water meter]

I just don’t know how to get them talking… yet!
