
VirtualBox High CPU Usage on OSX

I noticed the fan in my MacBook Air (running Yosemite 10.10) has been making a lot of racket lately. VirtualBox appears to be the culprit, even when my Windows 7 guest is idling (it shows as 100%+ CPU in Activity Monitor).

After reducing the number of assigned processors to 2 (previously 4) everything seems to be back to normal. I also set the max CPU usage to 75% (although this had no impact when set to 4 processors, so I'm not sure whether it actually helps).
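For reference, the same change can be made from the command line with VBoxManage (a minimal sketch; "Windows 7" is whatever your VM is named, and the VM must be powered off first);

# "Windows 7" is the example VM name - substitute your own; run while the VM is powered off
VBoxManage modifyvm "Windows 7" --cpus 2
VBoxManage modifyvm "Windows 7" --cpuexecutioncap 75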

Looking forward to the increased battery life more than anything.

For our helpdesk/support/ticketing system we have a screen mounted on the wall with key stats (still a work in progress);

[Screenshot: the support dashboard]

90% of the data is pulled from a SQL database- simple stuff! However, we wanted to pull in the number of unanswered e-mails currently sitting in our support mailbox (circled in red above).

Retrieving Data From the Inbox

The first challenge was to find a way to pull the number of e-mails. A small .NET app seemed like the way to go (we could then push the data into SQL and pull it into SSRS).

The first attempt used Microsoft.Office.Interop.Outlook;

// Requires Outlook to be installed locally with a profile already configured
var app = new Microsoft.Office.Interop.Outlook.Application();
var ns = app.GetNamespace("MAPI");
ns.Logon(null, null, false, false);
// Resolve the shared support mailbox, then count the items in its Inbox
Microsoft.Office.Interop.Outlook.Recipient recipient = ns.CreateRecipient("Tickett Enterprises Support");
recipient.Resolve();
var shared = ns.GetSharedDefaultFolder(recipient, Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderInbox);
return shared.Items.Count.ToString();

This was straightforward, but it assumes the application is run on a machine with Outlook installed and the shared Inbox configured. As our e-mail is provided by Exchange (Office 365), the recommended approach appeared to be the Exchange Web Services Managed API (EWS). Writing this code took considerably longer. I got stuck troubleshooting an “Autodiscover service couldn’t be located” error… I decided to point the API directly at the EWS URL, but that failed with a 401 Unauthorized exception.

I’m not 100% sure of the cause, but it seems that my Windows credentials were either not being passed or not being recognised/interpreted correctly. Microsoft’s sample code uses

service.UseDefaultCredentials = true;

…but changing this to false fixed the 401. My final code was;

            
try
{
    ExchangeService service = new ExchangeService();
    service.Credentials = new WebCredentials("user@domain", "password", "");
    service.UseDefaultCredentials = false;
    service.Url = new Uri("https://pod51047.outlook.com/ews/exchange.asmx");
    Mailbox mb = new Mailbox("support@domain");
    return Folder.Bind(service, new FolderId(WellKnownFolderName.Inbox, mb)).TotalCount.ToString();
}
catch (Exception ex)
{
    return ex.Message;
}

A colleague then suggested calling this code directly from SSRS instead of pushing/pulling to/from SQL. So… the .NET project was compiled as a class library and the rest should be easy?
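For illustration, the class library might be shaped something like this (a sketch only; the namespace, class and method names here are made up, and the credentials/URL are as before);

using System;
using Microsoft.Exchange.WebServices.Data;

// "TickettDashboard" and "InboxCounter" are illustrative names, not the real project
namespace TickettDashboard
{
    public static class InboxCounter
    {
        // Static method so it can be referenced directly from a report expression
        public static string GetSupportInboxCount()
        {
            try
            {
                ExchangeService service = new ExchangeService();
                service.Credentials = new WebCredentials("user@domain", "password", "");
                service.UseDefaultCredentials = false;
                service.Url = new Uri("https://pod51047.outlook.com/ews/exchange.asmx");
                Mailbox mb = new Mailbox("support@domain");
                return Folder.Bind(service, new FolderId(WellKnownFolderName.Inbox, mb)).TotalCount.ToString();
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
        }
    }
}

The report expression is then just something like =TickettDashboard.InboxCounter.GetSupportInboxCount().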

Loading/Calling the DLL in SSRS

I expected this to be straightforward, but let’s face it- it never is! Fortunately, Google saved the day.

In my development environment (local machine) I had to;

  • Reference my custom dll (in BIDS / Visual Studio, on the report menu, under report properties, references)
  • Copy my custom dll to C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies (This is for Visual Studio 2010, your path may differ slightly)
  • Copy any additional dlls your dll references to C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies (for me, this was Microsoft.Exchange.WebServices.dll)
  • Modify C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\RSPreviewPolicy.config (under each Codegroup node, set the PermissionSetName to FullTrust)

This had me up and running! I was hoping the same process would hold for deployment to the report server- think again! It turns out SSRS 2012 only supports .NET 3.5 and earlier (my code was compiled against 4.0). Fortunately I was able to recompile my dll in .NET 3.5 without any drama.

Then roughly the same process in my production environment (SSRS 2012);

  • Copy my custom dll and dependencies to c:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin (for SQL Server Reporting Services 2012)
  • Modify c:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rssrvpolicy.config (under each Codegroup node, set the PermissionSetName to FullTrust)

A bit behind as always… but I struggled with this one for a while, so I felt it was worth sharing the solution.

I got everything working on the LAN, then set up port forwarding on my firewall for;

TCP Port 80
TCP Port 443
TCP Port 1494
TCP Port 1604
TCP Port 2598

Whilst I was able to access the StoreFront, when I tried to launch my application I received an error;

Connection to the server "192.168.0.243" (the server's internal / private IP address) was interrupted. Please check your network connection and try again.

 

[Screenshot: the Citrix Receiver error message]

All the documentation and discussions online suggest that remote access is only possible using Netscaler, and although I may deploy Netscaler at a later date, for the moment I want to skip that step!

I found the default.ica file in C:\inetpub\wwwroot\Citrix\Store\App_Data contained an Address= entry (my installation uses mostly default values, and an excerpt looked something like this);

[ApplicationServers]
Application=

[Application]
Address=
TransportDriver=TCP/IP
DoNotUseDefaultCSL=On
BrowserProtocol=HTTPonTCP
LocHttpBrowserAddress=!
WinStationDriver=ICA 3.0
ProxyTimeout=30000
AutologonAllowed=ON

Simply adding my external IP address in the Address= section got me working!
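For illustration, the relevant part of the edited file ends up looking something like this (203.0.113.10 is just a placeholder; substitute your own public IP or DNS name);

[Application]
; example only - substitute your own public IP address or DNS name
Address=203.0.113.10
TransportDriver=TCP/IP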

Oddly, it’s quite hard to find Ubiquiti hardware in the UK, but I’ve previously sourced equipment from an eBay distributor (http://stores.ebay.co.uk/ubntshop/) and they were able to provide the best price for the ERL.

I fired the first EdgeRouter up and started getting to know the web UI. It didn’t take long to learn, but it seemed very basic. Even for my relatively simple requirements I’d need to get to know the CLI. The official Ubiquiti EdgeMAX support forum (https://community.ubnt.com/t5/EdgeMAX/bd-p/EdgeMAX) was a great place to start.

An important thing to note is that the EdgeMAX operating system is based on Vyatta, so if you struggle to find an EdgeMAX-specific solution to a problem you may be able to find a Vyatta solution which will work on your Ubiquiti hardware.

IP Addresses

One of the first decisions was an IP addressing scheme. I decided to use 192.168.x.y;

  • where x represents the site (in increments of 10 to allow for future splitting) and
  • y will be the same at every site for the router & file server

The DHCP range will be from .1 to .199
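As a rough sketch of what site 1 (x=10) might look like on the ERL (the .254 router address, the LAN1 name and eth1 as the LAN interface are all assumptions for the example);

# example values - adjust the interface name and host addresses to suit your scheme
set interfaces ethernet eth1 address 192.168.10.254/24
set service dhcp-server shared-network-name LAN1 subnet 192.168.10.0/24 start 192.168.10.1 stop 192.168.10.199
set service dhcp-server shared-network-name LAN1 subnet 192.168.10.0/24 default-router 192.168.10.254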

Firmware Upgrade

I should have done this a little earlier, but when configuring the system parameters I was reminded to check for a firmware upgrade and found the shipped unit was running a pretty outdated v1.1 (the current version at the time of posting is v1.5). So I went ahead and upgraded.

Default Configuration (WAN + 2LAN)

The new firmware has a great wizard to get you started. I chose the WAN + 2LAN setup and was immediately up and running with the router providing internet connectivity to the LAN. However, at this point in time double NAT is occurring as the internet connection is provided by a BT HomeHub3 (which doesn’t support bridge mode).

ADSL Modem

To avoid the double NAT scenario it was necessary to purchase an ADSL modem. There don’t appear to be many to choose from, so I opted for the Draytek Vigor 120. Absolutely no configuration of the modem was required; I simply plugged it in and set the ERL WAN connection to use PPPoE with the login credentials;

username: bthomehub@btbroadband.com
password: BT

…and voila!
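For reference, the equivalent CLI configuration looks something like this (assuming eth0 is the WAN port the wizard configured);

# assuming eth0 is the WAN interface
set interfaces ethernet eth0 pppoe 0 user-id bthomehub@btbroadband.com
set interfaces ethernet eth0 pppoe 0 password BT
commit
save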

VPN

Initially during testing I placed both EdgeRouters side by side, set static IP addresses (8.10.0.1 and 8.20.0.1) and connected them with an ethernet cable. Unfortunately, I was unable to get an IPSec tunnel established using the web UI, but after looking at some sample configs on the forum I was able to get it working using the CLI.

I then had to modify 3 elements to get it working on-site;

  • the peer to use the dynamic hostname
  • the local-ip to use 0.0.0.0
  • the interface to use pppoe0
vpn {
    ipsec {
        auto-firewall-nat-exclude enable
        disable-uniqreqids
        esp-group FOO0 {
            compression disable
            lifetime 3600
            mode tunnel
            pfs enable
            proposal 1 {
                encryption aes128
                hash sha1
            }
        }
        ike-group FOO0 {
            dead-peer-detection {
                action restart
                interval 15
                timeout 30
            }
            lifetime 28800
            proposal 1 {
                dh-group 2
                encryption aes128
                hash sha1
            }
        }
        ipsec-interfaces {
            interface pppoe0
        }
        nat-traversal enable
        site-to-site {
            peer dynamic-hostname.com {
                authentication {
                    mode pre-shared-secret
                    pre-shared-secret secret
                }
                connection-type initiate
                default-esp-group FOO0
                ike-group FOO0
                local-ip 0.0.0.0
                tunnel 1 {
                    allow-nat-networks disable
                    allow-public-networks disable
                    esp-group FOO0
                    local {
                        subnet 192.168.20.0/24
                    }
                    remote {
                        subnet 192.168.10.0/24
                    }
                }
            }
        }
    }
}

This was working well, but if either internet connection dropped or a router was rebooted the VPN wouldn’t automatically come back up. Supposedly dead-peer-detection should take care of this, but it didn’t appear to be working. I decided to create a simple workaround using a cron script;

#!/bin/bash
# vyatta-op-cmd-wrapper lets operational-mode commands be run from a script
run=/opt/vyatta/bin/vyatta-op-cmd-wrapper

# Look for any "up" tunnels in the IPSec SA output; grep exits 1 if none are found
$run show vpn ipsec sa | grep -q "up"
if [ $? == 1 ]
then
  # No tunnel is up - restart the VPN so it re-establishes
  $run restart vpn
fi

The following commands create a scheduled task to run the script every 5 minutes;

set system task-scheduler task vpn_monitor executable path /config/scripts/vpn_monitor.sh
set system task-scheduler task vpn_monitor interval 5m

By placing the script in /config/scripts you ensure it remains after a firmware upgrade and is included in configuration backups.
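For completeness, the script needs to exist at that path and be executable before the scheduler will run it; something like;

# create the script and make it executable
sudo vi /config/scripts/vpn_monitor.sh
sudo chmod +x /config/scripts/vpn_monitor.sh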

Static-Host-Mapping

We want to block a few websites (namely Facebook) and, rather than overcomplicating things with URL filtering / squidGuard, we’ve simply set a few static host mappings;

set system static-host-mapping host-name facebook.com inet 127.0.0.1

We also set a static host mapping for the file server at the other site (as the DNS server on the local router doesn’t have any knowledge of the hostnames/IP addresses serviced by the other site). Maybe at a later date I will try to find out if I can forward DNS requests to the other site before going out to the internet?
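If I do revisit it, the EdgeOS DNS forwarder is dnsmasq underneath, so something along these lines might do the trick (untested; the internal domain name here is made up, and 192.168.20.250 assumes the other site follows the same .250 file server convention);

# "site2.internal" is a made-up domain; 192.168.20.250 assumes the other site's file server
set service dns forwarding options server=/site2.internal/192.168.20.250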

Backup

Every time I make a configuration change I download a config backup.

On one occasion the backup failed to download and the web UI became unresponsive (rebooting the router fixed things, but the backup still wouldn’t download). I later discovered this was due to the size of the /config folder after installing squidGuard and downloading the category database. As I wasn’t going to be using it initially, I simply removed it.

I was recently tasked with overhauling the “network” for a small, local not-for-profit. The company currently has 2 sites, with roughly a dozen desktops at each and half a dozen laptops which roam between the two.

The primary requirements were to provide;

  • networked file storage (preferably redundant)
  • centralised user management (single sign-on and access control)
  • site blocking/web filtering

If both sites had “reasonable” internet connections, I would have suggested a single server at the “central” location with a site-to-site VPN. Unfortunately the connections are ~3Mbit down, 0.3Mbit up (ADSL). This introduces a need for additional hardware (servers at every site) and a way of synchronising/replicating between the sites!

As always, everything should be designed with scalability in mind, but sticking to a tight budget.

The File Servers

My first purchase was the file servers. Many years back I used to “roll my own” with something like an HP MicroServer and Windows Home Server (or possibly FreeNAS/OpenFiler), but I have since made the transition to dedicated Synology appliances.

Whilst you lose some of the flexibility (being able to install any software on x86/x64 hardware like the MicroServer) you gain a huge amount of reliability and support by going with a dedicated appliance (not to mention the huge feature set and ability to run many additional applications on the Synology product line).

One of the only requirements for the file server was redundancy (so at least 2 bays to support RAID 1). Wanting to stick with Synology, I used their product comparison tool (https://www.synology.com/en-uk/products/compare) to make a shortlist and, after looking at the prices, settled on the DiskStation DS214.

Although storage requirements were pretty small, I got a great deal on WD Green 3TB disks so bought 5 (2 for each site and 1 spare).

The Routers

Had this been a “home solution” I’d probably have opted for something like the Asus RT-AC66U flashed with one of the open source firmwares such as OpenWRT or DD-WRT. But, needing a “business solution”, I needed something that was, above all, reliable (the potential sacrifice being ease of use).

On top of reliability, the primary feature requirement for the routers is site-to-site VPN. After some research I decided to give the Ubiquiti EdgeRouter Lite 3 a try. Frustratingly, the ADSL connection coming in at both sites is provided by a BT HomeHub3. The HH3 doesn’t support bridge mode, and to avoid double NAT / further complications I decided to purchase 2 ADSL modems (there aren’t many to choose from… I went for the Draytek Vigor 120).

Documentation

I previously posted about some SharePoint issues I’ve been tackling; this is the medium I’ve chosen for documenting and sharing the how-to guides, configuration details and process documents. I’ve yet to tackle it, but I may also use it for new user requests, password resets, support requests etc.

To be continued…

Similarly, I have already posted about getting OpenLDAP replication working; this was one tiny part of the project. I will be following up this post with a number of posts specifically tackling the implementation and configuration of the new solution.

Watch this space.

Hopefully the shortest post yet :)

I was previously using GeoScaling (http://www.geoscaling.com/) to provide DNS for my domain names, but they’ve had a few 24hr+ outages in the last year and this has caused havoc (mainly with my e-mail, as sending servers have been unable to resolve/deliver mail to my domain).

I anticipated it was time to cough up some cash for a professional, paid-for service with guaranteed uptime, but stumbled across another free option- CloudFlare (http://www.cloudflare.com). The switch was pretty seamless and (to the best of my knowledge) they’ve had no downtime since I migrated. They have a much larger infrastructure (presumably due to the paid-for services they also offer) and even the free service supports CDN-style caching if you wish to save your webserver’s bandwidth.

Office 365 Hosted SharePoint

Hopefully a quick post…

My company is currently in the process of trying to document everything that’s currently stored in our heads. Initially we were using our helpdesk/ticketing software but decided that, in some instances, we would like to give our clients access to the documentation which relates to their organisation.

I use MediaWiki for some other information sharing but, from what I’ve read, it isn’t really meant for this type of “role” driven access control, and trying to use it in that way will ultimately end in failure. I don’t like “documents” (Microsoft Word etc.) so really wanted to stick with a “wiki” style solution. I recall using SharePoint on client sites historically and remember it handling this scenario pretty well- as we already have an Office 365 subscription it seemed a sensible avenue to explore.

Initial research had me concerned about the ability to share outside of our organisation (needing to purchase a license for every account that should be able to login)- but it subsequently turns out you can either;

-Create users without actually assigning licenses
-Grant access to anyone using their e-mail address (it will need to be linked to a Microsoft account, but there is no charge and many already are)

So we have set about creating the SharePoint sites and it’s coming together really well, but one thing was bothering me… When we login we are presented with a list of “sites”;

[Screenshot: the list of sites]

-”New Public Site”: http://tickett-public.sharepoint.com
-”Public Site”: http://tickett.sharepoint.com
-”Team Site”: http://tickett.sharepoint.com/TeamSite

If you clicked either of the first two links, hopefully you were redirected to http://tickett.net? But this wasn’t easy to achieve, and I was pretty confused about why I had two public URLs/sites and how I could edit them!

The “New Public Site” looked like;

[Screenshot: the “New Public Site”]

And the “Public Site” like;

[Screenshot: the “Public Site”]

A bit of googling and I found a reasonable explanation of why I have two sites… Microsoft went through an upgrade at some point and, to avoid breaking SharePoint sites, they kept all of the old ones and created new ones to sit alongside them.

As I already have a website I decided I don’t really need either of these so ideally would just like to redirect visitors to my existing site for now.

After a lot of poking around I somehow managed to get to the “New Public Site” in “edit mode” and add a little JavaScript to redirect visitors to our existing site;

<script>window.location.href="http://tickett.net";</script>

After adding the code I was successfully redirected when I visited the site, but anyone not logged in was not. So… armed with a handful of questions I decided it was time to raise a support ticket. Very quickly the phone rang and a technician was on the case;

#1- How do I edit the “New Public Site”

It didn’t take many minutes before I was informed that simply adding /_layouts/viewlsts.aspx to the URL would take me to the “admin area” where I could manage the site. Easy… but surely there must be an easier way than typing the URL?
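(For the “New Public Site”, that full URL works out as something like; http://tickett-public.sharepoint.com/_layouts/viewlsts.aspx)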

If you refer back to my earlier screenshot you’ll notice a “manage” link. Clicking this allows you to modify the links to the “New Public Site”, “Public Site” and “Team Site”. Adding the suffix to the URL made sense, so now when I login clicking on the site will take me to “edit mode” rather than “view”;

[Screenshot: the updated site links]

Well done Microsoft :)

#2- Why is the redirect only working for me?

Once #1 was solved and I was back into “edit mode”, the Microsoft engineer was very quick to pick up on the fact that my change was in draft;

[Screenshot: the change showing as a draft]

 

Clicking the … (three dots / ellipsis) displays a menu, and clicking the … on that menu brings out another which gives the “Publish a Major Version” option. Upon clicking this my change was live and everyone hitting the site was now being redirected.

Well done Microsoft :)

#3- How do I edit the “Public Site”

So far Microsoft had done pretty well, but they really struggled with this one. We have still yet to find a way to edit the site via a web interface.

Eventually, they suggested trying SharePoint Designer. I’ve not used this before, but since installing it I’ve found it to be a pretty good alternative to the web UI. Unfortunately when I tried to open the site I got stuck at the login stage- it appears that SharePoint Designer doesn’t support federated login (my Office 365 logins are authenticated using my on-premise ADFS server). Doh!

But… there was hope… we “shared” the site through the web interface with my personal @gmail address (which is linked to a Microsoft account) and I was then successfully able to login to SharePoint Designer- nearly there!

Next problem… the site doesn’t appear to exist;

[Screenshot: the site not appearing in SharePoint Designer]

 

Determination and a lot more poking around eventually took us to a link on the front page “Edit site home page”;

[Screenshot: the “Edit site home page” link]

Which threw yet another error: “This page does not contain any regions that you have permission to edit.”. But navigating back a few steps to “Website -> Web Pages” I was able to right click, open with, Notepad;

[Screenshot: opening the page with Notepad]

And add in my script;

[Screenshot: the redirect script added to the page]

So far, so good.

Despite it being a little bit “trial and error”, with Microsoft’s help, we did get there in the end, and very soon after I first raised the support ticket- good job!

A few weeks on since my last post (http://tickett.wordpress.com/2014/08/14/synology-directory-openldap-replication/) I have found a few bugs, fixed a few more issues and hopefully have a fully working solution.

One of the issues with my previous post (which I’m not going to go into at the moment) was that I hadn’t cross-compiled OpenSSL and Cyrus SASL2, so my version of slapd didn’t support either. I think I’ve now resolved this and you can download my latest slapd here: https://dl.dropboxusercontent.com/u/713/slapd

#1- I needed the slave to refer changes to the master

Documentation and discussion everywhere seems to suggest that simply adding a line to the slave’s slapd.conf;

updateref ldap://192.168.10.250

Would ensure any changes were written to the master, but I couldn’t get this working (even with debug enabled). The only error I could really find (from memory) was an err=49, which I believe refers to invalid credentials, but I’m unsure which credentials or how this is possible.

After further research, I found that there is an alternative OpenLDAP configuration referred to as N-Way Multi-Master. Rather than specifying a master and a slave, both nodes are masters and changes are replicated both ways. This was relatively easy to set up and “just worked” (not to mention it’s a better solution, as before it was possible the “master” server would be unreachable (if the site-to-site VPN was down) and changes would fail).

You will find config details for N-Way Multi-Master / MirrorMode in my next blog post.

#2- Unable to access shares after password change (from Windows/pGina) with error “Element Not Found”

This was a real curve ball. Google sent me in completely the wrong direction, but I recalled a discussion about multiple passwords being stored in the LDAP database, which led me to wonder if userPassword wasn’t the only field needing to be updated.

A colleague stumbled across the documentation for the pGina fork: http://mutonufoai.github.io/pgina/documentation/plugins/ldap.html which shows a rather more complete “Change Password” configuration for the LDAP plugin. Unfortunately pGina main doesn’t support the DES or Timestamp methods, so we couldn’t configure sambaLMPassword, shadowLastChange or sambaPwdLastSet, but adding sambaNTPassword (MD4) alongside userPassword (SHA1) seems to have done the trick.
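To sanity check that both hashes are being written after a password change, something like the following should show the two attributes (assuming the OpenLDAP client tools are available on the DiskStation; substitute your own bind DN and a real test user);

# substitute your own bind DN and a real user for "testuser"
ldapsearch -x -H ldap://localhost -D "uid=admin,cn=users,dc=example,dc=com" -W -b "cn=users,dc=example,dc=com" "(uid=testuser)" userPassword sambaNTPassword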

#3- Data was replicating but the users could not login

I’m not sure exactly how I figured this one out. I think I had a vague recollection of reading a discussion about passwords not replicating because the default permissions do not allow them to be read from the database.

I added a line in slapd.conf above the existing ACL include;

include /usr/syno/etc/openldap/acls.conf
include /usr/syno/etc/openldap/slapd-acls.conf

The contents of which;

access to attrs=userPassword,sambaLMPassword,sambaNTPassword
     by dn.base="cn=replication,cn=users,dc=example,dc=com" write

Allow the password to be read from the database by the replication user.

This drove me close to insanity, but I got there eventually!

I found an old discussion on the Synology forum http://forum.synology.com/enu/viewtopic.php?f=183&t=55020 and was optimistic it’d be pretty simple. The thread talks about compiling a later version of OpenLDAP from source, but the version included (in DSM5.0) is later than that discussed;

file-20> slapd -VV
@(#) $OpenLDAP: slapd 2.4.34 (Feb 27 2014 03:17:07) $
root@build3:/source/openldap-2.4.x/servers/slapd

I tried configuring my provider and consumer using the example and referring to http://www.openldap.org/doc/admin23/syncrepl.html, but wasn’t getting anywhere (after changing slapd.conf I would disable and re-enable the LDAP server through the web UI). I was getting an error “Permission denied. Please contact the server administrator.” and an entry in /var/log/messages;

file-20> tail /var/log/messages
Aug 14 21:51:59 file-20 ldap.cgi: ldap_server_default_add.c:146 add [admin] to [cn=Directory Operators,cn=groups,dc=example,dc=com] failed, ldap_insufficient_access (53)

Oddly the slapd process continued to run, but no replication was taking place. I believed the error might be because the admin account is locked in some way and won’t allow any modification. I tried adding a filter;

filter="(!cn=admin)"

This prevented the error message popping up and the error in /var/log/messages but still no replication was taking place.

I imagine this would have been a trivial task on a standard Linux distribution, but it seems OpenLDAP has been compiled in a manner which does not allow debug;

file-20> slapd -d 1
must compile with LDAP_DEBUG for debugging

So there’s no real feedback as to what is (or isn’t) working.

After blindly fumbling around for hours I decided to try and compile myself so I could debug. This itself was a mammoth chore!

I wanted to stick with the same version currently running on DSM5.0 so started with the source for 2.4.34 from http://www.openldap.org/software/download/OpenLDAP/openldap-release/

In order to cross compile I followed the Synology 3rd-Party Package Developers guide: http://www.synology.com/en-uk/support/third_party_app_int. I had a spare Ubuntu machine I could use for compiling… I needed the DSM5.0 toolchain from http://sourceforge.net/projects/dsgpl/files/DSM%205.0%20Tool%20Chains/ as I’m using the DS214, which apparently has a Marvell Armada XP processor. I extracted the archive;

tar zxpf gcc464_glibc215_hard_armada-GPL.tgz -C /usr/local/

Then Berkeley DB 5.1.25 from http://pkgs.fedoraproject.org/repo/pkgs/libdb/db-5.1.25.tar.gz/06656429bfc1abb6c0498eaeff70cd04/

tar xvf db-5.1.25.tar.gz
cd db-5.1.25
cd build_unix
export CC=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-gcc
export LD=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ld
export RANLIB=/usr/local/arm-marvell-linux-gnueabi/bin/arm-marvell-linux-gnueabi-ranlib
export CFLAGS="-I/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/include -mhard-float -mfpu=vfpv3-d16"
export LDFLAGS="-L/usr/local/arm-marvell-linux-gnueabi/arm-marvell-linux-gnueabi/libc/lib"
../dist/configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local 
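Not captured above, but Berkeley DB then needs the usual build and install before OpenLDAP can link against it;

# build and install Berkeley DB (step not recorded in my original notes)
make
sudo make install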

I also had to install;

sudo apt-get install lib32z1

Now I was able to configure OpenLDAP;

export LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.5.1/lib -R/usr/local/lib -R/usr/local/BerkeleyDB.5.1/lib"
export LD_LIBRARY_PATH=/usr/local/BerkeleyDB.5.1/lib
export LD_RUN_PATH=/usr/local/BerkeleyDB.5.1/lib
export CPPFLAGS="-I/usr/local/BerkeleyDB.5.1/include"
./configure --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --prefix=/usr/local --with-yielding-select=no --enable-crypt

But when I tried to;

make depend
make

I received an error: cross compile openldap error: undefined reference to `lutil_memcmp’ – http://zhuqy.wordpress.com/2010/04/22/cross-compile-openldap-error-undefined-reference-to-lutil_memcmp/ put me straight- I just had to comment out a line in include/portable.h;

//#define NEED_MEMCMP_REPLACEMENT 1

make was now successful. I moved my newly compiled slapd to the Synology DiskStation, chown’d & chmod’d it, and tested debug… an instant result;

file-20> chown root:root slapd.me
file-20> chmod 755 slapd.me
file-20> slapd.me -d 1
ldap_url_parse_ext(ldap://localhost/)
ldap_init: trying /usr/local/etc/openldap/ldap.conf
ldap_init: HOME env is /root
ldap_init: trying /root/ldaprc

Now I disabled the directory server in the web UI and instead ran my new version from the command line with debug level 1;

./slapd.me -d 1 -f /usr/syno/etc/openldap/slapd.conf

It failed with an error referring to;

password-hash {CRYPT}

Turns out I had to recompile slapd with --enable-crypt. I copied the newly compiled slapd over, ran it again with -d 1, and now I could see it failing with an error relating to an invalid filter;

filter="(!cn=admin)"

So I removed this and tried again. Now;

ldap_sasl_bind_s failed

I think that sent me in the wrong direction (I thought it was an SSL/TLS/authentication issue) and I spent hours messing with certificates, unsupported TLS configuration parameters etc., but got nowhere. Eventually I determined this error essentially means “can’t connect”. I then tried without SSL and, as if by magic, everything sprang to life!

Here are the lines I added to the default slapd.conf on the provider;

index entryCSN eq
index entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 10

And the consumer;

index entryCSN eq
index entryUUID eq

syncrepl rid=20
 provider=ldap://192.168.10.250
 type=refreshAndPersist
 interval=00:00:10:00
 searchbase="dc=example,dc=com"
 bindmethod=simple
 binddn="uid=admin,cn=users,dc=example,dc=com"
 credentials=password
 scope=sub
 retry="60 +"

If you want to download my compiled version of slapd you can find it here; https://www.dropbox.com/s/sfb06uo0leqxqq9/slapd

I hope this will help you!

Further to http://tickett.wordpress.com/2014/04/17/new-lab-nas/ I have now received my new hardware and had time to think and experiment.

My first experiment was iSCSI vs NFS. I created 2x 2-disk RAID0 volumes on the Synology DS1813+ (using the 4x Samsung EVO 840 SSDs) and attached them to my new ESXi host. I then installed a clean Windows 7 VM on each volume. After installing Windows I did some file copies and made a few notes.

[Chart and table: file copy speeds, iSCSI vs NFS]

The graph above (along with the figures in the table) does a great job of showing the results. NFS came out over 20% faster. You can see the max read speed recorded as 93 MB/s (only 71 MB/s for iSCSI) and the max write speed recorded as 77 MB/s (only 62 MB/s for iSCSI).

I then ran Crystal Disk Mark on the iSCSI VM:

[Screenshot: Crystal Disk Mark results, iSCSI]

And the NFS VM:

[Screenshot: Crystal Disk Mark results, NFS]

Here some of the results show less of an improvement, but others are more than doubled. All of the above ruled iSCSI out for me.

But I then got thinking about local storage. I only run one ESXi host these days (I have a spare host ready to swap out if I need to, but to save electricity I only run one at a time), so the benefit of networked/shared storage is almost non-existent (I will be backing up snapshots to the NAS).

I couldn’t initially create a 2-disk RAID0 array because ESXi doesn’t support my motherboard’s onboard RAID, so I stuck with a single local disk (the Samsung EVO 840 SSD again), installed Windows 7 and ran the Crystal Disk Mark benchmark:

[Screenshot: Crystal Disk Mark results, local single SSD]

I then found an old PCIe SSD (OCZ Revodrive x3) and thought I’d give that a try, more out of interest:

[Screenshot: Crystal Disk Mark results, passthrough PCIe SSD]

Nice! Unfortunately the PCIe SSD isn’t directly supported in ESXi, so I had to create a normal VM and connect the SSD using passthrough. This would essentially mean it could only be connected to one VM (which wouldn’t be a huge problem, as I’d want to connect it to my SQL VM), but the stability isn’t great either.

I picked up a cheap LSI SAS3041E RAID card from eBay and went about setting up the local 2-disk RAID0 array. The results were very surprising:

[Screenshot: Crystal Disk Mark results, local RAID0 2x SSD]

These are all below the speeds seen using a single SSD. See the table below to compare:

I’m not sure whether this is because of the RAID card, the lack of TRIM support or some other obscure reason. I decided I’m happier running 2 separate SSDs anyway (I can split the SQL databases & logs between the two discs to see a performance boost) and, if something goes wrong, I will only have to restore half my VMs from the nightly backup.

(MB/s)          iSCSI    NFS      Local Single SSD   Passthrough PCIe SSD   Local RAID0 2x SSD
Seq Read        96.74    101.4    238.7              1371                   233.4
Seq Write       30.61    72.91    229.1              1089                   219.9
512K Read       74.77    74.48    228.3              1051                   210.1
512K Write      45.46    66.23    223.8              1010                   208.2
4K Read         4.138    5.213    22.92              30.36                  18.31
4K Write        4.337    4.781    52.73              68.65                  23.59
4K QD32 Read    6.575    8.661    212.8              281.1                  62.26
4K QD32 Write   5.582    8.791    199.7              240.9                  92.81

[Chart: comparison of all five configurations]

 

 

And another without the PCIe SSD to make it a little easier to compare:

[Chart: comparison excluding the PCIe SSD]

So, in conclusion, I will be running 2 of the 250GB Samsung EVO 840 SSDs locally in the ESXi host. This will provide optimal performance and hugely reduce my dependence on the network and NAS (currently the VMs live on the NAS and I can’t take it or the network down without powering everything down first; my pfSense software router resides in a VM too, so I lose internet connectivity!). I will continue to use Veeam to take nightly backups should anything go wrong with the ESXi host.

I hope to migrate everything over the weekend- fingers crossed.
