Tag Archive: SQL Server

Further to a post over 3 years ago https://tickett.wordpress.com/2011/09/27/updateable-excel-frontend-for-sql-view/ – I have finally given up the search for a ready-made solution and built something myself.

I work with a number of systems which have functionality gaps and/or need a simpler (quicker) interface for data entry. The solution is essentially a SQL view (or stored procedure) with a simple, Excel-like grid front-end. If any data needs to be captured that isn't currently handled by the system, a custom table is created to hold the extra fields.

My previous Excel-based solution works rather well, but is starting to show its age. I am now beta testing a web-based solution I have built:

[Screenshot: the web grid front-end]

The application is extremely simple to configure. Enter the SQL to retrieve and update the data:

[Screenshot: the select/update SQL configuration]

And list the columns with some attributes (ReadOnly, Hidden, Title, Width etc.):

[Screenshot: the column configuration]

And you’re all set!
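To give a flavour of that configuration, the two statements might look something like this (the table and column names are invented for illustration, and the @-parameters are assumed to be filled in from the edited grid row):

```sql
-- Select SQL: the rows shown in the grid.
select t.ID, t.CustomerName, x.Notes
from dbo.SystemTable t
left join dbo.CustomExtraFields x on x.ID = t.ID;

-- Update SQL: run once per edited row.
update dbo.CustomExtraFields
set Notes = @Notes
where ID = @ID;
```

The ReadOnly attribute on the system's own columns then keeps edits confined to the custom table.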

Functionality currently includes:

  • Easy configuration
  • Simple/fast data entry (with validation)
  • Column resizing (non-persistent)
  • Sort on any column

Next steps:

  • Test (find and fix bugs)
  • Optimise code
  • Allow parameters in select SQL

Server/Network Monitoring

When it comes to server/network monitoring, the problem is that we are spoilt for choice! My requirements are pretty basic, yet making a decision and wading through all the information, trialling etc. was proving quite a challenge!

The feature set I'm looking for is pretty basic:

  • connectivity (probably through ping)
  • network throughput/bandwidth usage
  • website availability
  • SQL Server availability
  • disk space
  • CPU utilisation
  • disk queue length

The number of monitors/sensors/servers will initially be quite low, but I'm always nervous about purchasing something which is going to end up costing me substantial amounts if/when it needs expanding (I always prefer unlimited licenses!).

And perhaps most importantly, I was really hoping to find something which sits on top of a Microsoft SQL Server backend database. This will make ad-hoc queries much simpler and allow me to pull data into existing SQL Server Reporting Services (SSRS) reports.

  • The top players in the industry appear to be WhatsUp Gold & IPMonitor; both hold a pretty hefty price tag.
  • We then have a few big players with no price tag… Spiceworks & Nagios. I've used Spiceworks before, and find it too bloated (it does everything OK, but nothing very well… not to mention the adverts). Nagios sounds quite complex to get up and running, although it's a potential contender.
  • ServersAlive looked like an option with a very reasonable price tag. Although it doesn't directly support a SQL Server backend, it does support logging "state changes" to SQL Server. Unfortunately the software looks incredibly dated; I'm not too sure it's being actively maintained/developed.

And none of these use SQL Server as a backend database. Maybe I should just build something myself… my requirements are pretty basic… Luckily, before making a purchase or starting to roll my own, I found this page on Wikipedia:


I took a quick look at the website for each tool listed as supporting an MS SQL Server backend DB and quickly found myself down to NetXMS (http://www.netxms.org/).

I installed the server along with a Linux and a Windows agent (fairly effortless) and it's all looking rather promising. I've started by setting up a few basic monitors and will hopefully find time over the coming weeks to add:

  • Additional monitors
  • Alerts
  • Poke around in the database (pull some data into SSRS)

I have used this technique (e-mailing exception reports straight from SQL Server) over and over, and although many other options exist, it is simple and effective, so worth mentioning.

There is one prerequisite: Database Mail must be configured on the SQL Server.
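If Database Mail isn't enabled yet, the server-side switch looks like this (you will still need to create a mail profile and account, e.g. via the Database Mail wizard in Management Studio):

```sql
-- Turn on the Database Mail extended stored procedures.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
```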

The first step is to build your query. Let's say:

select blah from some_table where exception_condition = 'yes'

Once we’re happy with our query we can build it into the sp_send_dbmail command:

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Test',
@recipients = 'blah@example.com',
@query = 'select blah from some_table where exception_condition = ''yes''',
@subject = 'EBMS Exception Report',
@attach_query_result_as_file = 1;

Notice: you will have to replace any single quotes in your query with two single quotes. You will likely notice the output is a bit misaligned:

This is because SQL Server automatically pads each field to the length in the table definition. I normally truncate each field to no more than about 30 characters, and we're in business. If you don't want to keep running the sp_send_dbmail procedure, opening the e-mail, tweaking the query and re-running, you can run the SQL query in Management Studio with the "Results to Text" option as shown below:
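The truncation itself is just a LEFT/CAST per column. Applied to the earlier example query it would look like this:

```sql
-- Fix each column at 30 characters so the plain-text e-mail lines up.
select cast(left(blah, 30) as char(30)) as blah
from some_table
where exception_condition = 'yes';
```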

After a bit of tweaking you should have it looking how you want:

Once you have tested this and are happy, you can create a SQL Agent job with a single T-SQL step containing the above code. Attach a schedule and you're good to go!
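If you prefer scripting the job to using the Management Studio GUI, something along these lines should work (the job name and schedule are illustrative):

```sql
EXEC msdb.dbo.sp_add_job @job_name = 'EBMS Exception Report';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = 'EBMS Exception Report',
    @step_name = 'Send exception report',
    @subsystem = 'TSQL',
    @database_name = 'msdb',
    @command = 'EXEC msdb.dbo.sp_send_dbmail ...'; -- paste the full call from above, doubling the single quotes again

-- Run daily at 07:00.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = 'EBMS Exception Report',
    @name = 'Daily 7am',
    @freq_type = 4,
    @freq_interval = 1,
    @active_start_time = 70000;

-- Target the local server so the job actually runs.
EXEC msdb.dbo.sp_add_jobserver @job_name = 'EBMS Exception Report';
```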

If you receive an error when trying to execute the SQL job, you may need to double-check that the SQL Server Agent account has adequate permissions on the msdb database and on the databases involved in your SQL query.
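On the Database Mail side, that usually comes down to membership of the DatabaseMailUserRole role in msdb (the account name below is a placeholder for whatever account the job step runs as):

```sql
USE msdb;
-- Allow the agent's account to send Database Mail.
EXEC sp_addrolemember 'DatabaseMailUserRole', 'DOMAIN\SqlAgentAccount';
```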


It has been far too long since I blogged. Work has been keeping me busy 7 days a week but here’s a quick post about a recent challenge.

I have been working on several data migration projects and have been struggling in general with sluggish SQL query performance. Most of the research I conducted suggested the primary bottleneck is disk IO (rather than throughput or CPU speed), so I decided to try and find the fastest "reasonably priced" SSD to give me a boost. I found this list http://www.storagesearch.com/ssd-fastest.html and after researching each of the items (for availability and cost) I chose the OCZ Revodrive3 X2 PCI-e SSD.

I ordered the SSD and installed it in one of my spare servers (SuperMicro X8SIL-F, Intel X3470, 16GB Kingston RAM). Despite not being listed on the OCZ compatibility list http://www.ocztechnology.com/displaypage.php?name=revodrive_moboguide the SSD was detected during POST. Again, despite being listed as only compatible with Windows 7, the SSD was detected in Windows 2008 R2 and, after installing the driver and initialising the disk in Server Manager, appeared in "My Computer".

Unfortunately, after installing SQL Server 2008 R2 and loading a few databases up, the machine mysteriously restarted and the drive was no longer listed in "My Computer" or Device Manager. After further reboots the SSD no longer showed during POST (various green and blue LEDs were still lit, so it wasn't completely dead). I tried the card in an old Dell 9150 I had kicking around: nothing. Then a spare HP Microserver N36L: again, nothing.

Fairly convinced now that the SSD had died, I called eBuyer up and started the RMA ball rolling, along with placing an order for a new card (this was Thursday night and fortunately they were able to guarantee Saturday delivery). I also ordered a motherboard from the compatibility list (ASRock Z68 Extreme4 Gen3), an Intel i7 quad-core 2600 3.4GHz Sandy Bridge processor and 16GB DDR3 RAM, along with a no-name case, all from Amazon (again, with a guarantee of Saturday delivery).

I put everything together and fired it up:

One of the first things I noticed was the different number of chips present on each of the SSD boards (compare this to the one at the top of the post). But regardless, the replacement was working and still going strong after 24hrs solid use.

Keeping an eye on various counters in Windows I've seen the card reach over 1000MB/sec: impressive! And no longer does the disk seem to be the bottleneck; it now appears to be the CPU. Doh! Unfortunately it looks like some queries only utilise a single processor core, so the CPU is actually only 12.5% utilised.
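For the single-core behaviour, two generic SQL Server knobs are worth a look (a sketch only; whether a given query parallelises at all depends on its plan, and the table name below is made up):

```sql
-- Lower the cost threshold so cheaper queries qualify for parallel plans.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;

-- Or hint an individual query to use all available cores.
select count(*) from dbo.BigTable option (maxdop 0);
```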

That’s all for now! It looks like I was just very unfortunate with the first card but the replacement is blindingly fast and a great price (especially when compared with the competition).


BCP as SSIS Alternative

I often use BCP as a quick/easy alternative to SQL Server Integration Services (SSIS) packages: creating a view or stored procedure to return the desired records, then a SQL job to execute BCP using an Operating system (CmdExec) step type.

This works pretty well, and a second step can then be used to e-mail the BCP output using sp_send_dbmail or a similar alternative. Now I want to archive the BCP output by appending a prefix/suffix to the BCP output filename.

I already use batch files for various tasks, and so know that I can use %date:~6,4%%date:~3,2%%date:~0,2% on the command line to get the date in the format YYYYMMDD (assuming UK-style DD/MM/YYYY short dates). So let's try adding that to the BCP command line:

bcp "exec ebms_sql.dbo.sp_Missing_Tax_Schemes" QUERYOUT "\\fs01\Reports\Missing_Tax_Scheme_"%date:~6,4%%date:~3,2%%date:~0,2%".csv" -T -S "localhost" -t "," -c

Let’s try it:

Error: SQLState = S1000, NativeError = 0 Error = [Microsoft][SQL Native Client]Unable to open BCP host data-file. Process Exit Code 1. The step failed.

OK, so it must be a permissions error? Nope, permissions are fine. Let's try running it from a command prompt:

Hmmm, that works?! Let's try snooping on what's going on when the SQL job runs, with Sysinternals Process Monitor:

Right: the environment variables aren't being expanded, so the filename, still containing the special characters, is invalid. How can we force the command prompt to interpret it differently? Let's try cmd /c on the off chance:

cmd /c bcp "exec ebms_sql.dbo.sp_Missing_Tax_Schemes" QUERYOUT "\\fs01\Reports\Missing_Tax_Scheme_"%date:~6,4%%date:~3,2%%date:~0,2%".csv" -T -S "localhost" -t "," -c

Success- it works!

Now my only remaining frustration is that BCP won't output a row with column headers (I do a weird union thing to get around this for the time being, but will look into it further at a later date).
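For the curious, the "weird union thing" is roughly this shape: a literal header row unioned above the data, everything cast to text so the UNION's types agree, and an extra sort key to keep the header first (the table and column names here are made up):

```sql
select ID, Scheme
from (
    select 'ID' as ID, 'Scheme' as Scheme, 0 as ord
    union all
    select cast(ID as varchar(20)), Scheme, 1
    from dbo.Missing_Tax_Schemes
) x
order by ord;
```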

