Tag Archive: .NET


Whilst documentation, guides and general info around GitLab CI on Linux, using Docker and working with languages such as Ruby are plentiful, I found little on .NET and Windows. So after spending a lot of time getting it working, I wanted to share.

I have deployed a new, clean GitLab CE virtual machine and Windows 10 Professional virtual machine for the purposes of this post. You will need to either load a valid SSL certificate or use HTTP (there is plenty of information online around configuring either way).

The first thing is to download the 64-bit Windows GitLab Runner
from https://docs.gitlab.com/runner/install/windows.html. I chose to create a folder C:\GitLab-Runner to try and keep everything in one place. Then follow the instructions to register and install as a service (when prompted for the executor, enter shell).
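For reference, the registration and service install from that page boil down to something like the following from an elevated command prompt (assuming the downloaded binary has been renamed to gitlab-runner.exe);

```bat
cd C:\GitLab-Runner
:: interactive registration: you will be prompted for your GitLab URL,
:: the registration token from Settings -> CI/CD, a description, tags
:: and the executor (enter shell)
gitlab-runner.exe register
:: install and start as a Windows service (runs as SYSTEM by default)
gitlab-runner.exe install
gitlab-runner.exe start
```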

Now let’s take a look at my .gitlab-ci.yml template;

stages:
  - build
  - test

variables:
  CI_DEBUG_TRACE: "false"

build:
 stage: build
 script:
  - 'call c:\gitlab-runner\build_script.bat'
 artifacts:
  paths:
   - Tests/bin/

test:
 stage: test
 script:
  - 'call c:\gitlab-runner\test_script.bat' 
 coverage: '/\(\d+\.\d+\)/'
 dependencies:
  - build
 artifacts:
  reports:
   junit: testresult.xml

There are a few points to note;

  • The order of the stages- it seemed odd to me at first, but the build needs to happen before the test
  • CI_DEBUG_TRACE could be omitted, but if anything doesn’t work it provides a nice way to troubleshoot
  • For both the build and test we call an external batch file- this makes it really simple/easy to change our CI by modifying a central script rather than going into every project and modifying the .yml (if we do have any special cases we can modify the .yml directly)
  • The build artifacts (we need the test binaries which include all of the compiled references)
  • The test artifacts

Now let’s look at our build_script.bat;

C:\Windows\Microsoft.NET\Framework\v4.0.30319\nuget restore
"C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\bin\msbuild" /t:Restore,Clean,ReBuild /p:Configuration=Debug;Platform="Any CPU"
"C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\bin\msbuild" /t:ReBuild /p:Configuration=Release;Platform="Any CPU"
ping 192.168.99.99 -n 1 -w 10000 2>nul || type nul>nul

To work, our .sln must sit in the root of the repository. There are essentially four steps;

  • Restore all nuget packages
  • Attempt to build using the debug config
  • Attempt to build using the release config
  • Wait for 10 seconds (without this some files become locked and cause the test stage to fail)
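The ping on the last line of the script is the classic batch substitute for sleep: one ping to an address assumed unreachable on your network (192.168.99.99 here) with a 10,000ms reply timeout pauses for roughly 10 seconds, and the fallback after || swallows the failure so the step still exits with 0 (timeout /t tends to fail in non-interactive CI sessions with an input redirection error);

```bat
:: pause ~10 seconds: one ping to an (assumed) unreachable address with a
:: 10000ms reply timeout; the || fallback keeps the exit code at 0
ping 192.168.99.99 -n 1 -w 10000 2>nul || type nul>nul
```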

We also have a private NuGet server which needs adding for the user the GitLab Runner service runs as (SYSTEM here), so we add this line for the first run, after which it can be removed straight away;

C:\Windows\Microsoft.NET\Framework\v4.0.30319\nuget sources add -Name "Tickett Enterprises Limited" -Source https://nuget.blah.com:1234/nuget -username "svc-blah" -password "password123"

And our test_script.bat;

c:\GitLab-Runner\opencover\OpenCover.Console.exe -returntargetcode:1000 -register -target:"C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" -targetargs:"Tests\Tests.csproj --result=testresult.xml;transform=C:\gitlab-runner\nunit3-junit.xslt"

To work, our test project must be called Tests.csproj and reside in a folder named Tests. The entire script is combined into a single step which;

  • Uses OpenCover to execute our tests via the NUnit 3 console runner
  • Ensures any error code returned by NUnit is in turn returned by OpenCover (the -returntargetcode switch)
  • Transforms NUnit’s output into a JUnit format which GitLab can interpret
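GitLab applies the coverage regex from the .gitlab-ci.yml to the raw job log and takes the matched figure as the coverage percentage. A quick Python sketch of what that regex picks up (the OpenCover summary line below is an assumed example of the output shape, not verbatim output):

```python
import re

# The coverage regex from the .yml, without the surrounding slashes.
# GitLab scans the job log with this and reports the captured figure.
pattern = re.compile(r"\(\d+\.\d+\)")

# A made-up line in the shape of OpenCover's coverage summary
log_line = "Visited Points 45 of 50 (90.00)"

match = pattern.search(log_line)
print(match.group(0))  # the parenthesised percentage, e.g. (90.00)
```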

So the last piece of the puzzle is the XSLT template used to transform the NUnit output into something GitLab can understand; you can find this at
https://github.com/nunit/nunit-transforms/tree/master/nunit3-junit (save it as C:\gitlab-runner\nunit3-junit.xslt as referenced in the test script).

If we were to run our CI pipeline now it would fail because none of the prerequisites have been installed on the machine with the runner.

So let’s go ahead and download and install git
https://git-scm.com/download/win (I went with most of the defaults and selected C:\Windows\notepad.exe as the default editor as we won’t really be using it anyway). I’m sure there is a more minimal install we could do, but this works.

You also need to launch a command prompt and run;

git lfs install --system

Next we need to install NuGet- the Windows binary can be downloaded from
https://www.nuget.org/downloads (we decided to place it in C:\Windows\Microsoft.NET\Framework\v4.0.30319 to match the path used in our build script).

Now we need the Visual Studio 2017 build tools (currently available at
https://my.visualstudio.com/Downloads?q=visual%20studio%202017&wt.mc_id=o~msft~vscom~older-downloads or
https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=15&src=myvs although I know Microsoft have a nasty habit of breaking old links).

You should be able to run the installation and select the “workloads” (or components) relevant to you; we use .NET desktop build tools, Web development build tools and Data storage and processing build tools. We also need to install .NET Framework 4.7 SDK/targeting pack (from the individual components tab).

Right- let’s give it another run and see how we’re getting on;

Excellent, our build is now working AOK, we can focus on the tests. Let’s start by downloading OpenCover from
https://github.com/OpenCover/opencover/releases (at time of writing the latest release is 4.7.922). I chose the .zip archive and simply extracted it to C:\GitLab-Runner\opencover

And now we install NUnit Console from
https://github.com/nunit/nunit-console/releases (at time of writing the latest release is 3.10.0). I chose the .msi and installed using the defaults.

And now if we try and run our pipeline again;

Bingo! We can see the build and test stages both passed and our test shows a result for code coverage! Now let’s check what happens if we deliberately break a test;

Perfect! This time we can see the pipeline has failed, and if we raise a merge request the summary indicates 1 test failed out of 33 total and highlights the failed test.

As a final little nicety we added a few badges to our projects (I did this via the group settings so they appear for all projects within the group rather than adding them to each project).

Go to Groups -> Settings -> General -> Badges then add;

https://yourgitlaburl.com/%{project_path}/badges/%{default_branch}/pipeline.svg and https://yourgitlaburl.com/%{project_path}/badges/%{default_branch}/coverage.svg (you can link them to wherever you like). I am curious to find out a little more about badges; I would quite like to show the master, test and development branch pipeline and test coverage badges all on the project, but I’ve yet to figure out if you can pass a parameter to change the badge label.

I suspect the next steps will be to;

  • Add some form of code style/best practice analysis
  • Start thinking about deployment (this could be tricky as it involves a lot of different ifs, buts and maybes along with VPNs, domain users etc)

Any questions, please shout- me or my team would be happy to help you get up and running!

GitLab Setup : Summary

We have now been using GitLab CE (self-hosted) for about 16 months and have made some good progress. I hope to post a short series of blog entries describing how we got GitLab set up to work for us in a predominantly Microsoft .NET development environment.

In this initial post I hope to summarise what we have achieved, which I can then detail “how-to” in subsequent posts. I will probably spin up a new test instance to run through and ensure the steps are accurate (a lot of them were trial and error the first time round).

I may not go into detail on these ones as they are completely bespoke solutions- but if you are interested post a comment and I will try and do a write up;

  • Merge request approvals
  • Project “templates”
  • Global search across all projects and branches
  • Webhooks/integration with our helpdesk

Watch this space!

We recently migrated our internal reports from SSRS 2008 R2 to SSRS 2016 and had issues with a custom assembly / dll.

The deployment process seemed roughly the same (this time copying the .dll to C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin and modifying rssrvpolicy.config in C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer to grant FullTrust). While this allowed us to deploy the report without any complaints about the file being missing, the report itself was returning #Error.
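The FullTrust change is a CodeGroup entry in rssrvpolicy.config; ours looks something like the following (the Name, Description and the assembly filename MyCustomAssembly.dll are placeholders for your own values);

```xml
<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="CustomAssemblyCodeGroup"
           Description="Grants our custom report assembly full trust">
  <IMembershipCondition class="UrlMembershipCondition"
                        version="1"
                        Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\MyCustomAssembly.dll" />
</CodeGroup>
```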

After a lot of messing around, it turns out the .dll was set to target .NET Framework 3.5 (a requirement for SSRS 2008 R2). SSRS 2016, however, needs the assembly to target .NET Framework 4. Once I changed this in Visual Studio, recompiled and deployed the new .dll, voila!
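The change itself is just the target framework element in the assembly's .csproj (shown here for an old-style project file, which is what a .NET 3.5-era assembly would be);

```xml
<!-- was v3.5 for SSRS 2008 R2 -->
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
```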

Further to a post from over 3 years ago (https://tickett.wordpress.com/2011/09/27/updateable-excel-frontend-for-sql-view/) I have finally given up the search for a ready-made solution and built something myself.

I work with a number of systems which have functionality gaps and/or need a simpler (quicker) interface for data entry. The solution is essentially a SQL view (or stored procedure) with a simple grid (excel like) front-end. If any data needs to be captured not currently handled by the system a custom table is created to hold the fields.

My previous Excel based solution works rather well, but is starting to show its age. I am now beta testing a web based solution I have built;

[Screenshot]

The application is extremely simple to configure- enter the SQL to retrieve & update the data;

[Screenshot]

And list the columns with some attributes (ReadOnly, Hidden, Title, Width etc);

[Screenshot]

And you’re all set!

Functionality currently includes;

  • Easy configuration
  • Simple/fast data entry (with validation)
  • Column resizing (non-persistent)
  • Sort on any column

Next steps;

  • Test (find and fix bugs)
  • Optimise code
  • Allow parameters in select SQL

Crystal Report Data Source Updater

To address a long-standing bug/issue with the Crystal Reports application forcing you to update every table/view/sproc one by one, I built a quick and dirty application in C#.

You will need .NET installed and the Crystal runtime v13 to run it: http://snk.to/f-cztp0jk7

Here’s the code… no error trapping etc… I may improve it in the future:

            // Requires references to CrystalDecisions.CrystalReports.Engine,
            // CrystalDecisions.Shared and Microsoft.VisualBasic (for InputBox)
            InitializeComponent();

            // Pick the report to update
            ReportDocument rpt = new ReportDocument();
            OpenFileDialog ofd = new OpenFileDialog();
            ofd.Filter = "Crystal Reports (*.rpt)|*.rpt";
            ofd.ShowDialog();
            rpt.Load(ofd.FileName);

            // Prompt for the new connection details
            ConnectionInfo ci = new ConnectionInfo();
            ci.ServerName = Microsoft.VisualBasic.Interaction.InputBox("Server");
            ci.DatabaseName = Microsoft.VisualBasic.Interaction.InputBox("Database");
            ci.UserID = Microsoft.VisualBasic.Interaction.InputBox("User");
            ci.Password = Microsoft.VisualBasic.Interaction.InputBox("Password");
            ci.IntegratedSecurity = (Microsoft.VisualBasic.Interaction.InputBox("SSPI (Y/N)") == "Y");

            // Apply the new connection to every table/view/sproc in one pass
            TableLogOnInfo tli = new TableLogOnInfo();
            tli.ConnectionInfo = ci;
            for (int i = 0; i < rpt.Database.Tables.Count; i++)
            {
                rpt.Database.Tables[i].ApplyLogOnInfo(tli);
            }

            // Save the updated report and exit
            SaveFileDialog sfd = new SaveFileDialog();
            sfd.DefaultExt = ".rpt";
            sfd.ShowDialog();
            rpt.SaveAs(sfd.FileName);
            Environment.Exit(0);

Systems Integration / Interfacing

I am currently researching a project for a client to improve and centralise the interfacing of various systems.

Some of the systems will be known in certain industries: Microsoft Dynamics CRM, SunSystems, adDEPOT and EBMS / Ungerboeck, for example; others are bespoke systems developed internally or solely for the use of the client.

Interfaces include:

  • Financials (Invoices, Payments, Debtor Accounts)
  • Marketing Data (Prospects, Order History)
  • Orders
  • Other

Currently the systems interface in a variety of ways including but not limited to:

  • SSIS Packages
  • Bespoke .NET Applications
  • SQL Jobs
  • Custom ASP Scripts

I will go into a little more detail about the aims of the project at a later date, but the single biggest reason would most likely be sharing reusable code (for example where multiple systems are all interfacing invoices to Sun). My initial searches led me toward BizTalk and Jitterbit:

Microsoft BizTalk

  • Likely that the Enterprise edition would be required which is pretty expensive
  • Seems to provide a good solution to some common obstacles such as scalability and

JitterBit

  • Open-source option, although it appears to be limited, so the commercial edition would probably be required
  • Essentially appears a great tool for simple interfaces but not really cut out for the complexity of most of the integrations I’m looking at

A few links that may be worth checking out if you’re considering the above:

http://blogs.msdn.com/b/chrisromp/archive/2008/02/15/using-biztalk-vs-rolling-your-own.aspx
http://stackoverflow.com/questions/61437/what-are-some-viable-alternatives-to-biztalk-server
http://stackoverflow.com/questions/378088/jitterbit-vs-biztalk

The research and discussion with the IT team (along with a very brief trial of both applications) has led us to the conclusion that neither suit our requirements that well and a "home-grown" solution is the way to go. I tried to sketch what the end product may look like:

The red lines represent links which may not exist: Should all communication go through our custom web service? Or should we connect directly to the central database for certain functions/queries? Or possibly even directly to the underlying system? Hopefully I will be able to firm this up at a later date.

The first step will most likely be to focus on a single interface: Building the necessary web service methods, invoking application and supporting logging tables in an underlying database. Sun is most probably the biggest player in all these interfaces so we’ll start with one of the interfaces to/from Sun.

SunSystems Connect (SSC): A web service already exists to communicate with Sun however we still want to wrap this within our own web service. I started by browsing to SSC and executing a query using the web interface:

But I kept hitting errors:

The following error(s) occurred while executing the method:

An error occurred while exporting data
Unknown column specified in filter definition. Unknown column name was 'Column id = ERROR: invalid element name '/Accounts/AccountCode ''.
Column requested does not exist in the column dictionary. Column id is 'Column id = ERROR: invalid element name '/Accounts/AccountCode ''

I guess there must be a bug as the payload.xml appears malformed:

When I fix it:

And execute using my fixed payload.xml:

Everything works as intended:

Now it’s time to try communicating with SSC from .NET. Finding documentation was a bit of a nightmare but I managed to get my hands on some old paperwork: SunSystems Connect 521 SP1.pdf which includes code snippets including C#, Visual Basic.NET and Java- so far, so good…

Watch this space!

L
