Express install Jekyll on Windows and import a WordPress blog

There’s a huge array of great posts covering installing Jekyll and migrating from WordPress to Jekyll. I still needed to pull everything together from a few sources (and needed some test material for my first post since migrating to Jekyll), so I’ve put together a quick, mostly scripted shortcut guide below. Past this point, I’ll assume you’ve already made the decision to jump ship and just need a quick set of commands to run!

Pull up a PowerShell terminal elevated to Administrator and let’s get going.

Install Jekyll locally on Windows for debugging (optional)

New-Item -ItemType Directory -Force -Path "C:\temp\wpjekyll"
cd "C:\temp\wpjekyll"
Invoke-WebRequest "" -OutFile "rubyinstaller-2.1.5-x64.exe"
.\rubyinstaller-2.1.5-x64.exe /verysilent /tasks="modpath"
Invoke-WebRequest "" -OutFile "DevKit-mingw64-64-4.7.2-20130224-1432-sfx.exe"
.\DevKit-mingw64-64-4.7.2-20130224-1432-sfx.exe -o"C:\RubyDevKit" -y
cd "C:\RubyDevKit"
ruby dk.rb init
ruby dk.rb install
gem install jekyll

Grab an awesome template of your choice

cd "C:\temp\wpjekyll"
Invoke-WebRequest "" -OutFile "template.zip"
$shell = New-Object -ComObject Shell.Application
$template = $shell.NameSpace("C:\temp\wpjekyll\template.zip")
New-Item -ItemType Directory -Force -Path "jekyll-template"
foreach($item in $template.items()) { $shell.NameSpace("C:\temp\wpjekyll\jekyll-template").copyhere($item) }

Give our new blog a test run

gem install bundler
cd "C:\temp\wpjekyll\jekyll-template\poole-master"
Add-Content "_config.yml" "`r`nmarkdown: redcarpet`r`nhighlighter: rouge"
"source ''`r`ngem 'github-pages'`r`ngem 'rouge'`r`ngem 'wdm', '>= 0.1.0' if Gem.win_platform?" | Set-Content "Gemfile"
bundle install
jekyll serve

Check it out at http://localhost:4000!

Host your blog

There are lots of hosting options - after all, Jekyll just generates plain old HTML! GitHub Pages is a common choice: it will both build the content and host the result for you, and it’s free, even with a custom domain name.

  1. Create a repository on GitHub named
  2. Create a git repository from the template folder in the previous section and push it to your new GitHub repository using your tool of choice. Make sure you push to the master branch of the repository.
  3. Hang tight! It might take up to 10 minutes, but your blog should become available at
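Step 2 above can be sketched with the git CLI (the remote URL below is a placeholder - substitute your own repository):

```shell
cd "C:\temp\wpjekyll\jekyll-template\poole-master"
git init
git add -A
git commit -m "Initial import of Jekyll template"
git remote add origin https://github.com/<username>/<repository>.git
git push -u origin master
```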

Migrate from Wordpress

In your WordPress admin panel, go to Tools > Export > All Content. Save the resulting file to your blog folder as wordpress.xml, or adjust the filename in the command below.

cd "C:\temp\wpjekyll\jekyll-template\poole-master"
gem install jekyll-import hpricot
ruby -rubygems -e "require 'jekyll-import'; JekyllImport::Importers::WordpressDotCom.run({ 'source' => 'wordpress.xml' })"

Set your permalink configuration to match the WordPress default (customise this if you need to):

cd "C:\temp\wpjekyll\jekyll-template\poole-master"
Add-Content "_config.yml" "`r`npermalink: /:year/:month/:day/:title/"

Import comments

The above will have imported all your posts and images. A common requirement at this point is to set up a comment plugin such as Disqus and import your old comments.

  1. Create a Disqus account
  2. Grab the Universal Code snippet and drop it into your blog wherever you’d like comments to appear. In our example template above, you’d most likely drop it into _layouts\post.html.
  3. To make sure we exclude index.html from the URL that Disqus uses to uniquely identify a page, drop the following line into the template for each page - in the above, _includes\head.html (inside <head></head>):
<link rel="canonical" href="" />

Then add the following line of JavaScript to the Universal Code snippet you pasted in step 2:

var disqus_url = "";

In the Disqus admin panel, under Discussions -> Import -> WordPress, upload your wordpress.xml file from earlier (you can delete it afterwards). You’ll need to make sure your new blog URLs match the old ones for comments to carry across properly, though Disqus has some handy tools to help you migrate domains/URL formats under the Discussions -> Edit tab.

All done! There’s loads more you can do to customise Jekyll from here. Check out plugins supported by GitHub Pages, setting up a custom domain, troubleshooting GitHub Pages build failures, and this infinite scroll plugin which slots right in to most Jekyll templates with no markup changes.

Azure Mobile Services .NET Backend: WebApiConfig.Register() usage

When using Azure Mobile Services .NET Backend, you might run into some of the following errors in the portal:

Boot strapping failed: executing 'WebApiConfig.Register' caused an exception: 'Parameter count mismatch.'.
Exception=System.InvalidOperationException: Boot strapping failed: executing 'WebApiConfig.Register' caused an exception: 'Late bound operations cannot be performed on types or methods for which ContainsGenericParameters is true.'.

Or the following in your event log:

The service has not been initialized correctly. Please ensure that 'StartupOwinAppBuilder' has been initialized.
   at Microsoft.WindowsAzure.Mobile.Service.Config.StartupOwinAppBuilder.Configuration(IAppBuilder appBuilder)
Exception has been thrown by the target of an invocation.
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Owin.Loader.DefaultLoader.<>c__DisplayClass12.<MakeDelegate>b__b(IAppBuilder builder)
   at Owin.Loader.DefaultLoader.<>c__DisplayClass1.<LoadImplementation>b__0(IAppBuilder builder)
   at Microsoft.Owin.Host.SystemWeb.OwinHttpModule.<>c__DisplayClass2.<InitializeBlueprint>b__0(IAppBuilder builder)
   at Microsoft.Owin.Host.SystemWeb.OwinAppContext.Initialize(Action`1 startup)
   at Microsoft.Owin.Host.SystemWeb.OwinBuilder.Build(Action`1 startup)
   at Microsoft.Owin.Host.SystemWeb.OwinHttpModule.InitializeBlueprint()
   at System.Threading.LazyInitializer.EnsureInitializedCore[T](T& target, Boolean& initialized, Object& syncLock, Func`1 valueFactory)
   at Microsoft.Owin.Host.SystemWeb.OwinHttpModule.Init(HttpApplication context)
   at System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers)
   at System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context)
   at System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context)
   at System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext)

It’s not clearly documented anywhere at the moment, but it turns out that Azure Mobile Services expects you to have a WebApiConfig.Register() method defined with exactly that name and signature, which it invokes via reflection. If you change it, the method won’t be called correctly, and if you’re working from the examples the standard OWIN services won’t be configured.

Of course, you can always go ahead and configure OWIN yourself if you need to customise its configuration.
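For reference, the scaffolded method looks something like the following (a sketch based on the standard template - your generated code may differ slightly); it’s the exact class name, method name and parameterless static signature that must be preserved:

```csharp
public static class WebApiConfig
{
    public static void Register()
    {
        // Azure Mobile Services finds and invokes this exact static method via
        // reflection at startup; renaming it or changing its signature causes
        // the bootstrapping errors above.
        ConfigOptions options = new ConfigOptions();
        HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));
    }
}
```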

Creating and integration testing a .NET / C# Backend for Azure Mobile Services

I’ve been experimenting with the new .NET backend for Azure Mobile Services, and for various reasons decided I’d get the highest business value from testing the backend API controllers at a high level rather than unit testing them. This proved to be a little tricky as there isn’t a lot of documentation out there on the .NET backend yet, so I’ll cover the steps I took to get it working with a quick walkthrough.

Creating the backend

Firstly, let’s create a new mobile service and grab the sample app generated by the portal.

Open up the sample app in Visual Studio. You should give it a test run at this point to make sure it all works correctly.

Testing an API method

All looking good? Awesome. Add a .NET 4.5+ class library for your tests and set up the unit testing framework of your choice. You’ll also need to install the WindowsAzure.MobileServices.Backend.Entity NuGet package into your tests project.

While you’re there, add a connection string to the app.config in your tests project by copying in the one from web.config and changing the file and initial catalog names. This will be used for our test database.

Let’s see what happens when we add a really simple test:

Cool - it wouldn’t have been any fun if it was that easy. If you take a look at the TodoItemController, you’ll notice some dependencies configured in the Initialize method, which isn’t called by our test. We’ll match that with a fake implementation in our test setup method - we can add further information to the Request and Services objects if necessary for more complex tests. You’ll notice I’ve also chosen to manually configure the DataDirectory used in the test connection string (something ASP.NET does automatically for normal requests) so that the test database is created correctly under bin\Debug.
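As a rough sketch of what that setup can look like (the member names here - Configuration, Request, Services and the ApiServices type - are from the backend libraries as I understand them, but may differ between versions, so treat this as illustrative rather than definitive):

```csharp
[TestInitialize]
public void Setup()
{
    // ASP.NET sets |DataDirectory| automatically for normal requests; do it
    // manually here so the test database is created under bin\Debug.
    AppDomain.CurrentDomain.SetData("DataDirectory",
        AppDomain.CurrentDomain.BaseDirectory);

    // Stand in for the dependencies normally wired up when ASP.NET calls
    // the controller's Initialize method.
    var config = new HttpConfiguration();
    controller = new TodoItemController
    {
        Configuration = config,
        Request = new HttpRequestMessage(),
        Services = new ApiServices(config)
    };
}
```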

With that in place, let’s give our basic test another run.


I'm certainly no expert on the .NET Mobile Services backend, so I'm sure I've missed things in the above steps for more complex scenarios. As I explore more, I'll update the walkthrough and add potential issues below for advanced readers.

1. If you use |DataDirectory| in your test connection string but don't set this up, you'll see an error like this:

A file activation error occurred. The physical file name '\{databasename}.mdf' may be incorrect. Diagnose and correct additional errors, and retry the operation.
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

2. If for whatever reason your version doesn't include these two calls, you might face errors like these:

System.Data.SqlClient.SqlException: Cannot create more than one clustered index on table 'dbo.TodoItems'. Drop the existing clustered index 'PK_dbo.TodoItems' before creating another.

Validation failed for one or more entities. See 'EntityValidationErrors' property for more details. errors complaining about your Id or CreatedAt properties being NULL, or a Cannot insert the value NULL into column error (especially if your seed data expects the database IDs to be auto generated).

There are two things going on here which cause this - bear with me as it gets a bit crazy! The Microsoft.WindowsAzure.Mobile.Service.ServiceConfig.Initialize() method called from our scaffolded WebApiConfig.Register() method injects its own SQL generator into Entity Framework using the Microsoft.WindowsAzure.Mobile.Service.Config.EntityExtensionConfig and Microsoft.WindowsAzure.Mobile.Service.Tables.EntityTableSqlGenerator classes included with the backend libraries. This SQL generator does some work behind the scenes that prevents the above errors from occurring: it configures autogenerated Guids and DateTimes for the EntityData properties marked with the TableColumnType.Id and TableColumnType.CreatedAt attributes, and force-disables index clustering on the primary key if you have an entity property marked with TableColumnType.CreatedAt (all defaults if your data models inherit from EntityData).

If your test doesn't call into a method from Microsoft.WindowsAzure.Mobile.Service.Entity or Microsoft.WindowsAzure.Mobile.Service.Entity.Tables assemblies which both contain [assembly: ExtensionConfigProvider(typeof (EntityExtensionConfig))] in their AssemblyInfo.cs files, you might not load those assemblies into your test project app domain - my initial approach to this involved the controller handling the initialisation in a different way, which meant the test never touched those assemblies.

Unfortunately, the SQL generator is injected by scanning your loaded assemblies for the attribute, which is never an issue for your main API project - see internal static void InitializeExtensions(_Assembly[] loadedAssemblies, HttpConfiguration config, ContainerBuilder containerBuilder) within Microsoft.WindowsAzure.Mobile.Service.ConfigBuilder for how this has been implemented. If you're having this problem, try adding the above attribute to your test AssemblyInfo.cs file yourself.
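In other words, a one-line addition like this to your test project's AssemblyInfo.cs should get the SQL generator picked up (the using is my guess at the right namespace - adjust to wherever EntityExtensionConfig lives in your backend version):

```csharp
using Microsoft.WindowsAzure.Mobile.Service.Config;

[assembly: ExtensionConfigProvider(typeof(EntityExtensionConfig))]
```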

Wrapping it up

If you're using migrations, I recommend configuring your seed class to fully drop and recreate the database each time it runs in a test; otherwise it's easy to miss configuration changes that will break the backend down the track.

I've created a base class for my controllers and for my controller tests to add support for this as well as simplify our controllers and tests:

Usage looks like this:

TeamCity deployment pipeline (part 2: TeamCity 8, Build once and UI tests)

The last post in this series covers a range of techniques for using TeamCity to build a continuous delivery deployment pipeline in a scalable way. This post builds on those techniques by providing some more options to consider given the new features in TeamCity 8 as well as what we've learnt since the original post.

Maintainable, large-scale continuous delivery with TeamCity series

This post is part of a blog series jointly written by myself and Matt Davies called Maintainable, large-scale continuous delivery with TeamCity:

  1. Intro
  2. TeamCity deployment pipeline
  3. Deploying Web Applications
    • MsDeploy (onprem and Azure Web Sites)
    • OctopusDeploy (nuget)
    • Git push (Windows Azure Web Sites)
  4. Deploying Windows Services
    • MsDeploy
    • OctopusDeploy
    • Git push (Windows Azure Web Sites Web Jobs)
  5. Deploying Windows Azure Cloud Services
    • OctopusDeploy
    • PowerShell
  6. How to choose your deployment technology

TeamCity 8 (and some other goodness we missed last time around)

Following is a bunch of (mostly new) TeamCity features we didn't cover in the last post that we think are useful to know if you are looking after a large-scale TeamCity deployment.

Project groupings

One of the big improvements in TeamCity 8 is the addition of the projects hierarchy, which allows you to nest projects. In a large-scale TeamCity deployment this is particularly useful:

  • You can group related projects e.g. per-client, per-team, per-department etc.
  • If you follow our advice in the last post about splitting up production deployments to a separate project (if you need to have different permissions) then you can group the prod and non-prod projects together

As well as a logical grouping, project hierarchy gives you the ability to share the following:

  • Build configuration parameters and templates
  • Clean-up rules
  • VCS Roots
  • Users and group roles
  • Shared resources
  • Meta-runners

Pull requests

Rob has written before about the importance of using pull requests for commercial software development. One of the really cool things in TeamCity is the ability to automatically build pull requests and then report back their status via one of a number of plugins (for GitHub, BitBucket or Stash). You can see more about how to do this in the excellent post by Mehdi Khalili.


Semantic versioning

Semantic Versioning is really useful for software libraries because it helps communicate the scope of a change to the users of that library. Jake Ginnivan has talked to us before about using semver for commercial projects to communicate the status of a build to the client e.g. a major revision can be used for a large change / feature / rewrite, a minor revision for a new feature, and a patch for bug fixes or small changes to an existing feature. If you want to use custom versioning with TeamCity then it actually provides you with the ability to dynamically change the build number to help accommodate this.

If you are using Git for your source control, then you are in luck because Jake Ginnivan has created a continuous delivery compatible way of using semver with TeamCity via the GitHubFlowVersion project (note: it's currently in the process of being merged into the GitVersion project). It's important to note that you can no longer use the Assembly Info Patcher Build Feature when using GitHubFlowVersion, but it has the ability to patch the assemblies for you.

Sharing versions across dependent build configurations

In the last post we discussed the disadvantage of not having the build number propagate to each of the subsequent build configurations and provided a number of links to potential solutions. We have since discovered a way of sharing versions between build configurations that share a dependency: simply reference %dep.<DEPENDENT_BUILD_CONFIG_ID>.build.number%, where <DEPENDENT_BUILD_CONFIG_ID> is the ID of the build configuration you want to take the build number from (note: your configuration must have a snapshot dependency on it to be able to reference it). TeamCity 8 makes this much easier since you can specify the build configuration ID, and the auto-completion of dependent variable names works a lot better.
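For example, assuming a build configuration with the ID MyProject_Build (a made-up ID for illustration), a deployment configuration with a snapshot dependency on it could set its build number format to:

```
%dep.MyProject_Build.build.number%
```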


Meta-runners

If you find you need multiple build templates, and these templates all repeat two or more build configuration steps or a single complex build configuration step, then you can extract the common step(s) into a meta-runner. For an example of a meta-runner, check out the TeamCity meta-runner power pack and the recent post by Rob.

Disk usage report

When you are administering a large-scale TeamCity deployment in a resource-limited environment, you may find you need to quickly see where your hard disk space is being allocated. One of the new features in TeamCity 8 that can help in this situation is the disk usage report. You should also keep in mind the clean-up configuration options available to you.

Shared resources

If you have a large TeamCity deployment then it's likely some of your builds may depend on limited external resources such as test databases, metered resources etc. If this applies to you then you may want to look into TeamCity 8's shared resources feature which allows you to limit the number of builds using these shared resources at any point in time.

Queued build page

If you find you have a lot of contention for your build agents you can take advantage of a new feature in TeamCity 8 which allows you to monitor all current builds queued and running on the server.

NuGet server

TeamCity simplifies using custom NuGet packages for your own libraries and products by providing a built-in NuGet server (introduced in TeamCity 7) which integrates with package artifacts generated by a build configuration. This feature is incredibly useful for large scale deployments where you have shared code between projects and you are using NuGet to manage your inter-project dependencies.

Build once

An important philosophy in continuous delivery is to make sure your application is compiled only once, and that those same binaries are reused for every subsequent part of the deployment pipeline. This gives confidence that you are deploying the same code that was tested and deployed earlier in the pipeline, and forms part of the assurance a proper deployment pipeline gives you.

One of the weaknesses in our last post was that the code could be rebuilt across build configurations if there was more than one agent (or if MSBuild decided it wanted to rebuild). This was correctly pointed out in the comments by Marcin.

We have experimented with tweaking the pipeline to achieve build-once, and there are three main options we have explored:

UI Tests

One thing that wasn't mentioned in the previous post about TeamCity pipelines was running UI Tests. Since that post we have done a lot of work with integrating automated UI tests as part of our TeamCity pipelines. Providing guidance about running automated UI testing in TeamCity is a post in itself, but here are a few pointers to get you started:

  • Consider at what point in the pipeline you want to run your UI tests e.g. alongside your unit tests, after automatically deploying to your first environment, after deploying to a test environment to provide more confidence before hitting prod, or after deploying to a dedicated (static or on-the-fly) UI testing environment
  • Remember that UI tests are generally slow, so preferably run them out of band of your first build configuration with the unit tests
  • If possible, include a separate TeamCity agent dedicated to running UI tests so your continuous integration builds don't get delayed and your immediate feedback loop stays fast
  • Consider whether you want to run UI tests against all pull requests (on one hand you get a lot of confidence; on the other it takes a lot more time and requires isolated environments, which may be more complex)
  • Set up your agent machines so they can run the automated UI tests if you need interactive mode; note: if you are using PhantomJS then you might not need the agent to have interactive mode
  • Make sure you take screenshots whenever your UI tests fail, and that they are dropped in a directory your build configuration is set to pick up as artifacts
    • Make sure you clear that directory before every build, otherwise screenshots from previous builds will start stacking up - TeamCity has an option for this

Fixing MSDeploy Error: Web deployment task failed. (Root element is missing.)

In one of my MRCollective open source projects AzureWebFarm, we configure a set of Windows Azure Web Roles such that a change deployed to any of them using MSDeploy will automatically sync to the other servers. Up to and including MSDeploy v2 this worked fine, but in a Windows Server 2012 Azure server environment MSDeploy v3 returns the following exception (if deploying to a role with more than one instance):

Web deployment task failed. (Root element is missing.)

The more verbose version of this exception:

System.Xml.XmlException: Root element is missing.
     at System.Xml.XmlTextReaderImpl.ThrowWithoutLineInfo(String res)
     at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
     at System.Xml.XmlReader.MoveToContent()
     at Microsoft.Web.Deployment.TraceEventSerializer.Deserialize(Stream responseStream, DeploymentBaseContext baseContext, DeploymentSyncContext syncContext)
     at Microsoft.Web.Deployment.AgentClientProvider.RemoteDestSync(DeploymentObject sourceObject, DeploymentSyncContext syncContext, Nullable`1 syncPass)
     at Microsoft.Web.Deployment.DeploymentObject.SyncToInternal(DeploymentObject destObject, DeploymentSyncOptions syncOptions, PayloadTable payloadTable, ContentRootTable contentRootTable, Nullable`1 syncPassId)
     at Microsoft.Web.Deployment.DeploymentObject.SyncTo(DeploymentProviderOptions providerOptions, DeploymentBaseOptions baseOptions, DeploymentSyncOptions syncOptions)
     at MSDeploy.MSDeploy.ExecuteWorker()

You might also observe the following exception on the server:

wmsvc.exe Error: 0 : ERROR_SERIALIZER_ALREADY_DISPOSED - The object 'Microsoft.Web.Deployment.TraceEventStreamSerializer' has already been disposed. 

Tracing this further showed that MSDeploy v3 appears to use multiple connections, and those connections were being distributed across multiple servers in our test web farms for a single deployment, causing MSDeploy to fail pretty miserably with these XML errors.

The solution was to ensure that all MSDeploy connections from the client end up at a single server in the web farm, rather than being distributed to multiple servers. To achieve this in Windows Azure, you can create a custom load balancer endpoint in ServiceDefinition.csdef - take a look at the loadBalancerProbe="WebDeploy" and the LoadBalancerProbe config setting in our example ServiceDefinition.csdef for reference. You can read more about custom load balancer endpoints in this post.

That custom probe hits a file we created called Probe.aspx. That file contains a really simple piece of code which attempts to get an Azure Blob Storage lease on a blob we specify. Only one instance will ever be able to get that lease at a time, and if it ever fails, another instance will pick it up, ensuring this solution still gives us redundancy for deployments.

Should an instance successfully get the lease, it returns an HTTP OK status on that load balancer endpoint, and the Azure Fabric knows that it's fine to route web deploy requests to that instance. All the other instances fail to get the lease and return an HTTP error status, so the Azure Fabric won't send any web deploy requests to them (ensuring it all works happily with MSDeploy v3).
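A minimal sketch of what such a probe can look like (this is illustrative, not AzureWebFarm's actual code - the container/blob names and setting keys are placeholders, and real code would track and renew its lease rather than re-acquiring it on every probe):

```csharp
// Probe.aspx.cs (illustrative): return 200 only from the instance that holds
// the blob lease, so the load balancer routes all MSDeploy traffic to it.
public partial class Probe : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            var account = CloudStorageAccount.Parse(
                ConfigurationManager.AppSettings["StorageConnectionString"]);
            var blob = account.CreateCloudBlobClient()
                .GetContainerReference("deployment-lease")
                .GetBlockBlobReference("lease.blob");

            // Only one instance can hold the lease at any point in time.
            blob.AcquireLease(TimeSpan.FromSeconds(30), null);
            Response.StatusCode = 200; // healthy: route web deploys here
        }
        catch (StorageException)
        {
            Response.StatusCode = 503; // another instance holds the lease
        }
    }
}
```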