Recovering a Docker VM on Windows with VirtualBox

NOTE: this is more of a Windows / VirtualBox problem, but the boot2docker experience, as with a lot of things, is not great on Windows… yet.

I was running the Docker VM with 2 newly configured containers and all was going well; the Docker part was quite easy (once up and running). Then my entire machine crashed due to something else.

Reboot, open up a shell and type ./boot2docker.exe start

Failed to start machine “boot2docker-vm” (run again with -v
for details)

So I ran it with -v: no useful information.

Oh no.


Hopefully I hadn’t lost the containers I had spent a day setting up. The first Windows issue with getting Docker up and running had been resolved by wiping all the data produced and starting fresh; that was not going to be the best outcome this time around.

First things first: find out where that Docker VM is.

virtual box UI

So it was in C:\Users\<username>\VirtualBox VMs\boot2docker-vm: there it was, 580 MB and several hours of work I didn’t want to do again. Minor relief; I made a copy of it.

Next step: getting Docker back into a good state. I tried many combinations of poweroff, reset, init, uninstalling and reinstalling; no luck. A note on this: none of those commands hurt the VMDK file, but still make a copy.

So the next step was to move that boot2docker-vm folder out of there and do a new init…

Small success: Docker starts.

Stop it and then try to drop the VMDK file back in…

Nope.

Failed to start machine “boot2docker-vm” (run again with -v
for details)


OK, so now it’s starting to look like the problem is specific to VirtualBox and just getting the VM to spin up again. A little bit of digging and I see the .vbox file has some UUIDs and MAC addresses; comparing the newly-installed one to the backup, the differences that look to be the cause appear.

diff of vbox file

Success: change those values in the old .vbox file to match.
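The edit itself is small enough to script. Here’s a minimal sketch of the idea with sed; the file fragment, UUID and MAC values below are all invented for illustration, and the real values should come from the .vbox file the fresh init created:

```shell
# Fake fragment standing in for the old .vbox file (values invented):
cat > old.vbox <<'EOF'
<Machine uuid="{11111111-2222-3333-4444-555555555555}" name="boot2docker-vm">
  <Adapter slot="0" enabled="true" MACAddress="080027AAAAAA" type="82540EM"/>
</Machine>
EOF

# Values generated by the new init (again invented; read them from the new .vbox):
NEW_UUID="{99999999-8888-7777-6666-000000000000}"
NEW_MAC="080027BBBBBB"

# Rewrite the machine uuid and adapter MAC in place:
sed -i -e "s/uuid=\"{[0-9a-fA-F-]*}\"/uuid=\"$NEW_UUID\"/" \
       -e "s/MACAddress=\"[0-9A-Fa-f]*\"/MACAddress=\"$NEW_MAC\"/" old.vbox

grep -o 'MACAddress="[^"]*"' old.vbox   # MACAddress="080027BBBBBB"
```

On a real machine you would run this against the backup copy, never the only copy.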

It starts, and the containers are there, phew!

docker ps -a

docker start <containerid>

Summary

It appears that a new init sets up new adapter MAC addresses and a new machine UUID; the rest of the differences can stay, as they are specific to your old VMDK file. If you’re reading this, good luck, and as always, YMMV.

Using AutoMapper to help you map FSharpOption<> types

Why?

Because your model is structured this way and you have realised you need this; otherwise this doesn’t apply to you.

Scenario

When you get used to using AutoMapper to help you everywhere, you begin to expect it to help you everywhere by default. In this scenario you have to configure it to help you map from an F# type that has an option (Guid is just an example).

In our event sourcing setup, we have commands that have changed to include an additional property (not optional), but the corresponding event needs that property to be optional (as the data was not always present).

We end up mapping those types/classes (events) that have the optional value to C# classes used for persistence (in this case RavenDB); there the fields are reference types, so a null value is acceptable for persistence.

Here’s the Source and Destination classes, hopefully seeing that makes this scenario clearer.

public class SourceWithOption
{
    public string Standard { get; set; }
    public FSharpOption<Guid> PropertyUnderTest { get; set; }
}

public class DestinationWithNoOption
{
    public string Standard { get; set; }
    public Guid PropertyUnderTest { get; set; }
}

Note: the DestinationWithNoOption is the equivalent C# class that we get out of the F# types, so the F# code is really this trivial (SubItemId is the optional one):

type JobCreatedEvent = {
    Id : Guid
    Name: string
    SubItemId : option<Guid>
}

Solution

Where you do all your AutoMapper configuration you’re going to make use of the MapperRegistry and add your own.

(Note: all this code is up as a gist.)

var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;

AutoMapper.Mappers.MapperRegistry.AllMappers = () =>
    allMappers().Concat(new List<IObjectMapper>
    {
        new FSharpOptionObjectMapper()
    });

And the logic for FSharpOptionObjectMapper is:

public class FSharpOptionObjectMapper : IObjectMapper
{
    public object Map(ResolutionContext context, IMappingEngineRunner mapper)
    {
        var sourceValue = (dynamic)context.SourceValue;

        return (sourceValue == null || OptionModule.IsNone(sourceValue))
            ? null
            : sourceValue.Value;
    }

    public bool IsMatch(ResolutionContext context)
    {
        var isMatch =
            context.SourceType.IsGenericType &&
            context.SourceType.GetGenericTypeDefinition() == typeof(FSharpOption<>);

        if (context.DestinationType.IsGenericType)
        {
            isMatch &=
                context.DestinationType.GetGenericTypeDefinition() != typeof(FSharpOption<>);
        }

        return isMatch;
    }
}

Tests to prove it

Here’s a test you can run to show that this works. I started out using custom type converters (ITypeConverter) but found they would not work in a generic fashion across all variations of FSharpOption<>.

[Test]
public void FSharpOptionObjectMapperTest()
{
    Mapper.CreateMap<SourceWithOption, DestinationWithNoOption>();

    var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;
    AutoMapper.Mappers.MapperRegistry.AllMappers = () =>
        allMappers().Concat(new List<IObjectMapper>
        {
            new DustAutomapper.FSharpOptionObjectMapper()
        });

    var id = Guid.NewGuid();
    var source1 = new SourceWithOption
    {
        Standard = "test",
        PropertyUnderTest = new FSharpOption<Guid>(id)
    };

    var source2 = new SourceWithOption
    {
        Standard = "test"
        //PropertyUnderTest is null
    };

    var result1 = Mapper.Map<SourceWithOption, DestinationWithNoOption>(source1);

    Assert.AreEqual("test", result1.Standard, "basic property failed to map");
    Assert.AreEqual(id, result1.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' on Guid didn't work as expected");

    var result2 = Mapper.Map<SourceWithOption, DestinationWithNoOption>(source2);

    Assert.AreEqual("test", result2.Standard, "basic property failed to map");
    Assert.IsNull(result2.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' for null failed");
}

Thinking in a document centric world with RavenDB @ ALT.NET

Last night (25th Feb 2014), I presented on RavenDB at ALT.NET Melbourne.

I got some great feedback from the audience and was happy to share my experience so far with RavenDB. If you were there or watch the recording and have some suggestions, good or bad, I would love to hear them so I can improve.

Here’s the ALT.NET recording with slides, plus me up at the projector screen.

If you just want the slides and audio then here’s an alternate recording.

I’ve also put the slides up on slide share.

HTTP Error 500.19 – Internal Server Error with ASP.NET web on Windows 8 and IIS 8

I’m blogging this (again*) so next time I do a search I find my own post; it may also help you. I manage a few development VMs which I swear I set up correctly in the past, but every so often I find one that has the following problem.

*Again: it’s in an older post, in the troubleshooting part, but this time the error is the title of the post to help indexing, as when I searched this time around, that post did not come up.

Scenario / Error

Windows 8 + IIS launching your ASP.NET web app, you get the following IIS error:

HTTP Error 500.19 – Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.

Error Code 0x80070021
Config Error This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault=”Deny”), or set explicitly by a location tag with overrideMode=”Deny” or the legacy allowOverride=”false”.

Note the ‘locked at a parent level’, in the config error.

Solution

Turn on the Application Development Features. If they’re already on, then you have a different problem; there are a few questions about ASP.NET, IIS and 500.19 on Stack Overflow.
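If you’d rather script it than click through the Windows Features dialog, DISM can enable the same features from an elevated prompt. A sketch only; the feature names below are my assumption from a Windows 8 machine, so check what’s actually available with dism /online /get-features first:

```shell
dism /online /enable-feature /featurename:IIS-ApplicationDevelopment
dism /online /enable-feature /featurename:IIS-NetFxExtensibility45
dism /online /enable-feature /featurename:IIS-ISAPIExtensions
dism /online /enable-feature /featurename:IIS-ISAPIFilter
dism /online /enable-feature /featurename:IIS-ASPNET45
```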

Enable IIS 8 App Dev Features

Hope this helps.

Cross subdomain ASP.NET Forms Authentication for local development

I’ve had this issue twice now, and both times when I did my search I would end up at this popular Stack Overflow question, but adding an answer to a popular question that doesn’t directly* answer the question will get the attention of the down-vote police.

*For some values of direct.

So I’ll just have to blog it here, and maybe a comment will help someone out who is likely to end up on that question, at least until the comment is flagged as unconstructive or offensive because “somewhat related” isn’t in the spirit of Stack Overflow.

So with the grievance aired.

Objective

To be able to have subdomain1.machine-name and subdomain2.machine-name share a cookie locally via forms authentication.

Steps

How to save an authentication cookie that is valid across multiple subdomains locally under IIS.

Configurations

The most important thing here is to ensure that your local domain has at least one ‘.’ in it. I often try to just have it be the machine name; this does not work, so I select something like the .app suffix.

Authentication configuration section in web.config:

   <authentication mode="Forms">
      <forms loginUrl="~/login" timeout="2880" domain="pic-nick.app" />
   </authentication>

IIS Setup

Will look like this:

iis settings

HOSTS File

hosts file
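The screenshot boils down to two loopback entries in C:\Windows\System32\drivers\etc\hosts. A sketch, with the red/blue subdomain names assumed to match the web.config domain above (yours may differ), written to a local example file here rather than the real HOSTS file:

```shell
# Example HOSTS entries (subdomain names assumed for illustration):
cat > hosts.example <<'EOF'
127.0.0.1    red.pic-nick.app
127.0.0.1    blue.pic-nick.app
EOF
cat hosts.example
```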

Done

There we go. With this set up you can go to red. and blue. and have the authentication cookie shared, keeping you logged into your app across subdomains locally.

dashboard blue

Troubleshooting

I also ran into some extra issues on Windows 8 similar to this StackOverflow question.

Exception from IIS:

HTTP Error 500.19 – Internal Server Error

The requested page cannot be accessed because the related configuration data for this page is invalid.


This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault=”Deny”), or set explicitly by a location tag with overrideMode=”Deny” or the legacy allowOverride=”false”.

To solve this you probably need to enable some Windows Features related to Security and .NET.

features toggle pointing

Tracking application errors with Raygun.io

A nice coincidence a few weeks ago was the news of Raygun going into public beta crossing my radar.

At the time we were fine-tuning some things in an application that was in a private beta. We had put in a little effort to ensure we would get reliable results about errors that happened to users, but at that point we were just storing the details in a database table.

Background

We were capturing 3 levels of errors in the application.
– Client-side (JavaScript)
– Web Tier (ASP.NET MVC / WebApi)
– Back-end (Topshelf hosted services)

Any client-side error would be captured and sent to the Web Tier; the Web Tier forwards that and its own errors on to the back end, where they would be persisted with low overhead. I have covered this approach in a previous post.

But getting from entries stored in a database to something actually useful for monitoring and kicking off a resolution process is quite a bit of work.

With our own application structure we can easily query that table, and just as easily send emails to the dev team when errors occur. But this is still short of a robust solution, so a quick glance at the Raygun feature list gave very good reason to give it a go.

What it took for us to set up Raygun

A quick look at the provided setup instructions and their GitHub sample suggested it would be very easy.

With our particular application structure, the global Application_Error method and the sample usage of Server.GetLastError() didn’t fit well. The clearest example is error data arriving from the client side, which isn’t a .NET exception, so simply issuing the RaygunClient().Send(exception); call doesn’t work. In this scenario we basically recreate an exception that represents the issue in the web tier, then have that sent to Raygun.

For errors that originate in our controllers (regular and WebApi), which extend a common base class, we make use of the HandleError attribute so we can override a method to do some extra work; the code looks like:

[HandleError]
public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        //our other logic, some to deal with 500s, some to show 404s

        //make the call here to Raygun if it was anything but a 404 that brought us here
        new RaygunClient().SendInBackground(filterContext.Exception);
    }
}

In the scenarios where we actually do have the exception it’s great and it “just works”; we send it off asynchronously from the catch block by calling a wrapping function like this:

public static void LogWithRaygun(Exception ex)
{
    new RaygunClient().SendInBackground(ex);
}

Conclusion

So Raygun really helped us avoid a weakly hand-rolled halfway solution for tracking errors, now with nice email notifications that look like this and link into the Raygun detailed information view.

It’s lacking a few nice-to-have features, but that’s more than acceptable for version 1 of the application, and from what we’ve been told our suggestions are already on track for a future release. One in particular that would benefit a lot of people is letting the user associate related errors: for example, two seemingly different errors get logged but in actual fact have the same cause; that way the reporting and similarity tracking could group the two variations under the one umbrella.

raygun email example

Along with the dashboard summary.

Part of the Raygun dashboard

It’s one less thing we need to worry about. Just an FYI: we didn’t stop saving records into our own database table, we’re just unlikely to have to go looking in there very much, if ever.

When you need to generate and send templated emails, consider mailzor

Mailzor is a basic utility library for generating and sending emails, using the Razor view engine to populate email templates, designed to be quickly pluggable into your .NET app.

In our applications we send out HTML-formatted emails seeded with a variety of data. I thought it would be easy to write them as Razor files (cshtml) and then use the Razor engine to generate and send them.

It’s up on NuGet and with the release of v1.0.0.11, it’s more stable.

For the most up to date info follow along with the usage sections of the readme.md file on the github repository.

How it works

I thought I would share some background about its development and the hiccups along the way. The original set of code came from Kazi Manzur Rashid, which solved the problem of making use of System.Web.RazorTemplateEngine; I extended it (with permission) to be usable as an injectable dependency and via NuGet.

The core elements are the creation and management of the SMTP client, the building up of the MailMessage, and then all the compilation-related work to get the RazorTemplateEngine up and running.

The RazorTemplateEngine logic boils down to taking the razor file stored on disk and using CSharpCodeProvider.CompileAssemblyFromDom. So if you’re curious about this code in particular dig into EmailTemplateEngine.cs in the project files.

Prior to version 1.0.0.10 there were conflicts from a version mismatch with System.Web.Razor, which I went down the path of solving with ILMerge.

It seems easy, seeing how I took an existing chunk of operational code and extended it, but it only seems easy when it’s working. When it doesn’t work and you’ve got strange compilation errors, debugging this mechanism is not the greatest; I found myself hunting for temporary files and trying other compiler flags to output more information.

In the early versions it was heavily a case of “works on my machine”, but now it’s fine and seems to be feature complete…