JIRA Cookie-based auth API calls in F# with RestSharp

Today I was trying to create a quick integration with a bug tracking tool as a little spike. Unfortunately, as is often the case, out-of-date documentation, vague errors, etc. held up the task. Nonetheless I got something working with cookie-based authentication (I’ll be switching to OAuth soon; I’m sure that’ll go just as smoothly). I’ll also submit a report about the problems with the documentation I discovered.

The rest of the documentation seems OK, but only time will tell as I get further into it. In my experience it’s usually the initial steps that cause the most frustration, as you get reminded not to trust the documentation…

Following along with this guide – JIRA REST API Example – Cookie-based Authentication.

What’s not clear in the documentation:

  • Even though you sign in with your email address, you really need to use your username (which is different), at least for the `admin` account.
  • Error 1: there is a leading `/jira/` element in the auth API route. This may be outdated, or specific to self-hosted instances; it’s not part of the URI for OnDemand. The route should be `http://jira.example.com:8090/rest/auth/1/session`
  • Error 2 (the big one): what is returned is a different `session` object, and it not only contains the required `JSESSIONID` but also another key/value pair you need to include in your cookie when you make a subsequent request: `studio.crowd.tokenkey`.
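To make Error 2 concrete: the cookie you send on subsequent requests needs both pairs. Roughly like this sketch (the values are placeholders; the cookie names are the ones described above):

```
Cookie: JSESSIONID=placeholder-session-id; studio.crowd.tokenkey=placeholder-crowd-token
```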

The core objective was to get specific events from one system reflected in new or existing locations in a second system; in this case the second system is JIRA, the bug tracker.

Complete Solution

The breakdown follows below.

Sorry this code is a bit awkward; RestSharp is written for fluent C#-style usage. I’ll be looking for an F#-focused REST client, and I only chose RestSharp because I had used it ages ago.

Walk Through

Some types to send along. Notice the casing: that’s because I don’t have any JSON serialization code wired up yet to convert from the .NET PascalCase convention to lowercase JSON style.

type PostData = {
    body: string
}

type Login = {
    username : string
    password : string
}
Using these records, make the auth request against your OnDemand account and inspect what you get back (see Error 2): you’ll discover more cookie details come back than are documented. As an extra note on why this was even more frustrating: if you capture and review the calls made by the web UI, those calls supply even more cookie details, such as `ondemand.autologin`, `xsrf.token` and others.

let uname = "admin"
let pw = "your-password"

let restClient = RestClient("http://jira.example.com:8090") // base URI per Error 1 above; substitute your instance

let authReq =
    RestRequest("rest/auth/1/session", Method.POST)
        .AddJsonBody({ username = uname; password = pw })

Issue that auth request and now you’ll have the session cookie values you’ll need.

let authResponse = restClient.Post authReq
let cookiesToAdd =
    authResponse.Cookies
    |> Seq.map (fun x -> (x.Name, x.Value))

In this case I’m adding a comment to an existing issue whose identifier I know.

let addCommentReq =
    RestRequest("rest/api/2/issue/TEST-1/comment", Method.POST) // TEST-1 is a placeholder issue key
        .AddJsonBody({ body = "a new comment" })

for (name, value) in cookiesToAdd do
    addCommentReq.AddCookie(name, value) |> ignore

Finally, issue that add-comment request; the status code should be ‘Created’. I saw ‘Unauthorized’ responses for far too long.

let commentResp = restClient.Post addCommentReq

Lastly, another fun hiccup was discovering that curl on Windows doesn’t support HTTPS (“Protocol https not supported or disabled in libcurl”), but that was the least of my problems.

IIS, Visual Studio, unable to start debugging on the web server.


So this is another post logging an issue after a frustrating problem-solving process. It’s here to help future me when I forget the same Windows configuration step in another 8-15 months; maybe it’ll help you too.

On Windows 8 you have to go in and turn on many individual items just to get an ASP.NET IIS-hosted web app running. I’ve had a variation of this problem in the past and blogged about it.

This time round it’s basically the same problem, but Visual Studio, upon trying to spin up and attach to an IIS-hosted web project, kicks up one of three errors.


The most common one was:

Unable to start debugging on the web server. The debugger cannot connect to the remote computer. The debugger was unable to resolve the specified computer name.

Which is not helpful, and the posts I found were from people trying to remote-debug other machines, or something else unhelpful.

Unable to start debugging on the web server

Other variations included:

… error occurred on a send

Unable to start debugging on the web server.

If you dig further into the Windows event log you’ll be sent down an even more wrong path, with errors such as:

The program can’t start because SecRuntime.dll is missing from your computer. Try reinstalling the program to fix this problem.


Ensure that you turn on ‘Internet Information Services Hostable Web Core’, along with all the other .NET / IIS features you need that I mentioned in the previous post.
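If you’d rather script this than click through the Windows Features dialog, DISM can enable the feature from an elevated prompt. The feature name below is my assumption for Windows 8, so verify it against your machine’s feature list first:

```shell
:: List optional features to confirm the exact name on your system (assumption: it contains "Hostable")
dism /online /get-features | findstr /i "Hostable"

:: Enable IIS Hostable Web Core (feature name assumed; run from an elevated prompt)
dism /online /enable-feature /featurename:IIS-HostableWebCore
```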

Windows Features

RavenDB invalid HTTP Header characters bug

So you’ve done a search and arrived here, and you’re using RavenDB (at least version 2); in my case, 2.5.2907. If this is fixed in 3.0, that’s good, but it doesn’t help us: we’re not ready to move to v3 yet, based on our challenges in production with v2 (which is stable enough now).

I’ll try to help you out first (or future me, if I make this mistake again), then I’ll explain more. This may also help you anywhere else you’re using HTTP headers and have used invalid characters.

The Error

Specified value has invalid HTTP Header characters

“Error”: “System.ArgumentException: Specified value has invalid HTTP Header characters.
Parameter name: name
at System.Net.WebHeaderCollection.CheckBadChars(String name, Boolean isHeaderValue)
at System.Net.WebHeaderCollection.SetInternal(String name, String value)
at Raven.Database.Extensions.HttpExtensions.WriteHeaders(IHttpContext context, RavenJObject headers, Etag etag)
at Raven.Database.Extensions.HttpExtensions.WriteData(IHttpContext context, Byte[] data, RavenJObject headers, Etag etag)
at Raven.Database.Server.Responders.Document.GetDocumentDirectly(IHttpContext context, String docId)
at Raven.Database.Server.Responders.Document.Respond(IHttpContext context)
at Raven.Database.Server.HttpServer.DispatchRequest(IHttpContext ctx)
at Raven.Database.Server.HttpServer.HandleActualRequest(IHttpContext ctx)”

The Fix

Check for invalid characters as per the HTTP spec (RFC 2616). Thanks to this StackOverflow answer for helping find it in the spec faster.

Good luck. In my case it was an email address: the ‘@’ is invalid.

So make sure you’re only storing US-ASCII and not using any of the control characters or separators:

"(" | ")" | "<" | ">" | "@"
| "," | ";" | ":" | "\" | <">
| "/" | "[" | "]" | "?" | "="
| "{" | "}" | SP | HT
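To catch this before it ever reaches RavenDB, a small check along these lines can validate a candidate header or metadata name against the RFC 2616 rules above. This is my own sketch (the function name and shape are mine, not part of Raven or the BCL):

```fsharp
// Separators from RFC 2616 section 2.2; a header-name token may not contain any of these.
let separators =
    set [ '('; ')'; '<'; '>'; '@'; ','; ';'; ':'; '\\'; '"';
          '/'; '['; ']'; '?'; '='; '{'; '}'; ' '; '\t' ]

// A valid header name is non-empty US-ASCII with no control characters and no separators.
let isValidHeaderName (name: string) =
    name.Length > 0 &&
    name |> Seq.forall (fun c ->
        int c < 128 &&                      // US-ASCII only
        not (System.Char.IsControl c) &&    // no control characters
        not (separators.Contains c))        // no separator characters

// isValidHeaderName "user@example.com" = false  (the '@' is a separator)
// isValidHeaderName "Raven-Entity-Name" = true
```

Running values through a guard like this on the way in means you find out at write time, instead of when a read blows up later.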

The Details

Using metadata on RavenDB documents is quite helpful, and the data stored there so far had been pretty simple; to support a special case, an email address was being worked into the metadata. The nature of this bug in the RavenDB IDE is that when you list all the documents of that collection they show up and you see their selected fields, but when you click to load a document you get the “Specified value has invalid HTTP Header characters” error, leaving you scratching your head about how the document can be in the database when you can’t load it.


I encountered what really feels like a bug: as a developer using the metadata mechanism of Raven documents, it shouldn’t be that developer’s responsibility to ensure they are meeting the HTTP header specification (see page 16 of RFC 2616).

RavenDB Invalid metadata

This is invalid metadata on a Raven document (note the ‘@’).


You want the tools you use to really help you, and it’s frustrating when something obscure like this happens. It may be more complex or difficult to implement, but what I would like to see is a check on the data on its way in, so you can see clearly when this became a problem, instead of hunting it down after the fact.

I raised this on the RavenDB support group, and it has since been logged as a bug, so hopefully the fix will come in v3.

The future of Alt.Net

Today Richard Banks asked us a question about the future of Alt.Net.

I strongly agree with his conclusions, but wanted to get my thoughts down too and answer his questions.

I’ve been part of the Melbourne Alt.Net community since our first meeting on April 28th 2010. We started a little after Richard and the Sydney guys, but have kept a solid core of attendees and survived wavering levels of interest from the broader community and multiple sponsorship and venue changes. I’m glad we started at that time because that “why so mean” moment had already passed and it didn’t seed a negative undertone in our community here.

Here’s the only photo I have of that April 28th meeting, it seems accidental as I was putting my phone away.

Alt.Net 28th April 2010

I became a better software developer thanks to Alt.Net, and it helped where I work now build the fantastic team we have: we saw that there were more options out there and learned from others in the community.

Here’s a better shot of Richard visiting us in October that year (2010):

Alt.Net October 2010 with rbanks

Richard asked

“Is that enough now, should we now disband?”

No, because we still need continuous improvement – we don’t stop, we grow and improve.

Richard stated “I still need what the Alt.Net community provides”.

I do too. Sharing ideas and frustrations with friends, peers and other new people is very important to me.

What do I want to see in the future?

Even more mainstream.

There are still many developers who haven’t heard about our user group meetings.

Working on 2 presumptions:
– A percentage of people can’t physically attend often or at all.
– The topics we’re discussing are of value and will improve what people deliver in their jobs / be better software developers.

We just need to get our message and content out there better, by pushing harder for input on the topics the group covers. Getting our content out there has been happening for the last year (recording and publishing on YouTube), but we could be sharing more on Twitter/LinkedIn/blogs.

So …


If we want to reach more people then yes, maybe branding will help; there’s now very high-quality content available online for developers, so there’s more competition these days.

When our team from Picnic Software presented last year the turnout was huge; we had some good questions recorded, many more good discussions after the fact, and the attendance was record-breaking.

Based on our chats with those in attendance, our honest and direct coverage of the issues and challenges we face, and what we’re doing about them, is what people came to see, and why so many stayed to talk to us afterwards.

So any new branding, I believe, should have the feeling of one strong community of software developers spread throughout Australia, gathering together to share locally and online.


All this depends on the collective objective…

If we’re trying to reach more people, then yes, branding, with time and money behind it, should help (right? it’s the reason companies pay so much for marketing). I stand here wanting to reach more people.

If we’re just self-evaluating, then: our community is strong, we’re doing a good job of sharing, and enough people are finding us (we’re not shrinking). Steady as she goes is fine, branding is less important, and we can just focus on our content.

Recovering a Docker VM on Windows with VirtualBox

NOTE: this is more of a Windows / VirtualBox problem, but the boot2docker experience, as with a lot of things, is not great on Windows… yet.

I was running the docker VM with 2 newly configured containers and all was going well; the docker part was quite easy (once up and running). Then my entire machine crashed due to something else.

Reboot, open up shell and type ./boot2docker.exe start

Failed to start machine “boot2docker-vm” (run again with -v
for details)

So I ran it with -v: no useful information.

Oh no.


Hopefully I hadn’t lost the containers I’d spent a day setting up. The first Windows issue with getting docker up and running had been resolved by wiping all the data produced and starting fresh; that was not going to be the best outcome this time around.

First thing first, find out where that docker VM is:

virtual box UI

So it was in C:\Users\<username>\VirtualBox VMs\boot2docker-vm. There it was: 580 MB and several hours of work I didn’t want to do again. Minor relief; I made a copy of it.

Next step: getting docker back into a good state. I tried many combinations of poweroff, reset, init, uninstalling and reinstalling, with no luck. A note on this: none of those commands hurt the vmdk file there, but still make a copy.

So the next step: move that boot2docker-vm folder out of the way, and do a new init.

Small success docker starts.

Stop it and then try to drop the VMDK file back in…


Failed to start machine “boot2docker-vm” (run again with -v
for details)


OK, so now it’s starting to look like the problem is specific to VirtualBox and just getting the VM to spin up again. A little bit of digging and I see the .vbox file has some UUIDs and MAC addresses; comparing the newly-installed file to the backup, the differences that look to be the cause appear.

diff of vbox file

Success: change those variables to match.
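For reference, those values live in the .vbox file, which is plain XML. This is a trimmed, hand-written sketch with placeholder values (a real file contains far more elements and attributes); the machine `uuid` and per-adapter `MACAddress` attributes are the ones I had to copy from the backup into the newly-initialised file:

```xml
<!-- Trimmed sketch of a .vbox file; placeholder values only. -->
<Machine uuid="{copy-the-old-machine-uuid-here}" name="boot2docker-vm">
  <Hardware>
    <Network>
      <!-- copy each adapter's old MACAddress so the VM matches the backed-up vmdk -->
      <Adapter slot="0" enabled="true" MACAddress="080027AAAAAA"/>
    </Network>
  </Hardware>
</Machine>
```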

It starts, and the containers are there, phew!

docker ps -a

docker start <containerid>


It appears that a new init sets up new adapter MAC addresses and a new machine UUID; the rest of the differences can stay, as they are specific to your old VMDK file. If you’re reading this, good luck, and as always, YMMV.

Using AutoMapper to help you map FSharpOption<> types


Because your model is structured this way, and you have realised you need this, otherwise this doesn’t apply to you.


When you get used to using AutoMapper to help you everywhere, you begin to demand it helps you everywhere by default. In this scenario you have to configure it to help you map from an F# type that has option (Guid is just an example).

In our event sourcing setup, we have commands that now change to have an additional property (not option), but the event now needs to have option (as that data was not always present).

We end up using those types/classes (events) that have the optional value to map to C# classes that are used for persistence (in this case RavenDB), and they are reference type fields so a null value is acceptable for persistence.

Here’s the Source and Destination classes, hopefully seeing that makes this scenario clearer.

public class SourceWithOption
{
    public string Standard { get; set; }
    public FSharpOption<Guid> PropertyUnderTest { get; set; }
}

public class DestinationWithNoOption
{
    public string Standard { get; set; }
    public Guid? PropertyUnderTest { get; set; } // nullable so a missing option can map to null
}

Note: SourceWithOption above is the equivalent C# shape that we get out of the F# type, so the F# code is really this trivial (SubItemId is the optional one):

type JobCreatedEvent = {
    Id : Guid
    Name: string
    SubItemId : option<Guid>
}


Where you do all your AutoMapper configuration you’re going to make use of the MapperRegistry and add your own.

(Note: all this code is up as a gist.)

var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;

AutoMapper.Mappers.MapperRegistry.AllMappers = () =>
    allMappers().Concat(new List<IObjectMapper>
    {
        new FSharpOptionObjectMapper()
    });

And the logic for FSharpOptionObjectMapper is:

public class FSharpOptionObjectMapper : IObjectMapper
{
    public object Map(ResolutionContext context, IMappingEngineRunner mapper)
    {
        var sourceValue = ((dynamic) context.SourceValue);

        // None (or a null option reference) maps to null; Some(x) maps to x.
        return (sourceValue == null || OptionModule.IsNone(sourceValue))
            ? null
            : sourceValue.Value;
    }

    public bool IsMatch(ResolutionContext context)
    {
        // Source must be FSharpOption<T>...
        var isMatch =
            context.SourceType.IsGenericType &&
            context.SourceType.GetGenericTypeDefinition()
                == typeof (FSharpOption<>);

        // ...and the destination must not be (we're unwrapping, not copying options).
        if (context.DestinationType.IsGenericType)
            isMatch &=
                context.DestinationType.GetGenericTypeDefinition()
                    != typeof(FSharpOption<>);

        return isMatch;
    }
}

Tests to prove it

Here’s a test you can run to show that this works. I started with custom type converters (ITypeConverter) but found they would not work in a generic fashion across all variations of FSharpOption<>.

public void FSharpOptionObjectMapperTest()
{
    var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;
    AutoMapper.Mappers.MapperRegistry.AllMappers = () =>
        allMappers().Concat(new List<IObjectMapper>
        {
            new DustAutomapper.FSharpOptionObjectMapper()
        });

    Mapper.CreateMap<SourceWithOption, DestinationWithNoOption>();

    var id = Guid.NewGuid();
    var source1 = new SourceWithOption
    {
        Standard = "test",
        PropertyUnderTest = new FSharpOption<Guid>(id)
    };

    var source2 = new SourceWithOption
    {
        Standard = "test"
        // PropertyUnderTest is null
    };

    var result1 = Mapper.Map<DestinationWithNoOption>(source1);
    Assert.AreEqual("test", result1.Standard, "basic property failed to map");
    Assert.AreEqual(id, result1.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' on Guid didn't work as expected");

    var result2 = Mapper.Map<DestinationWithNoOption>(source2);
    Assert.AreEqual("test", result2.Standard, "basic property failed to map");
    Assert.IsNull(result2.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' for null failed");
}
Thinking in a document centric world with RavenDB @ ALT.NET

Last night (25th Feb 2014), I presented on RavenDB at ALT.NET Melbourne.

I got some great feedback from the audience and was happy to share my experience so far with RavenDB. If you were there, or you watch the recording, and have some suggestions, good or bad, I’d love to hear them so I can improve.

Here’s the ALT.NET recording with slides, plus me up at the projector screen.

If you just want the slides and audio then here’s an alternate recording.

I’ve also put the slides up on SlideShare.