API call based Azure Functions with DocumentDB

Azure Functions caught my eye recently, mainly because of the F# support; I was hoping to write some F#-based integrations. But the F# support is classified as experimental, so while I learn the capabilities of Azure Functions I didn't want to get held up on issues that may be specific to their support of F#. Once I get my core objective operational via C#, I'll attempt to rewrite the functions in F#, and I'll share that too (in fact I'll likely just update this post with the rewritten version at some point).

I'm taking my simplest application and migrating it to make use of Azure Functions: the trusty Used Guid service. If you haven't heard of it… you're missing out!

(For background, see my earlier post: Playing with AppHarbor, twitter and WebAPI.)



So in the original WebAPI app, it was a straightforward enough process:

The dependencies are:

  • DataBase (Read + Write)
  • Twitter (Write)

Function Based Architecture

The initial challenge with the Azure Functions approach is how to do the Guid lookup off the back of the user request. The integrations offered by Azure Functions are designed as INPUT + OUTPUT(S). The first function has to take the input from the user, and that's an HTTP call.

I started thinking about the coordination between multiple Azure Functions early in this process. I thought the second function could just operate off the back of a new document showing up in Azure DocumentDB. But upon digging into the documentation I could not see any examples of how to "subscribe" to the feed of new documents. After not being able to solve it via experimentation, it started to look unsupported. So I went to Stack Overflow to get confirmation (or what I was really hoping to hear: "yes, this is coming soon").

That was not the case. The answer now (Sept 2016) is NO: not supported. So the list of supported bindings in the documentation was accurate and up to date.

So now the first function that takes the HTTP INPUT needs to have 2 OUTPUTs: DocumentDB and Queue.

Function 1

Input – HTTP

It goes against the grain of Azure Functions' ease of use to do a database read and return the failure case, though I may not have a choice but to do that.

I wrestled with the architectural approach to this problem, and it's because this problem space is absolutely contrived and doesn't lend itself to an elegant solution. When you actually step back and look at the problem domain / business requirement, 2 users really won't have colliding data (… the Guid); they would just ask a service to deliver them the next datum of value.

So with that I'll continue on following the happy path, because the core objective is to get to the deployment concerns around functions, and just lay some groundwork here.

Outputs – DocumentDB + Queue

The simplest way to write a DocumentDB document from your Azure Function is to have it as an out object. Now in many cases you only want to write the document if you pass initial validation; it seems valid to just assign null to the out parameters you don't want to pass data to in the invalid/error cases. To feed data to the subsequent function, a second out parameter is needed: the queue.
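
To make that concrete, here's a minimal sketch of what Function 1 could look like in the classic C# script (.csx) style. The binding names (outputDocument, outputQueueItem) and the document shape are my assumptions; they'd need to match what's configured in function.json:

using System;
using System.Net;
using System.Net.Http;

public static HttpResponseMessage Run(HttpRequestMessage req,
    out object outputDocument, out string outputQueueItem, TraceWriter log)
{
    var guid = Guid.NewGuid().ToString();

    // on the invalid/error path you'd assign null to both outputs instead
    outputDocument = new { id = guid, used = true };  // DocumentDB OUTPUT
    outputQueueItem = guid;                           // Queue OUTPUT feeding Function 2

    return req.CreateResponse(HttpStatusCode.OK, guid);
}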


Function 2

Output – Tweet

I thought this was going to be the simpler of the 2; I wanted to look at how to get secrets (API keys, OAuth, etc.) into the functions. But when I went to write the function: oh that's right, I don't have a 1-step approach to fetch NuGet packages. So making use of TweetSharp to do the authenticated Twitter API call will take a bit of extra time too.

Well, I started digging around that code, and extracting out exactly what I need is taking a while. Below is a link to the original code in the WebAPI app, where making use of the library makes producing a tweet quite easy (once configured with authentication).

So the options I’ll investigate later will be:

  1. The minimum set of code that can do the authentication and post the tweet so it’s all embedded in the 1 function.
  2. Making use of Azure Logic Apps (which I need to investigate more), as they look to offer some abstraction around common integrations.

Original C# code in WebAPI app:
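
For flavour, here's a minimal TweetSharp sketch of what that code does; it's not the original source, and the keys, tokens and status text are placeholders:

using System;
using TweetSharp;

var service = new TwitterService("consumerKey", "consumerSecret");
service.AuthenticateWith("accessToken", "accessTokenSecret");

var guid = Guid.NewGuid();
service.SendTweet(new SendTweetOptions { Status = "Used Guid: " + guid });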


Azure Function:

For now I'm just proving I can read off the queue; the basic setup has the data on the queue being a string.
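
Here's a minimal sketch of that, again in the classic C# script style; the myQueueItem binding name is an assumption that must match the queue trigger in function.json:

public static void Run(string myQueueItem, TraceWriter log)
{
    // for now, just prove the message arrives from the first function
    log.Info($"Read from queue: {myQueueItem}");
}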

With the integration panel looking like this:



What’s working:

  1. API endpoint to get user requests in
  2. Writing to DocumentDB
  3. Writing to a Queue
  4. A second function reads from that Queue

Initial Frustrations

The Azure portal is quite nice; the effects, the theming, it does look nicer than the AWS console, which I'm much more familiar with. But deep linking into Azure Functions doesn't work as expected: say you duplicate a tab, you either end up back at the dashboard level or on the create-new-function screen. Sometimes it would just spin/hang for a while.



I wanted to add a quick note on pricing. I'm no expert in this yet, but when I first started playing with the functions I had a dedicated app instance, which was draining my balance; when I realised, I switched to the dynamic pricing model, which I thought would have been the default.

It's good to track the cost of running features, especially while trying out new ones, but one thing that kept showing up in the notifications (bell area) was my current balance. It would always try to get my attention, but more often than not the outstanding balance had not changed.


What’s Next?

In the coming posts I’ll be covering the deployment pipeline for these functions, stay tuned.

JIRA Cookie-based auth API calls in F# with RestSharp

Today I was trying to create a quick integration with a bug tracking tool as a little spike; unfortunately, as is often the case, out-of-date documentation, vague errors, etc. held up the task. Nonetheless, I got something working with cookie-based authentication (I'll be switching to OAuth-based soon; I'm sure that'll go just as smoothly). I'll also submit a report about the problems with the documentation I discovered.

The rest of the documentation seems ok, but only time will tell as I get further with it. From my experience it's usually the initial steps that cause the most frustration, as you get reminded not to trust the documentation…

Following along with this guide – JIRA REST API Example – Cookie-based Authentication.

What’s not clear in the documentation:

  • Even though you sign in with your email address, you really need to use your username (which is different), at least it is for the `admin` account.
  • Error 1: there is a leading `/jira/` element in the auth API route; this may be old or specific to self-hosting. It's not part of the URI for on-demand, which should be `http://jira.example.com:8090/rest/auth/1/session`
  • Error 2: (the big one) what is returned is a different `session` object, and it not only contains the required `JSESSIONID` but also another key/value pair you need to have in your cookie when you make a subsequent request: `studio.crowd.tokenkey`.

The core objective was to get specific events from 1 system to be reflected in new or existing locations in a second system. In this case the second system is JIRA the bug tracker.

Complete Solution

The breakdown follows below.

Sorry, this code is a bit awkward because RestSharp is written for fluent C#-style usage. I'll be looking for an F#-focused REST client; I only chose RestSharp because I had used it ages ago.

Walk Through

Some types to send along. Notice the casing: that's because I don't have any JSON serialization code wired up yet to change from the .NET uppercase convention to the lowercase JSON style.

type PostData = {
    body: string
}

type Login = {
    username : string
    password : string
}

Using these records, make the auth request against your on-demand account and inspect what you get back (see Error 2): you'll discover more cookie details come back than documented. As an extra note on why it was even more frustrating: if you capture and review the calls from the web UI, those calls supply even more cookie details, such as `ondemand.autologin`, `xsrf.token` and others.

open RestSharp

let uname = "admin"
let pw = "your-password"

// the base URL here is a placeholder; point it at your own instance
let restClient = RestClient("http://jira.example.com:8090")

let authReq =
    RestRequest("rest/auth/1/session", Method.POST)
        .AddJsonBody({ username = uname; password = pw })

Issue that auth request and now you’ll have the session cookie values you’ll need.

let authResponse = restClient.Post authReq
let cookiesToAdd =
    authResponse.Cookies
    |> Seq.map (fun x -> (x.Name, x.Value))

In this case I'm updating an existing comment for which I know the identifier.

// the issue key and comment id below are placeholders; use the ones you know
let addCommentReq =
    RestRequest("rest/api/2/issue/TEST-1/comment/10000", Method.PUT)
        .AddJsonBody({ body = "a new comment" })

for (name, value) in cookiesToAdd do
    addCommentReq.AddCookie(name, value) |> ignore

Finally, issue that add-comment request; the status code should be 'Created'. I saw 'Unauthorized' responses for far too long.

let commentResp = restClient.Post addCommentReq

Lastly, another fun hiccup was discovering that curl on Windows doesn't support HTTPS ("Protocol https not supported or disabled in libcurl"), but that was the least of my problems.

IIS, Visual Studio, unable to start debugging on the web server.


So this is another post logging an issue after a frustrating problem-solving process. This is to help future me when I forget the same Windows configuration step in another 8-15 months; maybe it'll help you too.

On Windows 8 you have to go in and turn on many individual items to just get an ASP.NET IIS-hosted web app running. I've had a variation of this problem in the past and blogged about it.

This time round it's basically the same problem, but Visual Studio, upon trying to spin up and attach to an IIS-hosted web project, kicks up 1 of 3 errors.


The most common one was:

Unable to start debugging on the web server. The debugger cannot connect to the remote computer. The debugger was unable to resolve the specified computer name.

Which is not helpful, and the posts I found were from people trying to remote debug other machines, or something else not helpful.

Unable to start debugging on the web server

Other variations included:

… error occurred on a send

Unable to start debugging on the web server.

If you dig further into the Windows event log you'll be sent down an even more wrong path, with errors such as:

The program can’t start because SecRuntime.dll is missing from your computer. Try reinstalling the program to fix this problem.


Ensure that you turn on 'Internet Information Services Hostable Web Core' along with all the other .NET / IIS features you need that I mentioned in the previous post.
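
If you prefer the command line, I believe the same switch can be flipped with DISM (verify the feature name on your machine with DISM /Online /Get-Features):

DISM /Online /Enable-Feature /FeatureName:IIS-HostableWebCore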

Windows Features

RavenDB invalid HTTP Header characters bug

So you've done a search and you've arrived here, and you're using RavenDB (at least version 2); in my case 2.5.2907. If this is fixed in 3.0 that's good, but that doesn't help us: we're not ready to move to v3 yet, based on our challenges in production with v2 (which is stable enough now).

I'll try to help you out first (or future me, if I make this mistake again), then I'll explain more. This may also help you anywhere else you're using HTTP headers and have used invalid characters.

The Error

Specified value has invalid HTTP Header characters

“Error”: “System.ArgumentException: Specified value has invalid HTTP Header characters.
Parameter name: name
at System.Net.WebHeaderCollection.CheckBadChars(String name, Boolean isHeaderValue)
at System.Net.WebHeaderCollection.SetInternal(String name, String value)
at Raven.Database.Extensions.HttpExtensions.WriteHeaders(IHttpContext context, RavenJObject headers, Etag etag)
at Raven.Database.Extensions.HttpExtensions.WriteData(IHttpContext context, Byte[] data, RavenJObject headers, Etag etag)
at Raven.Database.Server.Responders.Document.GetDocumentDirectly(IHttpContext context, String docId)
at Raven.Database.Server.Responders.Document.Respond(IHttpContext context)
at Raven.Database.Server.HttpServer.DispatchRequest(IHttpContext ctx)
at Raven.Database.Server.HttpServer.HandleActualRequest(IHttpContext ctx)”

The Fix

Check for invalid characters as per the HTTP spec (RFC 2616). Thanks to this StackOverflow answer for helping find it in the spec faster.

Good luck; in my case it was an email address, and the '@' is invalid.

So make sure you’re only storing US-ASCII and not using any of the control characters or separators:

"(" | ")" | "<" | ">" | "@"
| "," | ";" | ":" | "\" | <">
| "/" | "[" | "]" | "?" | "="
| "{" | "}" | SP | HT
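
As a quick guard of my own (not something RavenDB provides), a metadata key can be checked against those token rules before it goes anywhere near a header:

using System;
using System.Linq;

static class HeaderNames
{
    // RFC 2616 separators, including SP and HT, all disallowed in a header name
    const string Separators = "()<>@,;:\\\"/[]?={} \t";

    public static bool IsValid(string name) =>
        !string.IsNullOrEmpty(name) &&
        name.All(c => c > 31 && c < 127 && Separators.IndexOf(c) < 0);
}

// HeaderNames.IsValid("Some-Key")         -> true
// HeaderNames.IsValid("user@example.com") -> false ('@' is a separator)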

The Details

Using metadata on RavenDB documents is quite helpful, and the data stored there so far had been pretty simple; to support a special case, the storage of an email address was being worked into the metadata. The nature of this bug in the RavenDB IDE was that when you list all the documents of that collection they show up, and you see their selected fields, but when you click to load the document you get the "Specified value has invalid HTTP Header characters" error, and you're left scratching your head about how the document is in the database but you can't load it.


I encountered what really feels like a bug: as a developer using the metadata mechanism of Raven documents, it shouldn't be the responsibility of that developer to ensure they are meeting the HTTP header specification (see page 16 of RFC 2616).

RavenDB Invalid metadata

This is invalid metadata on a raven document (see the ‘@’)


You want the tools you use to really help you, and it's frustrating when something obscure like this happens. It may be more complex or difficult, but what I would like to see is a check on the data on its way in, so you can clearly see when this became a problem, instead of hunting it down after the fact.

I raised this on the RavenDB support group, and it's since been raised as a bug, so it'll eventually (hopefully) be fixed in v3.

The future of Alt.Net

Today Richard Banks asked us a question about the future of Alt.Net.

I strongly agree with his conclusions, but wanted to get my thoughts down too and answer his questions.

I’ve been part of the Melbourne Alt.Net community since our first meeting on April 28th 2010. We started a little after Richard and the Sydney guys, but have kept a solid core of attendees and survived wavering levels of interest from the broader community and multiple sponsorship and venue changes. I’m glad we started at that time because that “why so mean” moment had already passed and it didn’t seed a negative undertone in our community here.

Here’s the only photo I have of that April 28th meeting, it seems accidental as I was putting my phone away.

Alt.Net 28th April 2010

I became a better software developer thanks to Alt.Net, and it helped where I work now build the fantastic team we have, by showing us there were more options out there and letting us learn from others in the community.

Here’s a better shot of Richard visiting us in October that year (2010):

Alt.Net October 2010 with rbanks

Richard asked

“Is that enough now, should we now disband?”

No, because we still need continuous improvement – we don’t stop, we grow and improve.

Richard stated “I still need what the Alt.Net community provides”.

I do too. Sharing ideas and frustrations with friends, peers and other new people is very important to me.

What do I want to see in the future?

Even more mainstream.

There are still many developers who haven’t heard about our user group meetings.

Working on 2 presumptions:
– A percentage of people can’t physically attend often or at all.
– The topics we’re discussing are of value and will improve what people deliver in their jobs / be better software developers.

We just need to get our message/content out there better, by pushing more strongly for input on the topics the group covers, and by sharing our content more widely. Publishing has been happening for the last year (recording and publishing on YouTube), but we can share more on twitter/linkedin/blogs.

So …


If we want to reach more people then yes, maybe branding will help; there's now very high quality content available online for developers, so there's more competition nowadays.

When our team from Picnic Software presented last year, the turnout was record breaking; we had some good questions recorded, and many more good discussions after the fact.

Based on our chats with those in attendance, our honest and direct coverage of the issues/challenges and what we're doing about them is what people came to see, and why so many came to talk to us after.

So any new branding, I believe, should have the feeling of one strong community of software developers spread throughout Australia, gathering together to share locally and online.


All this depends on the collective objective…

If we're trying to reach more people, then yes, branding, and putting time and money behind it, should help (right? it's the reason companies pay so much for marketing). I stand here and say I want to reach more people.

If we're just self-evaluating, our community is strong, we're doing a good job sharing, and enough people are finding us (we're not shrinking), then steady as she goes is fine and the branding is less important; it's about our content, and we can just focus on that.

Recovering a docker vm on Windows with Virtual Box

NOTE: this is more of a Windows / VirtualBox problem, but the boot2docker experience, as with a lot of things, is not great on Windows… yet.

I was running the docker VM with 2 newly configured containers and all was going well; the docker part was quite easy (once up and running). Then my entire machine crashed due to something else.

Reboot, open up a shell and type ./boot2docker.exe start

Failed to start machine “boot2docker-vm” (run again with -v
for details)

So I ran it with -v: no useful information.

Oh no.


Hopefully I hadn't lost the containers I'd spent a day setting up. The first Windows issue with getting docker up and running was resolved by wiping all the data produced and starting fresh; that was not going to be the best outcome this time around.

First things first, find out where that docker VM is:

virtual box UI

So it was in C:\Users\<username>\VirtualBox VMs\boot2docker-vm. There it was: 580mb and several hours of work I didn't want to do again. Minor relief; I made a copy of this.

Next step: getting docker back into a good state. I tried many combinations of poweroff, reset, init, uninstalling and reinstalling; no luck. A note on this: none of those commands hurt the vmdk file there, but still make a copy.

So the next step was to move that boot2docker-vm folder out of there, and do a new init.

Small success: docker starts.

Stop it and then try to drop the VMDK file back in…


Failed to start machine “boot2docker-vm” (run again with -v
for details)


Ok, so now it's starting to look like the problem is specific to VirtualBox and just getting the VM to spin up again. A little bit of digging and I see the vbox file has some UUIDs and MAC addresses; comparing the newly-created one to the backup, the differences that look to be the cause appear.

diff of vbox file

The fix: change those values in the old vbox file to match the newly generated ones.

It starts, and the containers are there, phew!

docker ps -a

docker start <containerid>


It appears that a new init sets up new adapter MAC addresses and a new machine uuid; the rest of the differences can stay, as they're specific to your old VMDK file. If you're reading this: good luck, and as always, YMMV.

Using AutoMapper to help you map FSharpOption<> types


This applies because your model is structured this way and you've realised you need it; otherwise, this post doesn't apply to you.


When you get used to AutoMapper helping you, you begin to demand it help everywhere by default. In this scenario you have to configure it to map from an F# type that has an option (Guid is just an example).

In our event sourcing setup, we have commands that have changed to include an additional property (not an option), but the corresponding event needs the property to be an option (as that data was not always present).

We end up using those types/classes (events) that have the optional value and mapping them to C# classes used for persistence (in this case RavenDB), where the fields are nullable, so a null value is acceptable for persistence.

Here’s the Source and Destination classes, hopefully seeing that makes this scenario clearer.

public class SourceWithOption
{
    public string Standard { get; set; }
    public FSharpOption<Guid> PropertyUnderTest { get; set; }
}

public class DestinationWithNoOption
{
    public string Standard { get; set; }
    // nullable, so the None case can land as null (see the test below)
    public Guid? PropertyUnderTest { get; set; }
}

Note: SourceWithOption is the equivalent C# shape that we get out of the F# type, so the F# code is really this trivial (SubItemId is the optional one):

type JobCreatedEvent = {
    Id : Guid
    Name: string
    SubItemId : option<Guid>
}


Where you do all your AutoMapper configuration, you're going to make use of the MapperRegistry and add your own mapper.

(Note: all this code is up as a gist.)

var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;

AutoMapper.Mappers.MapperRegistry.AllMappers = () =>
    allMappers().Concat(new List<IObjectMapper>
    {
        new FSharpOptionObjectMapper()
    });

And the logic for FSharpOptionObjectMapper is:

public class FSharpOptionObjectMapper : IObjectMapper
{
    public object Map(ResolutionContext context, IMappingEngineRunner mapper)
    {
        var sourceValue = ((dynamic) context.SourceValue);

        // None (or null) maps to null; Some(x) unwraps to the underlying value
        return (sourceValue == null || OptionModule.IsNone(sourceValue))
            ? null
            : sourceValue.Value;
    }

    public bool IsMatch(ResolutionContext context)
    {
        var isMatch =
            context.SourceType.IsGenericType &&
            context.SourceType.GetGenericTypeDefinition() == typeof (FSharpOption<>);

        if (context.DestinationType.IsGenericType)
            isMatch &=
                context.DestinationType.GetGenericTypeDefinition() != typeof(FSharpOption<>);

        return isMatch;
    }
}

Tests to prove it

Here's a test you can run to show that this works. I started with Custom Type Converters (ITypeConverter) but found that approach would not work in a generic fashion across all variations of FSharpOption<>.

public void FSharpOptionObjectMapperTest()
{
    var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;
    AutoMapper.Mappers.MapperRegistry.AllMappers = () => allMappers().Concat(new List<IObjectMapper>
    {
        new DustAutomapper.FSharpOptionObjectMapper()
    });

    // the map itself still needs to be registered as usual
    Mapper.CreateMap<SourceWithOption, DestinationWithNoOption>();

    var id = Guid.NewGuid();
    var source1 = new SourceWithOption
    {
        Standard = "test",
        PropertyUnderTest = new FSharpOption<Guid>(id)
    };

    var source2 = new SourceWithOption
    {
        Standard = "test"
        // PropertyUnderTest is null
    };

    var result1 = Mapper.Map<SourceWithOption, DestinationWithNoOption>(source1);
    Assert.AreEqual("test", result1.Standard, "basic property failed to map");
    Assert.AreEqual(id, result1.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' on Guid didn't work as expected");

    var result2 = Mapper.Map<SourceWithOption, DestinationWithNoOption>(source2);
    Assert.AreEqual("test", result2.Standard, "basic property failed to map");
    Assert.IsNull(result2.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' for null failed");
}