API call based Azure Functions with DocumentDB

Azure Functions caught my eye recently, mainly because of the F# support, as I was hoping to write some F# based integrations. But the F# support is classified as experimental, so while I learn the capabilities of Azure Functions I didn’t want to get held up on issues that may be specific to their support of F#. Once I get my core objective operational via C#, I’ll attempt to rewrite the functions in F# later, and I’ll share that too (in fact I’ll likely just update this post with the rewritten versions at some point).

I’m taking my simplest application and migrating it to make use of Azure Functions: the trusty Used Guid service. If you haven’t heard of it… you’re missing out!

Playing with AppHarbor twitter and WebAPI


Background

So in the original WebAPI app it was a straightforward enough process.

The dependencies are:

  • Database (Read + Write)
  • Twitter (Write)

Function Based Architecture

The initial challenge with the Azure Function approach is how to do the Guid lookup off the back of the user request. The integrations offered by Azure Functions are designed in a way that it’s INPUT + OUTPUT(S). The first function has to take the input from the user, and that’s an HTTP call.

I started thinking about the coordination between multiple Azure Functions early in this process. I thought the second function could just operate off the back of a new document showing up in Azure DocumentDB. But upon digging into the documentation I could not see any examples of how to “subscribe” to the feed of new documents. After not being able to solve it via experimentation it started to look unsupported, so I went to Stack Overflow to get confirmation (or what I was really hoping to hear: “yes, this is coming soon”).

That was not the case. The answer now (Sept 2016) is NO: not supported. So the list of supported bindings in the documentation was accurate and up to date.

So now the first function that takes the HTTP INPUT needs to have 2 OUTPUTs: DocumentDB and Queue.

Function 1

Input – HTTP

It goes against the ease of use of Azure Functions to do a database read and return the failure cause, though I may not have a choice but to do that.

I wrestled with the architectural approach to this problem, because this problem space is absolutely contrived and doesn’t lend itself to an elegant solution. When you actually step back and look at the problem domain / business requirement, 2 users really won’t have colliding data (… the Guid). They would just ask a service to deliver them the next datum of value.

So with that, I’ll continue on following the happy path; the core objectives are to get to the deployment concerns around functions, and just lay some groundwork here.

Outputs – DocumentDB + Queue

The simplest way to write a DocumentDB document from your Azure Function is to have it as an out parameter of type object. Now in many cases you only want to write the document if you pass initial validation; it seems valid to just assign null to the out parameters you don’t want to pass data to on the invalid/error cases. To feed data to the subsequent function, a second out parameter is needed, which is the queue.

Code:
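
Here’s a minimal sketch of what Function 1’s run.csx can look like on the happy path. The parameter names usedGuidDocument and usedGuidQueueItem are my assumptions, and need to match the output binding names configured in the Integrate panel / function.json:

using System;
using System.Net;
using System.Net.Http;

public static HttpResponseMessage Run(
    HttpRequestMessage req,
    out object usedGuidDocument,  // DocumentDB output binding
    out string usedGuidQueueItem, // Queue output binding, feeds Function 2
    TraceWriter log)
{
    var guid = Guid.NewGuid();
    log.Info($"Issuing used Guid: {guid}");

    // happy path: write the document and queue the Guid for the tweeting function;
    // on a validation failure both of these would be assigned null instead
    usedGuidDocument = new { id = guid.ToString(), usedAt = DateTime.UtcNow };
    usedGuidQueueItem = guid.ToString();

    return req.CreateResponse(HttpStatusCode.OK, new { guid });
}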

Function 2

Output – Tweet

I thought this was going to be the simpler of the 2; I wanted to look at how to get secrets (API keys, OAuth, etc.) into the functions. But when I went to write the function, I remembered I don’t have a 1-step approach to fetch NuGet packages. So making use of TweetSharp to do the authenticated Twitter API call will take a bit of extra time too.

Well, I started digging around that code, and extracting out exactly what I need is taking a while. Below is a link to the original code in the WebAPI app, where making use of the library makes producing a tweet quite easy (once configured with authentication).

So the options I’ll investigate later will be:

  1. The minimum set of code that can do the authentication and post the tweet so it’s all embedded in the 1 function.
  2. Making use of Azure Logic Apps (which I need to investigate more), as they look to offer some abstraction around common integrations.

Original C# code in WebAPI app:

https://github.com/NickJosevski/usedguids/blob/master/UsedGuidTwitter/Logic/Tweeting.cs
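
For reference, the core of what that code does with TweetSharp looks roughly like this; the credential values are placeholders and would come from configuration:

using TweetSharp;

public static void TweetUsedGuid(string usedGuid)
{
    // placeholder credentials; in the WebAPI app these come from configuration
    var service = new TwitterService("consumerKey", "consumerSecret");
    service.AuthenticateWith("accessToken", "accessTokenSecret");

    service.SendTweet(new SendTweetOptions { Status = "Used Guid: " + usedGuid });
}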

Azure Function:

For now I’m just proving I can read off the queue. The basic setup has the data on the queue being a string.
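
A minimal sketch of that, assuming the queue trigger parameter is bound as a string named usedGuid:

public static void Run(string usedGuid, TraceWriter log)
{
    // the queue payload is just the Guid as a string for now
    log.Info($"Read used Guid from queue: {usedGuid}");
}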

With the integration panel looking like this:

Function 2 integration panel

Summary

What’s working:

  1. API endpoint to get user requests in
  2. Writing to DocumentDB
  3. Writing to a Queue
  4. A second function reads from that Queue

Initial Frustrations

The Azure portal is quite nice; the effects and the theming make it look nicer than the AWS console, which I’m much more familiar with. But deep linking into Azure Functions doesn’t work as expected: say you duplicate a tab, you either end up back at the dashboard level or on the create-new-function screen. Sometimes it would just spin/hang for a while.


Pricing

I wanted to add a quick note on pricing. I’m no expert in this yet, but when I first started playing with the functions I had a dedicated app instance, which was draining my balance; when I realised, I switched to the dynamic pricing model, which I thought would have been the default.

It’s good to track the cost of running features, especially while trying out new ones, but one thing that kept showing up in the notifications (bell area) was my current balance. It would always try to get my attention, but more often than not the outstanding balance had not changed.


What’s Next?

In the coming posts I’ll be covering the deployment pipeline for these functions, stay tuned.


Using AutoMapper to help you map FSharpOption<> types

Why?

Because your model is structured this way and you have realised you need this; otherwise, this doesn’t apply to you.

Scenario

When you get used to using AutoMapper to help you everywhere, you begin to demand it helps you everywhere by default. In this scenario you have to configure it to help you map from an F# type that has an option property (Guid is just an example).

In our event sourcing setup, we have commands that have changed to include an additional property (not an option), but the corresponding event needs to have an option (as that data was not always present).

We end up using those types/classes (events) that have the optional value to map to C# classes that are used for persistence (in this case RavenDB), where the fields are reference or nullable types, so a null value is acceptable for persistence.

Here are the Source and Destination classes; hopefully seeing them makes this scenario clearer.

public class SourceWithOption
{
    public string Standard { get; set; }
    public FSharpOption<Guid> PropertyUnderTest { get; set; }
}

public class DestinationWithNoOption
{
    public string Standard { get; set; }
    public Guid? PropertyUnderTest { get; set; } //nullable, so a None option can map to null
}

Note: SourceWithOption is the equivalent C# shape we get out of the F# types, so the F# code is really this trivial (SubItemId is the optional one):

type JobCreatedEvent = {
    Id : Guid
    Name: string
    SubItemId : option<Guid>
}

Solution

Where you do all your AutoMapper configuration, you’re going to make use of the MapperRegistry and add your own mapper.

(Note: all this code is up as a gist.)

var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;

AutoMapper.Mappers.MapperRegistry.AllMappers = () 
    => allMappers().Concat(new List<IObjectMapper>
    {
            new FSharpOptionObjectMapper()
    });

And the logic for FSharpOptionObjectMapper is:

public class FSharpOptionObjectMapper : IObjectMapper
{
    public object Map(ResolutionContext context, IMappingEngineRunner mapper)
    {
        var sourceValue = ((dynamic) context.SourceValue);

        return (sourceValue == null || OptionModule.IsNone(sourceValue)) 
		? null : 
		sourceValue.Value;
    }

    public bool IsMatch(ResolutionContext context)
    {
        var isMatch = 
		    context.SourceType.IsGenericType &&
		    context.SourceType.GetGenericTypeDefinition() 
				== typeof (FSharpOption<>);

        if (context.DestinationType.IsGenericType)
        {
            isMatch &= 
			    context.DestinationType.GetGenericTypeDefinition() 
			    != typeof(FSharpOption<>);
        }

        return isMatch;
    }
}

Tests to prove it

Here’s a test you can run to show that this works. I started with Custom Type Converters (ITypeConverter) but found they would not work in a generic fashion across all variations of FSharpOption<>.

[Test]
public void FSharpOptionObjectMapperTest()
{
    Mapper.CreateMap<SourceWithOption, DestinationWithNoOption>();

    var allMappers = AutoMapper.Mappers.MapperRegistry.AllMappers;
    AutoMapper.Mappers.MapperRegistry.AllMappers = () => allMappers().Concat(new List<IObjectMapper>
        {
            new DustAutomapper.FSharpOptionObjectMapper()
        });

    var id = Guid.NewGuid();
    var source1 = new SourceWithOption
    {
        Standard = "test",
        PropertyUnderTest = new FSharpOption<Guid>(id)
    };

    var source2 = new SourceWithOption
    {
        Standard = "test"
        //PropertyUnderTest is null
    };

    var result1 = Mapper.Map<DestinationWithNoOption>(source1);

    Assert.AreEqual("test", result1.Standard, "basic property failed to map");
    Assert.AreEqual(id, result1.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' on Guid didn't work as expected");

    var result2 = Mapper.Map<DestinationWithNoOption>(source2);

    Assert.AreEqual("test", result2.Standard, "basic property failed to map");
    Assert.IsNull(result2.PropertyUnderTest, "'FSharpOptionObjectMapper : IObjectMapper' for null failed");
}

Tracking application errors with Raygun.io

A nice coincidence a few weeks ago was the news of Raygun going into public beta crossing my radar.

At the time we were fine-tuning some things in an application that was in a private beta. We had put in a little effort to ensure that we would get reliable results about errors that happened to users, but at that point we were just storing the details in a database table.

Background

We were capturing 3 levels of errors in the application.
– Client-side (JavaScript)
– Web Tier (ASP.NET MVC / WebApi)
– Back-end (Topshelf hosted services)

Any client-side error would be captured and sent to the Web Tier; the Web Tier forwards that and its own errors on to the back end, where they would be persisted with low overhead. In a previous post I have covered this approach.

But to get from entries stored in a database table to something actually useful for correctly monitoring errors and starting a resolution process is quite a bit of work.

With our own application structure we can easily query that table, and just as easily send emails to the dev team when errors occur. But this is still short of a robust solution, so after a quick glance at the Raygun features there was very good reason to give it a go.

What it took for us to set up Raygun

A quick look at the provided setup instructions and their GitHub sample, and it looked very easy.

With our particular application structure the global Application_Error method and the sample usage of Server.GetLastError() didn’t fit well. The clearest example is the arrival of error data from the client side, which isn’t a .NET exception, so simply issuing the RaygunClient().Send(exception); call doesn’t work. In this scenario we basically recreate an exception that represents the issue in the web tier, then have that sent to Raygun.
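
A rough sketch of that recreation step; ClientErrorDto and ClientSideException are hypothetical names, only RaygunClient comes from the Raygun library:

// hypothetical DTO representing the payload posted from the browser
public class ClientErrorDto
{
    public string Message { get; set; }
    public string StackTrace { get; set; }
    public string Url { get; set; }
}

// an exception type that stands in for the JavaScript error
public class ClientSideException : Exception
{
    public ClientSideException(ClientErrorDto dto)
        : base(dto.Message)
    {
        Data["ClientStackTrace"] = dto.StackTrace;
        Data["Url"] = dto.Url;
    }
}

// in the endpoint that receives client-side errors:
// new RaygunClient().SendInBackground(new ClientSideException(dto));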

For errors that originate in our controllers (regular and WebApi), which extend a common base class, we make use of the HandleError attribute so we can override a method to do some extra work; the code looks like:

[HandleError]
public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        //our other logic, some to deal with 500s, some to show 404s

        //only call raygun if it was anything but a 404 that brought us here
        var httpException = filterContext.Exception as HttpException;
        if (httpException == null || httpException.GetHttpCode() != 404)
        {
            new RaygunClient().SendInBackground(filterContext.Exception);
        }
    }
}

In the scenarios where we actually do have the exception, it’s great and it “just works”; we send it off asynchronously in the catch block by calling a wrapping function like this:

public static void LogWithRaygun(Exception ex)
{
    new RaygunClient().SendInBackground(ex);
}

Conclusion

So Raygun really helped us avoid a weakly hand-rolled halfway solution for tracking errors; we now get nice email notifications that look like the example below and link into the Raygun detailed information view.

It’s lacking a few nice-to-have features, but that’s more than acceptable for version 1 of the application, and from what we’ve been told our suggestions are already on track for a future release. One in particular that would benefit lots of people would be allowing the user to map an association between errors. For example, 2 seemingly different errors get logged but are in actual fact the same cause; this way the reporting and similarity tracking can continue to group the 2 variations under the one umbrella.

raygun email example

Along with the dashboard summary.

Part of the Raygun Dashboard

It’s one less thing we need to worry about. Just an FYI: we didn’t stop saving records into our own database table; we’re just unlikely to have to go looking in there very much, if ever.

A basic walkthrough of RabbitMQ using C#.NET examples

In my last post, “A trivial message loop using RabbitMQ in C#.NET”, I demonstrated a simple concept of a circular message queue using RabbitMQ. I chose not to complicate that post with explanations of some RabbitMQ fundamentals, leaving them for this post.

As of writing the version of RabbitMQ I am using is 2.4.1.

Here are some of the concepts I introduced in the last post; I’ll break them down below in what is hopefully a logical flow of steps to take.

  1. IConnection
  2. IModel
  3. ExchangeDeclare()
  4. QueueHelper.EnsureQueue()
  5. Subscription
  6. SimpleRpcClient
  7. SimpleRpcClient.Cast()
  8. Your own SimpleRpcServer
  9. HandleSimpleCast

First, if you’re looking to get set up on Windows, head on over to the RabbitMQ official documentation and follow the steps.

Also, before you continue reading, I urge you to go have a look at Mike Hadlow’s EasyNetQ API and accompanying blog posts.

Some other good blog posts I stumbled upon after starting to write my version of this come from Simon Dixon; at the time of writing he had a 3-part series (parts 1, 2 and 3).

On to my take. I’ll be explaining RabbitMQ along the lines of how my simple demonstration application functions; as a reminder, it’s available on BitBucket – https://bitbucket.org/NickJosevski/rabbitloop

Step 1 – The Connection

I am using ConnectionFactory with a local machine address to create the connection the client and server will use to build the next required component.

IConnection connection = new ConnectionFactory
  { 
      Address = "127.0.0.1" //localhost
  }.CreateConnection();

Step 2 – The Model

Using the connection, create a model; this is used to configure the rest of the details of how communication will take place between the client and server.

IModel model = connection.CreateModel();

Step 3 – ExchangeDeclare()

Declaring an exchange sets up a named path for communication. The second parameter offers options on how messages are to be delivered; in this example I’m sticking with a simple choice of ‘Direct’.

var exchange = "my.exchange";
model.ExchangeDeclare(exchange, ExchangeType.Direct);

Step 4 – QueueHelper.EnsureQueue()

This is my own helper extension method that performs a few tasks: creating a queueId, using that queueId to declare a queue on the model, binding the queue and exchange together, and finally returning the newly created queueName.

public static class QueueHelper
{
    public static string EnsureQueue(this IModel ch, String exchangeName)
    {
        var queueId = String.Format("{0}.reply", exchangeName);
        var queueName = ch.QueueDeclare(queueId, false, false, false, null);
        ch.QueueBind(queueName, exchangeName, "", null);
        return queueName;
    }
}

Step 5 – Subscription

The subscription is supplied to the server logic and is created using the model and the queueName:

var sub = new Subscription(model, queueName);

Step 6 – SimpleRpcClient

Creating the client is just as simple, and we’re ready to send a message.

var client = new SimpleRpcClient(model, queueName);

Step 7 – SimpleRpcClient.Cast()

The Cast() method on the SimpleRpcClient is the asynchronous way to send a message. I want to keep this post straightforward, so I am excluding the message class and how to serialize it, but those steps are very basic (a sketch follows below).

var myMsg = new SerializableClassYouHaveCreated().Serialize();

client.Cast(new BasicProperties(), myMsg);
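
For completeness, a minimal sketch of what that serialization could look like; MyMessage is a hypothetical stand-in for your own [Serializable] message class:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class MyMessage
{
    public string Content { get; set; }
}

public static class MessageExtensions
{
    public static byte[] Serialize(this MyMessage message)
    {
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, message);
            return stream.ToArray();
        }
    }
}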

Step 8 – NicksSimpleRpcServer

This is where things start to get a little more involved, but it’s relatively straightforward. In this approach I create my own RpcServer that extends SimpleRpcServer.

public class NicksSimpleRpcServer : SimpleRpcServer
{
    private readonly SimpleRpcClient rpcClient;

    public NicksSimpleRpcServer(Subscription subscription, SimpleRpcClient client)
    : base(subscription)
    {
        rpcClient = client;
    }

}

Step 9 – HandleSimpleCast

All I do now is override the method that handles asynchronous messages (it also needs to de-serialize the content from the byte[] body). Just to follow along from my sample application, the server also has a client private member so it can continue to forward a modified message onwards.

//as an override method inside NicksSimpleRpcServer:
public override void HandleSimpleCast(Boolean isRedelivered, IBasicProperties requestProperties, byte[] body)
{
    //deal with the incoming message and build msgToForward, then send it on
    rpcClient.Cast(new BasicProperties(), msgToForward);
}

This is where our story comes to an end… As a final step we need to get our server running…

Using the Task Parallel Library, create a Task that kicks off another method of the RpcServer – MainLoop(). MainLoop simply sits there (blocks) and accepts incoming messages; each time an asynchronous message arrives, HandleSimpleCast will fire and the message will be processed.

var server = new NicksSimpleRpcServer(sub, client);

new Task(server.MainLoop).Start();

Hope this is easy to follow.

A trivial message loop using RabbitMQ in C#.NET

During the May 2011 meeting of the Melbourne ALT.NET group, 3 presenters, each with their chosen functional language, tackled a basic problem of transmitting a message along a chain, with the objective of all nodes contributing to a final output. Here’s some of their code in the languages of Erlang, Scala and F#.

As an introductory post to RabbitMQ for my blog I thought I would cover how you go about setting up a simple RabbitMQ server-client in an asynchronous fashion. In this post I’m only going to cover the abstract concept and introduce some terms, I’ll go into more specific detail in another post as to the technical details of using RabbitMQ. I presented this concept as a very quick 10 minute lightning talk at DDD Melbourne 2011.

So stay tuned for more RabbitMQ based posts (links will appear here when they’re complete).

The objective:

  1. Start with a message that contains a letter ‘A’
  2. Send this message off to a node (N)
  3. Have that node increment the letter, e.g. first node makes it ‘B’ (see the sketch after this list)
  4. Then that node sends it off, and so on looping for M number of messages.
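
To make the increment step concrete, here’s a sketch of how it could look; LoopMessage is a hypothetical stand-in for the message class in the BitBucket project:

using System;

[Serializable]
public class LoopMessage
{
    public char Letter { get; set; }

    public LoopMessage IncrementLetter()
    {
        // 'Z' wraps back around to 'A'; anything else moves to the next letter
        Letter = Letter == 'Z' ? 'A' : (char)(Letter + 1);
        return this;
    }
}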

A Node:

Node Structure

The Algorithm:
My RabbitMQ and C# based solution (simplest version). This list contains some RabbitMQ-specific concepts that I’ll describe later in the post; it is the algorithm to create, wire up and kick off processing.

  1. Create the N SimpleRpcClient clients.
  2. Keep a reference to the 1st client; this is what we use to begin messaging.
  3. Create N Subscription elements.
  4. Create N SimpleRpcServer elements; these are the actual nodes.
  5. Supply the second client onwards, along with a subscription, to each node.
  6. Create a new Threading.Tasks.Task() for each node.
  7. To complete the loop, wire up the first client to the last node.

Node Communication:
Each node houses a client to continue on sending the message.

Node Communication

The Code:

IConnection connection;
IModel model;
//connection and model set up as shown in the follow-up walkthrough post

var subscriptions = new List<Subscription>();
var clients = new List<SimpleRpcClient>();

char[] letters; //A-Z, repeating
for (var x = 0; x < totalToCreate; x++)
{
    var nextLetter = letters[x];
    //this builds up a string in the format of comm.0.a, comm.1.b, etc
    var exchange = String.Format("comm.{0}.{1}", x, nextLetter);
    model.ExchangeDeclare(exchange, ExchangeType.Direct);
    var queueName = model.EnsureQueue(exchange);
    subscriptions.Add(new Subscription(model, queueName));

    clients.Add(new SimpleRpcClient(model, queueName));
}

for (var x = 0; x < totalToCreate; x++)
{
    //note the use of [(x + 1) % totalToCreate] on the clients: each node forwards
    //via the next client, and the last node wraps around to the first client
    var server = new SimpleRpcServer(subscriptions[x], clients[(x + 1) % totalToCreate]);

    new Task(server.MainLoop).Start(); //MainLoop is a RabbitMQ concept
}

//Inside RpcServer
public override void HandleSimpleCast(bool isRedelivered, IBasicProperties requestProperties, byte[] body)
{
    var messageToSend = body.Deserialize().IncrementLetter().Serialize();

    rpcClient.Cast(new BasicProperties(), messageToSend);
}

The code above has been simplified a bit to demonstrate the concept; please review the project code for further intricacies in how it needs to operate.

Working demo code on BitBucket – https://bitbucket.org/NickJosevski/rabbitloop

*Note: you’ll have to excuse some of the roughness of the code as it stands at the 29th of May; I haven’t had a chance to refactor it to be more elegant.

There is a long list of RabbitMQ concepts that are beyond the scope of this blog post; a follow-up post will come soon with my explanations of how they work:

  • IConnection
  • IModel
  • QueueHelper.EnsureQueue()
  • HandleSimpleCast
  • rpcClient.Cast()
  • Subscription
  • ExchangeDeclare()

Also check back, as I may have completed a Prezi on the subject.