Moving from WebAPI to ServiceStack

Having recently used Web API in a hybrid Web API and ASP.NET MVC app, I can say it does a good job, but once you get deeper into a more complex application some weaknesses start to show. A trivial example is mapping exceptions to HTTP status codes; this is something you get easily with ServiceStack.

The Web API controllers looked like this, with a route prefix at the top of the class and the specific route part on each action:

    [RoutePrefix("/api/product")]
    public class ProductController : ApiController
    {
        [GET("{id}")]
        [AcceptVerbs("GET")]
        public Product Get(ProductId id)
        { /* ... */ }
        
        [POST("create")]
        [AcceptVerbs("POST")]
        public void Post(CreateProductCommand cmd)
        { /* ... */ }
    }
	
    // New style as route decorating an F# record
    [<TypeScript>]
    [<Route("/product/create", "POST")>]
    type CreateProductCommand =
        { ProductId: ProductId
          Name: string }

Yes, F#. Check out my post on my initial learnings with F#. There’s something interesting about our route decorations on that record type, and I’ll try to get around to writing about it; for now the relevant part is that the route lives on the request DTO, ServiceStack style.
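On the exception-to-status-code point from the intro, the ServiceStack side looks something like this. This is a hedged sketch rather than our production code: the `GetProduct` DTO, `ProductService` and the lookup call are invented for illustration, while `Route`, `Service` and `HttpError` are the real ServiceStack types (namespace shown is the consolidated `ServiceStack` one; older v3 splits these across a few namespaces).

```csharp
using ServiceStack; // v3 splits these types across ServiceStack.* namespaces

// Illustrative request DTO; the route sits on the DTO, not a controller.
[Route("/product/{Id}", "GET")]
public class GetProduct
{
    public string Id { get; set; }
}

public class ProductService : Service
{
    public object Get(GetProduct request)
    {
        var product = FindProductById(request.Id); // hypothetical repository call
        if (product == null)
            // ServiceStack maps this to a 404 response for us, rather than
            // us translating exceptions to status codes by hand.
            throw HttpError.NotFound("Product " + request.Id + " does not exist");
        return product;
    }
}
```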

Issues with WebAPI

Our primary issue with WebAPI was that its route matching was limited.

As a result, it frequently did not match routes in the way we expected. This cost us a great deal of time fiddling about, trying to come up with a pattern that would satisfy Web API, often at the expense of our public API design. We also wanted to take advantage of routing features that already exist in ServiceStack but are still only planned for Web API.

Finally, as our product’s hosting needs grow, we may like to take advantage of cheaper Amazon machine images and run our services on Linux; ServiceStack is a first-class Mono citizen.

Conclusion

We’re quite happy with ServiceStack so far.

[Update]
Months later in production, still very happy.

Cross-subdomain ASP.NET Forms Authentication for local development

I’ve had this issue twice now, and both times my search landed me on this popular Stack Overflow question. But adding an answer that doesn’t directly* answer a popular question will get the attention of the down-vote police.

*For some values of direct.

So I’ll just have to blog it here, and maybe a comment will help out someone who ends up on that question, at least until the comment is flagged as unconstructive or offensive because “somewhat related” isn’t in the spirit of Stack Overflow.

So, with the grievance aired, on to the solution.

Objective

To be able to have subdomain1.machine-name and subdomain2.machine-name share a cookie locally via forms authentication.

Steps

These are the steps to get an authentication cookie saved and valid across multiple subdomains locally under IIS.

Configurations

The most important thing here is to ensure that your local domain has at least one ‘.’ in it. I often try to just use the machine name, but that does not work, so pick something like a .app suffix.

Authentication configuration section in web.config:

   <authentication mode="Forms">
      <forms loginUrl="~/login" timeout="2880" domain="pic-nick.app" />
   </authentication>

IIS Setup

Will look like this:

iis settings

HOSTS File

hosts file
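The hosts file entries behind that screenshot would look something like this (the red/blue subdomain names are the ones used in the example below; substitute your own):

```
127.0.0.1    red.pic-nick.app
127.0.0.1    blue.pic-nick.app
```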

Done

There we go. With this set up you can go to red.pic-nick.app and blue.pic-nick.app and share the authentication cookie, staying logged in to your app across subdomains locally.

dashboard blue

Troubleshooting

I also ran into some extra issues on Windows 8 similar to this StackOverflow question.

Exception from IIS:

HTTP Error 500.19 – Internal Server Error

The requested page cannot be accessed because the related configuration data for the page is invalid.


This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".

To solve this you probably need to enable some Windows Features related to security and .NET.

features toggle pointing

Tracking application errors with Raygun.io

A nice coincidence a few weeks back was the news of Raygun going into public beta crossing my radar.

At the time we were fine-tuning some things in an application that was in a private beta. We had put in a little effort to ensure we would get reliable reports about errors that happened to users, but at that point we were just storing the details in a database table.

Background

We were capturing 3 levels of errors in the application:
– Client-side (JavaScript)
– Web Tier (ASP.NET MVC / WebApi)
– Back-end (Topshelf hosted services)

Any client-side error would be captured and sent to the web tier; the web tier forwards that, along with its own errors, to the back end where they are persisted with low overhead. I covered this approach in a previous post.

But getting from entries stored in a database to something actually useful for monitoring and kicking off a resolution process is quite a bit of work.

Given our own application structure, we can easily query that table, and just as easily email the dev team when errors occur. But this is still short of a robust solution, so after a quick glance at the Raygun feature list there was very good reason to give it a go.

What it took for us to set up Raygun

A quick look at the provided setup instructions and their GitHub sample suggested it would be very easy.

With our particular application structure, the global Application_Error method and the sample usage of Server.GetLastError() didn’t fit well. The clearest example is data arriving from the client side, which isn’t a .NET exception, so simply issuing the RaygunClient().Send(exception); call doesn’t work. In this scenario we recreate an exception that represents the issue in the web tier, then have that sent to Raygun.
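As a sketch of that recreation step (the wrapper class and `report` DTO names are assumed for illustration; only `RaygunClient` and `SendInBackground` come from Raygun’s API as used elsewhere in this post), the web tier wraps the client-side report in a real .NET exception before sending it:

```csharp
using System;
using Mindscape.Raygun4Net;

// Hypothetical exception type representing a client-side error report.
public class ClientScriptException : Exception
{
    public ClientScriptException(string message, string url, string line)
        : base(string.Format("{0} (at {1}, line {2})", message, url, line)) { }
}

// In the action receiving the client-side report ('report' is an assumed DTO):
// var recreated = new ClientScriptException(report.ErrorMsg, report.Url, report.Line);
// new RaygunClient().SendInBackground(recreated);
```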

For errors that originate in our controllers (regular and WebApi), which extend a common base class, we make use of the HandleError attribute so we can run some extra logic. The code looks like:

[HandleError]
public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        // our other logic, some to deal with 500s, some to show 404s

        // call Raygun here if it was anything but a 404 that brought us here
        new RaygunClient().SendInBackground(filterContext.Exception);
    }
}

In the scenarios where we actually do have the exception, it’s great and it “just works”; we send it off asynchronously from the catch block by calling a wrapping function like this:

public static void LogWithRaygun(Exception ex)
{
    new RaygunClient().SendInBackground(ex);
}

Conclusion

So Raygun really helped us avoid a weak, hand-rolled halfway solution for tracking errors; we now get nice email notifications that look like this and link to the detailed information view in Raygun.

It’s lacking a few nice-to-have features, but that’s more than acceptable for version 1 of the application, and from what we’ve been told our suggestions are already on track for a future release. One in particular that would benefit a lot of people is letting the user map an association between errors. For example, two seemingly different errors get logged but in fact share the same cause; with an association, reporting and similarity tracking could group the two variations under one umbrella.

raygun email example

Along with the dashboard summary.

Part of the Raygun dashboard

It’s one less thing we need to worry about. FYI, we didn’t stop saving records into our own database table; we’re just unlikely to go looking in there very much, if ever.

When you need to generate and send templated emails, consider mailzor

Mailzor is a basic utility library to help generate and send emails using the Razor view engine to populate email templates, designed to be quickly pluggable into your .NET app.

In our applications we send out HTML-formatted emails seeded with a variety of data. I thought it would be easy to write them as Razor files (cshtml), then use the Razor engine to generate and send them.

It’s up on NuGet and with the release of v1.0.0.11, it’s more stable.

For the most up to date info follow along with the usage sections of the readme.md file on the github repository.

How it works

I thought I would share some background about its development and the hiccups along the way. The original code came from Kazi Manzur Rashid, who solved the problem of making use of System.Web.RazorTemplateEngine; I extended it (with permission) to be usable as an injectable dependency and via NuGet.

The core elements are: creating and managing the SMTP client, building up the MailMessage, and all the compilation-related work to get the RazorTemplateEngine up and running.

The RazorTemplateEngine logic boils down to taking the Razor file stored on disk and compiling it via CSharpCodeProvider.CompileAssemblyFromDom. So if you’re curious about this code in particular, dig into EmailTemplateEngine.cs in the project files.
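To make that mechanism concrete, here’s a toy version of the compile-and-run step using CompileAssemblyFromSource, a close sibling of CompileAssemblyFromDom. This is not mailzor’s actual code: mailzor generates the source from the parsed Razor template, whereas here it’s hard-coded, and CodeDom compilation like this only runs on .NET Framework / Mono.

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class TemplateCompilerDemo
{
    static void Main()
    {
        // Source that the template engine would instead generate from a .cshtml file.
        const string source =
            "public class GeneratedTemplate" +
            "{ public string Execute(string name) { return \"Hello \" + name; } }";

        // Compile the generated source into an in-memory assembly.
        var results = new CSharpCodeProvider().CompileAssemblyFromSource(
            new CompilerParameters { GenerateInMemory = true }, source);

        if (results.Errors.HasErrors)
            throw new InvalidOperationException("template compilation failed");

        // Instantiate the compiled template type and invoke it via reflection.
        var type = results.CompiledAssembly.GetType("GeneratedTemplate");
        var template = Activator.CreateInstance(type);
        var output = (string)type.GetMethod("Execute")
                                 .Invoke(template, new object[] { "world" });
        Console.WriteLine(output); // prints: Hello world
    }
}
```

When template compilation fails in a setup like this, the errors surface through `results.Errors`, which is exactly why debugging the real thing meant hunting through compiler output.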

Prior to version 1.0.0.10 this was messier; that’s when I went down the path of using ILMerge to solve version-mismatch conflicts with System.Web.Razor.

It seems easy given that I took an existing chunk of operational code and extended it, but it only seems easy when it is working. When it doesn’t work and you’ve got strange compilation errors, debugging this mechanism is not the greatest; I found myself hunting for temporary files and turning on extra compiler flags to output more information.

In the early versions it was heavily a case of “works on my machine”, but now it’s fine and seems to be feature complete…

Playing with AppHarbor, Twitter and WebAPI.

What?

This sample application is very contrived, and came out of a throw-away Twitter account creation joke.

what started it

Landing Page: usedguids.apphb.com

Usage Info: gist.github.com/3964492

Service Features

  1. Submit a Guid, have it reserved
  2. Will inform you if it’s taken, or ok.
  3. Tweets

Tech Details / Steps

  • File, New, Web Api Project
  • ASP.NET 4.5 Web Api Controller
  • PM> Install-Package TweetSharp (nuget link, github link)
  • Git push to remote repo on BitBucket
  • AppHarbor link to BitBucket account (this was great, very easy)
  • Select an app name usedguids.apphb.com

Api Controller Logic

    public class UsedGuidController : ApiController
    {
        public HttpResponseMessage Post(UsedGuidInputModel ug)
        {
            //check for duplicates,
            //twitter authenticate, tweet
            //save guid
            //return new HttpResponseMessage(what_happened)
        }
    }

Conclusion

It’s up and running; we’ll see how stable it is. In this case the code is very sloppy, as the focus was just getting the concept up and running, so I decided to host it on BitBucket where I have private repositories on a free account.

The AppHarbor experience was great, no fuss to get it up and running, via the authorise AppHarbor app action when I was taken to BitBucket. Even setting up a back end store was very easy.

app harbor ui

The hardest part was working out how to deal with the Twitter API, and that was only tricky because I was in a hurry to just get it working, without reading enough documentation.

It’s unlikely I’ll make time to tidy up the code enough for it to be of reasonable use to anyone; there are too many hack points needed to get it operational, particularly around the Twitter API keys for the application that performs the posting and the user linked to the account. Not to mention hard-coded connection strings with passwords in them. Quite a long list of what-not-to-do.

I did like that the wizard does warn you of such bad behaviour. There’s some insight into the storage model on the back end 😉

EfDataModelCreation

Queuing ajax calls to ASP.NET WebApi Controllers

Objective

To halt processing of subsequent ajax calls after one causes an error.

Why

Any actions related to a similar set of data (or concept) all go through the same queue. This way, if something falls over during the processing of any given request, halting the processing of further requests should avoid making things worse, or at least avoid subsequent errors that are a direct result of the first.

When something unexpected has happened, it’s likely the underlying system data is not in a valid state. Further actions may complicate things, possibly making matters worse; more likely, further actions will also error, and we shouldn’t subject the system to handling them. So the solution is to get the user to reload the application into a known good state, i.e. a complete fresh request of the data.

There’s a competing concern that some user actions may be desirable to process even after unknown/unexpected errors occur, but for now we’ll assume these are rarer and can be dealt with specifically by having them bypass this queue (or use an alternate queue).

How

A toggled case
– After an error set a flag that will keep prompting the user to reload the page after an error.

A fixed case
– Do not process items already queued up in the processing queue.

Queue entry management

This part is simple. Most of the code is plumbing around building up the ajaxQueue call.

#CoffeeScript
class ActionCommandQueue

    validState: true
    
    #todo: surface the console log issues to the user

    sendData: (url, data, type, settings) =>
        settings = settings || {}
        settings.url = url
        settings.data = JSON.stringify(data)
        settings.type = type
        settings.contentType = 'application/json; charset=utf-8'
        settings.error = @onError
        if @validState
            $.ajaxQueue(settings)
        else
            console.log "You need to reload the page to continue. (click here)."

    onError: (xhr, status, error) =>
        @validState = false
        console.log "Sorry that action failed. Please reload the page and try again"

Usage

    #CoffeeScript

    queue = new ActionCommandQueue()
    queue.sendData '/api/Tasks', { description: "new task" }, 'POST'

http://api.jquery.com/queue/

The queue itself

This part is also simple, seeing how the hard work of creating a jQuery .ajaxQueue() method has already been done by gnarf; here’s the blog post outlining the code, which came out of a great Stack Overflow answer and this one.

Except for one modification

The .then( next, next ) call was replaced with a failure case. In production code it’s likely to be a silent failure, because there’s an error-handling function higher up in the call chain that reports the problem to the user (no need to repeat the message).

    function doRequest( next ) {

        var fail = function () {
            console.log('cannot continue to process queue after an error');
        };

        jqXHR = $.ajax( ajaxOpts )
            .done( dfd.resolve )
            .fail( dfd.reject )
            .then( next, fail /*was next*/ );
    }
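The same halt-on-error behaviour can be illustrated without jQuery, as a rough sketch using plain promises (all the names here are invented for the illustration, this is not the production code): each action is chained onto the previous one, so once any task rejects, everything queued after it is skipped rather than run.

```javascript
// Minimal halt-on-error action queue.
function createQueue() {
  let chain = Promise.resolve();
  return function enqueue(task) {
    // If an earlier task rejected, `chain` stays rejected and
    // `task` is never invoked; the rejection just flows through.
    chain = chain.then(task);
    return chain;
  };
}

// Usage mirroring the scenario below:
const enqueue = createQueue();
const ran = [];

enqueue(() => { ran.push('create task'); });
enqueue(() => { throw new Error('edit sub task failed'); });
enqueue(() => { ran.push('later action'); })
  .catch((err) => console.log('halted:', err.message));
// Once settled, ran === ['create task']; the later action never executed.
```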

An added benefit here is not just safety in the error case: the queue approach also ensures important user actions are dispatched and arrive in the correct order.

Example

Scenario
1. Create a Task.
2. Update task details
3. Add Sub Task
4. Edit Sub Task [Fails]
5. Some other action (possibly related to the sub task)
6. 7. 8. 9. all same as 5.

At step 4 it fails, because something prevented the sub task edit. But before step 4 failed, the user had already issued action 5 very quickly. Step 5 is what the above code prevents from running, because we’re not sure what the impacts may be.

Until I release some unit tests to demonstrate this, here is a crude annotation of the Chrome Developer Tools console output. The proof is the lack of calls to ‘doneCb‘ after the error: note the 500 error, then the 2 error messages that follow.

Ajax Queue Drop

Chrome debug output showing the error output.

Queues, Deferred and Promises

It’s beyond the scope of what I wanted to talk about here, but the jQuery.ajax wrapper jQuery.ajaxQueue makes use of Deferred, Deferred.then() and Promises; they’re worth looking into to gain a detailed understanding, along with some reading about the queue itself.

Conclusion

So now a user can click to their heart’s content on a variety of partially related actions on a page, each firing off an ajax() request. If something goes wrong they can be alerted, and you avoid a flood of subsequent errors.

How much to scratch your own itch as a Startup?

Let me first define the itch concept: from here on, the itch will refer to how far to take your own opinions and desires about how a piece of software should operate.

So the question is from the title:

Q: How much to scratch your own itch as a startup?

Let me answer this right up front:

A: The correct amount.

Off the back of the Thursday night WDYK event, some startup and user experience talking points were raised that I wanted to discuss. I started off trying to fit this into the last post, but it didn’t quite belong there. I’m working in a startup-esque environment right now and just wanted to put some ideas down on ‘paper’, so here goes…

Scratch?

The statement that sent me off on this thought path was “don’t just scratch your own itch“, it’s good that Joel reminded the audience of this. Often as software developers we inject too much of our own usability ideas into the software being built. This falls over when we eventually realise this is not how typical users of our application would like to use it, even if they are other software developers. What I’m currently working on is not a core software engineer’s tool, say like bug tracking software. It is targeted at a specific type of user and process. But of course we developers often put our ‘application user’ hat on as we build features. Knowing that we’re building the system for someone other than ourselves doesn’t inhibit members of our development team from having strong opinions on how it should operate. This isn’t a bad thing…

Start scratching

There is something in the argument of “scratching your own itch” being beneficial – it is a reasonable starting point to turning your idea into a functioning application. But you always need to keep in mind your needs aren’t going to be exactly those of the customer. There’s a fair bit of this kind of opinion floating around on blogs: “Focusing on our own problems doesn’t necessarily mean we’re solving other people’s problems, or solving problems that matter at scale“- Ben Yoskovitz, source.

When subject matter experience and expertise come into play in building the application, you possibly are focusing on your interpretation of the problems that do matter. You can’t always get the cleanest or best problem definition from your users. So you go on to manage the itch, combining that expertise and opinion with some user experience analysis. Your team probably has a vision of what the application will be about, heading towards hopefully at least one killer feature or aspect that makes your product stand out. So you go forward combining your ideas and refining with some user testing, nowadays focused around the user’s experience and flow through the application.

Scratch right

This is how you get to that magical place which is the correct amount of scratching your own itch. Have the application be capable of (within reason) all you desire, but rein that in to simpler flows, refined by actual results of real users navigating through the system to achieve their normal expectation of work supported by the system.

When my product reaches that magical place, I’ll share what it took to get there; for now it’s just an objective off in the not-too-far distance.

Scratch well

Joel, describing his own start-up, raised some great points about not needing venture capital, and how your product would likely be better off without it: there’s no financial pressure from an investor wanting to cash out sometime down the track. The way you do this is to:

1. Make something people want,
2. Make it better than what is out there,
3. Tell people about it.

That last point was a great theme to touch on. Mark had his own ideas on this, which were about going out to find your users, and not just shouting, as he put it (a company blog, a company Twitter account). Finding and engaging with users is critical; this is what it takes to get the widest range of feedback to help build your application. But it needs to be guided into a solution that doesn’t look like the output of a very large handful of ‘committee design meetings’.

As an FYI: Joel was Joel Friedlaender, founder at Red Guava, and Mark was Mark Mansour, founder at Agile Bench.

Web Directions South – What Do You Know? Night in Melbourne

Last night, Thursday 23rd August 2012, I went along to the What Do You Know? event held at The Appartment, a great little place I used to frequent when I was working on Exhibition Street.

Earlier this year I was at the Web Directions South Code event, so anything put on by the Web Directions South team is great and you should attend; in particular the upcoming conference in mid-October 2012 in Sydney.

So, back to Thursday night’s event. There were 12 lightning talks, each 5 minutes long. I’ll list them off with links to what was most interesting or their entire presentation.

I wanted to talk about a few that stood out to me, mostly because it’s relevant to what is happening at Picnic Software at the moment.

The 4 presentations that stood out as very relevant to what we’re doing at Picnic Software were:

  • Mark – 5 Simple Things You’ll Forget When You Start a Startup
  • Joel – DevOps for Startups: Tales from the trenches
  • Matt – What the $%&# is UX Design?
  • Will – The User is Drunk

I started off by trying to fit it in to this post, but it ended up longer than I expected so it’s here: How much to scratch your own itch as a Startup?.

Summary of the event with links, in order of appearance:

The State of Our Web Performance Union
John Bristowe, @JohnBristowe

Content being delivered over the web is growing in size faster than bandwidth is increasing. Aim for performance. Have a look at the data trends at httparchive.org

DevOps for Startups: Tales from the trenches
Lucas Chan, @geekylucas

Monitor and be ready for spikes. Uptime is critical. Don’t build what you can rent.

What the $%&# is UX Design?
Matt Magain, @mattymcg

Watch this on YouTube and check out uxmastery.com

A whirlwind tour of D3.js
Tony Milne, @tonymilne

It’s very powerful, check it out d3js.org/

A brief introduction to the Gamepad API
Anette Bergo, @anettebgo

html5rocks.com/en/tutorials/doodles/gamepad/

Getting Sourcey with Javascript
Michael Mifsud, @xzyfer

Source Maps are the future of debugging the web – html5rocks.com/en/tutorials/developertools/sourcemaps/

Startup Myths Debunked
Joel Friedlaender, @jfriedlaender

Named some common myths that are all likely wrong: failure is high (e.g. 9 in 10 startups fail), your idea is worthless, you need venture capital, the only cost is your time.

CSS checkboxes and the ridiculous things you can build with them
Ryan Seddon, @ryanseddon

cssn.in/ja/wdyk2012

50 handy things you’ve never heard of
Charlie Somerville, @charliesome

charlie.bz/presos/50resources/

From zero to superpimp mobile web app using Tres
Julio Cesar Ody, @julio_ody

Julio amazingly wrote some non-trivial JavaScript using Backbone.js and his library:
tres.io

The User is Drunk
Will Dayble, @willdayble

Good UI is ‘not there’, say things twice (icon and words), you can’t beat over the shoulder testing (watching your user).

5 Simple Things You’ll Forget When You Start a Startup
Mark Mansour, @markmansour

Marketing (is critical and not easy), Product (focus on benefits and customers), Promotion (talk to customers), Price (Tiers and known costs for customers), Place (don’t shout at your customers, go find them)

Automating IIS actions with PowerShell – Create Multiple Sites

I’m working towards a more complex SignalR-based post, but in the meantime part of the work on that involves setting up a few ASP.NET web apps.

If you’re after a more comprehensive guide check out this post on learn.iis.net by Thomas Deml. I’ve summarised the steps required to get some basic .NET 4 web applications deployed.

Objective
To create N identical websites in local IIS, each with an incrementing name Id, and each linked to the same application directory. The exact reason why will come in a future post; for now, consider it an exercise in manipulating IIS via PowerShell.

s1.site.local
s2.site.local

Step 1 – Ensure you’re running the scripts in x86 mode.

This seems to be quite a common problem, with a Stack Overflow question about it. I haven’t worked out a way around this yet, but this is the error when not running as x86:

New-Item : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154.
At line:1 char:9
+ New-Item <<<< AppPools\test-app-pool
+ CategoryInfo : InvalidArgument: (:) [New-Item], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.NewItemCommand

Step 2 – Import-Module WebAdministration
This loads the IIS provider, which allows you to navigate IIS the same way you would the filesystem:

> CD IIS:\

Step 3 – Create & Configure App Pool

    New-Item AppPools\test.pool

    Set-ItemProperty IIS:\AppPools\test.pool -name "enable32BitAppOnWin64" -Value "true"

    Set-ItemProperty IIS:\AppPools\test.pool -name "managedRuntimeVersion" -Value "v4.0"

NOTE: here I didn’t have any luck storing the app pool path in a variable and then using Set-ItemProperty, hence the repetition.

Step 4 – Variables

    $sitesToCreate = 10
    $path = "C:\dev\project-x\App.Web"
    $appPool = "test.pool"

Step 5 – Create Site & Link AppPool Loop

For N (10) down to 1:

  • Create a new site
  • Set binding info and physical path
  • Set the app pool

    while ($sitesToCreate -gt 0)
    { 
        $siteName = "s" + $sitesToCreate + ".site.local"
        $siteWithIISPrefix = "IIS:\Sites\" + $siteName
        Write-Host "Creating: " $siteName
        
        $site = New-Item $siteWithIISPrefix -bindings @{protocol="http";bindingInformation="*:80:" + $siteName } -physicalPath $path
        
        Set-ItemProperty IIS:\Sites\$siteName -name applicationPool -value $appPool
        $sitesToCreate--
    }

Note: ‘appPool’ is a text variable, not the result of ‘Get-Item’. Set-ItemProperty operates on a path, not on a variable representing the item.

We’re done

One last note: to get these to resolve correctly on your local developer machine you’ll need to modify your hosts file.
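The hosts entries follow the site names from the loop above, one line per site:

```
127.0.0.1    s1.site.local
127.0.0.1    s2.site.local
```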

IIS multisite

The complete code up as a Github Gist.

Capturing client side JavaScript errors for later analysis

We’re getting close to pushing an application to a larger set of test users, and we’ll be interested in what happens when a larger variety of machine configurations (browsers and operating systems) and of course user actions encounter errors.

We started by simply catching any server-side errors and logging them to a single database table (in a separate database from the application’s).

To achieve this, it’s as simple as having a very lightweight database connection mechanism to insert the log entry record; this way, if there was an error in our application’s standard database access mechanism, we could still capture that too.
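As a sketch of that lightweight mechanism (the table, column and connection string names are invented for the illustration), plain ADO.NET is enough, and it deliberately shares nothing with the application’s normal data access:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;

public static class ErrorLog
{
    // Deliberately avoids the application's ORM/session infrastructure,
    // so a failure there cannot stop us recording the error.
    public static void Insert(string message, string stackTrace)
    {
        var connectionString = ConfigurationManager
            .ConnectionStrings["ErrorDb"].ConnectionString; // assumed name

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO ErrorLog (OccurredAt, Message, StackTrace) " +
                              "VALUES (@at, @msg, @st)";
            cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@msg", message);
            cmd.Parameters.AddWithValue("@st", stackTrace ?? string.Empty);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```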

In this fashion we can be reasonably sure that, short of a major production environment failure (network infrastructure or machine-specific), we’ll be able to capture errors while the system is being used. Those types of issues are handled differently.

Then we realised it would be just as helpful to capture and store client side JavaScript errors in a similar fashion.

Capture

The JavaScript error handler looks like this, making use of window.onerror:

window.$debug.globalErrorHandler = function () {
  window.onerror = function (m, u, l) {
    
    $.ajax({
      url: '/Error/Occurred',
      type: 'POST',
      dataType: 'json',
      data: JSON.stringify({ errorMsg: m, url: u, line: l, uri: window.location.href }),
      contentType: 'application/json; charset=utf-8'
    });
    
    return true;
  };
};

An extra note here: I came across a JS library that offers a common printStackTrace() method, created by Eric Wendelin (GitHub project). It doesn’t work the way I would have hoped in our global error handler, but that’s the nature of the error handler event.

Nevertheless it does look quite helpful. To make use of it you need a specific try { } catch { } block around something that may fail; check out the readme on the GitHub project page.

Back to the main focus of this post…

We can also call the $debug.globalErrorHandler() method in some central part of the web application, so that later it doesn’t have to always be turned on.

Receive

The MVC side of this is even simpler. Since our users are logged in while using the application, we have ‘context’ information about them, which is one extra useful piece of information we can capture as part of the error.

public class ErrorController : Controller
{
  public void Occurred(ClientSideJavaScriptException error)
  {
    // this context in our application represents the user, so we know who experienced the error.
    error.User = _userContext.UserName;

    // we send this off elsewhere to be persisted, you could simply persist it here
    _service.Handle(error);
  }
}

The ClientSideJavaScriptException class simply has the properties required to send over the information from the ajax post.

public class ClientSideJavaScriptException
{
  public string ErrorMsg { get; set; }
  public string Url { get; set; }
  public string Line { get; set; }
  public string Stack { get; set; }
  public string User { get; set; }
}

Process

Finally, the persistence logic here is via the micro-ORM NPoco, selected because Adam said it was good 😉 and the NuGet package helped.

//setup
Db = new Database("configuration_key_name");

Db.Insert(new Details
{
  OccurredWhere = "client-side-js",
  ExceptionMessage = errorDetails.ErrorMsg,
  When = DateTime.Now,
  StackTrace = errorDetails.Stack,
  Method = "Line Number:" + errorDetails.Line
}