Thinking in a document centric world with RavenDB @ ALT.NET

Last night (25th Feb 2014), I presented on RavenDB at ALT.NET Melbourne.

I got some great feedback from the audience and was happy to share my experience so far with RavenDB. If you were there, or have watched the recording, and have any suggestions, good or bad, I would love to hear them so I can improve.

Here’s the ALT.NET recording with slides, plus me up at the projector screen.

If you just want the slides and audio then here’s an alternate recording.

I’ve also put the slides up on SlideShare.

Tracking application errors with Raygun.io

In a nice coincidence, news of Raygun going into public beta crossed my radar a few weeks ago.

At the time we were fine-tuning some things in an application that was in a private beta. We had put a little effort into ensuring we would get reliable details about the errors users hit, but at that point we were just storing those details in a database table.

Background

We were capturing 3 levels of errors in the application.
– Client-side (JavaScript)
– Web Tier (ASP.NET MVC / WebApi)
– Back-end (Topshelf hosted services)

Any client-side error is captured and sent to the web tier; the web tier then forwards that, along with its own errors, to the back end, where they are persisted with low overhead. I have covered this approach in a previous post.

But getting from entries stored in a database table to something actually useful for monitoring errors and starting a resolution process is quite a bit of work.

Given our application structure, we can easily query that table, and just as easily send emails to the dev team when errors occur. But this still falls short of a robust solution, so after a quick glance at the Raygun feature list there was very good reason to give it a go.

What it took for us to set up Raygun

A quick look at the provided setup instructions and their GitHub sample suggested it would be very easy.

With our particular application structure, the global Application_Error method and the sample usage of Server.GetLastError() didn’t fit well. The clearest example is error data arriving from the client side, which isn’t a .NET exception, so simply issuing the RaygunClient().Send(exception); call doesn’t work. In this scenario we essentially recreate an exception that represents the issue in the web tier, then send that to Raygun.
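
As a rough sketch of that idea (the controller and payload class names here are made up for illustration, not taken from our code base), the web tier can wrap the client-side report in an exception and hand it to Raygun:

// Hypothetical Web API endpoint that receives error reports posted by our client-side JavaScript.
public class ClientErrorsController : ApiController
{
    public void Post(ClientErrorReport report)
    {
        // There is no .NET exception to send, so recreate one that represents the client-side issue.
        var exception = new Exception(
            string.Format("Client-side error: {0} (at {1})", report.Message, report.Url));

        new RaygunClient().SendInBackground(exception);
    }
}

// Hypothetical shape of the payload the JavaScript error handler sends up.
public class ClientErrorReport
{
    public string Message { get; set; }
    public string Url { get; set; }
    public string StackTrace { get; set; }
}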

For errors that originate in our controllers (regular MVC and WebApi), which extend a common base class, we make use of the HandleError attribute and an OnException override so we can do some extra work; the code looks like:

[HandleError]
public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        // our other logic, some to deal with 500s, some to show 404s

        // make the call here to Raygun if it was anything but a 404 that brought us here
        new RaygunClient().SendInBackground(filterContext.Exception);
    }
}
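
One wrinkle worth noting: ApiController doesn’t expose the same OnException(ExceptionContext) override as MVC’s Controller, so on the WebApi side an exception filter can do the equivalent job. A minimal sketch, assuming the standard System.Web.Http.Filters types, looks like:

public class RaygunExceptionFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext actionExecutedContext)
    {
        // Ship the exception off to Raygun without blocking the response.
        new RaygunClient().SendInBackground(actionExecutedContext.Exception);
    }
}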

In the scenarios where we actually do have the exception, it’s great and it “just works”: we send it off asynchronously from the catch block by calling a wrapping function like this:

public static void LogWithRaygun(Exception ex)
{
    new RaygunClient().SendInBackground(ex);
}
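
A typical call site then looks something like the following (DoBackEndWork and the ErrorLogging helper class are illustrative names, not our actual code):

try
{
    DoBackEndWork(message);
}
catch (Exception ex)
{
    // Hand the exception to Raygun asynchronously, then let our normal error handling continue.
    ErrorLogging.LogWithRaygun(ex);
    throw;
}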

Conclusion

So Raygun really helped us avoid relying on a weak, hand-rolled, half-way solution for tracking errors. We now get nice email notifications that look like this and link through to the Raygun detailed information view.

It’s lacking a few nice-to-have features, but that’s more than acceptable for version 1 of the application, and from what we’ve been told our suggestions are already on track for a future release. One in particular that would benefit lots of people is letting the user map an association between errors: two seemingly different errors get logged that in actual fact have the same cause, and this way the reporting and similarity tracking can group the two variations under the one umbrella.

raygun email example

Along with the dashboard summary.

Part of the Raygun dashboard

It’s one less thing we need to worry about. Just an FYI: we didn’t stop saving records into our own database table, we’re just unlikely to have to go looking in there very much, if ever.

PowerShell Recursive Rename for an SVN directory

On a large repository I was attempting to rename the SVN tracking folders that are nested at every directory level. I needed to do this because of a difference in the leading character: ‘.’ (period) vs ‘_’ (underscore). I know this could easily have been resolved with a new fetch, but I wanted to avoid a lengthy download over a VPN connection.

I thought I would quickly list, as a blog post, some of the PowerShell commands I was playing with to clean up the repository.

The closest I got to a solution, albeit with a lot of errors/warnings during the process, was:

Get-ChildItem * -Recurse -force | Where-Object { $_.Mode -eq "d--h-" } | Rename-Item -force -newname '_svn'

It seems the -force parameter was required; I’m not sure why it errors, but it still works. Further investigation would be around how many times the command runs per directory, possibly too many. Another avenue would be suppressing the errors (for example with -ErrorAction SilentlyContinue), but that’s likely only going to obscure any issues.
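
For what it’s worth, one variation I’d try (untested on a large working copy) filters on the folder name rather than the mode string, so folders already renamed to ‘_svn’ get skipped on repeat runs:

Get-ChildItem * -Recurse -Force | Where-Object { $_.PSIsContainer -and $_.Name -eq '.svn' } | Rename-Item -NewName '_svn' -Force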

Just for reference, here’s what else I tried; these did not succeed.

Get-Childitem -path . -include .svn -recurse | Rename-Item -newname {$_.name -replace '.svn','_svn'}
Get-Childitem -path . -include .svn -recurse | foreach { Rename-Item .svn _svn }
Get-Childitem -path . -recurse -include '.svn' | foreach { Rename-Item .svn _svn }
Get-Childitem -path . -recurse | rename-item -newname { $_.name -replace '.svn','_svn' }
Get-Childitem -path . -recurse -include .svn | move-item -destination _svn

If you’re a PowerShell expert feel free to correct my possibly misguided attempt at a recursive rename.

Update 29th Dec 2011
I stumbled upon someone much more clever undertaking a similar rename process. In their case it rewrites jQuery version references inside files, but the logic serves the same purpose: the content-replacement step can be swapped for a Move-Item (or Rename-Item) call to cover my scenario (a rough sketch of that follows the script below).

$find = 'jquery-1\.4\.4'        # regex pattern to look for
$replace = 'jquery-1.5.1'       # replacement text (a literal, so no escaping needed)
$match = '*.cshtml' , '*.vbhtml'
$preview = $true                # set to $false to actually rewrite the files

foreach ($sc in dir -recurse -include $match | where { test-path $_.fullname -pathtype leaf} ) {
    select-string -path $sc -pattern $find
    if (!$preview) {
       (get-content $sc) | foreach-object { $_ -replace $find, $replace } | set-content $sc
    }
}
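
Adapting that preview/replace pattern to my folder rename problem might look something like the sketch below (untested, and using Rename-Item rather than Move-Item, though either should do the job):

$preview = $true

# find hidden '.svn' folders at any depth and list them; flip $preview to actually rename
foreach ($dir in Get-ChildItem -Path . -Recurse -Force | Where-Object { $_.PSIsContainer -and $_.Name -eq '.svn' }) {
    $dir.FullName
    if (!$preview) {
        Rename-Item -Path $dir.FullName -NewName '_svn' -Force
    }
}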

A Look Back at Discovering PowerShell

This is part 1 of a 3-part series in which I will be creating a PowerShell script that accepts a folder location as input via a standard Windows popup dialog and then performs some repetitive action. I’ll get to the details in the next post, where I actually build the script. In the 3rd post I will be putting it to use…

But first, some background on PowerShell and some details on my awkward attachment to it. Back in December 2007 I attended a Readify (RDN) session about PowerShell presented by Mitch Denny and was blown away by its potential and power, along with an attention-grabbing demo of it running a Space Invaders game. I proceeded to adapt his demo and do my own investigation, and the following month I demonstrated the PowerShell concepts internally to my colleagues who specialise in the .NET development space.

I was trying to promote the use of PowerShell to replace a large collection of batch (.bat) scripts we were using at the time across many projects, to do things ranging from mass source control check-outs/check-ins to building production deployment packages, yes **shudder**. Sadly, more often than not I did not make time to improve or replace the batch scripts I encountered, whether through PowerShell or alternate approaches to running a script.

Double-checking with this post on the MSDN PowerShell blog, I can confirm that at the time it was the first CTP release of PowerShell 2.0 that got my attention. Having now switched to Windows 7 and currently working on a new laptop, it was fantastic to see PowerShell already installed out of the box, which I found simply by typing into my Start menu.

A few months back I discovered a site that got my attention again with little tips and video tutorials about using PowerShell. Sadly a lot of their videos are no longer available, but I still subscribe to their almost-daily newsletter with a single tip per day. So for a while I’ve been amassing the emails from PowerShell.com in one of my Gmail accounts, labelling them and thinking “oh, that’s a cool tip” but not having the time to test them out. Well, the other day the “perfect” tip for a planned task came along to get me cracking on this script and these blog posts.

In the next post I’ll be running through its actual creation.

ASP.NET MVC 1.0 Out and Free (as in Free Speech)

So if you feel the need, go nuts: use it, customise it, and contribute – http://www.codeplex.com/aspnet.

For detailed information go right to the source: Scott Guthrie’s blog post, along with Scott Hanselman’s blog post, which has some additional information.


To go way off topic. COBOL.

This post was inspired by a recent StackOverflow question about reading a file line by line in various languages. I quickly (evidently not quickly enough) dug up some 3rd-year uni labs on COBOL and tried to clean them up into a basic file reader. The question was closed and made into a community wiki while I was still putting the comments on my COBOL code.

So I’ll be sharing it here. But first some background.

Back then we were writing labs and assignments to process large files, with the goal of doing it efficiently, in particular on a machine that could not hold the entire file contents in RAM. Having said that, with the even larger volumes of data today it is still not always practical to load an entire file into memory.

The time I speak of wasn’t that long ago (only 2004), so we had reasonable machines hosting our Linux development environments. From what I recall the hype back then was 64-bit architectures, with the AMD Opteron having only just come out. And we’re still running x86 OSes, shame.

The subject focused on understanding the processing cost of hard drive access and had us calculating latency, seek, and read times. That’s less of a concern with the speed of hard drives today (or even back then), but nonetheless very useful for learning the fundamentals of the machine. I believe we were dealing with files under 1 GB, but we were obviously restricted from loading the entire file into memory at once. We were also required to incorporate sorting algorithms; there’s no point in just reading and dumping a file, the trick being you couldn’t access the entire file, so you had to sort it in chunks. But that’s another concept.

The most annoying aspect was having to write code for a compiler that couldn’t handle a code file width greater than 80 characters. So the trick we were taught was to put some comments at the top counting out the columns (we would also adjust our terminal windows to 80 characters wide):

      *  1         2         3         4         5         6         7
123456*89012345678901234567890123456789012345678901234567890123456789012
      *

Note that in the samples COBOL reserved keywords are capitalised. The use of the full stop was also a great headache for us, even though we were learning C/C++ and were already well accustomed to putting semicolons everywhere, especially after struct/class definitions ({ };). The trick was that not every line gets a full stop; as a coding style you would continue onto the next line for nested/associated sub-calls.

I don’t think there’s a WordPress ‘sourcecode’ tag attribute that will accept language=’COBOL’, so the highlighting won’t be perfect, even though it’s not perfect for a lot of the WCF and C# 3.0 code either.

We would always start a code file off with some descriptive information, of course indented to start at the 7th character position. The first six positions were reserved for line numbers, but it wasn’t mandatory to put line numbers in that region, at least not for our compiler.

IDENTIFICATION DIVISION.
PROGRAM-ID. myCobolFileInput.
AUTHOR. nj.

Then get right into declaring and opening file handles.

ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.

FILE-CONTROL.
  SELECT file-in
    ASSIGN TO 'input.dat'.

In this example we’re handling a file that’s 80 characters wide too (79 data characters, 1 newline \n).

DATA DIVISION.
FILE SECTION.
FD file-in.
01 line-in.
  05 data-part.
    10 current-line            PIC X(79).
  05 line-end-marker         PIC X.

WORKING-STORAGE SECTION.
77 end-of-data  PIC XXX.

The main processing block (program loop etc).

PROCEDURE DIVISION.
100-executive-routine.
    PERFORM 200-open-files.
    PERFORM 300-read-input.
    PERFORM 400-write-output
        UNTIL end-of-data IS EQUAL TO "yes"
    PERFORM 500-close-file.

    STOP RUN.

And the rest:

200-open-files.
    OPEN INPUT  file-in.
    MOVE "no" TO end-of-data.

300-read-input.
    READ file-in
        AT END MOVE "yes" TO end-of-data.

400-write-output.
    DISPLAY current-line.
    PERFORM 300-read-input.

500-close-file.
    CLOSE file-in.

END PROGRAM myCobolFileInput.

No guarantees that’ll run as expected, as I really cannot be bothered even googling what would be required to get a compiler set up to handle this.

Interestingly, the book we were using for the course (still on my bookshelf), Mastering COBOL Programming (Palgrave Master), is available on Amazon.com.

I will get back to WCF/WPF material soon.