IIS, Visual Studio, unable to start debugging on the web server.

Background

So this is another post logging an issue after a frustrating problem solving process. This is to help future me when I forget the same Windows configuration step in another 8-15 months; maybe it’ll help you too.

On Windows 8 you have to go in and turn on many individual items just to get an ASP.NET IIS-hosted web app running. I’ve hit a variation of this problem in the past and blogged about it.

This time round it’s basically the same problem, but now Visual Studio, upon trying to spin up and attach to an IIS-hosted web project, kicks up 1 of 3 errors.

Issue

The most common one was:

Unable to start debugging on the web server. The debugger cannot connect to the remote computer. The debugger was unable to resolve the specified computer name.

Which is not helpful, and the posts I found were about people trying to remote debug other machines, or something else equally unhelpful.

Other variations included:

… error occurred on a send

Unable to start debugging on the web server.

If you dig further into the Windows event log you’ll be sent even further down the wrong path, with errors such as:

The program can’t start because SecRuntime.dll is missing from your computer. Try reinstalling the program to fix this problem.

Solution

Ensure that you turn on ‘Internet Information Services Hostable Web Core’, along with all the other .NET / IIS features you need that I mentioned in the previous post.

Windows Features
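
If you’d rather script it than click through the dialog above, a minimal sketch using the built-in DISM cmdlets should do the same job (the feature name below is the one from my machine’s listing; verify yours with the first command):

    # Run from an elevated PowerShell prompt.
    # List the IIS-related optional features and their current state:
    Get-WindowsOptionalFeature -Online | Where-Object { $_.FeatureName -like "IIS-*" }

    # Turn on the Hostable Web Core, the fix for the errors above:
    Enable-WindowsOptionalFeature -Online -FeatureName IIS-HostableWebCore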

Recovering a docker vm on Windows with Virtual Box

NOTE: this is more of a Windows / VirtualBox problem, but the boot2docker experience, as with a lot of things, is not great on Windows… yet.

I was running the docker VM with 2 newly configured containers and all was going well; the docker part was quite easy (once up and running). Then my entire machine crashed due to something else.

Reboot, open up a shell and type ./boot2docker.exe start

Failed to start machine “boot2docker-vm” (run again with -v for details)

So I ran it with -v: no useful information.

Oh no.

Hopefully I haven’t lost the containers I spent a day setting up. The first Windows issue with getting docker up and running was resolved by wiping all the data produced and starting fresh; that was not going to be an acceptable outcome this time around.

First things first: find out where that docker VM is:

virtual box UI

So it was in C:\Users\<username>\VirtualBox VMs\boot2docker-vm, and there it was: 580mb and several hours of work I didn’t want to do again. Minor relief; I made a copy of it.

Next step: getting docker back into a good state. I tried many combinations of poweroff, reset, init, and uninstalling and reinstalling; no luck. A note on this: none of those commands hurt the vmdk file, but still, make a copy.

So the next step: move that boot2docker-vm folder out of there, and do a new init…

Small success: docker starts.

Stop it and then try to drop the VMDK file back in…

Nope.

Failed to start machine “boot2docker-vm” (run again with -v for details)

Ok, so now it’s starting to look like the problem is specific to VirtualBox and just getting the VM to spin up again. A little bit of digging and I see the .vbox file (which is plain XML) has some UUIDs and MAC addresses; comparing the newly-created one to the backup, the differences that look to be the cause appear.

diff of vbox file
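
The screenshot came from a visual diff, but plain PowerShell can produce a rough version of the same comparison. A sketch, with both paths being assumptions to substitute with your own:

    # Show only the lines that differ between the fresh .vbox and the backup.
    $fresh  = "C:\Users\<username>\VirtualBox VMs\boot2docker-vm\boot2docker-vm.vbox"
    $backup = "C:\backup\boot2docker-vm\boot2docker-vm.vbox"
    Compare-Object (Get-Content $fresh) (Get-Content $backup)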

Success: change those values in the new .vbox (the machine uuid and the adapter MAC addresses) to match the backup.

It starts, and the containers are there, phew!

    docker ps -a

    docker start <containerid>

Summary

It appears that a new init sets up new adapter MAC addresses and a new machine uuid; the rest of the differences can stay, as they are specific to your old VMDK file. If you’re reading this, good luck, and as always YMMV.

Tracking application errors with Raygun.io

A nice coincidence a few weeks back was the news of Raygun going into public beta crossing my radar.

At the time we were fine-tuning some things in an application that was in a private beta. We had put in a little effort to ensure that we would get reliable results about errors that happened to users, but at that point we were just storing the details in a database table.

Background

We were capturing 3 levels of errors in the application:
– Client-side (JavaScript)
– Web Tier (ASP.NET MVC / WebApi)
– Back-end (Topshelf hosted services)

Any client-side error would be captured and sent to the web tier; the web tier forwards that, along with its own errors, on to the back end, where they would be persisted with low overhead. I have covered this approach in a previous post.

But to get from entries stored in a database to something actually useful for monitoring, and for starting a resolution process, is quite a bit of work.

With our own application structure we can easily query that table, and just as easily send emails to the dev team when errors occur. But this is still short of a robust solution, so after a quick glance at the Raygun features there was very good reason to give it a go.

What it took for us to set up Raygun

A quick look at the provided setup instructions and their GitHub sample suggested it would be very easy.

With our particular application structure, the global Application_Error method and the sample usage of Server.GetLastError() didn’t fit well. The clearest example is the arrival of error data from the client side, which isn’t a .NET exception, so simply issuing the RaygunClient().Send(exception); call doesn’t work. In this scenario we basically recreate an exception that represents the issue in the web tier, then have that sent to Raygun.

For errors that originate in our controllers (regular and WebApi), which extend a common base class, we make use of the HandleError attribute and override OnException so we can do some extra work; the code looks like:

[HandleError]
public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        //our other logic, some to deal with 500s, some to show 404s

        //make the call here to raygun if it was anything but a 404 that brought us here.
        new RaygunClient().SendInBackground(filterContext.Exception);
    }
}

In the scenarios where we actually do have the exception, it’s great and it “just works”; we send it off asynchronously in the catch block by calling a wrapping function like this:

public static void LogWithRaygun(Exception ex)
{
    new RaygunClient().SendInBackground(ex);
}

Conclusion

So Raygun really helped us avoid a weakly hand-rolled halfway solution for tracking errors, now with nice email notifications that look like this, and link into the Raygun detailed information view.

It’s lacking a few nice-to-have features, but that’s more than acceptable for version 1 of the application, and from what we’ve been told our suggestions are already on track for a future release. One in particular that would benefit lots of people would be to allow the user to map an association between errors. An example: 2 seemingly different errors get logged but are in fact caused by the same thing; this way the reporting and similarity tracking can continue to group the 2 variations under the one umbrella.

raygun email example

Along with the dashboard summary.

Part of the Raygun Dashboard

It’s one less thing we need to worry about. Just an FYI, we didn’t stop saving records into our own database table; we’re just unlikely to have to go looking in there very much, if ever.

When you need to generate and send templated emails, consider mailzor

Mailzor is a basic utility library to help generate and send emails using the Razor view engine to populate email templates, designed to be quickly pluggable into your .NET app.

In our applications we send out HTML-formatted emails, seeded with a variety of data. I thought it would be easy to write them as Razor files (cshtml), then use the Razor engine to generate the output and send it.

It’s up on NuGet, and with the release of v1.0.0.11 it’s more stable.

For the most up to date info follow along with the usage sections of the readme.md file on the github repository.

How it works

I thought I would share some background about its development, and the hiccups along the way. The original set of code came from Kazi Manzur Rashid; it solved the problem of making use of System.Web.Razor’s RazorTemplateEngine, and I extended it (with permission) to be usable as an injectable dependency and via NuGet.

The core elements are the creation and management of the SMTP client, the building up of the MailMessage, and then all the compilation-related work to get the RazorTemplateEngine up and running.

The RazorTemplateEngine logic boils down to taking the razor file stored on disk and compiling it via CSharpCodeProvider.CompileAssemblyFromDom. So if you’re curious about this code in particular, dig into EmailTemplateEngine.cs in the project files.

Prior to version .10 there were version-mismatch conflicts with System.Web.Razor, so I went down the path of using ILMerge to solve them.

It seems easy given that I took an existing chunk of operational code and extended it, but it only seems easy when it is working; when it doesn’t work and you’ve got strange compilation errors, debugging this mechanism is not the greatest. I found myself hunting for temporary files and trying other compiler flags to output more information.

In the early versions it was heavily a case of “works on my machine”, but now it’s fine and seems to be feature complete…

Automating IIS actions with PowerShell – Create Multiple Sites

I’m working towards a more complex SignalR-based post, but in the meantime part of the work on that involves setting up a few ASP.NET web apps.

If you’re after a more comprehensive guide, check out this post on learn.iis.net by Thomas Deml. I’ve summarised the steps required to get some basic .NET 4 web applications deployed.

Objective

To create N identical websites in local IIS, each with an incrementing name Id and each linked to the same application directory. The exact reason why will come in a future post; for now, consider it an exercise in manipulating IIS via PowerShell.

s1.site.local
s2.site.local

Step 1 – Ensure you’re running the scripts in x86 mode.

This seems to be quite a common problem, with a StackOverflow question about it. I haven’t worked out a way around this yet, but this is the error when not running as x86:

    New-Item : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154.
    At line:1 char:9
    + New-Item <<<< AppPools\test-app-pool
    + CategoryInfo : InvalidArgument: (:) [New-Item], ParameterBindingException
    + FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.NewItemCommand
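
The fix for now is simply to be in the 32-bit host before importing the module. A quick check and launch, assuming the standard SysWOW64 location of x86 PowerShell on 64-bit Windows:

    # Check the current process bitness: 4 means 32-bit, 8 means 64-bit.
    [IntPtr]::Size

    # Launch the 32-bit (x86) PowerShell host from a 64-bit session:
    & "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe"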

Step 2 – Import-Module WebAdministration

This loads the IIS provider, which allows you to navigate IIS in the same way you would the filesystem:

> CD IIS:\
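
From there you can explore IIS like a folder structure, for example:

    dir IIS:\          # lists AppPools, Sites and SslBindings
    dir IIS:\Sites     # lists the sites on this machine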

Step 3 – Create & Configure App Pool

    New-Item AppPools\test.pool

    Set-ItemProperty IIS:\AppPools\test.pool -name "enable32BitAppOnWin64" -Value "true"

    Set-ItemProperty IIS:\AppPools\test.pool -name "managedRuntimeVersion" -Value "v4.0"

NOTE: here I didn’t have any luck storing the app pool path in a variable and then using Set-ItemProperty on it, hence the repetition.

Step 4 – Variables

    $sitesToCreate = 10
    $path = "C:\dev\project-x\App.Web"
    $appPool = "test.pool"

Step 5 – Create Site & Link AppPool Loop

For N (10) down to 1:

  • Create a new site
  • Set binding info and physical path
  • Set the app pool

    while ($sitesToCreate -gt 0)
    { 
        $siteName = "s" + $sitesToCreate + ".site.local"
        $siteWithIISPrefix = "IIS:\Sites\" + $siteName
        Write-Host "Creating: " $siteName
        
        $site = New-Item $siteWithIISPrefix -bindings @{protocol="http";bindingInformation="*:80:" + $siteName } -physicalPath $path
        
        Set-ItemProperty IIS:\Sites\$siteName -name applicationPool -value $appPool
        $sitesToCreate--
    }

Note: $appPool is a plain string variable, not the result of Get-Item; Set-ItemProperty operates on a path, not on a variable representing the item.

We’re done

One last note: to get these to resolve correctly on your local developer machine you’ll need to modify your hosts file.
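
That can be scripted too. A sketch following the same naming scheme (run from an elevated prompt; the path is the standard hosts location):

    # Append a local entry for each of the N sites created above.
    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    1..10 | ForEach-Object {
        Add-Content -Path $hostsFile -Value ("127.0.0.1 s" + $_ + ".site.local")
    }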

IIS multisite

The complete code is up as a GitHub Gist.

CoffeeScript, Jasmine tests with Cassette and Mindscape Web Workbench Visual Studio Extension

Wow, that’s one hell of a title. I couldn’t make it any shorter, but that’s everything we’re dealing with in this post.

Some Background

We’ve got an ASP.NET MVC 4 web application and we’re using Cassette to bundle and minify the JavaScript and CSS files; the reason is that the new MVC 4 Beta 2 bundlers didn’t work right away, and we already had Cassette all configured.

Cassette further solidified its place as a tool of choice for us when it introduced Jasmine bundling. We can just point a Cassette bundle at a location in the project that contains the spec files (the Jasmine tests) and be done, without having to worry about wiring up any spec helper files with paths, etc. The current version (1.2.0) is great, but does not yet handle CoffeeScript compilation in an efficient way…

We only have to remember the standard approach of using a reference comment:

/// <reference path="your-js-file.js" />

Wanting to use CoffeeScript

Now we’re also jumping on the CoffeeScript bandwagon *because it’s just JavaScript*™, and we want our code to be neat, elegant and tested; CoffeeScript looks like it will help out. The last thing that was holding me back from going all in on CoffeeScript was getting it smoothly into Visual Studio or our build process (if also necessary).

Turns out that is very easy if you just use Web Workbench from Mindscape. It’s a free extension for Visual Studio. Go here for their getting started guide.

How we’re using Web Workbench

Our primary desire is just to have the CoffeeScript compiled for use without fuss, and quickly. Web Workbench helps in this department, including the creation of the .js files nested under the CoffeeScript files. We are just writing our client-side code in CoffeeScript and want the process as simple as possible.

The syntax highlighting is helpful while in Visual Studio, and will be more helpful to us once it’s working in Visual Studio 11; we’re already spending a reasonable amount of time in VS11.

As I mentioned earlier, Cassette takes care of “packaging up” all our JavaScript files already, so to combine our newly auto-compiled CoffeeScript logic it was as simple as setting up a Cassette bundle pointing to the location of the Jasmine spec files.

Download the Web Workbench here.

The Cassette side of the story

The most basic configuration to get the Jasmine tests bundling and set up to run with the least effort is to just tell Cassette where they are with this line:

   bundles.Add("JavaScript/Specs");

That snippet lives inside the CassetteConfiguration.cs file installed by its NuGet package.

public class CassetteConfiguration : ICassetteConfiguration
{
   public void Configure(BundleCollection bundles, CassetteSettings settings)
   {
      //all the other cassette configuration code

      bundles.Add("JavaScript/Specs");
   }
}

Putting it all together

So here’s what the CoffeeScript code looks like inside Visual Studio (v11).

The actual function we’ll be testing:

The test:

Here’s how Web Workbench presents the CoffeeScript files with their compiled JS:

The test runner and the URL path; this is what Cassette helps make simple:

Summary

That’s it. We’ve written some logic in CoffeeScript and unit tested it with a Jasmine spec also written in CoffeeScript; Web Workbench handles the compilation as we type, and Cassette puts it all together to display and run in the browser via the bundle URL.

If you want to see the Jasmine side of things in action (the repository doesn’t have the CoffeeScript changes finalised yet), check out the sample code up on this GitHub project.

Tweaking a VS2010 plugin to run JSLint in the command line

We’ve gone to some lengths at work to automate, and to have a continuous-delivery-style pipeline for all things build and deployment related. It’s well on its way, but not ‘perfect’ yet. Maybe perfection isn’t attainable, or maybe it’s when you have a red button on the desk that, when pushed, does *everything*. Nonetheless, aiming for perfection will continue to drive us to improve.

So here I wanted to discuss the steps I took to get a nice JSLint Visual Studio plugin to form part of our build process. It took a bit of fussing about to get it working, and it’s an example of still being imperfect, but for now it serves the build pipeline well enough.

If you don’t use JSLint for your JavaScript code, you probably should. It’s a static analysis tool that analyses and reports on broken coding rules in JavaScript. Try it out on its creator Douglas Crockford’s site, JSLint.com.

There is an easily accessible Visual Studio plugin called exactly what you would expect, “JSLint for Visual Studio 2010”, and here is the direct link to it on the VS Gallery for your installation pleasure.

If you do nothing else and stop reading here, you’ve done well: install it, and happy JavaScripting.

JSLint for Visual Studio 2010 Extension

But what about continuous integration?

For us it wasn’t enough; we wanted our build process, which runs as psake scripts, to fail if JSLint rules were broken.

Just for a bit more completeness on the psake digression, and on how the command line tool executes under MSBuild, here’s a snippet of the ‘targets’ code that is referenced by the .csproj files that contain JavaScript. The input parameter to the executable is the directory containing the script files, relative to the .csproj file:

<Target Name="AfterBuild"> 
    <Message Text="Running jslint..." Importance="Normal" /> 
    <Exec Command="&quot;..\jslint\JSLint.CLI.exe&quot; .\Scripts\ " /> 
</Target>
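
On the psake side the wiring can stay tiny. A sketch of the kind of task involved (the task name and paths are illustrative, not lifted from our actual build script):

    # psake task: run the JSLint CLI over the scripts folder.
    # psake's exec helper throws, and so fails the build, when the
    # command returns a non-zero exit code.
    task JSLint {
        exec { & "..\jslint\JSLint.CLI.exe" ".\Scripts\" }
    }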

So the search for command line tools began. I found some existing ones: some had more complex dependencies, others seemed more ‘primitive’, i.e. did not report the errors the IDE-based JSLint plugin was reporting.

There were 2 main objectives driving the choice of the tool:

  1. Similarity and accuracy to JSLint.com and the Visual Studio extension
  2. Ease of setup

There was some discussion happening on StackOverflow, here and here, but nothing I tried digging deeper into seemed suitable.

I then had an idea…

Take the core logic of the Visual Studio extension and wrap it in a very simple console application to execute as part of the build process.

With the approach I took, it seems even objective 2 was difficult to achieve (it consumed some time), but at least it had the fewest external dependencies of the options out there.

The very first step was to obtain and install the VS2010 SDK, as this was a project which referenced many interop assemblies for interacting with the IDE. It needs to be the Service Pack 1 SDK in fact, and here’s a direct link. Once I was able to compile the extension, it was then a matter of understanding how it operates and how to access a method to perform the ‘linting’.

There were 2 major hacks needed to get access to the inner workings of the extension:

  1. Making some ‘protected/internal’ methods and properties public
  2. Modifying where the JSLint logic obtained the settings file from (JSLintOptions.xml).

Locating JSLintOptions.xml proved somewhat difficult at first, as it was tucked away hiding in the Roaming section of my user folder on Windows (\Users\*\AppData\Roaming\). These hacks could greatly benefit from some refactoring effort if I ever have the time, or someone else is so inclined. But after an initial attempt to refactor out the most core of the logic, things fell apart in the land of SDK dependencies, so I rolled back and opted for the less elegant approach of the 2 hacks listed above.

The logic for the console application is then trivially simple:

  1. Read the .js files.
  2. Supply the .js file content to the JSLinter class’s Lint() method.
  3. Write any errors to the console.
  4. Return an error code: 0 (success), 1 (JavaScript warnings), etc. (see the snippet below).
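
From a calling script’s point of view that exit code is the whole contract; for example, in PowerShell (paths illustrative):

    # Run the linter and react to the exit code it returns.
    & ..\jslint\JSLint.CLI.exe .\Scripts\
    if ($LASTEXITCODE -ne 0) { Write-Host "JSLint found problems" }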

Show me the code!

If you want to see the hacks I had to make to get this to work, head on over to this git repository on BitBucket. I offer no warranty; it may be fragile, so manipulate with caution.

If you just want the command line executable, it’s built and stored in the repository in the /output folder; if I make any updates, I’ll also update that executable.

Areas for improvement in the console app:

  • Taking path locations as input parameters/external file of locations
  • Taking alternate settings files for rules to ignore/include
  • General tidyup

Outstanding issue

The final hiccup is not directly related to the use of the command line tool itself, but to building the command line tool from source as part of a more comprehensive, all-inclusive build process. Adding commands in the .csproj file post-build settings was not sufficient; the build and copy of the executable needs to run as a directive in the ‘AfterBuild’ targets section of the .csproj file.

CSProject Settings Dialog - Post Build Events

Not suitable to place actions in 'post build' section

This led to a conflict between MSBuild and the VS2010 SDK, which was very frustrating and isn’t solved yet. The question is up on StackOverflow.