API call based Azure Functions with DocumentDB

Azure Functions caught my eye recently, mainly because of the F# support; I was hoping to write some F#-based integrations. But the F# support is classified as experimental, so while I learn the capabilities of Azure Functions I didn't want to get held up on issues that may be specific to their support of F#. Once I get my core objective operational via C#, I'll attempt to rewrite the functions in F# and share that too (in fact I'll likely just update this post with the rewritten versions at some point).

I'm taking my simplest application and migrating it to make use of Azure Functions: the trusty Used Guid service. If you haven't heard of it… you're missing out!

Playing with AppHarbor twitter and WebAPI

untitled_clipping_091816_103748_am

Background

So in the original WebAPI app it was a straightforward enough process:

The dependencies are:

  • DataBase (Read + Write)
  • Twitter (Write)

Function Based Architecture

The initial challenge with the Azure Functions approach is how to do the Guid lookup off the back of the user request. The integrations offered by Azure Functions are designed as INPUT + OUTPUT(S). The first function has to take its input from the user, and that's an HTTP call.

I started thinking about the coordination between multiple Azure Functions early in this process. I thought the second function could just operate on the back of a new document showing up in Azure DocumentDB. But upon digging into the documentation I could not see any examples of how to "subscribe" to the feed of new documents. After not being able to solve it via experimentation it started to look unsupported, so I went to Stack Overflow to get confirmation (or what I was really hoping to hear: "yes, this is coming soon").

That was not the case. The answer now (Sept 2016) is no: not supported. So the list of supported bindings in the documentation was accurate and up to date:

untitled_clipping_091716_100900_pm

So now the first function, which takes the HTTP INPUT, needs to have 2 OUTPUTs: DocumentDB and Queue.

Function 1

Input – HTTP

Doing a database read and returning the failure cause goes against the ease of use of Azure Functions, though I may have no choice but to do that.

I wrestled with the architectural approach to this problem, mostly because this problem space is absolutely contrived and doesn't lend itself to an elegant solution. When you actually step back and look at the problem domain / business requirement, 2 users really won't have colliding data (… the Guid). They would just ask a service to deliver them the next datum of value.

So with that I’ll continue on following the happy path, because the core objectives are to get to deployment concerns around functions, and just lay some groundwork here.

Outputs – DocumentDB + Queue

The simplest way to write a DocumentDB document from your Azure Function is to have it as an out parameter. In many cases you only want to write the document if the request passes initial validation; it seems valid to just assign null to the out parameters you don't want to send data to in the invalid/error cases. To feed data to the subsequent function, a second out parameter is needed: the queue.

Code:
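
A minimal sketch of what Function 1 could look like as a C# script (run.csx); the binding names (document, queueItem) and the guid query parameter are my assumptions and would need to match whatever is configured on the Integrate panel.

using System.Net;

// Sketch only: an HTTP-triggered run.csx with two output bindings.
// The Azure Functions host supplies HttpRequestMessage and TraceWriter.
public static HttpResponseMessage Run(HttpRequestMessage req, out object document,
    out string queueItem, TraceWriter log)
{
    // Hypothetical input: the guid the user is declaring as used.
    string guid = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "guid", true) == 0).Value;

    if (string.IsNullOrWhiteSpace(guid))
    {
        // Invalid/error case: assign null so neither output binding writes anything.
        document = null;
        queueItem = null;
        return req.CreateResponse(HttpStatusCode.BadRequest, "Please supply a guid");
    }

    // Happy path: one document for DocumentDB, one message for the next function.
    document = new { id = guid, usedAt = DateTime.UtcNow };
    queueItem = guid;
    return req.CreateResponse(HttpStatusCode.OK, "Guid accepted");
}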

Function 2

Output – Tweet

I thought this was going to be the simpler of the 2; I wanted to look at how to get secrets (API keys, OAuth tokens, etc.) into the functions. But when I went to write the function I realised I don't have a one-step approach to fetch NuGet packages. So making use of TweetSharp to do the authenticated Twitter API call will take a bit of extra time too.

Well I started digging around that code, and to extract out exactly what I need is taking a while. Below is a link to the original code in the WebAPI app, where making use of the library makes producing a tweet quite easy (once configured with authentication).

So the options I’ll investigate later will be:

  1. The minimum set of code that can do the authentication and post the tweet, so it's all embedded in the one function.
  2. Making use of Azure Logic Apps (which I need to investigate more), which look to offer some abstraction around common integrations.

Original C# code in WebAPI app:

https://github.com/NickJosevski/usedguids/blob/master/UsedGuidTwitter/Logic/Tweeting.cs

Azure Function:

For now it's just proving I can read off the queue. The basic setup has the data on the queue being a plain string.
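
A queue-triggered C# function for that can be as small as the sketch below; the parameter name myQueueItem is an assumption and has to match the queue binding name.

// Sketch only: run.csx for Function 2, just proving the message from Function 1 arrives.
public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Used guid received from the queue: {myQueueItem}");
}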

With the integration panel looking like this:

untitled_clipping_091816_103117_am

Summary

What’s working:

  1. API endpoint to get user requests in
  2. Writing to DocumentDB
  3. Writing to a Queue
  4. A second function reads from that Queue

Initial Frustrations

The Azure portal is quite nice (the effects, the theming), and it does look nicer than the AWS console, which I'm much more familiar with. But deep linking into Azure Functions doesn't work as expected: say you duplicate a tab, you either end up back at the dashboard level or on the create-new-function screen. Sometimes it would just spin/hang for a while.

untitled_clipping_091816_102055_am

Pricing

I wanted to add a quick note on pricing. I'm no expert in this yet, but when I first started playing with the functions I had a dedicated app instance, and that was draining my balance. When I realised, I switched to the dynamic pricing model, which I thought would have been the default.

It's good to track the cost of running features, especially while trying out new ones, but one thing that kept showing up in the notifications (bell area) was my current balance. It would always try to get my attention, but more often than not the outstanding balance had not changed.

untitled_clipping_091816_101845_am

What’s Next?

In the coming posts I’ll be covering the deployment pipeline for these functions, stay tuned.

RavenDB invalid HTTP Header characters bug

So you've done a search and you've arrived here, and you're using RavenDB (at least version 2); in my case 2.5.2907. If this is fixed in 3.0 that's good, but it doesn't help us: we're not ready to move to v3 yet based on our challenges in production with v2 (which is stable enough now).

I'll try to help you out first (or future me, if I make this mistake again), then I'll explain more. This may also help you anywhere else you're using HTTP headers and have used invalid characters.

The Error

Specified value has invalid HTTP Header characters

“Error”: “System.ArgumentException: Specified value has invalid HTTP Header characters.
Parameter name: name
at System.Net.WebHeaderCollection.CheckBadChars(String name, Boolean isHeaderValue)
at System.Net.WebHeaderCollection.SetInternal(String name, String value)
at Raven.Database.Extensions.HttpExtensions.WriteHeaders(IHttpContext context, RavenJObject headers, Etag etag)
at Raven.Database.Extensions.HttpExtensions.WriteData(IHttpContext context, Byte[] data, RavenJObject headers, Etag etag)
at Raven.Database.Server.Responders.Document.GetDocumentDirectly(IHttpContext context, String docId)
at Raven.Database.Server.Responders.Document.Respond(IHttpContext context)
at Raven.Database.Server.HttpServer.DispatchRequest(IHttpContext ctx)
at Raven.Database.Server.HttpServer.HandleActualRequest(IHttpContext ctx)”

The Fix

Check for invalid characters as per the HTTP spec (RFC 2616). Thanks to this StackOverflow answer for helping me find it in the spec faster.

Good luck; in my case it was an email address, and the '@' is invalid.

So make sure you’re only storing US-ASCII and not using any of the control characters or separators:

"(" | ")" | "<" | ">" | "@"
| "," | ";" | ":" | "\" | <">
| "/" | "[" | "]" | "?" | "="
| "{" | "}" | SP | HT
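
If you want to guard against this before a document is saved, a small check along the lines of the sketch below can be run over metadata keys and values; this is my own helper (not part of RavenDB), built from the separator list above.

using System.Linq;

public static class MetadataGuard
{
    // Separator characters RFC 2616 disallows in a header token, plus SP and HT.
    private const string Separators = "()<>@,;:\\\"/[]?={} \t";

    // True when the text only contains printable US-ASCII with none of the separators.
    public static bool IsSafeForHttpHeader(string text)
    {
        return text.All(c => c > 31 && c < 127 && Separators.IndexOf(c) < 0);
    }
}

For example, IsSafeForHttpHeader("user@example.com") returns false because of the '@'.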

The Details

Using metadata on RavenDB documents is quite helpful, and the data stored there so far had been pretty simple; to support a special case, the storage of an email address was being worked into the metadata. The nature of this bug in the RavenDB IDE is that when you list all the documents of a collection they show up and you see their selected fields, but when you click to load a document you get the "Specified value has invalid HTTP Header characters" error, and you're left scratching your head about how the document is in the database but you can't load it.

RavenDB IDE

I encountered what really feels like a bug: as a developer using the metadata mechanism of Raven documents, it shouldn't be your responsibility to ensure you are meeting the HTTP header specification (see page 16 of RFC 2616).

RavenDB Invalid metadata

This is invalid metadata on a raven document (see the ‘@’)
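
For reference, metadata like that gets written through the advanced session API, roughly as in the sketch below (the document type and metadata key here are made up):

using (var session = store.OpenSession()) // store: your IDocumentStore
{
    var doc = session.Load<UserRequest>(docId);
    var metadata = session.Advanced.GetMetadataFor(doc);

    // Raven metadata travels over HTTP as headers, so the '@' here is what blows up later.
    metadata["Contact-Email"] = "someone@example.com";

    session.SaveChanges();
}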

Conclusion

You do want the tools you use to really help you, and it's frustrating when something obscure like this happens. It may be more complex or difficult to implement, but what I would like to see is a check on the data on its way in, so you can clearly see when this became a problem instead of hunting it down after the fact.

I raised this on the RavenDB support group, and it has since been logged as a bug, so the fix will eventually (hopefully) come in v3.

The future of Alt.Net

Today Richard Banks asked us a question about the future of Alt.Net.

I strongly agree with his conclusions, but wanted to get my thoughts down too and answer his questions.

I’ve been part of the Melbourne Alt.Net community since our first meeting on April 28th 2010. We started a little after Richard and the Sydney guys, but have kept a solid core of attendees and survived wavering levels of interest from the broader community and multiple sponsorship and venue changes. I’m glad we started at that time because that “why so mean” moment had already passed and it didn’t seed a negative undertone in our community here.

Here’s the only photo I have of that April 28th meeting, it seems accidental as I was putting my phone away.

Alt.Net 28th April 2010

I became a better software developer thanks to Alt.Net, and it helped the place I work now build the fantastic team we have, by showing us that there were more options out there and letting us learn from others in the community.

Here’s a better shot of Richard visiting us in October that year (2010):

Alt.Net October 2010 with rbanks

Richard asked

“Is that enough now, should we now disband?”

No, because we still need continuous improvement – we don’t stop, we grow and improve.

Richard stated “I still need what the Alt.Net community provides”.

I do too. Sharing ideas and frustrations with friends, peers and other new people is very important to me.

What do I want to see in the future?

Even more mainstream.

There are still many developers who haven’t heard about our user group meetings.

Working on 2 presumptions:
– A percentage of people can’t physically attend often or at all.
– The topics we’re discussing are of value and will improve what people deliver in their jobs / be better software developers.

We just need to get our message/content out there better, by pushing harder for input on the topics the group covers and by sharing our content more widely. Publishing has been happening for the last year (recording and uploading to YouTube), but we could share more on Twitter/LinkedIn/blogs.

So …

Branding?

If we want to reach more people then yes, maybe branding will help; there's now very high quality content available online for developers, so there's more competition nowadays.

When our team from Picnic Software presented last year the turnout was record breaking; we had some good questions recorded, and many more good discussions after the fact.

Based on our chats with those in attendance, our honest and direct coverage of the issues and challenges we face, and what we're doing about them, was what people came to see, and why so many came to talk to us afterwards.

So any new branding, I believe, should have the feeling of one strong community of software developers spread throughout Australia, gathering together to share locally and online.

Objective

All this depends on the collective objective…

If we’re trying to reach more people then yes branding and putting time and money behind it, should help (right? it’s the reason companies pay so much for marketing). I stand here and want to reach more people.

If on self-evaluation our community is strong, we're doing a good job sharing, and enough people are finding us (we're not shrinking), then steady as she goes is fine and the branding is less important; it's about our content and we can just focus on that.

Moving from WebAPI to ServiceStack

Having used WebAPI quite recently in a hybrid WebAPI and ASP.NET MVC app, it does a good job, but once you get deeper into a more complex application some weaknesses start to show. A trivial example is mapping exceptions to HTTP status codes, something you get easily with ServiceStack.
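
As an illustration (from memory of ServiceStack's error handling, so treat it as a sketch rather than gospel), exception-to-status-code mapping can be declared once in the AppHost; the exception type here is hypothetical:

// Inside your AppHost (ServiceStack v4).
public override void Configure(Funq.Container container)
{
    SetConfig(new HostConfig
    {
        // Any ProductNotFoundException thrown by a service becomes a 404.
        MapExceptionToStatusCode =
        {
            { typeof(ProductNotFoundException), 404 },
        }
    });
}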

The WebAPI controllers looked like this, with a route prefix at the top and then the specific route and verb attributes on each action:

    [RoutePrefix("/api/product")]
    public class ProductController : ApiController
    {
        [GET("{id}")]
        [AcceptVerbs("GET")]
        public Product Get(ProductId id)
        { /* ... */ }
        
        [POST("create")]
        [AcceptVerbs("POST")]
        public void Post(CreateProductCommand cmd)
        { /* ... */ }
    }
	
    // New style as route decorating an F# record
    [<TypeScript>]
    [<Route("/product/create", "POST")>]
    type CreateProductCommand =
    { 
        ProductId: ProductId
        Name: string
    }

Yes, F#; check out my post on initial learnings with F#. There's something interesting about our route decorations on that record type, which I'll try to get around to writing about. But for context this time round, that's as much detail as is needed.
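
For completeness, the ServiceStack side of that DTO is just a plain service class; a rough sketch (the class body and response type are mine, not our actual code):

using ServiceStack;

public class ProductService : Service
{
    // ServiceStack routes by request DTO type, so the [Route] attribute on
    // CreateProductCommand is all the wiring this method needs.
    public object Post(CreateProductCommand request)
    {
        // ... create the product ...
        return new CreateProductResponse(); // hypothetical response DTO
    }
}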

Issues with WebAPI

Our primary issue with WebAPI was that its route matching was limited.

As a result, it frequently did not match routes in an expected way. This often resulted in a great deal of time lost to fiddling about and trying to come up with a pattern that would satisfy what Web API wanted, often at the expense of our public API design. Also, we wanted to take advantage of routing features that already exist in ServiceStack, but are still only planned in the future for WebAPI.

Finally, as our product’s hosting needs grow, we may like to take advantage of cheaper Amazon machine images and run our services on Linux; ServiceStack is a first-class Mono citizen.

Conclusion

We're quite happy so far, having run it for a while now.

[Update]
Months later in production still very happy.

AutoSave Form Fields using jQuery – Part Two of Three

In my previous post, I introduced "self submitting forms" with the help of some jQuery functions. I had 2 outstanding (todo) items from that post (and work). The first was to have some tracking around the input elements and when they've been re-modified before a delayed post has occurred.

Item 1 – Tracking Recently Modified

Problem Definition:
There are 2 input fields: “Deadline” and “Task Name”, the user modifies “Task Name” then “Deadline” then “Task Name” again. Under the previous approach, the first delayed processing of “Task Name” would end up sending 2 pieces of data in the form submit as separate actions. This is not optimal.

Solution Description:
Track when the input elements were modified, and if they're modified again within some "reasonable amount of time", don't send the previous value. For now this is simply achieved by tracking the input element name with a "modified times" count value. Every time a change is made to an input element we "push" the details onto a central collection; when it's time to attempt to send the data we "pop" off that central collection.

Solution Steps:

1. Have somewhere to store the entries in a central collection, a simple dictionary style variable is fine for now.

var activeInputs;
//initialised later
activeInputs = {};

2. When an input element is modified, we simply store its 'id' and increment its modified counter in the 'activeInputs' dictionary:

var elemId = c.attr('id');
//c here is the input element that is associated with the handling event.

if(activeInputs[elemId] == null)
    activeInputs[elemId] = 1; //initialise
else
    activeInputs[elemId] = activeInputs[elemId] + 1; //increment

This method is the “push on to stack” approach, and does not need to supply feedback.

3. Next we need a method to decrement this, along with returning the modified count value:

var elemId = c.attr('id');
if(activeInputs[elemId] == null)
    return -1; //error state

var countVal;

if(activeInputs[elemId] > 1)
{
    countVal = activeInputs[elemId];
    activeInputs[elemId] = activeInputs[elemId] - 1;
}
else
{
    countVal = activeInputs[elemId];
    activeInputs[elemId] = 0; //reset it
}

return countVal;

This method is the "pop from stack" approach; we need it to inform us whether the input data was in fact ready to be "sent off".

So if you recall, the 'startCountDownAndSendData' function from the previous post had a line to submit the form; this can now be wrapped by a check via the "pop" method to see if it should send. If there's been a recent edit (i.e. the 'activeInputs' value is greater than 1), then it will not send.

I didn't name the methods earlier and only showed their internal workings, as naming these functions was tricky, but in the next code snippet I give the pop method a name. To see them in action as a whole, check it out on jsFiddle.

if(popProcessedControlFromStack(c) == 1)
{
    //if the value is 1 (an active control) then submit; otherwise it was edited recently
    $('form#MyFormId').trigger('submit');
}

Item 2 – User Feedback on Progress

The second outstanding issue from the previous post was user feedback. For now I'm just going to borrow the style WordPress uses when you save a post/draft.

Once you actually submit the data you just need to handle the success response; this will either replace the above section or be a link. But for a bit of a wrap-up on this jQuery-focused post, here's a skeleton function to show some feedback.

function showSuccess(data) {
   $('#someDiv').html('<img id="checkmark" src="/Content/check.png" />');
}

The third (and coming soon) post, will cover the MVC side of this.

REMIX Melbourne 2010 Day 2

Kicking off Day 2. See the Day 1 post here.

Session 1
Web development with Visual Studio 2010 & ASP.NET 4
Alex Mackey

alex multi targetting

My day 2 began with Alex running through a cavalcade of improvements in Visual Studio 2010 and ASP.NET 4. These have been covered in great detail all over the place, but it is still nice to have them presented and demonstrated live allowing for feedback and questions.

Alex focussed on JavaScript and general deployment tips, along with touching on other areas.

I won't cover all the tips and tricks that Alex went through here; I'll just link off to a good resource, which is Scott Gu's series on this.

Session 2
Riding the Geolocation Wave
Tatham Oddie

Tatham Geolocation

This was a great session; I hadn't thought about the possibilities of geolocation to improve application experience. Tatham introduced the concepts and then demonstrated a simple pizza delivery application that tapped into the user's current location (with their consent) to pre-seed a location-aware list of options. The good news is that Windows 7 has geolocation support built in. We've already been using services on devices such as the iPhone that use assisted GPS (A-GPS) to tag things such as tweets with your location, drop a marker on Google Maps at your current location and offer directions.

For resources and more information see Tatham’s blog post here.

Session 3
The future of exposing, visualising and interacting with data on the web.
Graham Elliot

Graham throwing punches

In this session OData was introduced in more detail to the REMIX audience. I’m covering off the basics of OData in a series myself so go check that out here.

The most audience pleasing concept demonstrated was the use of the awesome Microsoft Labs – Pivot available at GetPivot.com. It is a visual data interaction tool. To save me from failing to do it justice in a few lines of text, check out this 5 minute TED 2010 video presented by Gary Flake on Pivot.

Lunch & Live Frankly Speaking
Taping of Frankly Speaking
Michael Kordahi, Andrew Coates and guests.

Frankly Speaking

The REMIX audience during lunch got to be the live studio audience for a taping of Frankly Speaking. I’ll post a link to the episode here when it’s up.

Note the donkey is a reference to "taking the donkey work out of installation" – promoting the Web Platform Installer for Windows. It's very handy, check it out.

Labs
I didn't end up going to any of the sessions in slots 4 & 5; I joined a few colleagues in the labs.

remix labs

The first lab I attended was the XNA development introduction for Windows Phone 7. The lab was run by Luke Drumm (@lzcd) and Glenn Wilson (@myker). Glenn runs a blog focussed on this kind of development – virtualrealm.com.au

The second lab I attended was run by Steven Nagy and was intended to get us configuring Azure AppFabric, but some major hiccups, like the lab PCs not being set up with internet access and some other misconfigured components, prevented us from following along. Nonetheless the lab ended up being a discussion extending Day 1's presentation on Azure. Several people had a lot of questions about actual deployment, from SLAs to locations to security.

Session 6
Pimp My App
Shane Morris

shanemo design tips

With the event coming to a close, it was nice to sit back and listen to the talented Shane Morris of Automatic Studio (former MS UX guy) giving some basic design tips for developers to follow, to ensure apps don't suffer due to a lack of professional designer input. There were 2 cool links to colour scheme assisting sites: kuler and Colour Lovers.

There was a fair bit of discussion, analysis and reasoning provided by Shane, so I’ll just list out the conclusion slides.

Layout Steps:

  1. Map out the workflow.
  2. List your contents.
  3. Layout elements.
  4. Check grouping.

Presentation Steps:

  1. Remove unnecessary items.
  2. Minimise variation.
  3. Line stuff up.
  4. Space and size components evenly.
  5. Indicate grouping.
  6. Adjust visual weight.

Final Summary
All in all REMIX was a great 2-day event, and for the early bird price of under $200 it was a bargain when you factored in the all-you-can-drink coffee, buffet lunches and after party. Having already published my brief summary, all I can say here is that if you have the opportunity to attend REMIX, take it.

LINQ Basics (Part 2) – LINQ to SQL in .NET 4.0

Continuing my 2 part series on LINQ Basics, here I will attempt to discuss a few improvements in LINQ to SQL as part of the .NET 4.0 release scheduled soon. A key post on the 4.0 changes is by Damien Guard who works on LINQ at Microsoft along with other products. This will allow me to discuss some of the changes but also go into a bit more detail extending the “basics discussion” started in the previous post.

Note: In October 2008 there was a storm of discussion about the future of LINQ to SQL, in response to a focus shift from Microsoft towards Entity Framework; the key reference post for this is again one by Damien G.

The upcoming 4.0 release doesn't damage the capability of LINQ to SQL in the way people may have worried about (i.e. no disabling/deactivation in light of the discussions of Oct 2008). There are actually a reasonable number of improvements, but as always there's a great deal of potential improvement that there was not the capacity/desire to implement for a technology set that is not a major focus. Still, LINQ to SQL remains capable enough to function well in business systems.

First up there are some performance improvements, in particular surrounding caching of lookups and query plans in its interaction with SQL Server. There are a few, but not all, of the desired improvements in the class designer subsystem, including fixes for some flaws with data types, precisions, foreign key associations and general UI behaviour.

There is a discussion on Damien’s post about 2 potential breaking changes due to bug fixes.

  1. A multiple foreign key association issue – which doesn’t seem to be a common occurrence (in any business systems I’ve been involved in).
  2. A call to .Skip() with a zero input: .Skip(0) is no longer treated as a special case.

A note on .Skip(): such a call is translated into a subquery with the SQL NOT EXISTS clause. This comes in handy with the .Take() function to achieve a paging effect on your data set. There seems to be some discussion of potential performance issues on large data sets; nothing conclusive came up in my searches, but this post by Magnus Paulsson was an interesting investigation.

As for the bug fix: it did seem logical to treat zero as a special case, possibly simplifying code that might be passing a zero (or an equivalent null set), but if it was a query that .Skip() with a value greater than zero could not validly be applied to, it will now fail, forcing you to improve the query or its surrounding logic.

Just for the sake of an example, here is .Skip() & .Take(). Both methods have an alternate use pattern too: SkipWhile() and TakeWhile(); the While variants take a predicate as input instead of a count.

var sched = (from p in db.Procedures
             where p.Scheduled == DateTime.Today
             select new
             {
                 p.ProcedureType.ProcedureTypeName,
                 p.Treatment.Patient.FullName
             })
            .Skip(10)
            .Take(10);

There are also improvements to SQL Metal, the command line code generation tool. As a side note, to make use of SQL Metal to generate a DBML file for a SQL Server database, execute a command like this in a Visual Studio command prompt:

    sqlmetal /server:SomeServer /database:SomeDB /dbml:someDBmeta.dbml
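
From there, if memory serves, the same tool can turn the generated dbml into the C# data context classes, along these lines (file and namespace names are placeholders):

    sqlmetal /namespace:SomeApp.Data /code:SomeDBContext.cs someDBmeta.dbml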

A File Copy and Compress – PowerShell Script

Following on from my previous post, I'm creating a PowerShell script that takes a folder location as input via a dialog, extracts some files through a copy action, and compresses them into an archive. As I will show in the 3rd post, the purpose is to extract 2 files to help with deploying a SketchFlow project. Such a complex script is overkill for 2 files, but it is easily extensible to handle a larger set of files.

Step 1 – User input of folder:
Create a function to launch a folder select dialog; this is achieved through a COM object call.

function Select-Folder($message='Select a folder', $path = 0)
{
    $object = New-Object -comObject Shell.Application
            
    $folder = $object.BrowseForFolder(0, $message, 0, $path)
    if ($folder -ne $null)
    {
        $folder.self.Path
    }
}

Note: You could call this twice to prompt the user for an output location as well.
OR
To speed up script execution, supply input parameter(s) to the script [the input (and output) directory].

To achieve this simply use $args[0] (and $args[1]) instead of the Select-Folder cmdlet call.

The Select-Folder cmdlet launches:

Browse For Folder Dialog

Step 2 – Extract (via copy) the deployable files:
There are 3 tricks here: first, extracting only files via -not PSIsContainer; second, grouping them by their extensions (in order to easily process them in a copy loop); and third, matching only a list of valid extensions (in this case xap and html).

$types = ".xap", ".html"

$files = Get-ChildItem $folder | Where-Object { -not $_.PSIsContainer } | Group-Object Extension

$files = $files | Where-Object { $types -contains $_.Name }

New-Item -ItemType Directory -Path $deploymentFolder -ea SilentlyContinue

$files | ForEach-Object { $_.Group | Copy-Item -Destination $deploymentFolder }

Tip: use Some-Cmdlet | Format-Table or its variants to output details to screen to help with debugging.

Step 3 – Compress
This last step in the script is a compression action. I got the compression functions from David Aiken's MSDN blog post. The Add-Zip function was unable to take a file path containing a directory-up (\..\), so the final script has a Move-Item cmdlet line to compensate.

function Add-Zip
{
    param([string]$zipfilename)

    if(-not (test-path($zipfilename)))
    {
        set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
        (dir $zipfilename).IsReadOnly = $false    
    }
    
    $shellApplication = new-object -com shell.application
    $zipPackage = $shellApplication.NameSpace($zipfilename)
    
    foreach($file in $input) 
    { 
            $zipPackage.CopyHere($file.FullName)
            Start-sleep -milliseconds 500
    }
}

The complete script file can be found here on Gist.GitHub.

The next step in this script could be to email the newly created zip file, but that’s something for a future post.

A long time ago, in a framework far far away

I have been really busy lately and haven't had much of a chance to work on some of the WCF post ideas I have sitting in my drafts folder. So I thought a quick, simple run-through of a little problem I had to resolve the other day might make an o.k. "filler post".

Note: This is not a problem if you’re using WPF in .NET 3.0 onwards, combo boxes by default expand to the largest element. This problem was encountered on a .NET 2.0 forms control ComboBox.

DotNET Far Far Away

The problem was with long text inside the drop-down list for any given ComboBox in the application. The text would simply be truncated, as the drop-down panel would only be as wide as the ComboBox.Width. A quick web search uncovered an MSDN article, "Building a better ComboBox", from Jan 2005. This article and its downloadable code sample solve 95% of the problem. The code supplied is very simple: it has a function which determines the length of the longest text item, then adjusts the drop-down panel accordingly. There were some complications/limitations, so I spent some more time investigating and playing.

A note on the MSDN article's sample code: it's very straightforward, with the specialised combo box extending the standard combo box. The bulk of the code handles the situation where the combo box drop-down goes off the edge of the screen.

public class BetterComboBox : ComboBox

This is what I needed to achieve with a 2.0 forms control, demonstrated by this image of the default behavior of a WPF ComboBox control:

WPF Combobox

Once the existing combo boxes were modified to be of the new BetterComboBox type, the first problem was that a breakpoint I had set to verify the execution of the drop-down resizing was never being reached.

This was simply an issue with the order in which events were occurring after a data binding. Upon the assignment of DataSource the event would fire, but the ComboBox.Items collection was not yet populated. This is actually by design in the framework: the Items collection is not populated/processed until the control is displayed. It was therefore as simple as refreshing with a call to RefreshList().

This is why the design of the BetterComboBox made use of the HandleCreated event, but my particular implementation required the adjustment to occur upon the DataBinding event.

The search for this issue led me to a few common problems people were having, and the answer to this StackOverflow question was helpful.

Once the items were showing, the next problem I encountered led me down the wrong investigation path: my first instinct was to challenge the ability of the improved ComboBox to calculate the appropriate pixel length of my strings. I quickly found a Code Project post (by Pierre Arnaud) about the limitations of:

System.Drawing.Graphics.MeasureString()

There is in fact a limitation, but it wasn't the cause of my problem; nonetheless an improved measurement function is a welcome addition, so I included it in the customisation.

Once the improved MeasureString() was implemented, the actual problem quickly became apparent: the text the ToString() method returned for each item was too long. This was due to the varying types of elements being bound to the ComboBox data sources. Each class did not have an override of the ToString() method, so the measured text was the fully qualified class name, Namespace.Something.SomethingElse.ClassName, instead of the actual property that would be bound as the DisplayMember.

public override String ToString()
{ return _property; }
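
Pulling those pieces together, the width adjustment itself boils down to something like the simplified sketch below (not the MSDN article's exact code; the helper name is mine):

// Assumes System, System.Drawing and System.Windows.Forms.
private void AdjustDropDownWidth(ComboBox combo)
{
    int widest = combo.Width;
    using (Graphics g = combo.CreateGraphics())
    {
        foreach (object item in combo.Items)
        {
            // GetItemText respects DisplayMember and falls back to ToString().
            SizeF size = g.MeasureString(combo.GetItemText(item), combo.Font);
            widest = Math.Max(widest, (int)size.Width + SystemInformation.VerticalScrollBarWidth);
        }
    }
    combo.DropDownWidth = widest;
}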

So with the ComboBox item’s collection populated, the to-string overload on a few classes, an improved string calculation method, a new appropriate width would successfully be calculated and applied, and the equivalent of an auto-resizing WPF ComboBox was achieved. Fun times in 2.0!