Melbourne Silverlight Code Camp – January 2010

The final weekend in January 2010 is the Melbourne Silverlight Code Camp.

I will have a summary post with my take on the event and the topics covered, but this is a very quick post to get a plain-text schedule up on the web (in particular for iPhone access).

View here as a 46 KB PDF (WordPress doesn’t allow .txt uploads!).

Day 1
9:00 – 10:00

10:15 – 11:15
s – Navigation Framework
a – MEF In Silverlight

11:30 – 12:30
s – Expression Blend for dummies
a – Silverlight & IronRuby

12:30 – 1:15
BBQ Lunch

1:15 – 2:15
Q&A Experts

2:15 – 3:15
Lightning Talks

3:15 – 3:45
Coffee break

3:45 – 4:45
s – Developer Meets Designer
a – Automated UI Testing from simple to deep dive

5:00 – 6:00
s – SketchFlow
a – RIA Services / SQL Azure

Day 2
9:00 – 10:00
s – SharePoint 2010 – Overview
a – Bing Maps

10:15 – 11:15
s – Creating Silverlight Controls
a – From object to MEF

11:30 – 12:30
s – Security in Silverlight
a – SharePoint 2010 – Implementing SL controls as web parts


1:15 – 2:00
Lightning talks

2:15 – 3:15
s – Common pitfalls in enterprise apps
a – Smooth Streaming

3:15 – 3:30

Thinking Differently About System Architecture

It’s great when a presentation gets an audience riled up. While some of the audience begin to grasp the general ideas, others are eager to challenge and debate each point being made. This of course makes for a heated session, but it does slow the intended pace of the presentation. Tonight was such a session at the Melbourne .NET user group, where Udi Dahan gave a very thought-provoking presentation on system architecture titled Command Query Responsibility Segregation (CQRS).

According to Udi, it takes about three days to cover in detail what CQRS is truly about. Given the brief two hours he spoke to us, punctuated by an explosion of questions from the audience, the details of the concept – in particular how one would implement it – are a bit of a blur to me right now. Luckily Udi has a post, Clarified CQRS, that summarises the concept; it will be well worth the read even though it runs to 3,000 words.
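
The core split that CQRS names can be sketched in a few lines of C#. To be clear, this is my own minimal illustration rather than Udi’s implementation, and all the type names (DeactivateCustomerCommand, CustomerCommandHandler, CustomerQueries) are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Command side: an instruction to change state. It carries intent
// ("deactivate this customer"), not a bag of updated fields.
public class DeactivateCustomerCommand
{
    public int CustomerId { get; set; }
}

public class CustomerCommandHandler
{
    private readonly IDictionary<int, bool> _active;

    public CustomerCommandHandler(IDictionary<int, bool> store)
    {
        _active = store;
    }

    // Handles the command: mutates state, returns no data.
    public void Handle(DeactivateCustomerCommand cmd)
    {
        _active[cmd.CustomerId] = false;
    }
}

public class CustomerQueries
{
    private readonly IDictionary<int, bool> _active;

    public CustomerQueries(IDictionary<int, bool> store)
    {
        _active = store;
    }

    // Query side: reads state, never mutates it.
    public IEnumerable<int> ActiveCustomerIds()
    {
        return _active.Where(kv => kv.Value).Select(kv => kv.Key).ToList();
    }
}
```

The point is the asymmetry: the command side accepts an instruction and returns nothing, while the query side returns data and never changes it. A full CQRS system then lets the two sides use entirely different models, or even different data stores.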

I would like to go on to discuss my perception of the audience reaction, as I myself was at least initially quick to dismiss the ideas with thoughts like:

But I like my current process; I don’t want to even consider a drastic change right now.


I like using the current tool-set/framework, don’t you dare try and take that away from me.

When we’re not completely sure what the alternative is, we fear it and fall back to a defensive position (developers in general, it seems).

This was made clear by the concerns some members of the audience voiced, which ranged from loyalty to software such as SQL Server, to not wanting to step on the toes of Business Analysts, to not even wanting to offer customers additional options for improved software solutions. It peaked at an almost faith-like defence of current practice – “Data is King” – don’t try and make us do it any other way than we currently are.

As the presentation went on and more of the ideas began to resonate with some of us, a select few in the audience jumped to wild conclusions about abandoning current practices, raised concerns over changes from “the current way we do things”, and even worried about businesses losing customers under alternate approaches to building software.

At this point I would like to make it clear: I’m not attacking the individuals who were vocal in debating what was being presented, only suggesting they be a bit more receptive to alternatives. Debate is often good, and in user groups in particular it is productive and welcomed, as we’re not “on the clock” on a behind-schedule project.

I for one will be taking a closer look at the suggested alternative architecture, and if even the smallest part of it can help me improve a single screen in a web application, then that’s just fantastic!

The lesson is: don’t be too quick to accept, but – even worse – don’t be too quick to dismiss. There are always alternatives, so be on the lookout for clever minds suggesting new approaches that may well be improvements on current practices.

Victoria.NET January 2010 Session

I just got home from a slightly longer than usual Melbourne Vic.NET session. It was a very intense night – so intense I’ve got a second post lined up just to cover the second presentation. But first there were a few announcements:

  • David Burela reminded us of the April Cloud Camp.
  • Mahesh reminded us of the Silverlight Code Camp weekend at the end of January. I’m confirmed to be attending.
  • Also, the User Group is looking for company sponsorship, as the current budget is shrinking.

The two topics of the evening were:

  1. An overview and walk-through of some of the features in ASP.NET MVC 2, presented by Malcolm Sheridan and
  2. Command Query Responsibility Segregation, presented by Udi Dahan, which I discuss in greater detail here [link coming soon].

The ASP.NET MVC 2 talk was a quick walk-through with tips. The take-away notes were:

  • Areas are useful, in particular the ability to have them in a separate project (though that’s currently not functional in the RC). MSDN Link.
  • When using areas, be careful how your routes are impacted.
  • Improved validation and custom validation options, through the use of the ValidationAttribute class.
  • Validation re-use, in conjunction with Dynamic Data.
  • Other miscellaneous improvements, such as the shorter HttpGet/HttpPost attributes in place of AcceptVerbs.
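
As a rough sketch of the custom-validation point: in MVC 2 you subclass ValidationAttribute (from System.ComponentModel.DataAnnotations) and override IsValid. The NotWeekendAttribute and Booking model below are made-up examples, not something from the talk:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Hypothetical custom validator: rejects dates that fall on a weekend.
public class NotWeekendAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        if (value is DateTime)
        {
            var day = ((DateTime)value).DayOfWeek;
            return day != DayOfWeek.Saturday && day != DayOfWeek.Sunday;
        }
        return true; // leave null handling to [Required]
    }
}

// Applying it to a model property; MVC 2 model binding then
// surfaces the error message in ModelState automatically.
public class Booking
{
    [Required]
    [NotWeekend(ErrorMessage = "Bookings must fall on a weekday.")]
    public DateTime Date { get; set; }
}
```

Because the attribute lives on the model rather than in the controller, the same rule is re-used anywhere the model is bound – which is also what enables the Dynamic Data re-use mentioned above.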

Overclocking PLINQ

Today I finally got around to tweaking my new PC’s settings to achieve a small CPU overclock. I have an Intel Core i7 920, which by default runs at 2.66 GHz; it’s now running at just over 3.00 GHz. What’s this got to do with .NET, you ask?

Well, I thought I would post a simple follow-up to my September 2009 entry about PLINQ and the Stack Overflow data dump, where at the time I was using a Core 2 Duo at 2.53 GHz to run PLINQ vs LINQ speed-up tests.

So before the overclock took place I ran the PLINQ queries and took down the results; now, after the overclock, I have some even better times.

Note: The standard LINQ queries also take advantage of a faster CPU even when using only a single core.

As an up-front summary, the speed-up of the PLINQ query over the LINQ query on the 4 physical cores of the Core i7 averages out to a factor of 3.79.

This is a great result, since it’s reasonably close to 4; the overheads of parallelisation are what prevent us from obtaining a pure number-of-cores performance multiplier.

The key data-processing experiment I’m performing here runs over approximately 265,000 rows of Stack Overflow question data. After the rows are extracted and stored in memory, some kind of data manipulation is run (ideally one that can benefit from being distributed across several cores). Referring back to the original post, it’s really just a trivial calculation of the number of tags on each question that also appear in the question’s body text.
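
For readers who haven’t seen the original post, the shape of the comparison looks roughly like this. The Question class and the synthetic rows below are stand-ins for the real Stack Overflow data, not the actual benchmark code:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class PlinqTagDemo
{
    // Hypothetical stand-in for a Stack Overflow question row.
    class Question
    {
        public string Body;
        public string[] Tags;
    }

    static void Main()
    {
        // Synthetic data standing in for the ~265,000 extracted rows.
        var rand = new Random(42);
        var tagPool = new[] { "c#", "linq", "plinq", "sql", "wpf" };
        var questions = Enumerable.Range(0, 200000)
            .Select(i => new Question
            {
                Body = "question text mentioning " + tagPool[rand.Next(tagPool.Length)],
                Tags = new[] { tagPool[rand.Next(tagPool.Length)],
                               tagPool[rand.Next(tagPool.Length)] }
            })
            .ToArray();

        // Sequential LINQ: count tags that also appear in the body text.
        var sw = Stopwatch.StartNew();
        var linqResult = questions
            .Select(q => q.Tags.Count(t => q.Body.Contains(t)))
            .Sum();
        sw.Stop();
        Console.WriteLine("LINQ:  {0} ms", sw.ElapsedMilliseconds);

        // PLINQ: AsParallel() spreads the same work across all cores.
        sw.Restart();
        var plinqResult = questions.AsParallel()
            .Select(q => q.Tags.Count(t => q.Body.Contains(t)))
            .Sum();
        sw.Stop();
        Console.WriteLine("PLINQ: {0} ms", sw.ElapsedMilliseconds);

        // Both paths must agree on the answer; only the timing differs.
        if (linqResult != plinqResult)
            throw new Exception("LINQ and PLINQ disagree");
    }
}
```

The only change between the two queries is the AsParallel() call, which is what makes this such a convenient way to measure how well a workload scales with core count.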

So I ran the LINQ and then the PLINQ queries on the approximately 265,000 rows of data and averaged the results. Mind you, the results are fairly consistent, almost always varying by under 500 milliseconds. I quickly whipped up an Excel chart to visually summarise the time taken to complete the LINQ and PLINQ queries.

Core i7 PLINQ Timings (times are in seconds)

To summarise what was already known before we began (based on my dual-core tests in September 2009):

  • More cores = better PLINQ execution time.
  • And higher core speed = better execution time.
  • Combining the two is even better.