Life should be a Picnic

Today I officially became part of a new Melbourne-based software company called Picnic Software.

Nick Josevski Picnic Software

We’re all about building high quality software here. Stay tuned for more exciting things I’ll now have a chance to talk about.

Follow the expansion of Picnic on Twitter and on the Picnic blog.

We’ll have some hopefully interesting things up in our Picnic Basket on GitHub soon too.

Web Directions Code Melbourne 2012 – Day 2

web-directions-code-logo

After a great first day at #WDC12, the small end-of-day-one party was hosted at LaDiDa, followed by some booked dinners around Melbourne with some of the locals (which sadly I wasn’t able to attend). Nonetheless, we got right into it with an interesting start to day two.

Dmitry Baranovsky
JavaScript: enter the dragon
This was quite an eye-opening and scarily entertaining motivational address by Dmitry. The phrase ‘You Suck’ was uttered the right number of times to motivate an audience full of developers to strive to be better at JavaScript, at software development in general, and even to be physically fitter.

The forced take-away was to become aware of the intricacies of JavaScript by actually reading the language specification (PDF link, with an annotated version here) and to build your own JavaScript six-pack:

  • Types and type coercion
  • Operators (+, ==)
  • Objects and primitives
  • Functions and constructors
  • Closures
  • Prototype
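
As an illustration of the first two items on that list, a few coercion corners worth knowing cold. These examples are mine, not Dmitry’s slides; they’re standard JavaScript, verifiable in any console:

```javascript
// Type coercion corners worth knowing cold (all standard JavaScript):
typeof null;   // "object" -- a long-standing quirk of the language
'5' + 3;       // "53" -- + concatenates when either operand is a string
'5' - 3;       // 2    -- but - always coerces both operands to numbers
[] + [];       // ""   -- both arrays coerce to empty strings
0 == '0';      // true  -- == coerces before comparing
0 === '0';     // false -- === never coerces
```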

Jed Schmidt
NPM: Node’s personal manservant
For those familiar with the .NET world, Jed is a Hanselman-grade presenter, with a great comedic delivery that made the presentation as funny as it was educational. Jed introduced many concepts around package management for Node (npm). He built a small demonstration framework to walk us through the various concepts; its readme file contains a complete list of everything he covered.

dinkumizer

Jared Wyles
Removing the gag from your browser
Jared delivered a usefully technical presentation on effectively using the Chrome Developer Tools to troubleshoot, analyse and track site performance. The most important take-away was being aware of all the network timing elements for your site when it’s served to a user for the first time, and ensuring items are cached correctly for subsequent visits. He also covered using the memory and CPU snapshot and measurement tools to trace memory leaks and code inefficiencies, in particular around interrogating/traversing the DOM.

Anette Bergo
Truthiness, falsiness, and other JavaScript gotchas
Anette took the audience through some of the stranger parts of the JavaScript language, where anyone who hasn’t yet run into these particular bug-prone approaches is likely to get into trouble. Some key ones to be wary of, which you might not expect to cause problems, were:

  • parseInt()
  • Operators and coercion
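
To show why these two earn their place on the list, here are examples of my own (not Anette’s slides), all standard JavaScript:

```javascript
// parseInt() gotcha: without a radix argument, older engines treated a
// leading zero as octal, so '08' could parse as 0. Always pass the radix.
parseInt('08', 10);   // 8  -- explicit radix, safe everywhere
parseInt('12px', 10); // 12 -- parseInt stops at the first non-digit
Number('12px');       // NaN -- Number() is all-or-nothing

// And a few truthiness/coercion traps:
Boolean('0');         // true -- any non-empty string is truthy
'' == 0;              // true -- == coerces the empty string to 0
NaN === NaN;          // false -- NaN never equals anything, itself included
```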

Damon Oehlman
The main event: beyond event listeners.
Damon gave us an introduction to eve, an eventing library; just check it out.

Mark Dalgleish
Getting closure
Mark covered “Immediately Invoked Function Expressions” and some of the benefits like protecting against unwanted global variables, and ensuring scope, along with explaining the closure concept. His detailed slides are up on his blog.
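
For anyone who missed the talk, the core pattern looks like this. This is a minimal sketch of my own, not Mark’s slides:

```javascript
// An Immediately Invoked Function Expression: the function runs once,
// immediately, and everything declared inside stays out of global scope.
var counter = (function () {
  var count = 0; // private -- reachable only via the closures below

  return {
    increment: function () { return ++count; },
    current: function () { return count; }
  };
})();

counter.increment(); // 1
counter.increment(); // 2
counter.current();   // 2
// `count` itself never becomes a global -- that's the closure doing its job.
```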

Ryan Seddon
Debugging secrets for lazy developers
Ryan‘s theme was automation: get as many of your repeatable tasks scripted as possible. He walked through using headless browsers via Travis CI, but reminded us that this only gets you so far; you need to test in real browsers too. An exciting little project of his is a port of Yahoo’s Yeti tool to work without the YUI test runner; his is called Bunyip and should be available soon.

Tony Milne
Party like it’s 1999, write JavaScript like it’s 2012!
Tony covered the problem of dependencies in JavaScript when your chain of references gets large, and how the responsibility for linking required JavaScript files should ideally live somewhere better than the HTML files. He mentioned Require.js is great for in-browser use, but the really great options exist for server-side JS.

Tony Milne 2012 style JS

Tim Oxley
Clientside templates for reactive UI
Tim was another entertaining presenter, with some choice phrases to compare and contrast developers he admires and frameworks that support development of thick clients. Tim had a sweet spot for three templating frameworks: doT, Jade and Handlebars.js (where Handlebars > Hogan > Mustache).
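
All three libraries share the same basic shape: compile a template string once into a function, then call it with data. Here’s a dependency-free sketch of that idea — the `compile` function below is mine, not any of those libraries’ actual APIs:

```javascript
// A toy template compiler: turns "Hello, {{name}}!" into a render function.
function compile(template) {
  return function render(context) {
    // Replace each {{key}} with the matching value from the context object.
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
      return key in context ? String(context[key]) : '';
    });
  };
}

var greet = compile('Hello, {{name}}! You have {{count}} messages.');
greet({ name: 'Tim', count: 3 }); // "Hello, Tim! You have 3 messages."
```

The real libraries differ mainly in syntax and in how much logic the template language allows, which is exactly the Handlebars > Hogan > Mustache trade-off Tim described.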

Rob Hawkes
HTML5 technologies and game development

Rob stole the show in terms of general inspiration, with his love of games and how it helps build better online experiences, in particular in browser technologies. The vision he presented was a world where the browser platform, particularly on mobiles, extends the gaming experience from the desktop world instead of only partially emulating it. Rob mentioned a few interesting APIs/concepts/products worth checking out, like Web Workers, the Pointer Lock API and TinkerCAD. Rob works for Mozilla, the not-for-profit software foundation that gets an amazing amount done with only 600 employees, only half of whom are developers. If you want to see what’s coming up, check out Firefox Aurora; for what’s being worked on right now, Firefox Nightly; and if you want to get in touch with anyone at Mozilla, find them on IRC.

The conference wrapped up at The Carlton down Bourke Street with an awesome after-party where beer-fuelled discussions could run rampant.

Web Directions Code Melbourne 2012 – Day 1

I spent today (23rd May 2012) at Web Directions Code as a first-time attendee of the Web Directions conferences, and this is the first Web Directions Code (at least in Australia or Melbourne).

web-directions-code-logo

It was a great day: a combination of great speakers, face chocolates (see below) and a single track taking the stress out of selecting which presentation to go and see. It’s a two-day event, so I’m quite excited to be going back tomorrow. (Here’s my Day 2 wrap-up.)

Face Chocolates & Eat Play Code

Faruk Ates
The Web’s 3rd Decade

The key take-away is this slide, with the important message that better tools are clearly what’s missing right now, making web development less productive than it should be; we should be further along in terms of getting the basics done. That comes from achieving the other two items: more involvement, along with making it clearer how to use and integrate existing frameworks and/or toolsets.

Dave Johnson
Device APIs: closing the gap between native and web

Dave spoke about the technical challenges in building PhoneGap, the most difficult being those around security, sandboxing, privacy permissions, and performance. Basically it’s hard work: many devices, many lacking features, in particular around the new HTML-based video and audio.

Damon Oehlman
HTML5 Messaging

Damon covered a lot of technical detail around messaging in HTML5, listing two main types, ‘post messaging’ and ‘channel messaging’, and stating that post messaging is simpler to get going with. He went on to discuss WebSockets and some example frameworks like Socket.IO and SockJS. He demonstrated posting messages via his presentation and created a WebSocket connection to Twitter to receive messages as soon as they arrived, then briefly touched on Web Intents.
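
The ‘post messaging’ side is small enough to sketch. The `createMessageHandler` wrapper below is my own naming, not a real API; only the standard `message` event and `window.postMessage` are real, and the origin check is the one thing every listener should have:

```javascript
// Receiving side of window.postMessage. Always validate event.origin
// before trusting event.data -- any page can post a message at you.
function createMessageHandler(trustedOrigin, onData) {
  return function handleMessage(event) {
    if (event.origin !== trustedOrigin) return; // ignore untrusted senders
    onData(event.data);
  };
}

// In a browser you would wire it up like so:
// window.addEventListener('message',
//   createMessageHandler('https://example.com', function (data) {
//     console.log('got', data);
//   }));
// And the sender: otherWindow.postMessage('hello', 'https://example.com');
```

Writing the handler as a plain function also means it can be exercised without a browser, by calling it with a stubbed event object.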

Andrew Fisher
Getting all touch feeling with the mobile web

Andrew walked us through the basics of touch, with some nice demonstrations and a walk-through of how the common touch mechanics we’re all familiar with actually work.

Silvia Pfeiffer
Implementing video conferencing in HTML5

Silvia had a very impressive setup: some node.js server logic coordinating two browsers connecting to each other to perform a video-conferencing call on stage between her and the audience.

Anson Parker
The HTML5 History API: PushState or bust

Anson gave us a neat little round-up of the History API and how it works, and took the opportunity to remind us that companies like Twitter, with their hash-bang URLs, are breaking the expected behaviour of the web. He gave an example of how much data is delivered regardless of whether you’re requesting to view a single 140-character tweet or their tweet stream (typically paged at 10–20 tweets). He wrapped up with a cool demonstration of a site he’s in the process of building that looks very promising: kahzu.com

excessive twitter download
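
The pattern Anson advocated instead of hash-bangs is straightforward; here’s a minimal sketch (the function and parameter names are mine; only `history.pushState` and the `popstate` event are the real APIs):

```javascript
// pushState navigation: change the address bar without a full reload,
// then render the new content ourselves.
function navigate(history, render, path, state) {
  history.pushState(state, '', path); // updates the URL, no page load
  render(state);                      // we're responsible for the content
}

// In a browser, back/forward are handled via popstate:
// window.addEventListener('popstate', function (e) { render(e.state); });
// navigate(window.history, render, '/tweets/123', { id: 123 });
```

Passing `history` and `render` in as parameters keeps the function testable outside the browser with stubbed objects.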

Tammy Butow
Fantastic forms for mobile web

Tammy walked us through the basics of building suitable input forms for mobile devices, taking advantage of input types such as ‘tel’ to bring up only the numeric keypad; all the tips are up on SlideShare.

Max Wheeler
Drag and drop and give me twenty
Max walked the audience through drag and drop concepts.

John Allsop
Getting off(line)

John overwhelmed us with a flood of information about the complexities and pitfalls of working with AppCache. He walked us through sessionStorage and localStorage and the trade-offs of what you can store, how much space is available, and the security concerns (in particular when browsers crash). He covered a great deal, so here’s a link to an older version of his presentation from a previous Web Directions conference (I’ll update with a new one when I find it).
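
Since storage so often fails at runtime (quota limits, private browsing, crashed state), a defensive wrapper is worth having. This sketch is mine, not from John’s talk; it takes the storage object as a parameter so it works with either `sessionStorage` or `localStorage`:

```javascript
// Storage helpers that degrade gracefully instead of throwing.
function safeSet(storage, key, value) {
  try {
    storage.setItem(key, JSON.stringify(value));
    return true;
  } catch (e) {
    return false; // QuotaExceededError, storage disabled, etc.
  }
}

function safeGet(storage, key, fallback) {
  try {
    var raw = storage.getItem(key);
    return raw === null ? fallback : JSON.parse(raw);
  } catch (e) {
    return fallback; // unreadable or corrupted entry
  }
}

// Browser usage: safeSet(window.localStorage, 'prefs', { theme: 'dark' });
```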

Developing with AppCache

Divya Manian
Designing in the browser

Divya made a very clear and strong case for a process that involves designers writing code and building prototypes sooner rather than later (rather than not at all), advocating moving from the initial rough paper sketches straight to prototyping using a fair few useful tools (note these are still rough notes; I’ll follow each up and link directly).

Tools / resources for prototyping

designing in the browser

To top off a great day, we each got a gift from BlackBerry: a PlayBook.

blackberry playbook

One last thing: the graphic of the day to get the most reaction (excluding the Courage Wolf ones).

eating clag

CoffeeScript, Jasmine tests with Cassette and Mindscape Web Workbench Visual Studio Extension

Wow that’s one hell of a title, I couldn’t make it any shorter, but that’s everything we’re dealing with in this post.

Some Background

We’ve got an ASP.NET MVC 4 web application, and we’re using Cassette to bundle and minify the JavaScript and CSS files. The reason is that the new MVC 4 Beta 2 bundlers didn’t work right away, and we already had Cassette configured.

Cassette further solidified its place as a tool of choice for us when it introduced Jasmine bundling. We can just point a Cassette bundler at a location in the project that contains the spec files (Jasmine tests) and be done, without having to worry about wiring up any spec helper files with paths, etc. The current version (1.2.0) is great, but doesn’t yet handle CoffeeScript compilation in an efficient way.

You only have to remember the standard approach of using

/// <reference path="your-js-file.js"> 

Wanting to use CoffeeScript

Now we’re also jumping on the CoffeeScript bandwagon *because it’s just JavaScript*™, and we want our code to be neat, elegant and tested; CoffeeScript looks like it will help with that. The last thing holding me back from going all-in on CoffeeScript was getting it smoothly into Visual Studio and, if necessary, our build process.

Turns out that’s very easy if you just use Web Workbench from Mindscape, a free extension for Visual Studio. Go here for their getting-started guide.

How we’re using Web Workbench

Our primary desire is just to have the CoffeeScript compiled for use without fuss, including speed. Web Workbench helps in this department, along with creating the .js files nested under the CoffeeScript files. We are just writing our client-side code in CoffeeScript and want the process as simple as possible.

The syntax highlighting is helpful while in Visual Studio, and will be even more helpful once it works in Visual Studio 11; we’re already spending a reasonable amount of time in VS11.

As I mentioned earlier, Cassette already takes care of “packaging up” all our JavaScript files, so to combine our newly auto-compiled CoffeeScript logic it was as simple as setting up a Cassette bundle pointing to the location of the Jasmine spec files.

Download the Web Workbench here.

The Cassette side of the story
The most basic configuration to get the Jasmine tests bundled and set up to run with the least effort is to just tell Cassette where they are with this line:

   bundles.Add("JavaScript/Specs");

That snippet lives inside the CassetteConfiguration.cs file installed by its NuGet package.

public class CassetteConfiguration : ICassetteConfiguration
{
   public void Configure(BundleCollection bundles, CassetteSettings settings)
   {
      //all the other cassette configuration code

      bundles.Add("JavaScript/Specs");
   }
}

Putting it all together

So here’s what the CoffeeScript code looks like inside Visual Studio (v11).

The actual function we’ll be testing:

The test:

Here’s how Web Workbench presents the CoffeeScript files with their compiled JS:

The test runner, and the url path, this is what Cassette helps make simple:

Summary

That’s it. We’ve written some logic in CoffeeScript and unit tested it with a Jasmine spec also written in CoffeeScript; Web Workbench handles the compilation as we type, and Cassette puts it all together to display and run in the browser via the bundle URL.

If you want to see the Jasmine side of things in action (the repository doesn’t have the CoffeeScript changes finalised yet), check out the sample code up on this GitHub project.

Error message 800704a6 as part of creating an instance of the COM component

This will be a very short post, born of rage against Windows Server and the Windows Update system. Because there were outstanding Windows Updates requiring a reboot, a particular build script of ours was falling over without suitable information as to why.

The cryptic error was:

Creating an instance of the COM component with CLSID {0002DF01-0000-0000-C000-000000000046} from the IClassFactory failed due to the following error: 800704a6.

The answer (at least for us):

Reboot!

There were Windows Updates getting in the way of instantiating new COM objects.

You’ll see a lot of people having a similar problem; I’m not sure if a reboot is the answer for them all, but make it your first step, and memorise the error code 800704a6. I’m unable to verify this, but it looks to correspond to the error ERROR_SHUTDOWN_IS_SCHEDULED.

The ServerFault post that helped most:
serverfault.com/q/ie8-script-error-800704a6

Stackoverflow Questions

stackoverflow.com/watin-nunit-and-cruisecontrol-net-error-message-800704a6
stackoverflow.com/tests-fail-sporadically-using-cruisecontrol-net-with-nunit-error-800704a6
stackoverflow.com/setup-method-failed-while-running-tests-from-teamcity
stackoverflow.com/failed-due-to-the-following-error-800704a6-while-trying-to-read-data-from-a-text-file

Unit Testing JavaScript methods that contain jQuery ajax calls

Objective

Unit test a JavaScript method which contains a $.ajax() call. Using QUnit.

Details

This was supposed to be a simple task, and if I hadn’t had a few (now) obvious faults in my JavaScript code, it would have been completed more quickly. I ended up answering my own StackOverflow question the next day; here’s the one in question for the curious.

I will hide behind the excuse of it being the early days of the new year and still finding my coding groove after almost two weeks on holiday…

So in this post I’ll summarise the two approaches I ended up with. It’s two because neither worked for me the first time, and in attempting to solve the first I discovered the second; in the end I resolved both. I won’t bother going into much detail about QUnit, as there are many posts out there about it: here, here and here, official documentation here, and other interesting things to do with it.

As a quick aside we use the NuGet package NQUnit.NUnit to help us integrate QUnit into our Visual Studio projects.

Basic Approach

Solution 1 – basic way as is shown on this StackOverflow answer

// Arrange
var options,
    jsonText = '{"id": "123", "fieldName": "fbs", "data": "fbs-text"}',
    expectedData = JSON.parse(jsonText);

// Set up to 'capture' the ajax call (this forms the mock)
$.ajax = function (param) {
    options = param;
};

// Act - call the method which is 'under test'
/* ... */

// Call the success (or failure) callback to complete the mock handling of the 'response'
options.success(expectedData);

// Assert - verify the state of the system is as expected
/* ... */

Alternate Approach MockJax

Solution 2 – using the MockJax library with a great walk-through on how to use it here.

There are several advantages to using MockJax, which can be summarised as having more control over mocking the ajax portion of the method under test, including, but not limited to: timeouts, introducing latency, and returning HTTP status codes.

After including MockJax in your project, the solution 1 code is replaced with a single call to $.mockjax(), and looks like this:


// Arrange 
    var jsonText = '{"id": "123", "fieldName": "fbs", "data": "fbs-text"}';

// The call to mockjax replaces the need to overwrite the jQuery ajax() method
$.mockjax({
    url: '/Your/Regular/Action',
    dataType: 'json',
    contentType: 'application/json',
    responseText: jsonText
});

// Act - call the method which is 'under test'
/* ... */

//perform test assert state of system is as expected
/* ... */

Demonstration

Full QUnit test code is up as a Gist.

Find a working copy of the code in its simplest form at: jsfiddle.net/NickJosevski/9ZZmc/4/

Tweaking a VS2010 plugin to run JSLint in the command line

We’ve gone to some lengths at work to automate, and to have a continuous-delivery-style pipeline for all things build and deployment related. It’s well on its way, but not ‘perfect’ yet. Maybe perfection isn’t attainable, or maybe it’s a red button on the desk that, when pushed, does *everything*. Nonetheless, aiming for perfection will continue to drive us to improve.

So here I want to discuss the steps I took to get a nice JSLint Visual Studio plugin to form part of our build process. It took a bit of fussing about to get it working, and it’s an example of still being imperfect, but for now it serves the build pipeline well enough.

If you don’t use JSLint for your JavaScript code, you probably should. It’s a static analysis tool that analyses and reports on broken coding rules in JavaScript. Try it out on its creator Douglas Crockford’s site, JSLint.com.

There is an easily accessible Visual Studio plugin called exactly what you would expect “JSLint for Visual Studio 2010” and here is the direct link for it on the VS Gallery for your installation pleasure.

If you do anything and stop reading here you’ve done well, install it and happy JavaScripting.

JSLint for Visual Studio 2010 Extension

But what about continuous integration?

For us it wasn’t enough, we wanted our build process which runs as psake scripts to fail if JSLint rules were broken.

Just for a bit more completeness on the psake digression, and how the command-line tool executes under MSBuild, here’s a snippet of the ‘targets’ code referenced by the .csproj files that contain JavaScript. The input parameter to the executable is the directory containing the script files, relative to the .csproj file.

<Target Name="AfterBuild"> 
    <Message Text="Running jslint..." Importance="Normal" /> 
    <Exec Command="&quot;..\jslint\JSLint.CLI.exe&quot; .\Scripts\ " /> 
</Target>

So the search for command-line tools began. I found some existing ones: some had complex dependencies, others seemed more ‘primitive’, i.e. they did not report the errors the IDE-based JSLint plugin was reporting.

There were two main objectives driving the choice of tool:

  1. Similarity and accuracy to JSLint.com and the Visual Studio extension
  2. Ease of setup

There was some discussion on StackOverflow, here and here, but nothing I dug into seemed suitable.

I then had an idea…

Take the core logic of the Visual Studio extension and wrap it in a very simple console application to execute as part of the build process.

With the approach I took, it seems even objective 2 was difficult to achieve (it consumed some time), but at least it had the fewest external dependencies of the options out there.

The very first step was to obtain and install the VS2010 SDK, as this was a project which referenced many interop assemblies for interacting with the IDE. It needs to be the Service Pack 1 SDK, in fact; here’s a direct link. Once I was able to compile the extension, it was then a matter of understanding how it operates and how to access a method to perform the ‘linting’.

There were 2 major hacks to get access to some inner workings of the extension to operate:

  1. Making some ‘protected/internal’ methods and properties public
  2. Modifying where the JSLint logic obtained the settings file from (JSLintOptions.xml).

Locating JSLintOptions.xml proved somewhat difficult at first, as it was tucked away in the Roaming section of my user folder on Windows (\Users\*\AppData\Roaming\). These hacks could greatly benefit from some refactoring effort if I ever have the time, or someone else is so inclined. After an initial attempt to refactor out the most core logic, things fell apart in the land of SDK dependencies, so I rolled back and opted for the less elegant approach of the two hacks listed above.

The logic for the console application is then trivially simple:

  1. Read .js files.
  2. Using the JSLinter class method Lint(), supply the .js file content.
  3. Write errors to console
  4. Return error code, 0 (Success), 1 (JavaScript warnings), etc.

Show me the code!

If you want to see the hacks I had to make to get this to work head on over to this git repository on BitBucket. I offer no warranty it may be fragile so manipulate with caution.

If you just want the command-line executable, it’s built and stored in the repository in the /output folder; if I make any updates, I’ll also update that executable.

Areas for improvement in the console app:

  • Taking path locations as input parameters/external file of locations
  • Taking alternate settings files for rules to ignore/include
  • General tidyup

Outstanding issue

The final hiccup is not directly related to use of the command-line tool itself, but to building it from source as part of a more comprehensive, all-inclusive build process. Adding commands in the .csproj file’s post-build settings was not sufficient; the build and copy of the executable needs to run as a directive in the ‘AfterBuild’ target of the .csproj file.

CSProject Settings Dialog - Post Build Events

Not suitable to place actions in 'post build' section

This led to a conflict between MSBuild and the VS2010 SDK, which was very frustrating and isn’t solved yet. The question is up on StackOverflow.

Generic Personality Tests For Software Engineers

I rarely rant on my blog, but when a friend of mine mentioned he had to take a personality test as part of an interview process (a step prior to being given an offer), it frustrated me so much. So at this point, dear reader, you may move along if this isn’t of interest, as what follows is all my opinion on personality tests for software engineers.

Summary and Disclaimer
Unless your organisation has thought long and hard about designing a personality test specific to software engineers, don’t subject candidates to a generic personality test. It reflects poorly on your understanding of software engineers.

If you use a generic personality test…

Q: Do you care about hiring and retaining the best software engineers?
A: A resounding – No.

Q: Do you understand what it takes to be a good software engineer?
A: A clear – No.

Q: Does your organisation care about the previous two questions?
A: If no, then fair enough. Continue using the test, and move along.

Otherwise:

You have to be kidding me. When a professional organisation subjects any reasonably qualified software engineer to the same generic tests they give potential employees in other fields, then either they are not concerned with hiring and retaining the best, or they have been ill-informed on how to recruit top software engineers. What got me fired up was the type of questions he described: they were so broad and irrelevant to what it would take to do his job. The frustration was compounded when it became apparent that this test carried significant weight in the recruitment process. They didn’t even bother with the more suitable programming/problem-solving exercises an engineer would actually do day to day in this job role.

At this point you may counter my argument with a general statement such as “why not just use any and all tools available to help make a decision?”. To this I answer: such questions are not relevant enough to accurately judge a good software engineer, and do more harm than good.

Here are some categories we derived from the discussion after he sat the test, as we analysed the questions that invoked the anger feeding this post.

  1. Questioning the norm.
  2. Long standing ideas.
  3. Repetitive routines.
  4. Breaking rules.
  5. Data analysis.
  6. Being creative.

Here we are both speculating a bit, but the questions in the first four categories seemed designed to discover candidates who would be deemed rebellious. Answering these questions as a software developer, you have to completely put aside what makes you a good software developer. I would give credit if the organisation were actively seeking ‘rebellious’ software developers ready to challenge the norm and bring improvements, but based on their sector and other information this seems highly unlikely. The questions matching the last two categories seemed reasonable.

A very brief search uncovered this research paper, a very SDLC- and waterfall-focused analysis of personalities (PDF); there didn’t seem to be any application of computer science knowledge or research in this personality test. In fact, several questions were difficult for a typical software engineer even to interpret.

If I were subjected to such a test as part of an application, where the test wasn’t clearly justified as relevant to engineers, it would be safe to say that then and there, to avoid the stress, I would decline and withdraw my candidacy.

Take away: treat your engineers with a bit more respect.

Perfect Password Paragraphs

Over the last few months, at least in the streams of information I typically consume, several sources have commented on passwords, their strength, and the need for better ones. Directly: the Security Now topic of Password Haystacks, xkcd’s comic, and Coding Horror. Indirectly: Scott Hanselman, one and two.

In this post I am putting forward a novel approach which is an homage to GRC’s Perfect Paper Passwords, and accordingly I have titled it:

When high-entropy 16-, 32-, 64- or even 128-character passwords are just not secure enough!

Let’s jump right in with a sample; here I’ve mocked up the very familiar Facebook interface with a nice large textbox to put in your Perfect Password Paragraphs™.

Perfect Password Paragraphs facebook log in modified

Disclaimer: if you’ve gotten this far and haven’t begun to appreciate the humour I’m so sorry, please don’t send me hate mail.

Features:

  • A big text area where, with probable difficulty, you have to type 100+ words to authenticate.
  • Typographical errors are ok as long as they are consistent for you.
  • A flow of sentences following a theme/style just needs to sound like the individual attempting to gain access.
  • “Sound Like” is a trademark (patent indefinitely pending) of Josevski Research Corp, and is the flux-capacitor-grade specialty of this authentication system.

Comparison metrics:

  • Writing style
  • Choice of punctuation, frequency of commas, periods, etc.
  • Grammar choice.
  • Spelling (American vs British English).
  • Consistency of spelling errors.
  • Choice of tense (present, past, and future)

Future Features based on demand:

  • International support.
  • 1337 sp34k.
  • Baby talk.
  • Obscure localised slang.
  • Pig Latin.
  • iOS, Windows Phone 7 and Android Support.

Alpha product coming online in 6-8 weeks 😉

Getting to know your machine, by building it

Back in September 2009, after a fair few months of thinking about what to put into a new PC, the stars aligned, and along with some friends I decided to take on the small project of building our own PCs. At that point I had never done it on my own (as in, assembling everything from scratch), but was well across how the hardware connected and what the steps were.

The systems back then were Core i7 920s @ 2.66 GHz, with 12 GB of RAM (6×2 GB), 2× 1 TB drives, and 2× NVIDIA GTX 275s (896 MB). At this point it’s obvious these machines were to be gaming rigs with some nice SLI action in the GFX department. They are overclocked to run the CPU at 3.5 GHz and have been running well for just over two years now.

Fast forward to August 2011. Almost 2 years on, and I’m at it again, but this time to build our primary software development machines at work.

This system consists of a Core i7 980 @ 3.2 GHz, with 24 GB of RAM (6×4 GB), an OCZ Vertex 3 SSD, and 1× NVIDIA GTX 560 1 GB.

The following is just a summary of my experience and that of those who took part in the builds with me. Your mileage may vary, and in no way is this a definitive guide, but there may be some useful tips and insights; either way, this is meant to be an entertaining post, full of hardware pictures.

Choice of parts
“Bang for buck” is obviously your best bet, which translates to: purchase what is important within your budget. This being 2011, very much the year of the SSD, that is where your first bit of attention should go.

In 2009, an SSD was well out of our price range, particularly with 32 GB models being the most popular and still not very affordable or even reliable. In 2011 that’s no longer the case, and by budgeting to ensure a great SSD we ended up with 240 GB OCZ Vertex 3 Max IOPS drives. These were at the top end of the SSD spectrum; the only things typically more expensive were the larger-capacity 500+ GB models which had just come out, and some other specialised SSDs.

ocz vertex box and on case tray

Ensuring part compatibility
This just takes research, as there isn’t much that can go wrong. The main concerns typically revolve around the capabilities of the motherboard, after you select the motherboard appropriate for your CPU chipset. Also worth double-checking is the physical space inside the chosen case. This bit us in early 2010, when we decided to build a similarly spec’d machine to the 2009 one but chose a newer, physically larger GFX card which did not fit the 2009 cases.

2009 Case -> Cooler Master CMStorm Scout
2011 Case -> Corsair Graphite 600 (black)

I found that just searching for reviews, and the combinations people are benchmarking online, makes a good guide to compatible choices.

motherboard Gigabyte G1.Guerrilla

Purchasing
Shop around for discounts if buying a larger quantity as a combined order. We went as far as shopping at multiple locations to get the best prices, and to ensure all the parts we wanted could be acquired exactly when build time came around. This may be a negative: if there are issues with multiple parts, you then have to deal with two or more businesses for exchanges. Luckily, in both the 2009 build and the recent one, there were no issues requiring a trip back to the store.

graphics card gtx 560 1gb

The Build
A quick checklist of the order of assembly. Both times, even after trying both approaches, I found that (at least for our cases) it was easier to assemble almost everything on the motherboard on the build table, not in the case. This is the order that worked well for us.

  1. Install the RAM onto the motherboard.
  2. Lock in CPU with thermal paste.
  3. Add your CPU heat sink, plus any rear of motherboard mounts.
  4. Wire up CPU fans into motherboard.
  5. Clip on RAM cooling fans (wasn’t a feature on my 2009 machine).
  6. Screw-in feet for motherboard into case.
  7. Bolt motherboard down into case.
  8. Insert GFX card.
  9. Run SATA cables for SSD/HDDs, BluRay/DVD roms
  10. Link up any case based connections (HDD lights, fans, power button, etc)
  11. Tidy up cables now and as you go.

Cable Management
I’m of the opinion that you should attempt to get all the internal cables tucked away as neatly as possible, to help with airflow and general appearance. In my machines, a strong touch of OCD helped get them into a near-perfect state of “out of the way”. Zip ties, twisty ties and making use of the case are your best friends here.

Cooling
Fans, fans and more fans. I don’t believe it’s necessary to go overboard with advanced alternative cooling tech, but to each his own.

In both the 2009 and 2011 systems we chose an aftermarket heat sink for the CPU. This was clearly the right choice in 2009; the stock fan that came with the Core i7 980 looked like it would do a good job, but nonetheless the dual fans and larger grill section of the Noctua NH-U12P are what’s cooling the overclocked 920s and 980s.

noctual cpu fan, and corsair ram fan

Testing the configuration
Double check nothing is out of place, and turn it on. Good luck! 😉

case assembly complete

Play
Start using the system. After you have your Operating System installed that is.

monitors