PLINQ “Grok Talk” at Developer Developer Developer Melbourne

I gave a very quick, chock-full-of-ramblings talk summarising Parallel LINQ (PLINQ) at the weekend's Developer Developer Developer (DDD) Melbourne event.

First up, DDD Melbourne was great, thanks to all the sponsors (NAB, Readify, DevExpress, Pluralsight, JetBrains, Redgate), the presenters and key organisers Alex, Mahesh and others.

The message I wanted to get across was: have a look at the Parallel Extensions in the Task Parallel Library of .NET. It can help speed up some of the longer-running tasks that might exist in your application, and it's easy to adopt. Check out the Parallel Extensions team's MSDN blog for the latest material.
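To show how small the change typically is, here's a minimal, self-contained sketch (the IsExpensive check is a made-up stand-in for real per-item work, not part of any demo in this post):

```csharp
using System;
using System.Linq;
using System.Threading;

class PlinqSketch
{
    // Hypothetical stand-in for a longer-running per-item check.
    static bool IsExpensive(int n)
    {
        Thread.SpinWait(10000); // simulate some CPU work per element
        return n % 7 == 0;
    }

    static void Main()
    {
        var numbers = Enumerable.Range(0, 100000);

        // Sequential LINQ:
        var sequential = numbers.Where(IsExpensive).Count();

        // The same query spread across cores, just by adding AsParallel():
        var parallel = numbers.AsParallel().Where(IsExpensive).Count();

        // Same answer, typically less wall-clock time on a multi-core box.
        Console.WriteLine(sequential + " == " + parallel);
    }
}
```

The only difference between the two queries is the .AsParallel() call; PLINQ takes care of partitioning the input and merging the results.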

The intent of this quick post is to clarify what I was rambling on about, and to offer some links to older posts and my PowerPoint slides, which would have made my talk go a little more smoothly.

*Note: This is in fact demo-ware, built just to perform PLINQ benchmarks.

PLINQ on the StackOverflow Data-Dump Series

I have a long-standing, ongoing side project of applying PLINQ performance tests to the Stack Overflow data-dump.

Here’s an up-to-date list of those blog posts:

The source code for the demo app is available on GitHub.

Overclocking PLINQ

Today I finally got around to tweaking my new PC's settings to achieve a small CPU overclock. I have an Intel Core i7 920 that by default runs at 2.66 GHz; it's now running at just over 3.00 GHz. What's this got to do with .NET, you ask?

Well, I thought I would post a simple follow-up to my September 2009 entry about PLINQ and the Stack Overflow data-dump, where at the time I was using a Core 2 Duo at 2.53 GHz to run PLINQ vs LINQ speed-up tests.

Before the Core i7 speed bump took place I ran the PLINQ queries and recorded the results; now, after the overclock, I have some further improved times.

Note: The standard LINQ queries also take advantage of a faster CPU even when using only a single core.

As an up-front summary: the speed-up of the PLINQ query over the LINQ query across the 4 physical cores of the Core i7 averages out to a factor of 3.79.

This is a great result, since it's reasonably close to 4; the parallelisation overheads are what prevent us from obtaining a pure number-of-cores performance multiplier.

The key data-processing experiment I'm performing here runs over approximately 265,000 rows of Stack Overflow question data. After being extracted and stored in memory, some kind of data manipulation is run (one that ideally can benefit from being distributed across several cores). Referring back to the original post, it's really just a trivial calculation of the number of tags on each question that are also listed in the question's body text.

So I ran the LINQ and then the PLINQ queries on the approximately 265,000 rows of data and averaged the results. Mind you, the results are fairly consistent, almost always varying by under 500 milliseconds. I quickly whipped up an Excel chart to visually summarise the time taken to complete the LINQ and PLINQ queries.

Core i7 PLINQ Timings (times are in seconds)

To summarise what was already known before we began (based on my dual-core tests in September 2009):

  • More cores = better PLINQ execution time.
  • And higher core speed = better execution time.
  • Combining the two is even better.

Exceptions in (my) LINQ (presentation)

Last night I presented to the Melbourne Patterns & Practices group. Thanks to my audience for paying attention, offering great input and asking interesting questions. I would like to clarify some things I glossed over in the PowerPoint slides, and explain why some of the simple extension methods didn't execute in the live code demos. I have also posted the pptx file here.

The first thing I did not explain in enough detail was the Exception Handling slide, where I was using the ‘let’ keyword in a LINQ statement. The question was along the lines of: what benefit does the exception handling gain from the use of let in a LINQ query? To clarify, the let keyword introduces a new range variable as part of the LINQ query. This range variable can then be used in the projection, for example to shape an anonymous type. I incorrectly tied the explanation of let to the point I was trying to make about handling exceptions. The key take-away is that because the query uses deferred execution, any exception handling needs to be wrapped around the code that performs the execution. So have the try { } catch (Exception ex) { } surround the processing code, not the query definition.
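To make the deferred-execution point concrete, here's a small sketch (the input list and Parse helper are made up for illustration). Defining the query never throws; the exception only surfaces during enumeration, so that's where the try/catch belongs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExceptions
{
    // Hypothetical helper that can fail on bad input.
    static int Parse(string s)
    {
        return int.Parse(s); // throws FormatException on "oops"
    }

    static void Main()
    {
        var input = new List<string> { "1", "2", "oops", "4" };

        // Defining the query is safe; nothing has executed yet.
        var query = from s in input
                    select Parse(s);

        // The try/catch must wrap the *enumeration*, not the definition.
        try
        {
            foreach (var n in query)
            {
                Console.WriteLine(n);
            }
        }
        catch (FormatException ex)
        {
            Console.WriteLine("Bad input: " + ex.Message);
        }
    }
}
```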

There’s a great post about using ‘let’ in a LINQ query by Greg Beech that goes into greater detail. This topic led to a question about what would happen to processing when an exception did occur.
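For a quick self-contained illustration of let introducing a range variable that both the filter and the projection can reuse (the word list here is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LetExample
{
    static void Main()
    {
        var words = new List<string> { "plinq", "linq", "parallel" };

        // 'let' computes the length once, names it, and makes it
        // available to the rest of the query.
        var query = from w in words
                    let len = w.Length
                    where len > 4
                    select new { Word = w, Length = len };

        foreach (var item in query)
        {
            Console.WriteLine(item.Word + ": " + item.Length); // "linq" (length 4) is filtered out
        }
    }
}
```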

Another issue that came up during live code tweaking was that certain extension methods, in particular .Reverse(), didn't seem to compile. I have been unable to recreate the issue; quite possibly it was some weird state in Visual Studio 2010 Beta 1, in which case a clean and rebuild was the solution.

But here is the final, very simple code that reads a directory and outputs the file names in reverse order:

var xmlFilesQuery =
      from fileInfo in Directory.GetFiles(sourceDirectory) // sourceDirectory: placeholder for the path to scan
      where fileInfo.Contains(".xml")
      select fileInfo;

foreach (var fileName in xmlFilesQuery.Reverse())
{
    Console.WriteLine(fileName);
}
The last clarification point was a scenario where the PLINQ execution of a task, compared to its LINQ execution, offered a speed-up of 2.18 times. I'm not sure what state the application was in to allow that. I'll do some investigation and, based on how complex the cause turns out to be, either update here or create a new post.

Greater Than 2x Speed Up (On Dual Core Machine)

Playing with PLINQ Performance using the StackOverflow Data Dump

Not having made use of PLINQ in an actual product yet, I decided to have a play with how it works, and to try to obtain my own small metrics on its performance benefits. PLINQ is part of a larger push from the .NET teams at Microsoft to get concurrent/parallel processing out of the box in your C# and VB.NET code. As for performance analysis, there are already some great posts out there, not just from the Parallel team at Microsoft but also great breakdowns with nice charts such as this one.

Right off the bat, I'd like to stress that adding .AsParallel() to your code won't magically speed it up. Knowing this, I still had unrealistic expectations when I began creating a demo specifically to show performance improvements. Often enough the level of processing I was performing (even on larger sets of data) did not benefit from being made concurrent across 2 cores. The variation in my results leads me to believe part of the issue is also the ability to obtain enough resources to make effective use of 2+ cores: for example, running out of the 4 GB of RAM I have available, or interference from other processes on the machine (Firefox, TweetDeck, virus scanner).
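Here's a sketch of the kind of query that gains little or nothing from .AsParallel() (exact timings will vary by machine; the point is only that partitioning the data and merging the results have a cost that trivial per-element work can't repay):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class CheapWorkDemo
{
    static void Main()
    {
        var numbers = Enumerable.Range(0, 1000000).ToArray();

        var sw = Stopwatch.StartNew();
        var a = numbers.Where(n => n % 2 == 0).Count(); // trivial per-item work
        sw.Stop();
        Console.WriteLine("LINQ:  " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        var b = numbers.AsParallel().Where(n => n % 2 == 0).Count();
        sw.Stop();
        // Often no faster, and sometimes slower: the work per element is too
        // cheap to pay for the parallelisation overhead.
        Console.WriteLine("PLINQ: " + sw.ElapsedMilliseconds + " ms");
    }
}
```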

My starting point was re-creating the “Baby Names” demo Scott Hanselman previewed at the 2009 NDC Conference in his great presentation “Whirlwind Tour of .NET 4“. I first got hold of the preview code samples for PLINQ from back in 2008, which were part of the Parallel Extensions CTP.

I then went on to create, from scratch, my own simple PLINQ Windows Presentation Foundation (WPF) application.

I chose WPF to test a small feature I hadn't made use of yet, only because I happened to stumble upon it that day: Routed Events (see this StackOverflow question).

Once I completed my take on a LINQ processing demo, based on 2 minutes of video showing the operation of ‘Baby Names’, I discovered (by accident*) the Visual Studio 2010 and .NET Framework 4 Training Kit – May Preview, which contains the demo code for what I was trying to re-create.

*The accident in which I discovered the Training Kit: I actually performed a Google image search on the term ‘PLINQ’ to see what came up, looking for ideas for a graphic to add to this post. The 11th image (centre screen) was the baby-name graph displayed in the Whirlwind Tour of .NET 4 presentation. The post that had the image was from Bruno Terkaly, and the post was about the Training Kit. Great!

VS 2010 Training Kit May Preview

Nonetheless, my not-as-polished demo application makes use of the StackOverflow Creative Commons data dump (specifically the Sep 09 drop).

Some background: I grabbed the StackOverflow data dump via the LegalTorrents link, then followed this great post from Brent Ozar, where he supplies code for 5 stored procedures to create a table schema and import the XML data into SQL Server. It was as simple as running them, then writing 5 exec statements and waiting the ~1 hour for the data to load (resulting, for me, in a 2.5 GB database).

The way I structured a lengthy processing task that can benefit from parallel processing is by making use of the Posts data (questions and answers), in particular questions with an accepted answer. Through a repetitive, simple string-comparison process, I attempt to determine how valid the tags on each question are, by scanning the question text for the tags and counting their frequency. I then time the sequential operation vs the parallel operation as I pipe varying amounts of data into the function.

First I extract the data into memory from SQL Server (using LINQ to SQL entities). Just a note on the specifics of the SO data-dump structure: ‘Score’ is a nullable int, so to keep the data set down in volume I select posts that have a score greater than a selected input (usually 10+, meaning at least a few people liked it), and similarly a reasonable number of views (on average 200+).

private IEnumerable<Post> GetPosts(int score, int views)
{
    var posts = from p in db.Posts
                where (p.Score ?? 0) > score
                   && p.ViewCount > views
                select p;

    return posts.ToList();
}

The next step was to create a function that would take some time to process and, of course, potentially benefit from being run in parallel. Each post and its tags are operated on in isolation, so this is clearly prime for separation over multiple cores. Sadly, my development laptop only has 2 cores.

private bool IsDescriptive(Post p)
{
    //lengthy boring code
    //pseudocode instead:

    var words = extract_all_unique_words_from_the_post();
      //excluding punctuation
      //and other formatting details (markup).

    var tags = extract_tags_from_post();

    return were_the_tags_used_enough_in_post(words, tags);
}

Note: A more sophisticated algorithm here could actually help determine (and recommend) more appropriate tags based on word frequencies, but that's beyond what I have time to implement for performance-testing purposes. It would need to know to avoid commonly used words such as ‘the’, ‘code’, ‘error’, ‘problem’, ‘unsure’, etc. (you get the point). It would then need to go further and know which words actually make sense to describe the technology (language/environment) the Stack Overflow question is about.
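Purely as a rough sketch of the stop-word idea (the word list, helper and sample text below are hypothetical, not part of the demo app):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class TagSuggestionSketch
{
    // Hypothetical stop-word list; a real one would be much larger.
    static readonly HashSet<string> StopWords = new HashSet<string>
    {
        "the", "code", "error", "problem", "unsure", "a", "an", "and", "to"
    };

    static IEnumerable<string> SuggestTags(string postBody, int howMany)
    {
        // Rank the remaining words by how often they occur in the post.
        var frequencies = postBody
            .ToLowerInvariant()
            .Split(new[] { ' ', '.', ',', '!', '?' }, StringSplitOptions.RemoveEmptyEntries)
            .Where(w => !StopWords.Contains(w))
            .GroupBy(w => w)
            .OrderByDescending(g => g.Count());

        return frequencies.Take(howMany).Select(g => g.Key);
    }

    static void Main()
    {
        var tags = SuggestTags("the linq query runs the linq provider and the code", 2);
        Console.WriteLine(string.Join(", ", tags.ToArray()));
    }
}
```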

The parallel operation is applied to a ‘Where’ filtering of the data, and this is what the timing and performance reporting is based on, making use of System.Diagnostics.Stopwatch.

//running sequentially (ToList() forces the deferred query to execute):
var sequentialPosts = posts.Where(p => IsDescriptive(p)).ToList();

// vs making use of parallel processing:
var parallelPosts = posts.AsParallel().Where(p => IsDescriptive(p)).ToList();
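For a self-contained picture of the measurement, here's a sketch of the Stopwatch harness with a stand-in workload (the real demo uses Post objects and the IsDescriptive above; note the ToList() calls forcing the deferred queries to run inside the timed region):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class SpeedUpHarness
{
    // Stand-in for the real IsDescriptive(Post) string-scanning check.
    static bool IsDescriptive(int n)
    {
        Thread.SpinWait(20000); // simulate per-post work
        return n % 3 == 0;
    }

    static void Main()
    {
        var posts = Enumerable.Range(0, 20000).ToArray();

        var sw = Stopwatch.StartNew();
        var seq = posts.Where(IsDescriptive).ToList(); // ToList() forces execution
        sw.Stop();
        long seqMs = sw.ElapsedMilliseconds;

        sw = Stopwatch.StartNew();
        var par = posts.AsParallel().Where(IsDescriptive).ToList();
        sw.Stop();
        long parMs = sw.ElapsedMilliseconds;

        Console.WriteLine("Speed-up factor: " +
            ((double)seqMs / Math.Max(1, parMs)).ToString("F2"));
    }
}
```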

On average, for varying record quantities from 100k to 300k, this function making use of .AsParallel() would result in a 1.75× speed-up over the same function operating sequentially on a single core, which is what I was hoping to see.

All this was performed on a boot-from-VHD instance of Windows 7 (a great setup guide by Scott Hanselman is here) with Visual Studio 2010 Beta 1 and SQL Server 2008, so I do understand there was some performance hit (both from running off a VHD and from having SQL Server on the same machine). Even so, on average, well-suited PLINQ functions saw at least a 1.6× speed-up.

It's that simple. It doesn't do much yet, but there is potential for improved and more interesting data analysis, and for performance measurement of that too. I will make time to clean up the demo application and post the solution files in a future post, so stay tuned. When I get a chance I'll also investigate more of the data manipulations people are performing via data-mining techniques and attempt to re-create them, just for more performance tests. When I do, I'll be starting here.

That's it. I'll have a follow-up post with some more details, in particular the types of queries I had that did not benefit from PLINQ, once I get a chance to determine how they were flawed, or whether they simply run better on a single thread/core.

The source code for the demo app is available on GitHub.