Pivot is a research product from Microsoft Live Labs, currently available from GetPivot.com. It uses Seadragon's Deep Zoom technology, also from MS Live Labs.
A quote from the Pivot team:
“We tried to step back and design an interaction model that accommodates the complexity and scale of information rather than the traditional structure of the Web.”
In practice, this translates into the ability to aggregate and interpolate large data sets.
If I’m losing you, there is a must-see five-minute TED 2010 video presented by Gary Flake.
Pivot itself is an advanced type of web browser that facilitates this, but…
It is many things:
- “The ability to slice and dice data.”
- “Allows for the whole to be greater than the sum of the parts.”
- “Lets you move from the tiniest detail, to the full scope of the entire set of the information.”
- “Build a connection to the data, and be immersed.”
- “Making large sets of data more approachable.”
- “Empowering people to create new types of collections, and new types of experience from the interaction.”
That last point struck a chord with me, and I will attempt to create a simple collection. A collection in this case is a set of items with common attributes, ranging from hundreds to thousands to millions of items.
Pivot Collection Types
Steps to building a collection:
- Select your content; determine your linking level (diagram above).
- Expose your data in a way that lets you build the collection; an OData service is one option.
- Build the CXML (Collection XML) file.
- Navigate to the CXML file.
- Or navigate to the hosted version of the data set. (See Pivot API for .NET)
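As a sketch of what the "Build the CXML" step produces, here is a minimal collection file. The collection name, facet categories, item, and image paths are all hypothetical placeholders, not part of any real collection:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal sketch of a Pivot collection: facet categories define the
     attributes shared by items; each Item carries values for them. -->
<Collection Name="Sample Books" SchemaVersion="1.0"
    xmlns="http://schemas.microsoft.com/collection/metadata/2009">
  <FacetCategories>
    <FacetCategory Name="Genre" Type="String" />
    <FacetCategory Name="Year" Type="Number" />
  </FacetCategories>
  <!-- ImgBase points at the Deep Zoom collection holding the item images. -->
  <Items ImgBase="books.dzc">
    <Item Id="1" Img="#0" Name="An Example Book" Href="http://example.com/book1">
      <Facets>
        <Facet Name="Genre"><String Value="Fiction" /></Facet>
        <Facet Name="Year"><Number Value="2009" /></Facet>
      </Facets>
    </Item>
  </Items>
</Collection>
```

The facets are what make the "slice and dice" interaction possible: Pivot filters and sorts the collection by these shared attributes.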
Resources and sources of quotes/images:
Just a quick update on my ongoing series of posts on using PLINQ with the Stack Overflow data dump.
In my initial post, where I outlined the core of what I was doing, the popular (and easily found) option at the time was to use a series of stored procedures made available by Brent Ozar to import the XML data into a SQL database.
Brent recently replied on the original post, tipping me off to an easier, more convenient way to get the data into SQL:
… There’s an even faster way to import the XML files now using Sam’s SoSlow.exe tool. You give it a connection string (including the database name) and it’ll create the tables and import the data. Just FYI – it doesn’t warn you, but it does delete and recreate the import tables every time. It’s dramatically faster too.
I’m all for an “easier” and “better” approach, so I gave it a try.
The first step was to get a copy from Sam Saffron‘s GitHub repository.
It is a small C# WinForms application with three buttons, so using it is very simple, and it suits the equally simple layout of my PLINQ demo application.
All the data was imported in under 15 minutes (results will vary depending on your machine configuration). This will help keep the data more up to date when the next public release of the data set is made available.
I have a long-standing side project of running PLINQ performance tests on the Stack Overflow data dump.
Here’s an up-to-date list of those blog posts:
The source code for the demo app is available on GitHub.
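For anyone new to the series, the kind of query the performance tests time looks roughly like this sketch. The `Post` type and the generated data are hypothetical stand-ins for the imported Stack Overflow tables, not the demo app's actual code:

```csharp
using System;
using System.Linq;

// Hypothetical stand-in for a row from the imported Posts table.
record Post(int Id, int Score, string Tags);

class PlinqSketch
{
    static void Main()
    {
        // Fake data standing in for the Stack Overflow dump:
        // even ids tagged "c#", odd ids tagged "sql".
        var posts = Enumerable.Range(1, 1_000_000)
            .Select(i => new Post(i, i % 100, i % 2 == 0 ? "c#" : "sql"))
            .ToArray();

        // The same LINQ-to-Objects query becomes parallel by
        // inserting AsParallel() at the start of the chain.
        var topCSharp = posts
            .AsParallel()
            .Where(p => p.Tags.Contains("c#") && p.Score > 90)
            .Count();

        Console.WriteLine(topCSharp);
    }
}
```

The performance comparisons in the series boil down to timing a query like this with and without the `AsParallel()` call over the real imported tables.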