This article originally appeared on the BeyeNETWORK.
Remember clickstream data? It was the data generated by every cursor movement and every selection of a new page on the Internet. Not so long ago, we were told that clickstream data was the key to the future. With clickstream data, you could tell what consumers were thinking. In our recent past, there was much ado about clickstream data, but what do you hear today? Absolutely nothing. Clickstream is stone-cold dead.
There were always problems with clickstream data. The first problem was that the data was too granular. To find out anything useful, you had to sift through mountains of useless data. It was estimated that, at best, 10% of clickstream data was ever useful (and even that was stretching it).
The second problem with clickstream data was that there was so much of it. One anecdote had it that at Christmas time, the busy season for retailers, a single retailer generated 4 to 5 terabytes of clickstream data per hour. That is a lot of data by anyone’s measure.
But even so, if the promise of clickstream data was that great, then there should have been solutions to these technological barriers to its usage.
At the very heart of the dot-com boom, when wild-eyed Harvard and Stanford MBAs ran around telling people that they just didn’t “get it” and declaring the new economic order to be their private understanding and domain, clickstream data still wasn’t making it. To fulfill the dreams of the first generation of dot-commers, clickstream processing needed to become a large industry unto itself. Only through the secrets that lay in clickstream data did the prophecies of the dot-commers have any chance of becoming reality.
There once was a company that had a curious piece of software called a “granularity manager.” The granularity manager swept through clickstream data and conditioned it for further processing. It deleted extraneous data (of which, in the clickstream, there was an abundance). It summarized fields. It aggregated data. In short, the granularity manager intelligently condensed massive amounts of clickstream data and made it fit for processing.
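The article does not describe the product’s internals, but the three operations it names (deleting extraneous records, summarizing fields, aggregating data) can be sketched in a few lines. This is a minimal, hypothetical illustration; the field names (`user_id`, `page`, `event`) and the set of “relevant” events are assumptions, not details from the article or the actual product.

```python
from collections import Counter

# Hypothetical events worth keeping; everything else is treated as noise.
RELEVANT_EVENTS = {"page_view", "add_to_cart", "purchase"}

def condense(clicks):
    """Sketch of a granularity manager: drop noise, then aggregate."""
    # 1. Delete extraneous data: discard low-value events (mouse moves, etc.).
    kept = [c for c in clicks if c["event"] in RELEVANT_EVENTS]
    # 2. Summarize fields / aggregate data: count events per (user, page) pair,
    #    replacing many raw records with one compact summary row each.
    return dict(Counter((c["user_id"], c["page"]) for c in kept))

clicks = [
    {"user_id": "u1", "page": "/home", "event": "page_view"},
    {"user_id": "u1", "page": "/home", "event": "mouse_move"},  # extraneous
    {"user_id": "u1", "page": "/cart", "event": "add_to_cart"},
    {"user_id": "u1", "page": "/home", "event": "page_view"},
]
print(condense(clicks))
# {('u1', '/home'): 2, ('u1', '/cart'): 1}
```

The point of the design is the ratio: four raw records become two summary rows here, and on real clickstream feeds (where perhaps 10% of records are useful) the reduction is far more dramatic.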
And right in the middle of the dot-com mania, this software company could not sell – could not even give away – their product.
In perhaps its own unique way, the failure of the granularity manager was the death knell for the dot-com era. If people were not taking clickstream data seriously, then there was no chance that they were going to be able to make the dot-com world real.
Maybe there will be a second-generation clickstream renaissance. Those secrets are still there, if anyone can wade through all of the clickstream data to unlock them. There’s still gold in the clickstream creek. Maybe today there is more rationality about the positioning of clickstream data and what it can and cannot do for you. Maybe expectations about what can be mined out, and the resources that mining will take, are more realistic in the post-dot-com world.