This article originally appeared on the BeyeNETWORK.
How do we do database design? We learned a long time ago about normalization from Ted Codd and Chris Date. Or perhaps we sat at the knee of James Martin and learned about high performance and streamlined, denormalized structures. However we did it, we learned about database design.
And I’ll bet that when we learned database design, no one had even heard about designing databases according to the separation of semantically temporal and semantically static data. That simply wasn’t on anyone’s radar when the basics of database design were laid out.
But it should have been.
First, what are semantically temporal and semantically static data? Semantically static data is data whose structure is not liable to change frequently. Sales data is notoriously stable: the kinds of information we measure and capture about a sale don’t change over time. Merchants at the bazaar in Cairo two thousand years ago were collecting the same types of information about sales as Wal-Mart collects today. And it is a good bet that in 2100, when our grandkids are running things, the same types of sales data will be collected.
Semantically temporal data is data whose structure changes frequently. Try doing a data model for an organization structure and see how long that lasts. Try doing a data model for a sales territory and see what happens next year. Or try modeling federal or state legislation. Guess what happens when there is another election? So some data is inherently semantically stable while other data is semantically unstable.
When we did classical database design, was the semantic stability of the design an issue? Unless you had the prescience of Galileo, it wasn’t.
What would have happened if we had had the audacity to physically separate semantically static and semantically temporal data at the point of design? Something almost magical would have happened. We would have found that our database design could easily withstand the change of business requirements that periodically occurs.
By separating semantically static data from semantically temporal data, our systems could have gracefully accommodated change. And how does that happen?
When business requirements change, semantically static data doesn’t change. That is simply the nature of semantically static data, so changes in business requirements have no impact on it. But what about semantically temporal data? Semantically temporal data changes every time business requirements change. If we had been smart enough to wrap semantically temporal data in key structures with FROM-DATE and TO-DATE parameters (which is a very natural thing to do), then every time a change came to the business, a new set of snapshots of the semantically temporal data would have had to be made. And how difficult is it to make a new set of snapshots? Why, it is… a snap! We don’t have to go backward in time and change any old semantically temporal data. We simply create a new set of snapshots. And voilà, we can accommodate change.
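To make the snapshot idea concrete, here is a minimal sketch in Python. The article does not prescribe an implementation, so the `TerritorySnapshot` and `TerritoryHistory` names, the sales-territory payload, and the specific dates are all illustrative assumptions; the only thing taken from the text is the pattern itself: key each version of semantically temporal data by a from-date and a to-date, and accommodate change by closing the current snapshot and opening a new one, never by rewriting history.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class TerritorySnapshot:
    # Hypothetical example: sales territories are semantically temporal.
    territory_id: str        # business key
    from_date: date          # key component: when this version took effect
    to_date: Optional[date]  # key component: None means "still current"
    region: str              # payload that shifts when requirements change

class TerritoryHistory:
    def __init__(self) -> None:
        self.snapshots: List[TerritorySnapshot] = []

    def add_snapshot(self, territory_id: str, effective: date, region: str) -> None:
        """Close the open snapshot for this key (if any) and open a new one.
        Old snapshots are never modified retroactively."""
        for s in self.snapshots:
            if s.territory_id == territory_id and s.to_date is None:
                s.to_date = effective
        self.snapshots.append(TerritorySnapshot(territory_id, effective, None, region))

    def as_of(self, territory_id: str, when: date) -> Optional[str]:
        """Return the definition that was in force on a given date."""
        for s in self.snapshots:
            if (s.territory_id == territory_id
                    and s.from_date <= when
                    and (s.to_date is None or when < s.to_date)):
                return s.region
        return None

history = TerritoryHistory()
history.add_snapshot("T1", date(2020, 1, 1), "Northeast")
# Business requirements change: create a new snapshot, leave the old one alone.
history.add_snapshot("T1", date(2022, 1, 1), "Mid-Atlantic")

print(history.as_of("T1", date(2021, 6, 1)))  # Northeast
print(history.as_of("T1", date(2023, 6, 1)))  # Mid-Atlantic
```

Because every historical version survives with its own date range, queries against any past point in time still resolve correctly after the change, which is exactly why the design withstands shifting business requirements.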
Now does this mean that we need to go back and redesign all of our old databases? Realistically, that simply isn’t going to happen. Look at the upheaval of Y2K: redesigning old databases and rewriting code is simply out of the question. But designing new databases going into the future – that’s a different story altogether.
About the author:
Bill is universally recognized as the father of the data warehouse. He has more than 36 years of database technology management experience and data warehouse design expertise. He has published more than 40 books and 1,000 articles on data warehousing and data management, and his books have been translated into nine languages. He is known globally for his data warehouse development seminars and has been a keynote speaker for many major computing associations.