Technology is ever-changing, but perhaps the one thing IT teams can count on is that the amount of data they have to manage will only continue to increase. The numbers can be staggering: In a report published last December, market research company IDC estimated that the total amount of data created or replicated worldwide in 2012 would reach 2.8 zettabytes (ZB). For the uninitiated, a zettabyte is 1,000 exabytes, 1 million petabytes or 1 billion terabytes -- or, in more informal terms, a whole lotta data. By 2020, IDC expects the annual data-creation total to hit 40 ZB, a 50-fold increase from where things stood at the start of 2010.
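For readers who want to sanity-check those unit conversions, here is a minimal sketch in Python. It uses decimal (SI) units, and the 0.8 ZB figure for 2010 is merely implied by IDC's 50-fold claim, not stated in the report itself.

```python
# Decimal (SI) storage units, in bytes.
TB = 10**12   # terabyte
PB = 10**15   # petabyte
EB = 10**18   # exabyte
ZB = 10**21   # zettabyte

# A zettabyte is 1,000 exabytes, 1 million petabytes, or 1 billion terabytes.
assert ZB == 1_000 * EB == 1_000_000 * PB == 1_000_000_000 * TB

data_2020 = 40 * ZB          # IDC's projected annual total for 2020
implied_2010 = data_2020 / 50  # baseline implied by the 50-fold increase
print(implied_2010 / ZB)     # → 0.8 (ZB)
```

Note that storage vendors sometimes use binary units (1 ZiB = 2^70 bytes), which differ from the SI figures above by about 18 percent at this scale.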
Corporate data expansion often starts with ever-higher volumes of transaction data. But in many organizations, unstructured and semi-structured information -- the hallmark of big data environments -- is taking things to a new level altogether. And because such data typically isn't a good fit for relational databases and often comes partly from external sources, big data growth also adds to the data integration workload -- and challenges -- for IT managers and their staffs.
SearchDataManagement recently published several articles that provide insight and practical advice on managing integration projects involving large amounts of data in general and big data in particular. In one, we look at the key considerations that need to be taken into account on big data integration initiatives. In another, we cover the complexities of integrating streaming sets of big data. And in an interview, Glasshouse Technologies consultant Jim Damoulakis offers tips on planning and implementing integration efforts that can accommodate big data growth.
Craig Stedman is executive editor of SearchDataManagement. Email him at email@example.com.
Follow us on Twitter: @sDataManagement.
This was first published in April 2013