Data integration technologies unify multiple data stores

Manual data integration is holding back data pros

Data integration is a major part of any enterprise data management strategy, reducing the amount of siloed and duplicate data and creating cohesive data sets. An ongoing, intensive data integration process maintains a comprehensive, up-to-date view of organizational data in a central repository. But even though data integration technologies have evolved in recent years to automate the process, some companies still stand by a manual strategy.

Manual data integration remains prevalent because it keeps costs down in the short term. For small and medium-sized businesses, a one-time manual integration project can be less costly, but maintaining a recurring process by hand quickly becomes expensive. Manual integration also gives a data manager more control over the process, but it often limits a project's scalability. In addition, maintaining manual integration code consumes time that could be better spent developing other projects.
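To make that maintenance burden concrete, here is a minimal sketch of the kind of hand-coded merge script such projects tend to accumulate. The source names, field names and deduplication rule are hypothetical; real scripts grow extra branches for every new source, schema change and data-quality quirk, which is where the ongoing cost comes from.

```python
import csv
import io

def integrate(crm_csv: str, billing_csv: str) -> list[dict]:
    """Merge customer records from two CSV extracts, deduplicating on email."""
    merged: dict[str, dict] = {}
    for source in (crm_csv, billing_csv):
        for row in csv.DictReader(io.StringIO(source)):
            key = row["email"].strip().lower()  # normalize the join key by hand
            # Later non-empty fields overwrite earlier ones -- an ad hoc rule
            # that must be revisited every time a new source is added.
            merged.setdefault(key, {}).update(
                {k: v for k, v in row.items() if v}
            )
    return list(merged.values())

# Two hypothetical extracts with one overlapping customer
crm = "email,name\nana@example.com,Ana\nbo@example.com,Bo\n"
billing = "email,plan\nANA@example.com,pro\ncy@example.com,basic\n"

records = integrate(crm, billing)
print(len(records))  # 3 unique customers after deduplication
```

Every normalization and conflict rule here is embedded in code rather than configured in a tool, so each new source or schema change means another round of edits and testing.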

According to a survey by machine learning software maker Figure Eight, 30% of data scientists said they spend one-half to three-quarters of their working time on research and development rather than on putting new models into production. With increased automation, data professionals can spend less time manually collecting, cleaning, transforming and storing data, leaving more time to interpret the data and act on insights.

This handbook on data integration technologies examines the challenges data managers and data scientists encounter throughout the integration process. We look at the issues surrounding trusted data and the key role integration tools play in a strong data governance strategy, the distinctions between data integration and extract, transform and load (ETL) strategies, and how integrating ERP systems can improve data access and streamline business processes.