Definition

DataOps (data operations)

Contributor(s): Jack Vaughan

DataOps (data operations) is an Agile approach to designing, implementing and maintaining a distributed data architecture that will support a wide range of open source tools and frameworks in production. Inspired by the DevOps movement, the DataOps strategy strives to speed the production of applications running on big data processing frameworks. Additionally, DataOps seeks to break down silos across IT operations and software development teams, encouraging line-of-business stakeholders to also work with data engineers, data scientists and analysts. This helps to ensure that the organization’s data can be used in the most flexible, effective manner possible to achieve positive business outcomes.

Since it incorporates so many elements from the data lifecycle, DataOps spans a number of information technology disciplines, including data development, data transformation, data extraction, data quality, data governance, data access control, data center capacity planning and system operations. DataOps teams are often managed by an organization’s chief data scientist or chief analytics officer and supported by employees like data engineers or data analysts.

As with DevOps, there are no DataOps-specific software tools; there are only frameworks and related toolsets that support a DataOps approach to collaboration and increased agility. Such tools include ETL/ELT tools, data curation and cataloging tools, log analyzers and systems monitors. Tools that support microservices architectures, as well as open source software that lets applications blend structured and unstructured data, are also associated with the DataOps movement. Such software includes MapReduce, HDFS, Kafka, Hive and Spark.
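
To make that concrete, the sketch below shows how two of those tools might fit together: a minimal PySpark job that reads raw events from a Kafka topic and lands curated records for downstream analytics. The broker address, topic name, schema and output paths are hypothetical, and the snippet assumes a Spark installation with the Kafka connector available; it is a sketch of the pattern, not a production pipeline.

```python
# Minimal sketch: ingest raw events from Kafka with Spark and land curated
# records for analytics. Broker, topic, schema and paths are hypothetical;
# assumes Spark with the spark-sql-kafka connector available.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("dataops-ingest-sketch").getOrCreate()

# Hypothetical schema for the incoming JSON events.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
       .option("subscribe", "raw_events")                 # placeholder topic
       .load())

# Parse the Kafka message value and keep only well-formed events.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*")
          .filter(col("user_id").isNotNull()))

# Write curated records to Parquet for downstream analytics and BI tools.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/curated/events")             # placeholder path
         .option("checkpointLocation", "/data/checkpoints/events")
         .start())
query.awaitTermination()
```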

How DataOps works

The goal of DataOps is to combine DevOps and Agile methodologies to manage data in alignment with business goals. For example, if the goal is to raise the lead conversion rate, DataOps would position the data so that marketing can make better product recommendations, thus converting more leads. Agile processes are used for data governance and analytics development, while DevOps processes are used for optimizing code, product builds and delivery.

Building new code is only one part of DataOps, as streamlining and improving the data warehouse is equally important. Similar to lean manufacturing, DataOps uses statistical process control (SPC) to continuously monitor and verify the data analytics pipeline. SPC checks that pipeline statistics remain within acceptable ranges, improves data processing efficiency and raises data quality. If an anomaly or error occurs, SPC alerts data analysts immediately so they can respond.
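
A minimal sketch of the SPC idea, assuming the monitored metric is a daily row count: control limits are derived from recent history, and any new value outside them triggers an alert. The function names, numbers and three-sigma rule here are illustrative choices, not part of any specific DataOps tool.

```python
# Minimal statistical process control (SPC) sketch for a pipeline metric.
# Control limits follow the common mean +/- 3 standard deviations rule;
# names, numbers and thresholds are illustrative only.
from statistics import mean, stdev

def control_limits(history):
    """Derive lower/upper control limits from historical metric values."""
    m, s = mean(history), stdev(history)
    return m - 3 * s, m + 3 * s

def check_metric(history, new_value):
    """Return an alert message if the new value is out of control, else None."""
    lower, upper = control_limits(history)
    if not lower <= new_value <= upper:
        return f"ALERT: metric {new_value} outside control limits ({lower:.1f}, {upper:.1f})"
    return None

# Example: daily row counts from recent pipeline runs (hypothetical numbers).
daily_row_counts = [10_120, 9_980, 10_240, 10_050, 9_890, 10_170, 10_010]
alert = check_metric(daily_row_counts, 6_300)  # today's run looks anomalous
if alert:
    print(alert)  # in practice this would page the data team or open a ticket
```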

How to implement DataOps

As the volume of data continues to grow, implementing a DataOps strategy has become crucial. The first step in DataOps involves cleaning raw data and developing an infrastructure that makes it readily available for use, typically in a self-service model. Once the data is made accessible, software, platforms and tools should be developed or deployed that orchestrate the data and integrate with current systems. These components then continuously process new data, monitor performance and produce real-time insights.
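
A workflow orchestrator is one common way to wire those components together. The sketch below uses Apache Airflow purely as an assumed example, since no particular tool is prescribed, to chain hypothetical cleaning, publishing and monitoring steps into a daily pipeline; the task bodies are placeholders for real logic.

```python
# Sketch of a daily DataOps pipeline in Apache Airflow (tool choice is an
# assumption); the three task functions are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def clean_raw_data():
    """Placeholder: validate and standardize newly landed raw data."""
    pass

def load_to_warehouse():
    """Placeholder: publish cleaned data to the self-service warehouse or lake."""
    pass

def monitor_quality():
    """Placeholder: run quality checks and emit metrics or alerts."""
    pass

with DAG(
    dag_id="dataops_daily_pipeline",
    start_date=datetime(2019, 5, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    clean = PythonOperator(task_id="clean_raw_data", python_callable=clean_raw_data)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    monitor = PythonOperator(task_id="monitor_quality", python_callable=monitor_quality)

    # Each new batch is cleaned, published, then checked, in that order.
    clean >> load >> monitor
```

Declaring the steps and their order in one place also gives the team a single view of schedules, retries and failures, which supports the continuous monitoring described above.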

A few best practices associated with implementing a DataOps strategy include:

  • Establish progress benchmarks and performance measurements at every stage of the data lifecycle.
  • Define semantic rules for data and metadata early on.
  • Incorporate feedback loops to validate the data (see the validation sketch after this list).
  • Use data science tools and business intelligence data platforms to automate as much of the process as possible.
  • Optimize processes for dealing with bottlenecks and data silos; this typically involves software automation of some sort.
  • Design for growth, evolution and scalability.
  • Use disposable environments that mimic the real production environment for experimentation.
  • Create a DataOps team with a variety of technical skills and backgrounds.
  • Treat DataOps like lean manufacturing by focusing on continuous improvements to efficiency.
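
As a concrete example of the feedback-loop practice above, the following sketch runs a few pandas-based checks on a batch of data and collects the failures so they can be reported back to the data producers; the column names and rules are hypothetical stand-ins for an organization's own semantic rules.

```python
# Sketch of a simple data-validation feedback loop using pandas; the column
# names and rules are hypothetical stand-ins for agreed semantic rules.
from typing import List
import pandas as pd

def validate_batch(df: pd.DataFrame) -> List[str]:
    """Return a list of human-readable rule violations for this batch."""
    failures = []
    if df["customer_id"].isnull().any():
        failures.append("customer_id contains nulls")
    if (df["order_total"] < 0).any():
        failures.append("order_total contains negative values")
    if not df["order_date"].is_monotonic_increasing:
        failures.append("order_date is not sorted; upstream export may be broken")
    return failures

# Hypothetical batch with two deliberate problems for illustration.
batch = pd.DataFrame({
    "customer_id": ["c1", "c2", None],
    "order_total": [25.0, -3.5, 40.0],
    "order_date": pd.to_datetime(["2019-05-01", "2019-05-02", "2019-05-03"]),
})

for failure in validate_batch(batch):
    # In a DataOps loop these findings go back to the data producers
    # and into the pipeline's quality metrics.
    print("validation failure:", failure)
```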

Benefits of DataOps

Transitioning to a DataOps strategy can bring an organization the following benefits:

  • Provides real-time data insights.
  • Reduces cycle time of data science applications.
  • Enables better communication and collaboration between teams and team members.
  • Increases transparency by using data analytics to predict and prepare for a wider range of scenarios.
  • Makes processes reproducible and reuses code whenever possible.
  • Ensures higher data quality.
  • Creates a unified, interoperable data hub.