
The Economics of Information Quality

Proactive process improvement is the heart of a sound quality environment.

This article originally appeared on the BeyeNETWORK.

The Value Proposition of Information Quality Management
“Data cleansing” is a key term in most data quality methods or initiatives.  However, data cleansing is not the goal of a sound quality management system.  When you have to correct defective data, you are actually performing “information scrap and rework,” just like manufacturing scrap and rework.  If you find a defective product, you must rework it to bring it up to specs or you must scrap it if it cannot be repaired or corrected.

It is true that if you have defective information, you must correct the known defects so knowledge workers can renew their trust in the data. However, the only sound quality approach is to conduct data correction as a one-time activity for a given database, coupled with a process improvement initiative that prevents the recurrence of defective data. This minimizes future correction costs and the process failures that defective information causes.

The real value proposition for information quality lies in preventing the creation of defective information that causes processes to fail. The problem is not the failure itself, but the cost of failure caused by defective information: all the time knowledge workers spend recovering from the failure, verifying and correcting defects, and performing work-arounds when they do not have all the information they need, as well as compensation to unhappy customers and the fines and legal costs of non-compliance.

Measuring the Costs of Poor Quality Information: Order Fulfillment Example
The business case for information quality is found by measuring the costs of poor quality information, and the value of IQ is realized through process improvement that prevents defects and thereby eliminates or significantly reduces those costs of poor quality. When done properly, an IQ management function becomes a profit center.

An order fulfillment process produces about one million customer, order and order item records per year. The average time to take an order is 4.6 minutes, and the cost of an order taker’s time is $65 per hour. This makes the annual cost of creating orders approximately $5 million.
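
As a quick check of that arithmetic, here is a minimal sketch in Python with the article’s figures hard-coded; the variable names are mine, chosen for readability.

    # Baseline cost of the order-taking process, using the article's figures.
    orders_per_year = 1_000_000   # customer, order and order item records created per year
    minutes_per_order = 4.6       # average time to take an order
    cost_per_hour = 65.0          # cost of an order taker's time, in dollars

    hours_per_year = orders_per_year * minutes_per_order / 60
    annual_order_taking_cost = hours_per_year * cost_per_hour

    # Roughly $4.98 million, rounded to $5 million in the article.
    print(f"Annual order-taking cost: ${annual_order_taking_cost:,.0f}")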

The error rate is slightly over 20%; that is, 20% of the records are “defective,” having one or more errors. The total cost of failure, including people time, return handling, error correction, customer compensation, equipment and space, is a little more than $12.50 per defective record.

This equates to $2.5 million per year in “information scrap and rework,” the cost of poor quality information. Because 10% of the errors are not corrected, for various reasons, there is a compounded, residual cost of poor quality in each subsequent year of about 10% of the first-year cost. These costs over a five-year period are illustrated in Figure 1. The numbers are rounded for illustration purposes.
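
Before turning to that five-year view, the annual figures work out as follows. This is a minimal sketch using the numbers above; treating the 10% residual as a flat $250,000 added in each later year is my reading of the text.

    # Annual cost of poor quality information, using the article's figures.
    records_per_year = 1_000_000
    defect_rate = 0.20                  # 20% of records have one or more errors
    cost_per_defective_record = 12.50   # failure cost per defective record, in dollars
    uncorrected_share = 0.10            # 10% of errors are never corrected

    defective_records = records_per_year * defect_rate                       # 200,000 per year
    annual_scrap_and_rework = defective_records * cost_per_defective_record  # $2.5 million per year
    residual_per_year = annual_scrap_and_rework * uncorrected_share          # ~$250,000 carried into each later year

    print(f"Annual information scrap and rework: ${annual_scrap_and_rework:,.0f}")
    print(f"Residual cost of poor quality per subsequent year: ${residual_per_year:,.0f}")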



Figure 1: Costs of Ownership of Defective Process

If someone were to ask about the cost of the order-taking process, they would be told it is $5 million per year. However, this does not represent the cost of “ownership” caused by the 20% defect rate of the broken order-taking process. You must factor in the cost of failure and “information scrap and rework” imposed on all the downstream processes that depend on the customer, order and order item information. The real cost of taking orders at a 20% defect rate is $7.75 million in the first year, more than 50% higher, growing to 75% higher in year five because of the residual costs of poor quality! The aggregated five-year cost of order taking is more than $41 million, compared to $25 million if the process performed at a zero-defect rate.
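
Figure 1 itself is not reproduced here, but the rounded totals in this paragraph can be reconstructed with a short sketch. Treating the residual as 10% of the first-year cost of poor quality, accruing cumulatively from year one, is my reading of the description above.

    # Five-year cost of ownership of the defective order-taking process.
    annual_process_cost = 5_000_000      # order taking at roughly $5 million per year
    annual_scrap_and_rework = 2_500_000  # 200,000 defective records x $12.50
    residual_per_year = 250_000          # 10% of the first-year cost of poor quality, accruing each year

    yearly_costs = [
        annual_process_cost + annual_scrap_and_rework + residual_per_year * year
        for year in range(1, 6)
    ]
    five_year_defective = sum(yearly_costs)          # about $41.25 million
    five_year_zero_defect = annual_process_cost * 5  # $25 million

    print(f"Year 1 cost of ownership: ${yearly_costs[0]:,.0f}")   # $7,750,000 (55% above $5 million)
    print(f"Year 5 cost of ownership: ${yearly_costs[-1]:,.0f}")  # $8,750,000 (75% above $5 million)
    print(f"Five-year total at a 20% defect rate: ${five_year_defective:,.0f}")
    print(f"Five-year total at zero defects:      ${five_year_zero_defect:,.0f}")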

A 20% error rate is not unusual in today’s uncontrolled information environments.  I have seen defect rates of 25% to more than 35% in unimproved processes (Larry English, The Gift that Keeps on Giving, DM Review, December 2003).

Does Data Cleansing Reduce Costs?
Many organizations attack IQ problems with a reactive data cleansing (correction) approach. What impact does data correction have on the costs of poor quality information? Data cleansing software is rarely able to correct more than 80% of the defects in a collection of information, and it often misses accuracy errors that cannot be identified electronically. However, for argument’s sake, let us assume we can accurately correct 90% of the defective records at the source, using both software and human intervention. In this example, no process improvement is performed. The five-year cost of ownership of data cleansing alone is illustrated in Figure 2.

Figure 2: Costs of Ownership of Defective Process with Data Cleansing

Few people would have guessed that the five-year cost of ownership after a one-time data correction event would be higher than that of the unchanged, defective process! The reason is mathematical: data cleansing addresses yesterday’s problem data. Without recognizing that it is the process that is broken, today’s reactive problem-solving (fire-fighting) does not prevent tomorrow’s defects. The result is more expensive because you have now added the costs of correction software and its maintenance without reducing the number of defective records being produced every day.
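
Figure 2 is likewise not reproduced here, but the sketch below shows why the cleansing-only total comes out higher: the defect rate is unchanged, so the cleansing project simply adds cost. The roughly $1 million one-time cleansing cost is an inference from the next section, where the $250,000 improvement is described as one-fourth of the cleansing project’s one-time cost, and ongoing software maintenance is left out.

    # Cleansing-only scenario: same defect rate, same ongoing scrap and rework,
    # plus the one-time cleansing project. The $1 million figure is inferred, not
    # stated directly in the text, and software maintenance is ignored here.
    five_year_defective = 41_250_000     # from the Figure 1 reconstruction above
    one_time_cleansing_cost = 1_000_000  # assumed: 4 x the $250,000 improvement cost

    five_year_cleansing_only = five_year_defective + one_time_cleansing_cost
    print(f"Five-year cost with cleansing only: ${five_year_cleansing_only:,.0f}")  # about $42.25 million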

At best, a data cleansing initiative may provide a temporary improvement in process quality, but this will quickly fade as information producers fall back into old habits to meet the quotas on which their performance reviews are based.

Process Improvement Is the Real Solution
Dr. Joseph Juran frequently stated that fire-fighting is not improvement. (Joseph M. Juran, Juran on Planning for Quality, New York: The Free Press, 1988, pp. 11-12.) Improvement is analyzing the causes of the fires and error-proofing the structure to prevent fires from happening in the first place.

Process improvement provides real benefit because it attacks the root causes of defective information processes. Figure 3 illustrates the return on investment (ROI) that can be gained time and again with proactive use of the Shewhart Cycle, Plan-Do-Study-Act (or Plan-Do-Check-Act).


Figure 3: Costs of Ownership of Improved Processes


The process improvement initiative cited in Figure 3 had a one-time cost of $250,000, one-fourth of the one-time cost of the data cleansing project. The improvement required a 10% increase in the cost of the process, adding $500,000 per year for additional edits and validation and for double-checking critical data. This improvement reduced the incidence of errors by an average of 90%, from 200,000 to 20,000 defective records per year, which reduced the costs of poor quality from $2.5 million to $250,000 per year. The improvement also reduced the residual costs of poor quality through increased training and the empowerment of people to correct errors they found.

Process improvement costs less than corrective maintenance because it focuses on tomorrow’s results. By error-proofing and preventing defects, you eliminate the costs of process failure and the information scrap and rework. 

The $30.3 million five-year cost of ownership of the improved process, including a one-time data correction initiative, produced savings in the first year, with five-year savings of $11 million over the initial state and $12 million over data cleansing alone.
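
As a rough cross-check of these totals (Figure 3 is not shown here), the sketch below assumes the one-time data correction costs about $1 million, as inferred earlier, and that the 10% residual pattern applies to the much smaller post-improvement cost of poor quality.

    # Improved-process scenario, roughly reconstructing the $30.3 million total.
    annual_process_cost = 5_000_000 + 500_000  # base process plus the 10% added for edits and validation
    annual_scrap_and_rework = 250_000          # 20,000 defective records x $12.50
    residual_per_year = 25_000                 # 10% of the improved first-year cost of poor quality
    one_time_improvement = 250_000
    one_time_correction = 1_000_000            # assumed, as in the cleansing sketch above

    five_year_improved = one_time_improvement + one_time_correction + sum(
        annual_process_cost + annual_scrap_and_rework + residual_per_year * year
        for year in range(1, 6)
    )
    # About $30.4 million with these assumptions, close to the rounded $30.3 million
    # cited in the text; versus roughly $41.25 million unchanged and $42.25 million
    # with cleansing alone, giving approximately the $11 million and $12 million savings.
    print(f"Five-year cost of the improved process: ${five_year_improved:,.0f}")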

To learn how to perform process improvements, see Chapter 9, “Improving Information Process Quality: Data Defect Prevention,” in Improving Data Warehouse and Business Information Quality: Methods for Reducing Costs and Increasing Profits (Larry English, Improving Data Warehouse and Business Information Quality, New York: John Wiley & Sons, 1999, pp. 285-310). To learn how to measure the costs of poor information quality, see Chapter 7, “Measuring Nonquality Information Costs” (Ibid., pp. 199-235).

Proactive Process Improvement
Without process improvement, data cleansing is merely information scrap and rework.  If an organization only performs data cleansing, it should not use the term quality in the name of the function or initiative.

Proactive process improvement is the heart of a sound quality environment. Process improvement is a core competency for world-class companies. This is where the economics of information quality management lie.

What do you think?  Let me hear from you at Larry.English@infoimpact.com.
