DB2 compression -- We run a fairly large SAP R/3 system, currently just over 1 TB, on DB2 Universal Database. We are considering implementing DB2 compression on some of the larger tables (tests indicate a space saving of 60%-75%).
Is there anything you would advise we watch out for to prevent performance degradation?
My first thought is to compress the non-volatile tables, i.e. master data and/or tables updated predominantly via batch or SAP asynchronous transactions, e.g. FI/CO functions.
DB2 compression works very well and can yield not just space savings but sometimes performance gains as well. With compression you are trading an increase in CPU cost (to compress and decompress the data) for a reduction in I/O cost (because the rows are smaller, more of them fit on a single page).
Consider compressing your larger tables that are frequently accessed sequentially. When data is accessed sequentially, compression can actually improve performance because more rows are read with each I/O. Of course, you will need to factor in the cost of compression and decompression, so very volatile tables may not be good candidates.
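The sequential-scan benefit can be seen with a toy back-of-the-envelope model. This is not DB2's actual page layout (page size, row overhead, and free space are all simplified, and the 400-byte/140-byte row sizes are hypothetical figures roughly matching the 60%-75% saving the questioner measured); it only illustrates why smaller rows mean fewer pages read per scan.

```python
import math

PAGE_SIZE = 4096  # assumed 4 KB data pages; real page sizes and overheads vary

def pages_for(row_count, row_size, page_size=PAGE_SIZE):
    """Pages needed to hold row_count rows of row_size bytes,
    assuming rows do not span pages (a simplification)."""
    rows_per_page = page_size // row_size
    return math.ceil(row_count / rows_per_page)

# Hypothetical table: 10 million rows, 400 bytes uncompressed,
# ~140 bytes after compression (~65% saving).
uncompressed = pages_for(10_000_000, 400)  # 10 rows per page
compressed = pages_for(10_000_000, 140)    # 29 rows per page

print(uncompressed, compressed)  # 1000000 344828
```

A full table scan of the compressed table touches roughly a third of the pages, which is where the I/O saving comes from; the CPU spent decompressing each row is the price paid for it.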
Also, a very small table is a poor candidate for compression because the compressed table may actually be larger than the noncompressed table. DB2 adds a compression dictionary at the beginning of the table space to guide the compression and if the table is small enough, the additional dictionary pages might increase the size of the table space.
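The small-table caveat is simple arithmetic: the dictionary is a fixed overhead, so it only pays off when the data saving exceeds it. A minimal sketch, assuming a hypothetical 16-page dictionary (the real dictionary size depends on the DB2 release and the data):

```python
import math

DICT_PAGES = 16  # assumed compression dictionary size in pages (illustrative)

def compressed_total_pages(data_pages, saving_pct, dict_pages=DICT_PAGES):
    """Total pages after compression: the shrunken data plus the
    fixed dictionary overhead. saving_pct is an integer percentage,
    e.g. 65 for a 65% saving."""
    shrunk = math.ceil(data_pages * (100 - saving_pct) / 100)
    return shrunk + dict_pages

# Large table: the dictionary overhead is negligible.
big = compressed_total_pages(100_000, 65)  # 35016 vs 100000 pages

# Small table: the dictionary outweighs the saving and the table grows.
small = compressed_total_pages(20, 65)     # 23 vs 20 pages
```

With these assumed numbers, a 20-page table "compressed" at 65% ends up larger than it started, while a 100,000-page table shrinks by nearly two thirds; that is why only the larger tables are worth the effort.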