Making sure your I/O routine doesn't degrade performance
Our company supports and maintains mainframe financial systems. To maintain a common set of source code, we use the concept of I/O routines that are passed function codes by calling programs (batch and CICS) to perform all types of I/O against the database. We currently support three databases: ADABAS, DB2 and VSAM.
One of our clients is insisting that we break up our DB2 I/O routines into smaller executables because the number of key columns has a direct effect on the number of cursors we have for reading prior and next. Some of the larger I/O routines exceed 200K in size.
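To illustrate the point, a generic "read next" browse cursor over a three-column key looks roughly like the following; the table, column and host-variable names are illustrative only, not our actual code:

```sql
-- Illustrative "read next" browse cursor for a three-column key
-- (table, column and host-variable names are made up).
DECLARE NEXT_CSR CURSOR FOR
  SELECT ACCT_NO, TRAN_DATE, TRAN_SEQ, TRAN_AMT
    FROM TRAN_TABLE
   WHERE  ACCT_NO > :HV-ACCT
      OR (ACCT_NO = :HV-ACCT AND TRAN_DATE > :HV-DATE)
      OR (ACCT_NO = :HV-ACCT AND TRAN_DATE = :HV-DATE
                             AND TRAN_SEQ  > :HV-SEQ)
   ORDER BY ACCT_NO, TRAN_DATE, TRAN_SEQ;

-- The matching "read prior" cursor reverses every comparison and the
-- ORDER BY, so each extra key column widens both cursors, and each
-- repositioning variant the routine supports needs its own cursor pair.
```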
Is there any benefit (CPU utilization, storage consumption, response time, etc.) to breaking these I/O routines into smaller chunks?
Well, you probably won't like my answer. I/O routines and "black boxes" are detrimental to DB2 performance. By funneling all requests through a centralized I/O routine, performance suffers for any number of reasons, including:
* Not using relational features such as joins and table expressions
* Filtering data in the program instead of in SQL predicates (see the sketch after this list)
* Executing additional instructions, which increases CPU time
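To make the second point concrete, here is a rough sketch with made-up table, column and host-variable names. A black-box routine typically declares the broadest cursor its callers might need and lets each program discard the rows it doesn't want; a purpose-built statement pushes the filter into the predicate so DB2 returns only the qualifying rows.

```sql
-- Black-box style: the generic I/O routine returns every row for the
-- key and the calling program filters on TRAN_TYPE and date itself.
DECLARE GENERIC_CSR CURSOR FOR
  SELECT ACCT_NO, TRAN_DATE, TRAN_AMT, TRAN_TYPE
    FROM TRAN_TABLE
   WHERE ACCT_NO = :HV-ACCT;

-- Predicate style: DB2 does the filtering and ships back only the
-- rows the program actually needs.
DECLARE FILTERED_CSR CURSOR FOR
  SELECT ACCT_NO, TRAN_DATE, TRAN_AMT
    FROM TRAN_TABLE
   WHERE ACCT_NO   = :HV-ACCT
     AND TRAN_TYPE = 'FEE'
     AND TRAN_DATE >= :HV-FROM-DATE;
```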
In general, putting as much work as possible into the SQL is the way to go for performance. So, your I/O routine may make it easier for you to support multiple DBMS products and file formats, but at the expense of degraded performance.
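As a quick illustration of putting the work into the SQL: one join can replace what a record-at-a-time I/O routine would issue as a customer read followed by a loop of account reads. The sketch below uses hypothetical tables and columns.

```sql
-- A single join returns the combined result in one statement instead
-- of one call per row through the I/O routine (names are hypothetical).
SELECT C.CUST_NO, C.CUST_NAME, A.ACCT_NO, A.BALANCE
  FROM CUSTOMER C
  JOIN ACCOUNT  A ON A.CUST_NO = C.CUST_NO
 WHERE C.CUST_NO = :HV-CUST
   AND A.STATUS  = 'OPEN';
```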
This was first published in September 2003