
The new need for speed to value: How DB2 plays a role

Today, as never before, IT and ISVs can cut the time to create a high-value application from 2 years (or never!) to 6-9 months, simply by choosing a different development solution.

At the same time, the value of that additional 1 to 1½ years to the organization has skyrocketed, because computer applications are now a major source of competitive advantage, and competitors can copy that advantage in shorter and shorter periods of time. In other words, when product turnover is 3 years or less, being alone in the market for half that time is worth far more than in the old days, when IT mattered less to competitive advantage and long product lifecycles meant that such a head start covered only a small fraction of the product's life span. However, this short answer does an injustice to the full sweep of the new trend. This is about speed to value, not just speed to market. Speed to value involves two other key criteria for the organization:

  1. Cost
  2. Risk

That is, an application may be valuable not merely because it gives competitive advantage, but also because it cuts costs, or mitigates corporate risk.

Obviously, an application can be aimed directly at cutting costs or implementing business continuity. However, it is also becoming increasingly clear that the same tools that speed development also cut application cost of ownership (which is a significant corporate cost), and reduce the risks that the application will fail (which is a major factor in business continuity).

Infostructure Associates research shows that three things are especially important to achieving speed to value:

  1. A highly programmer-productive toolset. Specifically, the development toolset should support higher-level programming that drastically simplifies data access, and generation of production code directly from visual designs.
  2. A high-level framework, with extensive business-logic components, preferably vertical-industry-specific ones, plus a service-oriented architecture.
  3. A powerful, low-administrative-cost database.

Other useful tools include development scalability technologies, cross-database APIs, legacy-application upgrade support (especially Web-enabling and Web-servicizing existing software), design- and requirements-driven development tools, refactoring support, and application-lifecycle tools.

The role of a database in delivering speed to value

The experience of developing for the Web has shown that application scaling can happen exceptionally rapidly -- from zero to 100,000 end users in a few weeks -- and the same is increasingly true of Web services. Infostructure Associates research shows that organizations that succeed in achieving this kind of application scalability are especially effective at optimizing coding of transactions against a highly scalable database. Therefore, a database that allows optimization of transactions (e.g., via stored procedures and effective cost-based query optimization technologies) can have a major effect on application scalability.
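For illustration, here is a minimal JDBC sketch -- the PLACE_ORDER procedure, the schema, and the connection details are hypothetical, invented purely for this example -- showing how a multi-statement transaction moved into a stored procedure costs the client only one network round trip:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PlaceOrder {
        public static void main(String[] args) throws Exception {
            // Hypothetical DB2 connection URL and credentials.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://dbhost:50000/SAMPLE", "user", "password")) {
                // One network round trip: the (hypothetical) PLACE_ORDER procedure
                // performs the inventory check, the INSERT into ORDERS, and the
                // UPDATE of STOCK entirely on the server, inside one transaction.
                try (CallableStatement cs =
                         con.prepareCall("CALL PLACE_ORDER(?, ?, ?)")) {
                    cs.setInt(1, 42);    // customer id
                    cs.setInt(2, 1001);  // product id
                    cs.setInt(3, 3);     // quantity
                    cs.execute();
                }
            }
        }
    }

Coding the same logic client-side would mean at least three prepared statements and three round trips, each one a candidate for the "unnecessary transactional coding" discussed below.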

Moreover, research suggests that up to 30%-40% of a software project's time-to-production can be taken up with unnecessary transactional coding. Table 1 shows that effective transactional coding can achieve better speed to value than most other development methodologies and technologies.

Table 1: Programmer-productivity effects of today's techniques

Technique                            Design    Coding    Testing/Deployment   Likely Maximum Time Saved
Generic development lifecycle        25%       45%       20%/10%              (n.a.)
Object-oriented programming          30%-35%   25%-35%   20%/10%              10%-20%
Automated deployment                 25%       45%       20%/0%-1%            10%
Higher-level coding:
  - transactional                    10%-20%   10%-30%   10%-15%/10%          40%-60%
  - VPE                              10%-20%   20%-40%   15%/10%              30%-50%
  - standards-based                  20%-25%   35%-40%   15%-20%/10%          10%-20%
Reusability                          20%-25%   35%-45%   15%-20%/10%          5%-15%
Open-source programming              25%       25%-35%   15%-20%/10%          5%-15%
Components/Frameworks                20%-30%   25%-35%   15%-20%/10%          5%-15%
Infrastructure solutions             10%-20%   5%-15%    15%-20%/10%          50%-70%
Outsourcing (including offshore)     30%-35%   35%-55%   15%-20%/10%          0%
Extreme programming                  10%-30%   25%-35%   10%-25%/10%          10%-30%

Source: Infostructure Associates, April 2006

Note that infrastructure solutions (which provide most of the code, with the developer "customizing" the application by supplying the remainder) effectively also do most transactional coding for the developer.

Supporting the right transaction stream

Broadly speaking, today's typical transaction stream consists of reads, inserts, deletes, updates, batching (sequential access to a full database, primarily consisting of reads, inserts, and updates), and querying (sequential or random reads of a large collection of data). In turn, the typical large-enterprise application handles three types of transaction streams:

  1. The OLTP (Online Transaction Processing) stream. This mostly consists of updates. A typical example is an order entry system such as an airline reservations system.
  2. The decision support stream. This mostly consists of queries. A typical example is a data warehouse.
  3. The application or "mixed" stream. This consists of a significant number of updates, queries, and reads applied in varying and changing proportions. One common example is a packaged application such as SAP's or PeopleSoft's ERP (Enterprise Resource Planning) applications. Another is a Web site, where reads predominate and much of the data is "rich media" such as text and graphics.
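To make these stream types concrete, here is a minimal JDBC sketch -- tables, columns, and values are hypothetical, invented purely for illustration -- pairing each stream with a representative statement:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Illustrative only: one representative statement per transaction stream.
    public class StreamExamples {
        static void oltp(Connection con) throws Exception {
            // OLTP stream: short, indexed updates, e.g. booking an airline seat.
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE SEATS SET STATUS = 'BOOKED' " +
                    "WHERE FLIGHT_ID = ? AND SEAT_NO = ?")) {
                ps.setInt(1, 731);
                ps.setString(2, "12A");
                ps.executeUpdate();
            }
        }

        static void decisionSupport(Connection con) throws Exception {
            // Decision-support stream: long-running aggregate query over history.
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT REGION, SUM(AMOUNT) FROM SALES_HISTORY GROUP BY REGION");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* consume aggregated rows */ }
            }
        }

        static void mixed(Connection con) throws Exception {
            // Mixed stream: mostly reads (e.g. rendering a Web page),
            // interleaved with occasional updates.
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT TITLE, BODY FROM ARTICLES WHERE ID = ?")) {
                ps.setInt(1, 7);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) { /* render rich-media content */ }
                }
            }
        }
    }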

Different databases are optimized for different transaction streams, and can scale more effectively on particular streams. For example, major relational enterprise databases such as Oracle 9i and 10g and IBM DB2 are particularly strong in handling decision support streams and data warehousing at the terabyte level; their history as key OLTP system underpinnings also makes them the vendors of first resort for OLTP applications. The recent addition of XML storage and XQuery capabilities also makes enterprise databases better at handling Web-site applications. However, smaller databases such as IBM DB2 Express continue to do well in applications where low administrative costs, rapid database design time, or support for "mass-deployment" architectures (many loosely linked local sites) are key.

Scaling an application with one back-end database is scaling vertically; users may also need to scale horizontally, by creating multiple copies of the application and/or multiple copies of the database. An application server allows a load of transactions to be shared or "balanced" across multiple front-end copies of an application, each usually with its own database; a TP (transaction processing) monitor allows a load of transactions to be shared or "balanced" across multiple front-end copies of an application, each accessing the same shared database. Typically, TP-monitor-type load balancing is preferable, because having multiple back-end databases usually requires keeping multiple copies of some data in each database. This, in turn, requires that changes to one data copy are "replicated" to all other data copies.
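To see why that replication requirement arises, consider this deliberately naive sketch -- the connection list and table are hypothetical, and a production system would rely on the database's own replication facilities rather than application code like this:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class NaiveReplicator {
        // Every back-end database copy that holds PRICES must see the same
        // change; doing it in the application couples the writer to every copy.
        static void updatePriceEverywhere(List<Connection> copies,
                                          int productId, double newPrice)
                throws Exception {
            for (Connection con : copies) {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE PRICES SET PRICE = ? WHERE PRODUCT_ID = ?")) {
                    ps.setDouble(1, newPrice);
                    ps.setInt(2, productId);
                    ps.executeUpdate();
                }
            }
        }
    }

With TP-monitor-style load balancing against one shared database, the loop disappears: there is only one copy of the data to change.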

Integrating objects and databases

We have already noted that much of a programmer's time when creating enterprise applications is spent coding data access, and that much of that time is wasted. Some of that wasted time comes from "reinventing the wheel" -- writing at a lower level (the 3GL level, available in the C and Java programming languages), instead of at a higher level. Some of the wasted time also comes from what Infostructure Associates calls the "object-relational mismatch."

Most of today's new programs, and especially Web services, are written using object-oriented languages. These languages produce code that consists of small "object classes" or slightly larger "components", each typically incorporating the data it uses in the code. Most data, however, lies in relational databases that do not contain the code that accesses the data. As a result, developers must translate in their code between the data-invocation format of an object class and the data structures of a relational database -- and the task is rarely easy. Of the 40-50% of coding effort devoted to data access in today's applications, as much as 80% can be devoted to bridging the object-relational divide.
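A minimal sketch of that translation burden, with a hypothetical Customer class and CUSTOMERS table (all names invented for illustration): every field must be copied, column by column, between the relational result set and the object:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // A plain object class; the data it carries lives in a relational table.
    class Customer {
        int id;
        String name;
        String email;
    }

    class CustomerDao {
        // Hand-written bridge across the object-relational divide: the kind of
        // repetitive mapping code that consumes much of data-access effort.
        Customer findById(Connection con, int id) throws Exception {
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT ID, NAME, EMAIL FROM CUSTOMERS WHERE ID = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) return null;
                    Customer c = new Customer();
                    c.id = rs.getInt("ID");
                    c.name = rs.getString("NAME");
                    c.email = rs.getString("EMAIL");
                    return c;
                }
            }
        }
    }

Multiply this by every class and every insert, update, and delete path, and the 80% figure becomes plausible.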

To simplify the developer's task, today's development solutions offer several approaches:

  1. Place data access within components (e.g., EJBs) that the developer can use without needing to know the details of data-access coding.
  2. Store data in the database as objects (i.e., with code attached), and use an object-oriented standard transactional "language" such as XQuery to access these objects.
  3. Use a standard relational transactional "language" (typically SQL or ODBC) and leave the rest to the developer.

Approach 1 has several problems. For one, EJBs typically run in a separate task from the database, and are written to apply to a wide variety of business cases; as a result, users of EJBs may often run into performance problems. For another, in order to customize an EJB, the developer must have deep knowledge of the EJB that is not readily available from the development solution. As for Approach 2, early experience with existing databases that have incorporated object support shows that administrators are hesitant to implement these data types, and are not well trained at balancing loads that are partly object-access and partly relational-data-access. Finally, while SQL and ODBC/JDBC are well suited to communications between procedural code and relational databases, Approach 3 means that, for the most part, developers must continue to do all the work of solving the object-relational mismatch themselves.
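As a sketch of Approach 2, the fragment below assumes a database release that supports XML column storage and the standard SQL/XML XMLQUERY function (per the XQuery capabilities noted earlier); the CUSTOMERS table, its INFO XML column, and the document shape are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class XQueryExample {
        // Approach 2: data stored as XML documents and retrieved with an
        // embedded XQuery expression instead of hand-mapped relational columns.
        static void printCustomerNames(Connection con) throws Exception {
            String sql =
                "SELECT XMLQUERY('$d/customer/name/text()' PASSING INFO AS \"d\") " +
                "FROM CUSTOMERS";
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }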

Infostructure Associates recommends that IT buyers look for development solutions that offer higher-level transactional programming plus integration with the database being used for an application. Infostructure Associates qualitative TCO/ROI studies on SMB (small- to medium-sized business) and SMB ISV development suggest that such solutions can reduce the amount of data-access coding dramatically and have a 40-50% impact on speed to value.

IBM DB2's speed-to-value support

DB2 UDB provides support for development solutions for all major programming languages:

  • Microsoft Visual Studio .NET.
  • IBM WebSphere Studio Application Developer (with J2EE, JDBC, SQLJ, and Java UDF support) and IBM/Rational design/project management tools.
  • Borland Kylix, Delphi, C++ Builder, and C# Builder.
  • DB2 Development Center, with Stored Procedure Builder for creating complex server-side Web code and a Java toolset.
  • The IBM Eclipse open-source framework.
  • Zend Core for IBM for Web application development using the PHP open-source programming language, and invoking DB2 and the open source Cloudscape/Derby DBMS.

WebSphere Studio Application Developer offers toolsets for Java, Smalltalk, C++, and REXX; compilers for C and C++; and stored-procedure support. Both the Java and stored-procedure support allow developers to minimize client-to-server network traffic by placing more code on the server.
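As a hedged sketch of placing more code on the server (the procedure, its logic, and all names are illustrative, not drawn from IBM documentation), a SQL procedure can be created once from a JDBC client and thereafter invoked in a single round trip, as in the earlier CALL example:

    import java.sql.Connection;
    import java.sql.Statement;

    public class CreateProcedure {
        // Illustrative SQL procedure: the discount logic runs inside the
        // database server, so the client later sends one CALL instead of
        // shipping many row updates across the network.
        static void install(Connection con) throws Exception {
            String ddl =
                "CREATE PROCEDURE APPLY_DISCOUNT " +
                "  (IN P_CUST INT, IN P_PCT DECIMAL(5,2)) " +
                "LANGUAGE SQL " +
                "BEGIN " +
                "  UPDATE ORDERS SET TOTAL = TOTAL * (1 - P_PCT / 100) " +
                "    WHERE CUSTOMER_ID = P_CUST AND STATUS = 'OPEN'; " +
                "END";
            try (Statement st = con.createStatement()) {
                st.execute(ddl);
            }
        }
    }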

Other key features of IBM DB2 for speed to value include strong cost-based optimization technology, as demonstrated by past performance on TPC-C and TPC-D benchmarks; XML and object storage plus XQuery support to handle the object-relational mismatch; and Information Integrator, which extends query optimization across multiple databases and simplifies the job of the programmer attempting to write cross-database queries.

Conclusions

For much of the last decade, IT and the enterprise as a whole have underestimated the importance of development, and the effectiveness of the right development toolset and database in speeding development of valuable applications. At the same time, the positive effects of the right development-tool/database choice continue to increase.

Key factors in choosing the right database include its scalability (and the ease with which the developer can leverage that scalability), its integration with a powerful toolset or toolsets, its ability to handle the "object-relational mismatch," and its strengths in handling the type of transaction stream that the application demands. In all of these areas, IBM DB2 has strong capabilities. Large organizations, especially, attempting to achieve speed-to-value should consider IBM DB2 as part of their short list of development-tool/database pairs.


Infostructure Associates is an affiliate of Valley View Ventures that aims to provide thought leadership and sound advice to both vendors and users of information technology. This document is the result of Infostructure Associates sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication. This document is subject to copyright. No part of this publication may be reproduced by any method whatsoever without the prior written consent of Infostructure Associates. All trademarks are the property of their respective owners. While every care has been taken during the preparation of this document to ensure accurate information, the publishers cannot accept responsibility for any errors or omissions.
