
Designing XML messages for standards

Organizations must create a data model before they implement XML.

This article originally appeared on the BeyeNETWORK.

This is the second in a series of articles exploring the use of XML to develop data standards in the insurance industry. This article addresses how to design XML messages for standards purposes, as well as approaches for getting the most out of implementing these standards.

In December, I discussed the benefits of XML and the difference it makes for exchanging information today. However, the real benefit can only be realized if the XML messages are designed well in the first place. Although this sounds obvious, I am frequently faced with XML attempts that cannot be extended for additional data or expanded for a second use, or that are too complicated to understand or too cumbersome to process. Either archaic or overly advanced principles are selected, semantics are misunderstood or a flat file is simply converted to XML, which leaves the XML essentially flat. Thus, all the benefits I discussed in the last article cannot be realized.
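As a minimal sketch of what that last pitfall looks like, imagine a hypothetical flat-file record carried field for field into XML; the names below are invented for illustration. The result is well-formed, but it preserves none of the structure or meaning that would let it grow beyond its original feed:

    <!-- Hypothetical flat-file record translated field for field into XML -->
    <PolicyRecord>
      <Field01>123456789</Field01>   <!-- policy number -->
      <Field02>19750312</Field02>    <!-- insured birth date, YYYYMMDD -->
      <Field03>F</Field03>           <!-- insured gender code -->
      <Field04>SMITH MARY</Field04>  <!-- insured name, last then first -->
    </PolicyRecord>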

The critical design criterion for XML is that you can extend it beyond the initial use. This is directly related to the versioning issue that I discussed last month.

Conceptually, we can deal with versioning issues in XML. In practice, however, it can only be realized if you consider where the XML may be heading. We can again consider the rules for maintaining backward compatibility between versions:

  • You cannot delete an existing element.
  • You cannot rename an existing element.
  • You cannot move an existing element.
  • You cannot make an existing optional element required.
  • You cannot change a repeating element so that it no longer repeats.

Basically, you can only add new elements.

This can be fairly restrictive, particularly if your XML design is not well conceived in the first place.
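To illustrate, here is a hedged sketch of a hypothetical message evolving in a backward-compatible way; the only change from version 1 to version 2 is the addition of a new optional element, so any consumer built for version 1 can still read version 2:

    <!-- Version 1 of a hypothetical person message -->
    <Person>
      <BirthDate>1975-03-12</BirthDate>
      <Gender>Male</Gender>
    </Person>

    <!-- Version 2: nothing deleted, renamed, moved or made required;
         a single optional element has been added -->
    <Person>
      <BirthDate>1975-03-12</BirthDate>
      <Gender>Male</Gender>
      <SmokerStatus>Nonsmoker</SmokerStatus>
    </Person>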

In reality, the person designing the XML is typically only concerned with the problems directly at hand. Perhaps he needs to get policy information to the rating system or commission details to an agent. In many cases, the data is already being sent to that destination by other means, whether as an EDI file or as a document that is printed and snail-mailed. The first source of information one considers is the structure of the data currently being sent. The easiest approach is to translate the existing structure directly to XML and get the application running. It is by far the quickest and cheapest approach in terms of initial investment. However, if the design cannot support additional requirements or other uses, the long-term costs skyrocket: each interface and each version must be maintained. The need to reduce the costs of these one-to-one implementations is exactly why we are looking at XML in the first place.

How do we design good XML, XML that can maintain a certain level of stability?

First and foremost, you need to model the data. MODEL! MODEL!! MODEL!!!

This cannot be emphasized enough. A well-designed data model is an absolute prerequisite to developing XML messages.

Remember when companies began implementing enterprise data teams in the 1990s? Historically, these companies developed enterprise data models based on the corporate assets and would advise other teams on how to design relational data stores. Despite this, they were not given any authority and had negligible influence or impact. Thus, the data models were created and the pictures hung on the walls for all to admire; but each project would ignore this effort. Many project teams would build their own data models anyway.

XML changes this picture. Suddenly, the role of the enterprise data modeler is pushed into the forefront and becomes mission-critical to the long-term viability of any XML implementation. The modeling work must be done before the XML can be designed and data feeds can be created. This is a necessary evil.

By creating a data model first and XML second, companies can begin thinking beyond the point-to-point messaging approach. Instead of creating a one-time feed, a single feed can be used to populate multiple systems. A superset XML file can be created against the legacy systems containing the data for multiple downstream processes. This approach makes the XML development process much more manageable, and the cost of extracting the data is significantly reduced.
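As a sketch of this idea, with element names invented for the example, a single superset extract from the policy administration system might carry what rating, commissions and claims each need, and every downstream consumer simply ignores the portions that do not apply to it:

    <!-- One superset feed serving several downstream processes -->
    <PolicyFeed>
      <Policy>
        <PolicyNumber>LP-000123</PolicyNumber>
        <Coverage>                               <!-- consumed by rating -->
          <FaceAmount>250000</FaceAmount>
        </Coverage>
        <Commission>                             <!-- consumed by the commission system -->
          <WritingAgentId>A-8841</WritingAgentId>
          <Rate>0.55</Rate>
        </Commission>
      </Policy>
    </PolicyFeed>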

When I use the term data modeling, I mean it in the most traditional sense. These are the steps that must be taken (a brief sketch of the resulting XML follows the list):

  • Define your entities. We have people and policies.
  • Define the data attributes that are needed. A person has a gender, a birth date and a social security number.
  • Determine how the entities relate to one another. A person can be the insured on a policy, the writing agent for a policy or a beneficiary of a policy.
  • Apply object-oriented principles, such as inheritance, to the modeling approach. A party can be a person or an organization. Similarly, a policy can be a life policy or an auto policy.
  • Develop naming conventions. Should you put the data type as a suffix on a name? What are acceptable abbreviations?

Most importantly:

  • Support the business processes the data must serve, such as underwriting, commissions and claims.
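To make these steps concrete, here is a brief, hypothetical sketch of how such a model might surface in XML. A party is specialized as a person or an organization, a policy as a life or auto policy, and a separate relation element ties a party to a policy in a given role; none of these names are taken from any particular standard:

    <!-- A party specialized as a person -->
    <Party id="P1">
      <Person>
        <Gender>Female</Gender>
        <BirthDate>1968-07-30</BirthDate>
        <GovtId>999-99-9999</GovtId>
      </Person>
    </Party>

    <!-- A policy specialized as a life policy -->
    <Policy id="POL1">
      <LifePolicy>
        <FaceAmount>100000</FaceAmount>
      </LifePolicy>
    </Policy>

    <!-- The relationship between them, expressed as a role -->
    <Relation>
      <OriginatingObjectId>POL1</OriginatingObjectId>
      <RelatedObjectId>P1</RelatedObjectId>
      <RoleCode>Insured</RoleCode>   <!-- could also be WritingAgent or Beneficiary -->
    </Relation>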

Companies that have maintained their investments in their enterprise data efforts are best poised to see the maximum benefits of XML if they apply these principles to their XML work. Companies that abandoned the whole enterprise data group concept because of insufficient funding and support must now scramble to reassemble an equivalent group before they can realize any benefit from their XML investment.

There is, of course, a tradeoff to this approach. Creating an enterprise data model that stands up to the data requirements across a corporation is a costly, lengthy endeavor. Because of this, companies are tempted to skip this step to streamline the process of getting XML-based processes up and running. Standards can help in this area if the standard is developed based on a model. This is where ACORD XML for Life Insurance (XMLife) is gaining significant traction.

This standard is developed around a data model. In fact, the data model was developed first and implemented in a Microsoft COM environment before XML itself was a standard. When XML came about in 1998, ACORD simply published the data model in XML and focused on new implementations and uses of the standard. ACORD has now published version 2.14, and it has never broken backward compatibility, going back to before the first XML implementation. The model has stayed intact and has only expanded over time. This approach has kept the XML structure remarkably stable.

The use of XMLife as an enterprise standard is particularly compelling to companies that do not have an accepted enterprise data model. Perhaps the model was designed but never implemented, or maybe the effort was abandoned before it was finished. In these cases, the XMLife standard gives them an easy, viable solution: the standard acts as their corporate data model, and they can focus on understanding it and mapping it to their existing systems rather than first designing something comparable.

This standard, though, does not solve every problem. Because XMLife has been around for so long and hit the market so early in the XML cycle, there are many things we would change and do differently if given the chance. But XMLife cannot be dismissed; it gives companies the basis they need for XML messaging precisely because it is a model-based standard. It can be applied across the business domain and is easily extended to support new requirements. Had it not been model-based, the standard would have been forced through many incompatible versions over its life to keep up with the expanding requirements the industry has placed on XML.

You can consider the XML message itself only after you have a data model in place. Certain XML message-design principles, from the choice of elements versus attributes to message identification, must also be developed and applied consistently across the enterprise.
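For instance, the same birth date can be carried either as a child element or as an attribute; both are legal XML, and the point is to pick one convention and apply it consistently (illustrative snippet only, not drawn from any standard):

    <!-- Birth date carried as a child element -->
    <Person>
      <BirthDate>1975-03-12</BirthDate>
    </Person>

    <!-- The same birth date carried as an attribute -->
    <Person birthDate="1975-03-12"/>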

When designing an XML structure based on a data model, you must determine how to make the leap from the model to an actual XML message. This should be your next priority. Multiple approaches work, each with its unique pros and cons. I will explore this topic in my next article.
