This article originally appeared on the BeyeNETWORK.
In February’s article, we started to look at a high-level view of metadata as a control mechanism for master data management (MDM), and suggested that there are seven levels in a metadata stack supporting the transition to a master data environment. This article focuses on one layer in this metadata abstraction – the metadata associated with services. We positioned “service metadata” as describing the abstract functionality embedded within and used by business applications, the degree to which those functions can be described as stand-alone services, and the mapping from service to client applications.
Master data management is largely seen as providing value to client applications by giving them access to a high-quality data asset of uniquely identifiable master objects synchronized across the enterprise. However, it turns out that master service consolidation is a strong motivating factor for MDM, even (at times) trumping the value of the consolidated data asset. The process of analyzing the use of master data objects exposes the ways in which different applications create, access, modify, and retire similar objects; and this analysis helps in determining which data sets represent recognized master object types. The byproduct of this analysis is not just knowledge of the master objects, but also knowledge about the functionality applied to those objects.
The upshot is that as multiple consolidated master object views are aggregated into a single master model, the functionality associated with the life cycle of master objects can be consolidated as well – there is no need to have three or four processes for creating a new customer or product when one will suffice. This becomes particularly valuable when add-on software applications are integrated into the environment. The licensing, maintenance, and operations costs of those add-on applications can be reduced when the data sets they were intended to support are consolidated into a single master view.
There will be two collections of services. The first is an enumeration of the essential services employed by client business applications at a conceptual level, such as “create a customer” or “update a telephone number.” Master services can themselves be segmented into core object services that address typical data life cycle events, such as creating or modifying an object, and business services applied as part of the business process workflow, such as “generate invoice” or “initiate product shipment.”
The second collection is a current view of the (possibly multiple) ways that each conceptual master service is actually deployed within the current environment. This is intended to assist in the development of an implementation road map by identifying the functional components that will ultimately be replaced, and that will require an engineered wrapper during the migration process.
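As an illustrative sketch only (every class, field, application, and module name below is invented for this example), these two collections might be modeled by pairing each conceptual master service with the deployments that currently implement it:

```python
from dataclasses import dataclass, field
from enum import Enum

class ServiceKind(Enum):
    CORE_OBJECT = "core object"   # data life cycle events: create, modify, retire
    BUSINESS = "business"         # workflow steps: generate invoice, ship product

@dataclass
class Deployment:
    """One of the possibly multiple current implementations of a service."""
    application: str
    module: str
    needs_wrapper: bool = True    # requires an engineered wrapper during migration?

@dataclass
class MasterService:
    """A conceptual master service and its current deployments."""
    name: str
    kind: ServiceKind
    deployments: list[Deployment] = field(default_factory=list)

create_customer = MasterService(
    "create a customer", ServiceKind.CORE_OBJECT,
    [Deployment("CRM", "cust_entry"), Deployment("Billing", "new_acct")],
)
# Two redundant deployments of one conceptual service: a consolidation candidate
print(len(create_customer.deployments))  # 2
```

A catalog like this makes the road map concrete: any service with more than one deployment entry is a candidate for consolidation.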
In addition to documenting the list of services and the way each is currently deployed, the services metadata layer will also list the clients of the services. This is a list of both automated and human clients that invoke the functionality that will ultimately be implemented within the set of enumerated services. This inverse mapping from service to client is also used in impact analysis and migration planning, both for determination of risk during the transition from the legacy framework to the MDM environment and for ongoing management, maintenance, and improvement of master data services.
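The inverse mapping can be sketched minimally as follows (a dictionary-based registry with invented client and service names, not a real MDM product API):

```python
from collections import defaultdict

# Forward view: which conceptual services each client (automated or human) invokes
clients = {
    "order-entry app":  ["create a customer", "generate invoice"],
    "call-center team": ["update a telephone number", "create a customer"],
}

# Build the inverse mapping: service -> set of clients that invoke it
service_to_clients = defaultdict(set)
for client, services in clients.items():
    for service in services:
        service_to_clients[service].add(client)

def impacted_clients(service: str) -> set[str]:
    """Clients at risk if this service changes during the MDM transition."""
    return service_to_clients.get(service, set())

print(sorted(impacted_clients("create a customer")))
# ['call-center team', 'order-entry app']
```

Querying the inverse map answers the impact-analysis question directly: replace or modify a service, and the registry reports exactly which clients must be retested or migrated.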
So far we have metadata representing the different types of services and the users of those services. The third component of master data services metadata captures the interfaces used by the clients (both automated and human) to invoke those services. Consolidating functionality into services must ensure that the newly created services support each application’s current functional requirements, and that includes details about the different ways the services must be invoked and, consequently, any necessary parameterization or customization for the service layer. Alternatively, as functions are evaluated and their invocation methods reviewed, it may become apparent that even though the functionality appears to be the same across a set of applications, the ways that the functionality is invoked may signal discrete differences in the intended effects. Capturing this interface information will help analysts in this assessment.
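One way this interface layer can support the assessment is sketched below: record the signature and intended effect each application uses when invoking a conceptual service, then flag services whose invocations diverge. All parameter names and effects here are invented for illustration:

```python
# Interface metadata: (service, application) -> how that application invokes it
interfaces = {
    ("create a customer", "CRM"):     {"params": ("name", "tax_id"),
                                       "effect": "insert with duplicate check"},
    ("create a customer", "Billing"): {"params": ("name",),
                                       "effect": "insert only"},
}

def divergent(service: str) -> bool:
    """True if applications invoke the service with differing signatures or effects,
    signaling that the 'same' function may not be the same after all."""
    specs = [spec for (svc, _), spec in interfaces.items() if svc == service]
    return len({(s["params"], s["effect"]) for s in specs}) > 1

print(divergent("create a customer"))  # True
```

A divergent result tells the analyst that what looked like one consolidatable service may actually be two, or that the consolidated service needs parameterization to cover both behaviors.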
In many metadata environments, the knowledge captured and managed within the repository is often considered complete once it contains data dictionary entries, data models, and usage mappings into the application space. However, since there is an opportunity to add value through service consolidation, managing the information about services as if they were first-class master objects enables the business subject-matter experts and the data analysts to gain additional insight into the “mastering” process.
David is the President of Knowledge Integrity, Inc., a consulting and development company focusing on customized information management solutions including information quality solutions consulting, information quality training and business rules solutions. Loshin is the author of The Practitioner's Guide to Data Quality Improvement, Master Data Management, Enterprise Knowledge Management: The Data Quality Approach and Business Intelligence: The Savvy Manager's Guide. He is a frequent speaker on maximizing the value of information. David can be reached at firstname.lastname@example.org or at (301) 754-6350.