Published in DM Review in October 2003.
Meta Data & Knowledge Management: Designing the Optimal Meta Data Tool, Part 1
By David Marco
Many government agencies and corporations are currently examining the meta data tools on the market to decide which of them, if any, meet the requirements of their meta data management solutions. These organizations want to know what types of functionality and features they should be looking for in this tool category. Unfortunately, answering this question is complicated, as each tool vendor has its own "marketing spin" as to which functions and features are really the most advantageous. This leaves the consumer with a very difficult task, especially when it seems as though none of the vendors' tools fully fits the consumer's meta data management requirements.
I would like to take this opportunity to play software designer and present my optimal meta data tool's key functionality. One of the challenges with this exercise is that meta data functionality has a great deal of depth and breadth. Therefore, in order to properly categorize the optimal tool's functionality, I will use the six major components of a managed meta data environment (MME): the meta data sourcing layer, meta data integration layer, meta data repository, meta data management layer, meta data marts and meta data delivery layer.
Meta Data Sourcing and Meta Data Integration Layers
The goal of the meta data sourcing and integration layers is to extract the meta data from its source, integrate it where necessary and bring it into the meta data repository.
It is important for the meta data sourcing technology to be able to work on mainframe applications, distributed systems and file-based sources (databases, flat files, spreadsheets, etc.) on a network. These functions would have to run in each of these environments so that the meta data could be brought into the repository. I did not include AS/400 environments in my list of platforms because of their relatively sparse use; however, if your information technology shop's preferred application platform is AS/400, clearly your optimal meta data tool would need to work on that platform.
Many of the current meta data integration tools come with a series of prebuilt meta data integration bridges. The optimal meta data tool would also have these prebuilt bridges. Where our optimal tool would differ from the vendor tools is that it would have bridges to all of the major relational database management systems (e.g., Oracle, DB2, SQL Server, Informix, Sybase and Teradata), the most common vendor packages (e.g., Siebel, SAP, PeopleSoft, Oracle, etc.), several code parsers (COBOL, JCL, C++, SQL, XML, etc.), key data modeling tools (ERwin, Designer, Rational Rose, etc.), top extract, transform and load (ETL) tools (e.g., Informatica, Ascential) and the major front-end tools (e.g., Business Objects, Cognos, Hyperion, etc.).
As much as is possible, I would want my meta data tool to utilize extensible markup language (XML) as the transport mechanism for the meta data. While XML cannot directly interface with all meta data sources, it would cover a great number of them.
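To make the idea of XML as a transport mechanism concrete, here is a minimal sketch of how extracted table meta data might be serialized into an XML document for delivery to the repository. The element and attribute names are illustrative assumptions, not any vendor's or standard's schema:

```python
import xml.etree.ElementTree as ET

def metadata_to_xml(table_name, columns):
    """Serialize extracted table meta data into an XML document
    suitable for transport into the meta data repository."""
    root = ET.Element("metaData", source="relational")
    table = ET.SubElement(root, "table", name=table_name)
    for col_name, col_type in columns:
        ET.SubElement(table, "column", name=col_name, type=col_type)
    return ET.tostring(root, encoding="unicode")

# Example: meta data captured from a hypothetical CUSTOMER table
xml_doc = metadata_to_xml("CUSTOMER",
                          [("CUST_ID", "INTEGER"),
                           ("CUST_NAME", "VARCHAR(50)")])
print(xml_doc)
```

Because the payload is plain XML text, any bridge that can emit or parse it can participate in the transport, regardless of the platform it runs on.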
These meta data bridges would not only bring meta data from its source and load it into the repository; they would also be bidirectional, allowing meta data to be extracted from the meta data repository and brought back into the source tool.
Lastly, these meta data bridges wouldn't just be extraction processes. They would also have the ability to act as "pointers" to where the meta data is located. It is very important for a repository to have this distributed meta data capability.
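One way to picture this distributed capability is a repository entry that either holds the meta data physically or holds only a pointer to where it lives at its source. The sketch below is a simplified assumption of how such an entry might behave; the class and attribute names are hypothetical:

```python
class MetaDataEntry:
    """A repository entry that either stores meta data directly or
    acts as a pointer to where the meta data is located at its source."""

    def __init__(self, name, content=None, pointer=None):
        self.name = name
        self.content = content    # meta data physically loaded into the repository
        self.pointer = pointer    # e.g., a URI locating the meta data at its source

    def resolve(self, fetch):
        """Return the meta data, fetching through the pointer if needed."""
        if self.content is not None:
            return self.content
        return fetch(self.pointer)

# A physically loaded entry versus a distributed (pointer-only) entry:
loaded = MetaDataEntry("CUSTOMER.CUST_ID", content={"type": "INTEGER"})
remote = MetaDataEntry("ORDERS", pointer="db2://prod/catalog/ORDERS")
```

The repository presents both kinds of entry uniformly; only resolution differs, which is what lets some meta data stay at its source while still being navigable from the repository.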
Error-Checking and Restart
Any high-quality meta data tool would have an extensive error-checking capability built into the sourcing and integration layers. Meta data in an MME, like data in a data warehouse, must be of high quality or it will have little value. This error-checking facility would check the meta data it is reading for errors and then capture statistics on the errors that the process is experiencing (meta meta data). In addition, the tool would support configurable error levels. For example, it would give the tool administrator the ability to configure the actions taken for each error that occurred in the process and decide if the meta data should be 1) flagged with an informational/error message; 2) flagged as an error and then not loaded into the repository; or 3) flagged as a critical error, at which time the entire meta data integration process is stopped.
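The three administrator-configured actions above can be sketched in a few lines. This is an illustrative assumption of how such a facility might be wired, not any vendor's API; the function names and error codes are hypothetical:

```python
# Severity levels the administrator can assign to each error condition
INFO, ERROR, CRITICAL = 1, 2, 3

class CriticalMetaDataError(Exception):
    """Raised to halt the entire meta data integration process."""

def integrate(records, check, severity_for):
    """Load meta data records, applying the configured action per error:
    1) flag and load anyway, 2) flag and skip, 3) stop the whole run.
    Error statistics are captured as meta meta data."""
    loaded, stats = [], {"flagged": 0, "skipped": 0}
    for rec in records:
        error = check(rec)                # returns an error code or None
        if error is None:
            loaded.append(rec)
            continue
        severity = severity_for(error)    # administrator-configured mapping
        if severity == INFO:
            stats["flagged"] += 1
            loaded.append(rec)            # flagged but still loaded
        elif severity == ERROR:
            stats["skipped"] += 1         # flagged and not loaded
        else:
            raise CriticalMetaDataError(error)
    return loaded, stats

# Example: a record with a missing name is configured as a non-fatal error
records = [{"name": "CUST_ID"}, {"name": ""}, {"name": "ORDER_ID"}]
loaded, stats = integrate(
    records,
    check=lambda r: "missing_name" if not r["name"] else None,
    severity_for=lambda err: ERROR)
```

The key design point is that severity is a lookup the administrator controls, so the same error condition can be informational in one shop and fatal in another without changing the integration code.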
Also, this process would have checkpoints that would allow the tool administrator to restart the process. These checkpoints would be placed to ensure that the process could be restarted with the least degree of impact on the meta data itself and on its sourcing locations.
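A minimal sketch of such checkpointing, assuming the checkpoint is simply the index of the next record to process persisted to a file (the file name and batch size are illustrative):

```python
import json
import os

CHECKPOINT_FILE = "integration.ckpt"  # illustrative checkpoint location

def run_with_checkpoints(records, load, every=2):
    """Load records into the repository, writing a checkpoint after every
    `every` records so a failed run can be restarted with minimal rework
    and minimal re-extraction from the sourcing locations."""
    start = 0
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            start = json.load(f)["next"]      # resume from the last checkpoint
    for i in range(start, len(records)):
        load(records[i])
        if (i + 1) % every == 0:
            with open(CHECKPOINT_FILE, "w") as f:
                json.dump({"next": i + 1}, f)
    if os.path.exists(CHECKPOINT_FILE):
        os.remove(CHECKPOINT_FILE)            # clean finish: no restart pending

processed = []
run_with_checkpoints(["a", "b", "c"], processed.append)
```

Note that a restart may reprocess the few records loaded since the last checkpoint, so checkpoint placement is a trade-off between restart cost and checkpointing overhead, which is exactly why the administrator should control it.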
Next month I will continue designing the optimal meta data management tool by presenting its key functionality in the meta data repository and meta data management layers of a managed meta data environment.
David Marco is an internationally recognized expert in the fields of enterprise architecture, data warehousing and business intelligence and is the world's foremost authority on meta data. He is the author of Universal Meta Data Models (Wiley, 2004) and Building and Managing the Meta Data Repository: A Full Life-Cycle Guide (Wiley, 2000). Marco has taught at the University of Chicago and DePaul University, and in 2004 he was selected to the prestigious Crain's Chicago Business "Top 40 Under 40." He is the founder and president of Enterprise Warehousing Solutions, Inc., a GSA schedule and Chicago-headquartered strategic partner and systems integrator dedicated to providing companies and large government agencies with best-in-class business intelligence solutions using data warehousing and meta data repository technologies. He may be reached at (866) EWS-1100 or via e-mail at DMarco@EWSolutions.com.
Copyright 2005, SourceMedia and DM Review.