Knowledge: The Essence of Meta Data:
Engagement Processes Revealed
In other articles, I have discussed the concepts of the consumer, the producer and the broker, where the meta data services group acts as the broker between those who produce information and those who use it. The producer is usually the more difficult nut to crack, since it is the consumer who actually gets the value from the meta data collection. Obviously, the idea is that producers are also consumers, and they will see the benefit of playing both roles. I have also talked at length about the benefits of applying usability principles to the consumer; but what about the producer? What value-add elements could be addressed for the producer of meta data information? The most frequent request is that we make the process of engagement easy, simple and, if possible, automated. That is, of course, a very tall order for any group starting at ground zero; but for organizations that have well-defined processes, the opportunity exists to move toward a simple procurement model. This model is based on a design found in just about every grocery store. My local Kroger has three methods of product procurement (excluding illegal purchases). The first is self-service, where I simply scan my own products, bag them and pay by cash or credit card at an ATM-like device. The second is the 10-items-or-less lane; as long as I follow the rules, much of the work is done for me. The final area is full service, where everyone in the store waits on my every desire. OK, it's not really like that, but it would be nice. The meta data services group can build a procurement model on these same three layers: self-service, guided service and full service.
Self-Service Engagement
While it may sound like a pipe dream, ideally we want the customer to own and manage the data. We, inside meta data services, don't want to be data stewards, database administrators, data analysts or data modelers. Our job is simple: collect data from the producers, apply information systems best practices and deliver value to the consumer. We perform those other jobs only because of the lack of quality and process within the environment. If we can provide a rich set of tools that end users can use to manage their own data, then we can reduce the expense of the meta data services group. For example, suppose we manage the corporate search engine, where meta data is collected and services are provided to the end consumer via an intranet search engine. Ideally, the customer could simply fill out an online form that instructs the spider (the search technology that assembles the meta data) to crawl the new site collection. No manual labor would be required; or, better stated, no one would need to call our organization. Is this a good thing or a bad thing? If all producers could load their data without our help, why would we need to exist? Remember, the long-term success of meta data depends not only on content but on usage as well. The more content we can add, by any means necessary, the more the information will be utilized. Imagine a full system development life cycle, including front door activities, requirements, design, procurement and ongoing maintenance, where most of the activities are automated. This would be the ideal environment for all parties: consumers, producers and brokers.
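To make the self-service idea concrete, here is a minimal sketch of what the online registration form's back end might do: validate the producer's submission and queue the site for the spider with no human in the loop. All names here (register_site, crawl_queue, the stewardship fields) are hypothetical illustrations, not part of any real product.

```python
# Sketch of self-service producer engagement: a submitted site is
# validated and queued for the spider automatically, so no one needs
# to call the meta data services group.
from urllib.parse import urlparse

crawl_queue = []  # seed list the spider reads on its next pass


def register_site(url, owner_email):
    """Validate a producer's form submission and queue it for crawling."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError("submission must be a valid http(s) URL")
    if "@" not in owner_email:
        raise ValueError("a steward contact address is required")
    entry = {"url": url, "owner": owner_email, "status": "pending-crawl"}
    crawl_queue.append(entry)
    return entry
```

The point of the sketch is the absence of any manual step between the producer's request and the spider's work queue.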
Guided Service Engagement
The self-service model sounds really nice and is certainly worth striving for in the future. However, most engagements are not going to be self-service, at least not in the beginning. Guided service builds on the idea that common processes based on templates enable a more effective and efficient environment. This includes the concept of standard inputs pushed through standard processes that deliver standard outputs. More importantly, this standardization creates an environment where time, cost and effort can be predicted within a 10 percent variance. The best example is the normal process of scanning a logical model, a physical model and the actual database. When the logical model is well designed and maps to the physical model in such a way that the database matches the physical model's DDL specifications (yes, it does happen), then we have a process that works fairly smoothly. Not only is the process simple, but the time required becomes very predictable. There are some assumptions here that help us define the standards within the data space. We assume that the logical model is an ER diagram built in the one and only tool approved by the architecture. We also assume the database management system is an approved standard. Standard environments allow us to move in a predictable manner. Yes, standard processes are boring, but they are effective and efficient at increasing both content and usage.
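The template idea above can be sketched as a simple lookup: an engagement that matches an approved (modeling tool, DBMS) pair gets the standard process and an effort estimate within the 10 percent variance; anything else falls through to full service. The tool and DBMS names and the hour figures are placeholder assumptions for illustration only.

```python
# Guided service as a template registry: approved combinations map to a
# standard process with a predictable effort estimate (+/- 10 percent).
TEMPLATES = {
    ("ERwin", "Oracle"): {"process": "standard-model-scan", "hours": 16},
    ("ERwin", "DB2"):    {"process": "standard-model-scan", "hours": 20},
}


def plan_engagement(model_tool, dbms):
    """Return the service level and effort estimate for an engagement."""
    template = TEMPLATES.get((model_tool, dbms))
    if template is None:
        # No approved template: the engagement needs a custom approach.
        return {"service": "full", "process": "custom", "estimate": None}
    low = round(template["hours"] * 0.9, 1)
    high = round(template["hours"] * 1.1, 1)
    return {"service": "guided", "process": template["process"],
            "estimate": (low, high)}
```

The registry is the interesting part: every pair added to it converts a class of full service engagements into predictable guided ones.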
Full Service Engagement
Full service moves beyond the standard process definitions, or templates, described in the previous section. If you think about the high-level processes of the meta data services group, you see processes defined for bringing data into the collection (content), providing value and utility with the data (usage) and the internal activities of the group itself. Clearly, if all three elements are automated, then we have a self-service environment. If at least one of the three elements is driven by a template, then we have a guided service environment. Full service indicates that at least one of the elements requires a customized approach.
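The tiering rule just described can be stated as a small function: classify an engagement by how each of the three elements (content, usage, internal process) is handled, with a customized element trumping a templated one. This is just a restatement of the paragraph above in code.

```python
# Classify an engagement by how its three elements are handled:
# "automated", "template" or "custom".
def service_tier(content, usage, internal):
    elements = (content, usage, internal)
    if all(e == "automated" for e in elements):
        return "self-service"          # everything runs without us
    if any(e == "custom" for e in elements):
        return "full service"          # at least one customized approach
    return "guided service"            # template-driven, predictable
```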
For example, suppose a new data modeler is hired to model the customer database and prefers to use his or her own proprietary tool. A new scanner must be developed to extract the meta data and load it into the standard meta-model, where standard consumption can take place. In other cases, customers may want a customized interface into the meta data repository through which active usage can be delivered. In both cases, a customized solution is required.
The overarching theme is obvious: the business of meta data should follow the example of grocery stores and fast food restaurants. Move as much as you possibly can to the template and self-service forms of engagement. Limit the number of full service engagements through architecture definitions, business rules or simple automation of workflow.
Now, a question for the reader: is this designation and delineation of an engagement process a big thing or a small thing? While you might be inclined to say "no big deal," let's peel back the onion to reveal some underpinning elements. First, the framework forces you to have a clearly defined metric strategy for how data is loaded and utilized within the environment. When you drive toward a goal of 10 to 20 percent growth per year, you understand the importance of having streamlined processes on both sides of the producer and consumer equation. By focusing on the metrics of content and usage, you are forced to be creative in how data is loaded and managed throughout the life cycle. Second, the process of engagement is easily explained to business and technical people. Using the shopping metaphor, many of the complex elements are hidden from the plain sight of the customer. Yes, meta data is a complex function, but the customer doesn't need to know that. Finally, these systems and automated functions are the cornerstone of providing quality meta data. Think about McDonald's french fries and the entire process of delivery. The process begins with the soil in which the potatoes are grown, where the company ensures that the acidity meets standard guidelines. McDonald's ensures that the methods of product delivery, storage, processing and presentation are consistent. They understand that in order to deliver quality, you must have systems in place that automate or monitor each and every step. The same is true within the meta data environment; our systems and processes are the real intellectual property of the meta data services group.
R. Todd Stephens, Ph.D. is the director of Meta Data Services Group for the BellSouth Corporation, located in Atlanta, Georgia. He has more than 20 years of experience in information technology and speaks around the world on meta data, data architecture and information technology. Stephens recently earned his Ph.D. in information systems and has more than 70 publications in the academic, professional and patent arena. You can reach him via e-mail at Todd@rtodd.com or to learn more visit http://www.rtodd.com/.