Server Virtualization

  Article published in DM Direct Newsletter
March 25, 2005 Issue
 
  By Jaime Gmach and Todd Holcomb

The Dream

Most of us wouldn't dream of using a sports car to haul a boat up a hill, down a hill or even back it into a boat launch. A truck better suited to the task, however, demands significantly more fuel and maintenance in everyday driving conditions.

Unfortunately, only in a dream world can sports cars magically morph to the size and towing power necessary to effectively haul a boat to its final destination. And in these same dreams, powerful trucks magically morph down to the size of sports cars, saving drivers unnecessary fuel expenses in ordinary driving conditions.

IT professionals dream of robust networking environments that expand and contract in this same dynamic way. They want environments capable of processing weekly payroll, end-of-month commissions and end-of-year accounting - A/R, A/P and general ledger close-outs - while simultaneously maintaining their daily ERP, CRM and e-mail systems. Most servers, even under extreme conditions, rarely reach maximum processing power. In fact, on a typical workday, most servers (particularly Windows servers) rarely surpass a 10 percent utilization rate.

The Reality

Luckily, at least for IT professionals, the dream world of server "morphing" - or virtualization in the real world setting - is becoming a reality.

Although most companies are not yet taking advantage of virtual server expansion and contraction, it is already possible to "borrow" CPU and/or memory capacity from servers that are not currently being "taxed," and then return that capacity to its original "owners" in its original state. Imagine servers spoofed into thinking they have unlimited CPU and memory capacity, never exceeding their processing or workload thresholds again!
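To make the borrowing idea concrete, here is a minimal, hypothetical Python sketch of the lend-and-return cycle. The server names, CPU figures and the 80 percent threshold are illustrative assumptions only; real products manage this at the hypervisor or orchestration layer rather than in application code.

```python
# Hypothetical sketch of "borrow and return" capacity sharing between servers.
from dataclasses import dataclass

THRESHOLD = 0.80  # typical upper utilization threshold discussed in this article


@dataclass
class Server:
    name: str
    cpu_total: int       # total CPUs on the box
    cpu_in_use: float    # CPUs consumed by the server's own workload
    cpu_lent: float = 0.0

    def spare_capacity(self) -> float:
        # Only lend what keeps this donor comfortably below the threshold.
        return max(0.0, THRESHOLD * self.cpu_total - self.cpu_in_use - self.cpu_lent)


def borrow(donors: list[Server], needed: float) -> float:
    """Borrow up to `needed` CPUs from idle donors; return how much was obtained."""
    obtained = 0.0
    for donor in donors:
        if obtained >= needed:
            break
        take = min(donor.spare_capacity(), needed - obtained)
        donor.cpu_lent += take
        obtained += take
    return obtained


def give_back(donors: list[Server]) -> None:
    """Return all borrowed capacity to its original owners, in its original state."""
    for donor in donors:
        donor.cpu_lent = 0.0


if __name__ == "__main__":
    # The OLTP server is past 80 percent busy; SQL and Exchange are nearly idle.
    idle = [Server("sql", 4, 0.4), Server("exchange", 4, 0.3)]
    print("borrowed CPUs:", borrow(idle, needed=2.0))
    give_back(idle)
```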

Some predict that by the end of 2004 or early 2005, servers that auto-monitor and auto-adjust to on-demand data requirements will appear frequently in larger IT shops. Servers able to auto-adjust to continuously changing CPU and memory needs will become as widely accepted as the current "cascading servers" methodology. This is more than a foray into virtualization; it is a complete leap into autonomic computing.

Local Server Virtualization

Imagine employees accessing large files or applications such as Visio or AutoCAD from a local server. The processing power needed for multiple employees to open large files on a single server can push CPUs and/or memory past predefined thresholds, typically set at 70-80 percent. When servers exceed those thresholds, the lack of processing power drastically inhibits data and document retrieval speeds across your LANs and WANs. This often results in hard dollar costs - from replacing smaller servers with larger ones or from clustering the existing servers - and soft dollar costs in the form of lost employee productivity. Grow this scenario into an online transaction processing (OLTP) environment and watch hard dollars disappear the way baseball caps fly from open convertibles.

Take, for example, "Local Books," a small fictional company that sells books written by local authors from its bookstore on Main Street. The first day it launched its online shopping venue, it received 30,000 hits and hundreds of attempted transactions. Because it had not effectively planned for this activity, its OLTP and back-end database servers were significantly taxed.

Wait cycles increased because the CPUs and/or memory were functioning constantly beyond an 80 percent utilization threshold. Spikes in wait times meant Web site visitors and online buyers were negatively affected. All of this happened while their SQL, File and Print, and Exchange servers were running idle at less than 10 percent utilization.

Unfortunately, this type of scenario is typical within many IT shops. While generally planning for system failure, they often forget to plan for success and system scalability. If Local Books had a plan in place to handle additional on-demand ordering, its systems would have been ready for the drastic increase in online orders and would not have dropped or lost any transactions.

Had Local Books set up a virtualized server environment, using products such as VMware and/or IBM's Orchestrator, its OLTP server never would have reached the 70-80 percent processing threshold. The server would have dynamically accessed available resources from the SQL, File and Print, and/or Exchange servers, temporarily borrowing processing power to complete and book order transactions during peak ordering periods - thus eliminating wait times. Once the capacity was no longer needed, the OLTP server would have politely returned it to the respective servers. The Local Books brand equity would have remained intact, and a hefty profit would have been made on the opening day of the online store.

In terms of our automotive analogy, a proper server virtualization environment would have allowed the Local Books OLTP server to virtually grow or morph from a two-seater to a four-seater, from a four-seater to a station wagon, and - if needed - from a station wagon to a more powerful truck. And when the extra capacity was no longer needed, the truck would simply shrink back down to a two-seater again.

Remote Server Virtualization

Assume Local Books grew to become National Books, but this time it had a plan for exponential growth. It implemented a virtualized server environment, reduced wait times and, as a result, successfully processed more online orders than it could initially fathom. Now the National Books Web site receives millions of hits and processes tens of thousands of online transactions and book orders each day.

Without a virtualized hardware resource environment, each time order processing reached its capacity, it would either slow down process requests, create significant time-out errors or, worst of all, halt the National Books Web site altogether. The additional "unplanned" traffic on the servers could have led to data corruption, lost sales and diminished credibility for the company brand.

But because National Books chose to implement a virtualized server environment, its primary applications could share resources with other (secondary) applications - Exchange with J.D. Edwards, SQL with Siebel, SAP with Tivoli, and so forth. Sales and online Web site transactions would be conducted without slowing down the network, resulting in increased per-transaction profitability and brand awareness.

What this means is that National Books would not have to add servers each time it runs a special promotion or releases a new book from a best-selling author. As a result, it would save substantial dollars, because a virtualized server environment would let it increase on-demand CPU and memory resources without spending additional hard dollars. National Books' processing horsepower would be guaranteed no matter how large the demand.

Server Virtualization - Why Not Now?

Many IT professionals may be wondering: if server virtualization is available today, why aren't more IT shops taking advantage of this money-saving, resource-sharing solution? Because it is as new a concept now as hybrid vehicles were 10 years ago. Ten years from now, hybrid vehicles will no doubt be commonplace; however, many if not most of you don't want to wait 10 to 20 years to virtualize your IT environment. The following three steps are designed to get your company driving in the direction of autonomic computing.

Server Virtualization - The First Steps

Step 1 - Assess and Validate. Conduct an environmental assessment to define each department's server processing needs. Deploy custom-configured resource and environmental auditing agents to poll all servers and identify current totals of CPU, memory, adaptors and file/system capacity, plus total used and unallocated disk space (be sure to account for all archive file space, as it often takes up 30-40 percent of all data storage - much of it in duplicate and triplicate form). During the same assessment, identify CPU, memory and adaptor usage peaks; read, write and wait cycle peaks; and all data that has not been accessed over extended periods of time.
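As a rough illustration of the kind of data a Step 1 auditing agent might gather, the following Python sketch samples local CPU, memory and disk utilization with the open-source psutil library. The sampling interval, the JSON report format and the idea of shipping each sample to a central collector are assumptions for illustration, not a description of any particular vendor's agent.

```python
# Hypothetical per-server sampling agent for a Step 1 utilization audit.
import json
import socket
import time

import psutil  # third-party library: pip install psutil


def sample_utilization(interval_seconds: float = 1.0) -> dict:
    """Collect one utilization sample for this server."""
    cpu_percent = psutil.cpu_percent(interval=interval_seconds)
    memory = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": cpu_percent,
        "memory_percent": memory.percent,
        "disk_used_percent": disk.percent,
        "disk_unallocated_gb": round(disk.free / 1e9, 1),
    }


if __name__ == "__main__":
    # In a real assessment, samples like this would be shipped to a collector
    # and trended over weeks to find CPU, memory and wait-cycle peaks.
    print(json.dumps(sample_utilization(), indent=2))
```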

Step 2 - Rationalize and Critique. Critique your current server environment. Either identify and consolidate processing-compatible applications onto single servers, or virtualize your existing multi-server environment to share processing resources from a common pool. Only the second option reduces the need to purchase a new server for every new application. As a result, you would increase utilization of your existing servers from a typical 10-20 percent to a more effective and efficient 40-50 percent. More importantly, you drastically decrease "unexpected" outages while turning your one-to-one, limited-growth environment into a flexible, scalable solution without throwing out your existing investment.

Identify all mission-critical servers. Leave those servers in a one-to-one relationship for your heavy-hitting applications such as SAP, PeopleSoft, Siebel and large OLTP databases (such as Oracle). Then consolidate your non-heavy-hitting applications (File and Print, Exchange, SQL, etc.) and virtualize the remaining servers to form a common pool of hardware resources. Finally, configure that CPU, memory and adaptor resource pool to be shared with the heavy-hitting servers and applications whenever it is needed.
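This sorting exercise can be expressed as a simple rule of thumb. The sketch below uses entirely made-up peak figures and an assumed 50 percent cut-off to show how Step 1 audit output might be split into servers that stay one-to-one and servers that join the shared pool.

```python
# Hypothetical Step 2 triage: split servers into one-to-one keepers and pool candidates.
PEAK_CPU_CUTOFF = 50.0  # illustrative: servers peaking below this can join the pool

# Example Step 1 output: observed peak CPU percent per server/application (made up).
audit_peaks = {
    "sap-erp": 85.0,
    "oracle-oltp": 78.0,
    "exchange": 22.0,
    "file-print": 9.0,
    "sql-reporting": 18.0,
}

mission_critical = [name for name, peak in audit_peaks.items()
                    if peak >= PEAK_CPU_CUTOFF]
pool_candidates = [name for name, peak in audit_peaks.items()
                   if peak < PEAK_CPU_CUTOFF]

print("Keep one-to-one:", mission_critical)
print("Virtualize into shared pool:", pool_candidates)
```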

Step 3 - Stop Investing. Look around. Imagine the amount of gas that would be saved if we all carpooled with at least one more person. Stop thinking the only solution is to buy another server; chances are you are not taxing the servers you already have. Start "carpooling" your data and available resources!

Tap into your existing hardware pool and reduce the number of servers you feel you have to buy simply to increase on-demand processing capacity. Odds are high that you don't need to add a server to increase your CPU and/or memory horsepower. In fact, if your IT environment is typical, not only may you not need to add to your existing server pool, but chances are you are positioned to cascade many of your existing servers and reduce your related server budget for years to come - starting today!

Autonomic Computing

In the very near future, many of today's production-level servers will not only be virtualized but will also be configured for and capable of performing internal performance audits, or "automated health checks" (from I/O processing needs at the CPU and memory level to page and buffer credit settings at the kernel level). They will automatically adjust and/or reconfigure themselves according to their immediate system needs and will be able to virtually morph - growing and contracting at will - to meet almost all on-demand needs, either with predesigned human involvement (decision-making points, particularly when you are just starting your deployment) or, eventually, without any human intervention at all.

Virtualizing your servers will enable them to identify their own CPU, memory and adaptor requirements. They will reach out to idle servers and borrow capacity in order to complete immediate tasks. Then, without human prompting, these virtualized servers will return the capacity when it is no longer needed.
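This borrow-and-return cycle without human prompting amounts to a control loop. The following sketch uses a stub metric source and assumed high- and low-water marks to illustrate the loop; a production implementation would query the hypervisor or orchestration platform instead of generating random numbers.

```python
# Hypothetical autonomic control loop: watch utilization, borrow capacity when a
# threshold is crossed, and hand it back once demand subsides.
import random
import time

HIGH_WATER = 80.0  # assumed: borrow above this utilization (percent)
LOW_WATER = 40.0   # assumed: return borrowed capacity below this


def read_cpu_percent() -> float:
    """Stub metric source; a real agent would query the hypervisor or OS."""
    return random.uniform(10.0, 95.0)


def autonomic_loop(cycles: int = 5, poll_seconds: float = 1.0) -> None:
    borrowed = False
    for _ in range(cycles):
        cpu = read_cpu_percent()
        if cpu > HIGH_WATER and not borrowed:
            borrowed = True
            print(f"{cpu:.0f}% busy: borrowing capacity from idle servers")
        elif cpu < LOW_WATER and borrowed:
            borrowed = False
            print(f"{cpu:.0f}% busy: returning borrowed capacity")
        else:
            print(f"{cpu:.0f}% busy: no adjustment")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    autonomic_loop()
```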

The ultimate goal of server virtualization is autonomic computing: capacity on demand that provides an effective road map for managing your information systems, regardless of size, processing demands, resource needs, time of day or night, or human availability. Autonomic computing may not be the solution to every problem from "soup to nuts," but it certainly is a solution for most server environments, from "coupe to trucks."


Jaime Gmach cofounded Evolving Solutions in January of 1996 after spending the previous 10 years sharpening his entrepreneurial skills in various elements of the technology industry. He previously served in roles ranging from customer engineer and director of technical services to sales manager and finally to president of Evolving Solutions. Gmach's strong technical perspective comes from years of face-to-face interaction with clients to design and implement their desired business solutions.

Todd Holcomb, director of Professional Services at Evolving Solutions, has led emerging technology initiatives such as server virtualization at the enterprise level for nearly 20 years. He has acquired a deep understanding of mass storage (SAN, NAS and CAS) environments from former employers including EMC, Sylvan Prometric and IBM Global Services, and from three years running an IT start-up company specializing in data management and on-the-road/on-the-fly order processing. You can contact him at todd.h@evolvingsol.com or (763) 516-6500.
