Improving Business Continuity Through Distributed Data Management

  Article published in DM Review Magazine
May 2002 Issue
  By Paul Cortese

2002 opened with information technology (IT) professionals outside of the enterprise becoming more fluent in terms such as replication, disaster recovery (DR), DR planning, business continuity, failover and backup site. More and more small- to medium-sized businesses are beginning to realize the real financial impact of disasters, unscheduled downtime and data loss, and they are eager to make disaster recovery planning part of the IT budget. Businesses are searching for vendors who can design DR strategies, recommend and implement hardware and software, and provide secure data center facilities that will protect business-critical data from loss or destruction.

A company approached a tier 1 managed hosting/IT outsourcing provider to design an infrastructure that would ensure application availability and data integrity. At the time, the application resided in the company's data center facility. The company was bound by contracts with their own customers to ensure data availability and application uptime. Even though hosting in a secure, state-of-the-art data center would protect the infrastructure from the majority of possible disasters, a business requirement dictated that the data reside in different geographic locations. The application itself would be made highly available through duplicating the Web and middle-tier servers at each of the locations. The sites would then act as primary and secondary. Out-of-band access to the data, high availability and performance were essential as well and were ranked as close seconds behind protecting the data from a disaster. It was clear that an intelligent storage and replication infrastructure was at the heart of the solution: an infrastructure that would replicate data at the block level independent of format and application.

Exploring storage and replication options was a major task, and the outsourcing provider evaluated each of the major storage architectures to determine which could deliver the optimal solution based on the customer's business requirements. Direct attached storage (DAS), present in most infrastructures, was quickly ruled out. Although it is relatively inexpensive, easy to configure and capable of supporting redundant arrays of independent disks (RAID), it has limited scalability and limited support in clustered environments. The storage cannot grow independently of the server architecture and cannot be managed as a separate entity. Management of a DAS infrastructure is distributed and, depending on the hardware platform, could require many technicians of varying skill levels and technology focus. Upgrading DAS is disruptive: expansion cannot be performed without some downtime and server hardware reconfiguration. Finally, DAS provides no replication vehicle of its own; it would have to rely on less proven software replication technologies that are unavailable on many platforms and not always compatible with mainstream database applications.

After ruling out DAS, the need for centralized storage was evident, and the next option, network attached storage (NAS), was reviewed. Much more scalable than DAS, NAS provides centralized management and administration. Its use of Ethernet and TCP/IP and its support for the Common Internet File System (CIFS) and Network File System (NFS) protocols make it available to a multitude of hardware and software platforms -- all over the network at up to gigabit speeds. Most NAS servers offer high availability configurations to ensure data access and integrity, and they can provide storage on demand: upgrades in the NAS environment have little to no effect on attached host systems, and additional capacity can be made available without disrupting the application or the system hardware. For all of its strong points, however, NAS has yet to receive certification from the mainstream database and application vendors. Few NAS vendors can support clustered host connectivity; most application vendors currently prefer to manage and manipulate disks at the block level, and there is little support for NAS from most relational database management system (RDBMS) vendors. Replication technologies within NAS systems are still in their infancy, and most still rely on the host system to manage the replication. In time, NAS will win more application and database recognition; as network speeds continue to increase and the operating code matures, NAS will most certainly offer sound replication technologies and other capabilities welcome in the enterprise.

The design team also considered emerging technologies such as iSCSI and InfiniBand. Although very promising, these technologies will most likely not receive widespread acceptance for quite some time and, therefore, were not applicable to this mission-critical project. iSCSI is a protocol that enables SCSI command sets to be transported over TCP/IP. The SCSI technologies present in direct attached storage can thus be transported from host system to storage device over the Internet, extending block-level data access beyond the SCSI parallel environment, outside of the storage network and into the TCP/IP network. This eliminates the distance limitations of the storage network environment, so iSCSI will find practical application in remote storage infrastructures, disaster recovery sites, data mirroring and content distribution. Unfortunately, iSCSI relies heavily on the TCP/IP network, which is inherently unreliable, may drop packets under congestion and can have highly variable bandwidth. iSCSI, in contrast, demands stability and data integrity and, in current implementations, expects high bandwidth on demand.1 Also, because of the various methods of implementing security around iSCSI, it has not yet been adopted as a widely accepted standard.
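The sensitivity of iSCSI to packet loss can be made concrete with the well-known Mathis approximation for steady-state TCP throughput, which scales inversely with the square root of the loss rate. The link parameters below (segment size, round-trip time, loss rates) are illustrative assumptions, not figures from the project:

```python
import math

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput in bytes/sec under random loss,
    using the Mathis model: BW ~ (MSS / RTT) * C / sqrt(p)."""
    C = math.sqrt(3.0 / 2.0)  # model constant for periodic loss
    return (mss_bytes / rtt_s) * C / math.sqrt(loss_rate)

# Hypothetical WAN link between sites: 1460-byte segments, 40 ms RTT.
clean = mathis_throughput(1460, 0.040, 0.0001)    # quiet network, 0.01% loss
congested = mathis_throughput(1460, 0.040, 0.01)  # congested network, 1% loss
print(f"{clean / 1e6:.2f} MB/s vs {congested / 1e6:.2f} MB/s")
```

A hundredfold increase in loss cuts throughput by a factor of ten, which is exactly the kind of variability a block-level storage protocol cannot tolerate.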

InfiniBand, considered by many to be the next-generation server input/output (I/O) and inter-server communications architecture, promises to drastically change the way processors are connected to storage and other I/O devices, and is expected by many to make a significant impact on how computer systems of any size are built in the future. The InfiniBand Trade Association (www.infinibandta.org) describes the new InfiniBand "bus" as an "I/O network" and views the bus itself as a "switch." Although InfiniBand is expected to reduce the size of servers, save on power, connect CPUs with memory and storage, and allow administrators to connect multiple servers so they function as one, it is still too difficult to predict how the industry will accept this emerging technology. Interoperability issues give each InfiniBand vendor considerable discretion in communication methods and architecture. Still, there is optimism within the InfiniBand community, and some specialists predict that CIOs and IT professionals will accept InfiniBand the same way they accepted peripheral component interconnect (PCI) in place of the industry standard architecture (ISA).2

Ultimately, the decision was made to implement a storage area network (SAN). The SAN provided fast, reliable, industry-standard storage and storage management technologies, and the options available within the SAN infrastructure gave the design team the flexibility to factor in additional features and capabilities. A fully redundant, intelligent storage array and redundant Fibre Channel switch configurations were at the heart of the solution. The redundant switch fabric, combined with host-based multipath I/O technologies, gave the clustered hosts access to SAN storage with no single point of failure.
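The idea behind host-based multipath I/O is simple: each host sees the same LUN through more than one path, and a failed path is transparently skipped. A minimal sketch, with the path callables standing in for real HBA-to-array routes (the names and error type are invented for illustration):

```python
class PathError(IOError):
    """Raised when one Fibre Channel path to the array is unavailable."""
    pass

def read_block(paths, lba):
    """Try each path in priority order; fail over to the next on error.
    `paths` is a list of callables, each representing one HBA-to-array path."""
    last_err = None
    for path in paths:
        try:
            return path(lba)
        except PathError as err:
            last_err = err  # this path is down: try the next one
    raise last_err or PathError("no paths configured")

# Simulated paths: path_a's link is down, path_b serves the request.
def path_a(lba): raise PathError("HBA 0: link down")
def path_b(lba): return b"block-%d" % lba

data = read_block([path_a, path_b], 42)
```

The host application never sees the failed path; only when every path is down does the I/O error surface, which is precisely the no-single-point-of-failure behavior described above.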

There are numerous advantages to implementing a SAN to enhance business continuity. Storage provisioning and expansion within the SAN is secure and virtually transparent to the attached host systems -- an example of storage on demand. Fibre Channel connectivity to a SAN is the most widely certified and recognized high-performance storage solution among application and database vendors, and for some of them it is the preferred shared storage option in clustered configurations. Beyond the basic SAN technologies are the management capabilities and value-added storage functions. SAN-to-SAN replication is available from many of today's storage-array vendors: proven technologies replicate over dark fiber using dense wavelength division multiplexing (DWDM), while other technologies replicate over TCP/IP networks using data converters and routers. Additional key features of storage arrays within the SAN include data mirroring, copy or snapshot technologies, out-of-band backups and restore from snapshot.
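Block-level replication, as the article notes, ships changed blocks without interpreting file or database format. A toy sketch of the principle -- compare fixed-size blocks by checksum and ship only the ones that differ (the 4 KB block size and in-memory buffers are illustrative; real arrays track changed extents in hardware):

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real arrays track changes per extent

def changed_blocks(primary: bytes, replica: bytes):
    """Yield (block_index, data) for every block whose checksum differs.
    Format-independent: the comparison never interprets file or DB structure."""
    for offset in range(0, len(primary), BLOCK_SIZE):
        block = primary[offset:offset + BLOCK_SIZE]
        mirror = replica[offset:offset + BLOCK_SIZE]
        if hashlib.sha256(block).digest() != hashlib.sha256(mirror).digest():
            yield offset // BLOCK_SIZE, block

def apply_changes(replica: bytearray, changes):
    """Write the shipped blocks into the replica at the same offsets."""
    for index, data in changes:
        replica[index * BLOCK_SIZE:index * BLOCK_SIZE + len(data)] = data

# Demo: a write lands in block 2 on the "primary"; only that block ships.
primary = bytearray(b"\x00" * (BLOCK_SIZE * 4))
replica = bytearray(primary)
primary[BLOCK_SIZE * 2] = 0xFF
deltas = list(changed_blocks(bytes(primary), bytes(replica)))
apply_changes(replica, deltas)
```

Shipping one 4 KB block instead of the whole volume is what makes continuous replication over a dedicated circuit practical.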

The design presented to the customer was a SAN-to-SAN replication implementation, as illustrated in Figure 1.

Figure 1: Business Continuity Using SAN

The primary site, consisting of the Web and middle-tier servers along with the clustered database servers and a SAN, would reside in the hosting provider's data center. The clustered database servers took full advantage of a no-single-point-of-failure storage solution comprised of redundant switch fabric components and multipath I/O technologies. The customer's data center, acting as the secondary site, housed replicas of the Web and middle-tier servers in cold standby, along with the second SAN implementation and replicas of the clustered database server environment. A dedicated telecommunications circuit and data routers provided the vehicle for the SAN-to-SAN replication, which ships database changes from the primary to the secondary site in real time. Should there be a need to fail over, the databases will be in sync and ready to service the redundant application and Web servers once the standby Web and middle-tier servers are reconfigured as primary. The SAN at the secondary site also provides out-of-band access to the data, allowing the customer to perform queries, reporting and other database activities without disrupting production service. The SAN-to-SAN replication solution met all the customer requirements: disaster recovery, performance, management and out-of-band access.
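The failover sequence implied by this design can be sketched as an ordered runbook that stops at the first failed step, so operators never serve traffic from a half-promoted site. The step names and the `check` callback are illustrative inventions, not the provider's actual procedure:

```python
FAILOVER_STEPS = (
    "confirm primary site unreachable",
    "verify replicated database is consistent",
    "promote secondary databases to read-write",
    "reconfigure standby Web and middle-tier servers as primary",
    "redirect client traffic to the secondary site",
)

def run_failover(check):
    """Execute the runbook in order; `check(step)` returns True on success.
    Stop at the first failure so operators can intervene rather than
    continue with a partially promoted secondary site."""
    completed = []
    for step in FAILOVER_STEPS:
        if not check(step):
            return completed, step  # (steps done so far, step that failed)
        completed.append(step)
    return completed, None

# Happy path: every step succeeds, so no step is reported as failed.
done, failed = run_failover(lambda step: True)
```

Ordering matters: verifying database consistency before promotion is what the in-sync replication guarantees above make possible.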

Future implementations of highly available, distance-tolerant, intelligent storage will most likely use some or all of the technologies discussed in this article. The SAN landscape is changing due to advances in NAS and storage-over-IP technologies, yet the industry should expect future enhancements and revisions that will keep the SAN around for quite some time. Meanwhile, significant progress has been made releasing the first iSCSI devices into the marketplace; though not yet governed by industry standards, there are offerings from several of today's mainstream technology, storage and networking companies. Looking a few years ahead, one will certainly find InfiniBand there, and it will most definitely affect the way storage arrays are implemented in the more distant future.


1. Storage Network Industry Association (SNIA), IP Storage Forum white paper, "iSCSI Technical Overview."

2. Edwards, John. "To Infiniband and Beyond." CIO Magazine. September 2001.



Paul Cortese is the director of technical architecture for Cervalis, Inc., a next-generation provider of outsourced IT infrastructure and managed hosting services, based in Stamford, Connecticut. Cortese was formerly a consultant for Fairfax, Virginia-based Headstrong, Inc. (previously James Martin and Company), a global IT consulting organization, as well as a senior analyst for Deloitte Touche LLP. Cortese can be contacted at pcortese@cervalis.com.

