When the amount of data was small, it was possible to simply query it in the production systems. But as data volumes grew, previously fast, efficient queries turned into monsters that consumed inordinate system resources, not just because of the quantity of data, but also because the data was structured for operational rather than query and analysis requirements. To provide that growing volume of data in a more accessible format, organizations developed data warehouses. Nightly processes extracted data from the production servers, transformed it into the needed format, and then loaded it into the data warehouse using either in-house developed code or a class of software known as Extract, Transform, and Load (ETL) tools.
Data volumes and schema optimization were not the only problems that organizations faced as their business intelligence requirements matured. Information silos sprang up because of the use of independent, best-of-breed applications that ran on incompatible platforms, or because corporate mergers created conglomerations of diverse hardware and software in the merged enterprise, or for a host of other reasons. Consequently, those ETL programs had to be capable of extracting data from a variety of platforms, transforming it, and loading it into a single data warehouse or operational data store. (There are some differences between the two, but, for brevity, the remainder of this article will use "data warehouse" to refer to an operational data store as well.)
As the volume and complexity of the data swelled and the demand for the precise bits of information required to make more informed business decisions accelerated, the ETL processes became increasingly intricate, greatly lengthening the time required to extract data from the production systems. This placed a heavy load on the systems that were intended to run the business and, therefore, threatened the organization's ability to support the 24x7 high-performance applications it depended on.
What's more, the nightly extracts resulted in data warehouses that contained day-old data that was inadequate for certain types of decision-making. Hence, business executives and managers began to demand real-time, or what some industry analysts have called "right-time," data for their business intelligence purposes.
To address this requirement, a new generation of data sharing software, which provides functionality that goes by the name of "data replication," entered the market. This class of software replicates, in real time, just the data that is needed, and only the incremental changes to that data, from the production server to the data warehouse. Data replication thus virtually eliminates the need for high-volume data extracts, and transferring only incremental data also allows for much faster transformation and load times. In this way, the technology of real-time Change Data Capture (CDC) has solved the problem of having too much data that wasn't available in an easily accessible format for timely business analysis.
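To make the idea concrete, below is a minimal sketch, in Python, of the apply side of change-based replication. It assumes a hypothetical change-log table (changes) that the capture mechanism populates on the source; each cycle reads only the rows logged since the previous one and applies them to the warehouse copy. All table and column names are invented for illustration, sqlite3 stands in for the production and warehouse servers, and a commercial CDC product would typically read the DBMS journal or log directly rather than polling a table.

```python
import sqlite3

# Stand-ins for the production and warehouse databases; a real
# deployment would use two separate servers (often two DBMS brands).
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE changes (seq INTEGER PRIMARY KEY, op TEXT, id INTEGER, qty INTEGER)")
target.execute("CREATE TABLE orders_copy (id INTEGER PRIMARY KEY, qty INTEGER)")

last_seq = 0  # high-water mark: the last change already applied

def apply_incremental_changes():
    """Apply only the changes logged since the previous cycle."""
    global last_seq
    rows = source.execute(
        "SELECT seq, op, id, qty FROM changes WHERE seq > ? ORDER BY seq",
        (last_seq,),
    ).fetchall()
    for seq, op, id_, qty in rows:
        if op == "I":
            target.execute("INSERT INTO orders_copy VALUES (?, ?)", (id_, qty))
        elif op == "U":
            target.execute("UPDATE orders_copy SET qty = ? WHERE id = ?", (qty, id_))
        elif op == "D":
            target.execute("DELETE FROM orders_copy WHERE id = ?", (id_,))
        last_seq = seq
    target.commit()

# Simulate two captured changes, then replicate them.
source.executemany("INSERT INTO changes (op, id, qty) VALUES (?, ?, ?)",
                   [("I", 1, 5), ("U", 1, 7)])
apply_incremental_changes()
print(target.execute("SELECT * FROM orders_copy").fetchall())  # [(1, 7)]
```

Note that only two small change records cross the wire here, no matter how large the orders table itself has grown, which is precisely why incremental replication scales where bulk extracts do not.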
Most DBMS products include data replication features, but they typically support replication only to another database managed by the same brand of DBMS. This does nothing to fulfill the need to share data among, or merge it from, unlike platforms. For that, custom-built or, more likely, third-party replication software is required.
Change-based replication functionality, which is also an integral part of high availability (HA) software, provides an added benefit beyond the loading of data warehouses. Because it creates a replica, usually on another server, backups can be taken from the replica rather than the production servers, thereby removing the impact that backups have on production applications. This provides a true 24x7 environment for production servers, while also reducing the time needed to populate business intelligence systems.
Functionality Requirements
When looking for a solution to provide real-time replication, obvious considerations come into play. These include the four Cs: cost, complexity, compatibility, and company. For the solution to be a good fit in your environment and provide the greatest value, all four of these issues need to be addressed, and each is discussed briefly below. Before evaluating those factors, however, you must first ensure that the product itself meets a core set of criteria that qualify it to manage your mission-critical data. The following features are a benchmark for a quality replication solution:
Data Capture
At a minimum, every replication solution, regardless of whether it performs bulk extracts or change-based replication, must be capable of capturing the required data on the source system. In the past, and possibly still today, the most common data capture approach was to have the local database administrator hand-code SQL routines that extract, at predefined intervals, the data required by business users. The extracted data was then sent across to the target system, where it could be used for business intelligence purposes.
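Below is a minimal sketch of that traditional hand-coded approach, with invented table and column names and sqlite3 standing in for the production DBMS: a routine that selects only the rows modified since the previous run, the kind of extract a DBA would schedule at predefined intervals.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the production database
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, last_modified TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 99.50, "2024-01-01 09:00:00"), (2, 12.00, "2024-01-02 18:30:00")],
)

def extract_since(last_run: str):
    """Hand-coded extract: select only rows modified after the last run."""
    return conn.execute(
        "SELECT id, total, last_modified FROM orders WHERE last_modified > ?",
        (last_run,),
    ).fetchall()

# Rows changed since the previous nightly run, ready to send to the target.
print(extract_since("2024-01-01 23:59:59"))  # [(2, 12.0, '2024-01-02 18:30:00')]
```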
As the familiarity with and demand for business intelligence grows, more data is needed and more people want it, causing the extract and transmit processes to expand until they become unwieldy. When this happens, change-based replication using database triggers, journaling, or some other change capture method that is tied closely to the database becomes necessary.
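As a sketch of the trigger-based variant (sqlite3 again standing in; a journaling-based approach would read the DBMS log instead), a trigger appends each change to a change-log table the instant it happens, so no bulk extract is ever needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER);
CREATE TABLE order_changes (seq INTEGER PRIMARY KEY, op TEXT, id INTEGER, qty INTEGER);

-- Capture every update as it happens, tied directly to the database.
CREATE TRIGGER capture_update AFTER UPDATE ON orders
BEGIN
    INSERT INTO order_changes (op, id, qty) VALUES ('U', NEW.id, NEW.qty);
END;
""")

conn.execute("INSERT INTO orders VALUES (1, 5)")
conn.execute("UPDATE orders SET qty = 8 WHERE id = 1")
print(conn.execute("SELECT op, id, qty FROM order_changes").fetchall())
# [('U', 1, 8)] -- the change is already captured for the replicator
```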
Graphical Interface
The simplicity of using a GUI to create the data mappings between the source and target databases and to define the data flow requests significantly lowers the cost of initial setup and maintenance, and shortens the learning curve, compared to hard-coding a separate program to fulfill each specific data requirement.
At a minimum, the interface should include the following capabilities:
- Allow administrators to make changes to the replication environment while replication is active.
- Allow administrators to start and stop the replication processes.
- Provide version control.
- Provide a scripting facility that allows custom-developed transformation routines to be applied to the data as it is copied from the source to the target in order to meet unique enterprise information requirements. (A minimal sketch of such a routine follows this list.)
- For security and regulatory requirements, it should be password protected and have one level of access for developers and another for operational staff.
- Most of all, it should make it intuitive to map the source table structures to the desired target schema.
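As referenced in the scripting item above, here is a minimal sketch of what such a custom transformation routine might look like: a function the (hypothetical) replication pipeline invokes on each row as it is copied. The function name, row format, and business rule are all invented for illustration.

```python
def transform_row(row: dict) -> dict:
    """Custom routine applied to each row as it is copied to the target."""
    out = dict(row)
    # Enterprise-specific rule: derive a reporting column that is not
    # stored in the operational schema.
    out["net_total"] = round(row["total"] - row["discount"], 2)
    return out

# A product with a scripting facility would call the hook for every
# captured row; it is called directly here for illustration.
print(transform_row({"id": 7, "total": 100.0, "discount": 2.5}))
# {'id': 7, 'total': 100.0, 'discount': 2.5, 'net_total': 97.5}
```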
Enterprise Support
Before choosing a solution, be sure that it supports the hardware, operating systems, and databases you currently run, as well as those that figure into your future plans. Because the future usually cannot be accurately foreseen, it is best to be sure the software supports all of the major database vendors. Also make sure the vendor you are considering is stable and invests in the continued development that will be necessary to support future technologies.
Data Transformation
The needs of business intelligence are often best served by data that is not in the same format as in the operational databases. For example, in global companies, the same data items may be stored differently in different local databases, such as American versus metric measurements and product names that are in different languages in different countries. All of this data must be homogenized as it is combined into an integrated data warehouse. Therefore, the product must accommodate all such data transformations and any others that will be needed to support business intelligence requirements.
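A minimal sketch of that kind of homogenization, with invented field names and example values: measurements arriving in inches are converted to centimeters, and localized product names are mapped to a single canonical name before the row lands in the warehouse.

```python
# Canonical product names for localized variants (illustrative values).
CANONICAL_NAMES = {"Schraubenzieher": "screwdriver", "tournevis": "screwdriver"}

def homogenize(row: dict) -> dict:
    """Normalize one source row to the warehouse's common format."""
    out = dict(row)
    if out["unit"] == "in":               # American measurement -> metric
        out["length"] = round(out["length"] * 2.54, 2)
        out["unit"] = "cm"
    out["product"] = CANONICAL_NAMES.get(out["product"], out["product"])
    return out

print(homogenize({"product": "tournevis", "length": 10.0, "unit": "in"}))
# {'product': 'screwdriver', 'length': 25.4, 'unit': 'cm'}
```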
Schema Flexibility
The nature of the business intelligence process is vastly different from other business applications. Operational transactions usually access one row or just a few rows of data in a limited number of tables. Business intelligence and reporting applications, on the other hand, typically need to access all or a large portion of the rows in many joined tables. In addition, online operational applications are typically update-intensive, whereas business intelligence applications usually perform read-only tasks.
Because of these differences, when business intelligence applications run against databases designed for operational purposes, the results can be disastrous. Response times and the strain on system resources are often grossly unacceptable because the operational database is not designed for this type of functionality.
The replication solution must, therefore, support data movement between operational and data warehouse databases that use different schemas. You might, for example, intentionally denormalize the data warehouse in order to eliminate the need for the business intelligence applications to perform frequent, costly table joins.
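A minimal sketch of such an intentional denormalization (invented tables, sqlite3 as a stand-in): the warehouse table is pre-joined at load time, so business intelligence queries never pay for the customer/order join at read time.

```python
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Acme', 'EMEA');
INSERT INTO orders VALUES (10, 1, 250.0);

-- Denormalized warehouse table: the join is done once, at load time.
CREATE TABLE orders_flat AS
SELECT o.id AS order_id, c.name AS customer, c.region, o.total
FROM orders o JOIN customers c ON c.id = o.customer_id;
""")
print(dw.execute("SELECT * FROM orders_flat").fetchall())
# [(10, 'Acme', 'EMEA', 250.0)] -- no join needed at query time
```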
Heterogeneous Database Support
Just as the database format and schema might vary between the operational databases and the data warehouse, so might the database engine itself. The best database engine for data warehousing is not always the best engine for operational processing. What's more, your operational systems might not all run on the same database, so the replication solution must be able to transfer data between differing DBMSs.
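Because most Python database drivers share the common DB-API interface, the copy logic between unlike engines can look much the same regardless of brand, aside from connection details and parameter style. A minimal sketch, using two sqlite3 connections as stand-ins for, say, a Db2 source and a PostgreSQL target:

```python
import sqlite3

# In practice these would be different drivers (e.g., ibm_db for the
# source, psycopg2 for the target); the row-copy pattern is similar.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO items VALUES (1, 'widget'), (2, 'gadget');
""")
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = src.execute("SELECT id, name FROM items").fetchall()
dst.executemany("INSERT INTO items VALUES (?, ?)", rows)
dst.commit()
print(dst.execute("SELECT * FROM items").fetchall())
# [(1, 'widget'), (2, 'gadget')]
```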
Automation
Finally, the importance of business intelligence databases has become too great to rely solely on manually executed data extraction processes. Doing so imposes an unacceptable burden on the IT department. Instead, the solution must be capable of running without manual intervention, whether that is the periodic running of extract and load operations or the continuous replication of changes.
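A minimal sketch of that kind of unattended operation: a scheduler loop that runs the replication cycle at a fixed interval with no operator involvement. The replicate_once function is a placeholder for whichever extract-and-load or change-apply routine is in use.

```python
import time

def replicate_once():
    """Placeholder for one extract-and-load or change-apply cycle."""
    print("replication cycle complete")

def run_unattended(interval_seconds: int = 300, cycles: int = 3):
    """Run replication cycles on a fixed schedule, no operator needed."""
    for _ in range(cycles):          # a real daemon would loop indefinitely
        replicate_once()
        time.sleep(interval_seconds)

run_unattended(interval_seconds=1)   # short interval for demonstration
```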
Cost
Cost is a factor in every software acquisition. It may be difficult to assess the value of the features and functions of the various vendors' offerings, but at least the price tags are usually clear. That's not true with in-house developed code. The preliminary cost analysis of this option compared to the price tags of vendors' offerings may make the idea of in-house development seductive, but the analysis is often deceptive.
What matters most is not the sticker price, but rather the total cost of ownership. In-house development incurs a number of costs that are not incurred when buying a vendor's offering:
- The necessary data extraction skills must be acquired through either training or hiring. Employee attrition turns these into recurring costs.
- Business requirements analysis is required whether buying or building a solution, but the analysis, design, development, testing, debugging, and documentation of the programs that will fulfill those requirements is necessary only when developing software in-house.
- The programs must be supported on an ongoing basis.
- Business benefits are deferred until the software is developed and put into production. Consequently, the organization incurs opportunity costs because the investment begins to generate value much later than it would have if pre-built software had been used.
When evaluated in full, including the lifetime cost of maintaining and supporting the replication software, in-house development typically turns out to be much more costly than a purchased turnkey solution.
Complexity
Part of the high cost of in-house development derives from the inevitable complexity of the solution. The software has to be capable of transforming data and possibly transferring it between different database schemas, with differing table and column names and even differently structured tables. Thus, when evaluating software, consider how easily it accommodates this complexity, preferably through an intuitive, point-and-click GUI.
Compatibility
Most organizations support multiple DBMSs running on a variety of hardware platforms and operating systems for their operational environment. All of these diverse platforms must feed the data warehouse, which may run on yet another platform. What's more, the technologies will undoubtedly continue to evolve. The extensiveness of the software's platform support and the ease with which it can incorporate new platforms into its runtime environment is an important consideration when evaluating replication software.
Company
You will be betting your company's long-term success on the infrastructure supporting your business intelligence systems. Thus, don't look only at the features and functions of the various software offerings in your consideration set. Consider also the companies behind them. Because the solution will be providing the timely data needed to make critical business decisions, make sure that the vendor provides support 24 hours a day, seven days a week.
You also need to be confident that the vendor you choose will be there in the future to provide support, upgrade its software, and accommodate any important new technologies that come along. There are no foolproof tests of permanence, but the vendor's track record and its financial stability are good indicators of its ability to continue to develop, maintain, and support its products.