As I explained in my article last week, IBM sees utility-based computing--which it calls e-business on demand--as its long-term strategy for delivering IT solutions to mid-market organizations and other enterprises. This does not mean that IBM is giving up other strategies it is pursuing, such as Linux, Java, or Web services. Indeed, IBM is partly promoting these strategies because they are enabling technologies for the utility-based computing future that it believes its customers will welcome.
To make computing capacity flow to customers over the Internet like electricity flows over power grids, IBM will need many enabling technologies. This becomes clear when one considers what a power grid does. In a grid, power plants that may be hundreds of miles apart generate electricity for the grid. This power flows through thousands of transformer substations to millions of customers. At the same time, central offices manage every device in the grid to ensure that the power stays on and to measure how much gets consumed by each user.
If you apply these basic concepts to computing grids, you can see how complicated they will get. Here are just some of the types of technologies that computing utilities will need.
- Physical interconnects--It will take thousands of servers, storage arrays, and network devices to provide thousands of companies with adequate computing capacity. To make these devices as efficient as possible, utility providers will need to put most of them in hosting facilities and connect them at bandwidth levels that approach those found within a single server.
- Grid interfaces--Besides being fast, computing grids will have to be extremely intelligent. Each device on the grid will need firmware or system software that interfaces with an overarching "grid operating system." This software will enable each device to provide its resources to the grid, find and communicate with other devices, and work collectively with other devices to deliver computing capacity.
- Grid management--It will be impossible for humans to manage grids composed of thousands of devices so that they meet the service level and security needs of customers. As a consequence, computing utilities will require very sophisticated firmware and software that enable grids to monitor themselves and correct problems without human intervention.
- Data and application management--To deliver computing capacity as cost-effectively as possible, grid operators will need to put the applications and data of multiple customers on massive servers and storage arrays. Technologies will be needed to host multiple customers on a single server or storage array while protecting the integrity and security of their IT assets.
- Application and object communications--In addition, applications must be adapted to communicate across Internet-based grids. This presents issues with which distributed computing architects are all too familiar, such as developing standards for two applications to call and present data to each other. On an Internet-based grid, however, these issues will be even more complicated, as applications and objects spread across hundreds or even thousands of systems communicate on an "any to any" basis over a loosely coupled network.
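To make the grid interface and grid management requirements above more concrete, here is a minimal sketch of how a device might advertise its resources in a standard wire format and how a central office might place work on the least-loaded device. All the names here (GridDevice, advertise, submit, schedule) are illustrative inventions, not any real grid API, and JSON stands in for whatever description format a real grid would standardize on.

```python
import json

# Illustrative sketch only: each grid device exposes its resources through
# a common interface so a scheduler can place work "any to any". The class
# and function names are hypothetical, not a real grid product's API.

class GridDevice:
    def __init__(self, device_id, cpus):
        self.device_id = device_id
        self.free_cpus = cpus

    def advertise(self):
        # Devices describe themselves in a standard wire format (JSON here),
        # the role service descriptions play on a real grid.
        return json.dumps({"id": self.device_id, "free_cpus": self.free_cpus})

    def submit(self, job_cpus):
        # Accept a job only if capacity remains; otherwise refuse it.
        if job_cpus > self.free_cpus:
            return False
        self.free_cpus -= job_cpus
        return True

def schedule(job_cpus, adverts):
    # A "central office" picks the least-loaded device that can run the job,
    # knowing nothing about the devices beyond their advertisements.
    candidates = [json.loads(a) for a in adverts]
    candidates = [c for c in candidates if c["free_cpus"] >= job_cpus]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["free_cpus"])["id"]

devices = {d.device_id: d for d in
           (GridDevice("srv-a", 8), GridDevice("srv-b", 16))}
target = schedule(4, [d.advertise() for d in devices.values()])
devices[target].submit(4)
print(target, devices[target].free_cpus)  # srv-b 12
```

The point of the sketch is the separation of concerns the requirements describe: devices only advertise and accept work, while placement decisions live in management software that sees the whole grid.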
When you look at IBM's hodgepodge of strategies in the light of the above requirements, it is fascinating to see how those strategies suddenly make sense. Let's take a quick look at these strategies and see how they fit into the utility-based computing puzzle.
- Physical interconnects--For years, IBM has been a leader at clustering large servers, such as its zSeries and RS/6000 SP, for scientific and technical computing workloads. Over the last several months, however, IBM has declared that clustering will become common for e-business and commercial workloads. As a result, the computer giant is repackaging its clustering technologies so they can link together hundreds of commodity servers. IBM now offers two eServer clustering packages: the eServer Cluster 1300 for Intel servers running Linux and the eServer Cluster 1600 for POWER servers running AIX. Both offerings can cluster systems using network fabrics that reach speeds measured in hundreds of megabytes per second.
- Grid interfaces--IBM has a history of designing supercomputing grids for the academic community. Last February, the company joined with the Globus Project--a group that develops grid technologies--to announce specifications for commercial computing grids. IBM also announced that it will embed these specifications, known as the Open Grid Services Architecture (OGSA), in all of its servers and major software products. This makes OGSA the "grid operating system" that will control how IBM's products interact across grids.
- Grid management--The next time you read about IBM's Project eLiza, think of it as an initiative to create self-managing grids. As you probably know, eLiza is IBM's effort to make IT systems self-managing, self-healing, and self-protecting. However, IBM's vision for eLiza stretches beyond today's individual systems to tomorrow's computing grids. That's why IBM is making OGSA a foundation technology within Project eLiza, allowing it to monitor and correct problems within computing utilities.
- Data and application management--IBM is rapidly advancing technologies that let customers split servers and storage devices into dozens or even hundreds of virtual devices. On the server front, the computer giant is implementing logical partitioning facilities similar to those found on the iSeries across its entire product line. In the storage arena, IBM is incorporating virtualization technologies in several products, including Tivoli's Open SmS and Storage Network Manager products. The company has also announced a storage virtualization engine that will let servers share storage devices at the block level as well as Storage Tank, a distributed file management system that lets servers share storage at the file level.
- Application and object communications--How can applications and objects communicate and share data over an Internet-based grid as if they were on the same server? IBM's answer is Web services technologies such as XML, SOAP, UDDI, and the specifications it is developing with the Web Services Interoperability Organization and other industry consortiums. IBM sees Web services as "middleware for grids," so it is embedding Web services in all of its middleware products and pushing the Java community to adopt Web services standards.
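To illustrate why IBM treats Web services as "middleware for grids," here is a small sketch of the core idea: wrapping an operation call in a SOAP-style XML envelope that any party on the grid can produce or consume. The GetQuote operation and its parameter are made up for the example; only the envelope structure follows the SOAP convention.

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace; the envelope/body structure is the standard
# part, while the operation inside it (GetQuote) is a hypothetical example.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

def make_request(operation, params):
    # The caller wraps its operation in an envelope; the XML itself is the
    # "any to any" contract, independent of either side's platform.
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

def handle_request(xml_text):
    # The receiver parses the same standard format; it never needs to know
    # what language, server, or object model produced the request.
    body = ET.fromstring(xml_text).find(f"{{{SOAP_NS}}}Body")
    call = body[0]
    return call.tag, {child.tag: child.text for child in call}

msg = make_request("GetQuote", {"symbol": "IBM"})
op, params = handle_request(msg)
print(op, params)  # GetQuote {'symbol': 'IBM'}
```

Because both sides agree only on the envelope format, the caller and the handler could sit on different systems anywhere in a loosely coupled grid, which is exactly the property the requirements above demand.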
IBM's Linux initiatives also play a role in its vision of utility-based computing. Linux is a low-cost operating system that IBM, in partnership with a worldwide developer community, can rapidly modify and extend in almost any direction. This makes it the ideal platform for "skunk works" projects such as grid computing. As a result, more grid computing projects are running on Linux today than on any other operating system. This includes IBM's Linux Virtual Services offering that I discussed last week.
Of course, IBM is also championing the above technologies because they help the company meet more immediate goals than utility-based computing. As the computer giant looks out several years, however, it sees these technologies converging in ways that will transform the IT industry. IBM believes that mid-market organizations and other enterprises will want to get their computing capacity from the utilities that spring from this convergence. This fuels its efforts to advance and integrate all of these technologies.
Since these technologies are so strategic for IBM, I'll be examining the company's plans for them in future articles in this series. Stay tuned to learn about the implications of these technologies for your company.
Lee Kroon is a Senior Industry Analyst for Andrews Consulting Group, a firm that helps mid-sized companies manage business transformation through technology. You can reach him at
MC Press Online