Over the last several years, companies that use IBM's midrange products have gotten plenty of mixed messages from the computer giant about its strategy. First, there was Java. Next, there was network computing. Then came e-business, WebSphere, Linux, and Web services. Along the way, IBM regularly reaffirmed its commitment to the AS/400 and iSeries, but it never backed its words with any serious marketing muscle.
Barraged by buzzwords, mid-market organizations are confused...very confused. That is why I've decided to write a series of articles that explain what IBM's strategy for mid-market organizations really is, how it plans to execute that strategy, and why it has bewildered so many customers in the process of developing it. Along the way, I'll examine current news events and show how they fit into IBM's strategy. I'll also consider the implications of the strategy for your organization and for the other IT vendors with which you do business. So without further ado, let's dive in.
Does IBM Really Have a Mid-Market Strategy?
Since IBM has pointed the mid-market in multiple directions, it makes sense to doubt that the company has any consistent strategy for these firms. However, the apparent inconsistency simply reflects the fact that, while IBM is following the general direction it thinks commercial computing is going, it does not have a precise heading. And while this may surprise you, part of IBM's strategy is to never commit to such a heading.
Back in the 1980s, IBM did more than offer a direction for commercial computing; the computer giant tried to dictate it. In an arrogance born of decades of industry dominance, IBM thought it could set standards rather than follow them. That's why the open computing movement and the client/server revolution blindsided the company in the early 1990s, leaving it a crippled giant that needed an outsider--Lou Gerstner--to rescue it.
Soon after Gerstner took over, he made the now-famous statement that he had no specific strategy for IBM. While this shocked the press and the stock market, Gerstner knew what he was doing. He was setting out to transform a company that still thought it could lead the market into one that would detect where the market was heading, then get out in front of it. He needed a company that would look to the market for a strategy rather than build its own from scratch.
That's what IBM has been doing for most of the last decade...following the IT market and occasionally jumping out in front to take a leadership role. Since the market never marches in a straight line, neither does IBM. This does not mean that IBM lacks a strategy. It means that IBM regularly drops back into the middle of the parade, finds out where the troops want to march, influences the direction where it can, then heads back to the front when it knows it can be a leader rather than a loner.
Utility-Based Computing: The Sum of All Strategies
At this moment, IBM is more confident about the market's direction and its ability to lead it than at any time during the last 10 years. It knows that enterprises--and in particular, small and mid-market organizations--are sick and tired of the complexity, unreliability, and sheer cost of the distributed systems they deployed over the last decade. These systems evolved and proliferated far more rapidly than the technologies needed to manage them effectively, much less to make them interoperate seamlessly across networks. Now, companies everywhere are looking for solutions to a problem that has gotten way out of hand.
Of course, IBM knew about distributed computing's problems years ago. However, like every other IT vendor, it could only offer partial solutions based on the technologies available at the time, such as Simple Network Management Protocol (SNMP) for managing distributed systems or Java for making them interoperate. Today, by contrast, IBM believes that systems management and interoperability technologies have matured to the point where they could, if combined, create a total solution to the problem. That solution, as IBM envisions it, is utility-based computing.
What is utility-based computing? In this series of articles, I'll say plenty about what it is and what it means for mid-market organizations. For now, let's just say that utility-based computing is a paradigm under which all the complexity of buying, deploying, and supporting IT infrastructure will get outsourced to providers who rent IT capacity back to multiple companies across virtual computing grids. These grids will operate in much the same way as the grids that deliver electricity to homes and businesses. Electricity consumers don't need to bother with any of the complexities of generating power; they just flip the switch and pay at the end of the month.
As you will see in future articles, all of the technologies that IBM has championed over the last several years will play a role in making utility-based computing a possibility. As the new paradigm catches on, the computer giant is betting that many or even most mid-market organizations will be happy to rent their processor cycles, disk storage, and network bandwidth from a utility provider. Moreover, IBM intends to be the provider of choice to those organizations. This will allow IBM to guarantee itself an ongoing and highly predictable revenue stream, generate additional cash flow from software and services it can sell over its utilities, and exercise significant control over the IT plans of thousands or even millions of companies.
The very fact that IBM has this vision for mid-market organizations is one reason that it has neither marketed the iSeries aggressively to mid-market organizations nor proposed a clear alternative to the server. IBM believes that, in the next three to five years, iSeries customers will increasingly see a plug in the wall rather than a box on the floor as their server of choice. Of course, the move to utility-based computing (if it does occur) will take place over many years. During that period, most iSeries customers will probably keep a few servers in-house, especially those running legacy databases and RPG applications. However, IBM believes that most mid-market organizations will be happy to rent their front-office infrastructure--such as file and print servers--as well as their e-business infrastructure from a utility.
Linux Virtual Services: The Future Starts Here
If you want a concrete example of what I'm talking about, you need look no further than a new managed hosting service that IBM announced last Monday. The offering, known as Linux Virtual Services, allows companies to rent "service units" on a Linux partition within an IBM zSeries mainframe running in an IBM hosting center. For a monthly price that ranges from $252 to $315 per service unit, the customer gets a "Linux server instance" that includes basic networking services. The customer can rent multiple server instances, configure them with different numbers of service units, add storage capacity and networking bandwidth to each instance, and run just about anything on those instances that Linux supports. The customer needs no mainframe knowledge to use the service; all it needs are some basic Linux skills and expertise in the application it is running. IBM manages everything else.
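To put the pricing model in concrete terms, here is a minimal back-of-the-envelope sketch of how a customer might estimate a monthly bill under this scheme. Only the $252-to-$315 per-service-unit monthly range comes from IBM's announcement; the function name and the example configuration are my own illustrations, and real bills would also include storage and bandwidth charges that IBM has not itemized here.

```python
# Hypothetical estimate of a Linux Virtual Services monthly bill.
# Only the $252-$315 per-service-unit range comes from IBM's announcement;
# everything else in this sketch is illustrative.

def monthly_cost(instances, units_per_instance, rate_per_unit=315.0):
    """Estimate the monthly charge for renting Linux server instances.

    instances          -- number of Linux server instances rented
    units_per_instance -- service units configured on each instance
    rate_per_unit      -- monthly price per service unit (IBM quoted
                          $252 to $315, depending on configuration)
    """
    if not 252.0 <= rate_per_unit <= 315.0:
        raise ValueError("rate outside the announced $252-$315 range")
    return instances * units_per_instance * rate_per_unit

# Example: three instances of two service units each, at the top rate.
print(monthly_cost(3, 2))  # 1890.0
```

The point of the utility analogy is visible even in this toy calculation: the customer's cost is a simple function of consumed capacity, with no line items for hardware, facilities, or mainframe administration.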
Of course, IBM has a long history of taking over infrastructure management for companies. After all, it signed the first large-scale outsourcing contract with Kodak back in the 1980s. However, Linux Virtual Services represents the first time it is giving customers the means to access CPU cycles, storage capacity, and network bandwidth over the Internet. This makes the new service a prototype for the computing utility grids that are to come.
Before those utilities become a reality, however, IBM and other IT vendors will have to integrate many technologies into the grids over which they will operate. I'll have more to say about those technologies in my next installment in this series.
Lee Kroon is a Senior Industry Analyst for Andrews Consulting Group, a firm that helps mid-sized companies manage business transformation through technology. You can reach him at
MC Press Online