Throughout most of the computer industry’s history, data has been tied very tightly to the computer that used it, and to a large extent, this is still the case. Look at all the connectivity and data transformation programs AS/400 shops have to use. If they want client software to be able to use data stored in AS/400s or vice versa, they need to buy a PC connectivity program from IBM, Wall Data, WRQ, Attachmate, or others (yes, there are many others). If they want AS/400s to be able to share data files with a UNIX or Windows NT server or vice versa, they have to go to Lakeview Technology, Vision Solutions, or DataMirror to buy data transformation products that suck information out of DB2/400 databases and massage it so it looks like Oracle, SQL Server, Informix, Sybase, or other database formats that are popular on UNIX and NT servers. Even on Integrated Netfinity Servers hiding under the skins of modern AS/400s, information that is used by those NT server feature cards is stored on partitions within AS/400 disk arrays and is separate from the AS/400’s own data files. And even customers who want one AS/400 to share information in real time with another AS/400 have to resort to one of various high-availability products from IBM Business Partners to do this.
Both storage and server vendors agree on one thing: The tight link between servers and their data storage devices has to be broken. The tightly coupled server-storage link was fine when companies had one host system and that was pretty much it. And it was good enough even for small businesses, which tend to have one production server and a couple of file and print servers attached to it. But this tight coupling doesn’t cut it in the heterogeneous computing environment that is typical at medium and large enterprises. These companies, which constitute only a third or less of the computing sites in the world but nonetheless spend more than half the IT budget dollars, are up to their necks in complexity and want a better—and faster—way to let any server get to any data it needs. That’s where storage area networks, or SANs, come in.
The conceptual model that all servers—whether they are AS/400 servers, UNIX boxes, or NT toys—use in establishing how they talk to their data storage has its heritage in the IBM mainframe world, and the PC environment picked up the same model 20 years ago. Basically, one client or server gets its own storage, and it literally owns it. Nothing can get to that storage without going through the operating system in charge of that machine and without using up processing cycles on that machine. What this means is that in a
complex, multisystem environment, a lot of the aggregate processing power in the server network or in the larger client/server network is dedicated to just moving data around from one machine to another. In a SAN setup, the goal—and I say “goal” because today’s SAN technology falls far short of this—is to cut the dependency that storage devices like disk arrays and tape libraries have on their servers and their respective operating systems and replace it with a centralized storage cluster that employs a high-speed fabric of switches, hubs, and interface cards to give all servers access to all data. Conceptually, this model bears some resemblance to the Internet, in which any client or server can go through TCP/IP networking to find any file on any server and do whatever it wants with it, provided it has authority.
IBM believes that, by 2002, upwards of 70 percent of the data in use at medium and large enterprises will be stored on SANs, so it comes as no surprise that Big Blue is throwing its weight into the SAN concept alongside EMC Corp., Compaq, and Sun Microsystems’ Encore division, IBM’s main competition in the open systems disk market. (Hewlett-Packard, a big server rival, resells disk arrays from Data General and Hitachi, and until recently was one of EMC’s biggest OEM partners.) IBM claims that its ESCON fiber optic connectivity for mainframe disk arrays, announced in 1991, laid the groundwork for SANs. While that is not technically true—IBM didn’t create ESCON as a means for connecting UNIX and NT servers to mainframes, but rather as a way to speed up and lengthen links between disks and servers—IBM certainly has learned a thing or two about the complexities of building central data repositories using ESCON because market conditions have forced IBM to extend ESCON to UNIX and NT servers.
IBM has had a lot of help from its SAN hardware Business Partners: Brocade Communications Systems, Computer Network Technology, LSI Logic, QLogic, McDATA, Pathway, and Emulex Network Systems. These companies are actually building the hubs, switches, and I/O cards that adhere to the Fibre Channel standard that has been championed by server vendors Sun Microsystems and Hewlett-Packard and disk maker Seagate Technology for the past several years. Fibre Channel, having effectively stomped on IBM’s competing Serial Storage Architecture (SSA), is now the technology that everyone agrees will be the basis of storage area networks. They don’t agree on much else, however.
Building SANs will involve more than replacing short and slow small computer system interface (SCSI) copper wires with long and fast fiber optic cables. For one thing, the industry needs a SAN standard, and as is usually the case early in a technology’s development, multiple standards are competing right now. Getting a usable SAN standard means getting the companies that create operating systems, servers, and storage devices to agree on a broad set of specifications and APIs that will allow all of their respective iron to interconnect and all that software to play nicely together.
Industry players are generating a lot of heat in the SAN standards area right now, but very little light. They all have competing interests that are difficult, if not impossible, to reconcile. In July, five of the 12 founding members of the FibreAlliance consortium, established by EMC back in February 1999, walked out and formed their own consortium called the Open Standards Fabric Initiative. The members of the OSFI—Ancor Communications, Gadzoox Networks, Vixel Corp., Brocade, and McDATA—contend that the 12 agreed on how to implement Fibre Channel in host bus adapters, but couldn’t agree on standards in Fibre Channel switches. No surprises there; the switches are where the money will be made, and everybody wants to control what goes into the standard. Moreover, they said the FibreAlliance was moving too slowly. EMC is particularly keen on reining in Fibre Channel hub and switch vendors so their innovations don’t bypass or in any way dilute the value of its Symmetrix disk arrays. How long the remaining partners in the FibreAlliance—Emulex, G2 Networks, HP, JNI, QLogic, and Veritas—will stay in lockstep with EMC is unclear. HP is already on the outs with EMC since it no longer resells EMC’s disks, and the remaining companies will be under great pressure to work with whoever sets the standards.
In addition to this bickering, the Storage Networking Industry Association (SNIA), formed in January 1998, is trying to develop standards in the SAN space. With over 100 member companies, including IBM and all the major server, storage, and operating system vendors, this nonprofit organization seeks to promote cooperation among SAN vendors but has very little power to enforce any standards when and if they all ever agree on any. As in the past with other technologies, computer buyers will very likely set the standards with their budget dollars. And that is why IBM and its SAN partners should be more interested in getting you interested in SAN technology in your AS/400 shop. With over a half million servers in the field, and at least a good quarter of them candidates for SAN technologies, the high-end AS/400 base is not only pay dirt for SAN vendors but could be a vehicle with which to set SAN standards—if they would only stop discriminating against the AS/400.
Left Out Again
AS/400 customers got their first hint of IBM’s SAN plans when the company introduced the Tarpon 2105 model B09 Versatile Storage Server in June 1998. Tarpon is based on IBM’s very successful 7133 SSA disk subsystems, announced in mid-1996 for its own RS/6000 servers as well as competing UNIX servers from HP and Sun. In late 1996, IBM added 7133 drivers for OS/2 Warp, NetWare, and Windows NT. With the Tarpon array, IBM put a big disk controller, consisting of two RS/6000 F50/H50 motherboards, in front of the 7133s, which allowed a rack of 7133s to support multiple and incompatible operating systems, this time including OS/400. A base Tarpon comes with 228 GB of disk capacity, expandable to 2 terabytes (TB), and 512 MB of read/write cache memory, expandable to 6 GB. Tarpon attaches to the AS/400 using the same feature 6501 disk input/output processors (IOPs) that are used to connect 9337s to D series and higher AS/400s. According to tests at IBM’s Rochester labs, the Tarpon array performs equal to or better than external 9337 arrays of equal capacity on batch and online transaction workloads. In February 1999, Tarpon was upgraded so it could use IBM’s 18-GB disks, boosting capacity from 228 GB to 456 GB in the base box; total capacity with two extra disk frames was increased to 4 TB from 2 TB. The base Tarpon, with 228 GB of disk space and 512 MB of cache memory, had a list price of $250,000, or $1.09 per MB. At the time of the Tarpon announcement, 9337-59X units sold for $.60 per MB, and internal AS/400 disk subsystems sold for $.50 to $.60 per MB.
Clearly, unless customers had a compelling need to consolidate their AS/400, NT, and UNIX storage on a single box, it was cheaper to keep data files on separate arrays than on Tarpons. As we go to press, IBM is announcing Tarpon’s replacement, the Shark Enterprise Storage Server, and Tarpon prices have fallen to about $.50 per MB. But internal AS/400 disk subsystems sell for under $.20 per MB, so there is still a big price penalty for IBM’s SAN-ready storage servers.
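To see how those per-megabyte figures shake out, here is a quick back-of-the-envelope check (a rough sketch in Python, using the list prices quoted above and assuming the marketing convention of 1 GB = 1,000 MB that the published per-MB numbers imply):

```python
# Back-of-the-envelope check of the per-megabyte figures cited above.
# Assumes the marketing convention of 1 GB = 1,000 MB, which is what the
# published per-MB numbers imply.

def dollars_per_mb(list_price, capacity_gb):
    """Cost in dollars per megabyte for a disk configuration."""
    return list_price / (capacity_gb * 1000)

# Base Tarpon at announcement: $250,000 for 228 GB
tarpon_1998 = dollars_per_mb(250_000, 228)
print(f"Base Tarpon, June 1998: ${tarpon_1998:.2f} per MB")  # roughly the $1.09/MB cited above

# Current figures from the article: Tarpon has fallen to about $0.50/MB,
# while internal AS/400 disk sells for under $0.20/MB.
tarpon_now, internal_now = 0.50, 0.20
print(f"External-vs-internal premium: {tarpon_now / internal_now:.1f}x or more")
```

Even at today’s lower prices, the SAN-ready external array carries a premium of two and a half times or more over internal AS/400 disk.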
This price disparity between internal AS/400 disks—remember, the 9337s have been withdrawn from marketing and the only outboard AS/400 disk arrays are Tarpon and Shark—and external units will continue with the Shark arrays. As far as hardware technology goes, the Shark controller is the same as the Tarpon controller except that the Shark has 384 MB of nonvolatile cache memory to improve its reliability, it supports IBM’s 36-GB disk drives (it will support 72-GB drives when they are announced next year), and it will eventually participate in Fibre Channel SANs. The Shark array also includes lots of software functionality that currently comes only in IBM’s mainframe disk arrays, which means IBM can once again compete head-to-head with EMC in the disk array market.
Shark comes in several different configurations. The 2105-E10 and E20 models can be equipped with 420 GB of capacity using IBM’s 9-GB disks. The E20 can also have twice as many disks to boost the base capacity to 840 GB using 9-GB drives. These are dubbed “ultra high performance” configurations because the number of disk arms is
relatively high (one per 9-GB drive, in fact). The mere “high performance” E10 and E20 models have from 420 GB to 5.6 TB of storage and use IBM’s slower (yet fatter and cheaper) 18-GB disks. And the so-called “capacity” E10 and E20 configurations have from
1.7 TB to 11.2 TB of total capacity and use 36-GB drives, which are slower still. The base Shark E10 and E20 models have an initial list price of $231,300 or $.55 per MB. It’s likely that IBM will sell them for around $.45 per MB. Additional disks for the Sharks, which come in eight-packs, have list prices that range from $.16 to $.33 per MB, with the faster 9-GB disk drives having the highest price and the slower 36-GB drives having the lowest price.
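One way to see why the 9-GB configurations earn the “ultra high performance” label is to count disk arms per terabyte. The sketch below is illustrative only; the drive sizes come from the paragraph above, but the arm counts are simple division, not IBM-published figures:

```python
# Rough count of disk arms (spindles) needed to deliver one terabyte with
# each drive size offered in the Shark tiers. More arms per terabyte means
# more heads seeking in parallel, which is why the small, fast 9-GB drives
# top the performance tiers and the fat 36-GB drives anchor the "capacity"
# tiers. These counts are simple division, not IBM figures.

shark_drive_sizes_gb = {
    "ultra high performance": 9,
    "high performance": 18,
    "capacity": 36,
}

for tier, drive_gb in shark_drive_sizes_gb.items():
    arms_per_tb = 1000 // drive_gb
    print(f"{tier:>22}: {drive_gb:>2}-GB drives, ~{arms_per_tb} arms per TB")

# Price check on the base configuration: $231,300 for 420 GB
print(f"Base Shark: ${231_300 / (420 * 1000):.2f} per MB")  # about $0.55/MB
```

By that rough measure, the 36-GB configurations offer only about a quarter of the seek parallelism per terabyte of the 9-GB configurations, which is consistent with IBM’s slower-but-cheaper positioning of the capacity tiers.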
In addition to having more expansion room than the Tarpons, the Sharks can be equipped with lots of supplementary software, none of which comes cheap. These features are all aimed at S/390 shops and provide functions that compete with EMC’s Symmetrix Remote Data Facility (SRDF) and TimeFinder features, which have recently been ported to the AS/400. SRDF, with a price tag of $85,000, allows AS/400 shops to store multiple copies of databases and applications on mirrored, remote Symmetrix arrays. TimeFinder, which starts at $60,000, is essentially SRDF implemented on one Symmetrix array and can be used to create two copies of data that are synchronized and can feed two separate AS/400s, such as a data warehouse and a production system. AS/400s wanting to use either of these Symmetrix functions need to buy another EMC program called CopyPoint, which costs $44,500. The IBM Shark array’s FlashCopy function, which costs from $23,700 to $180,000 depending on the capacity of the array, creates an instant snapshot of a disk volume so multiple applications can use it and gives users the same capabilities as EMC’s TimeFinder. (A disk volume is roughly analogous to the AS/400’s auxiliary storage pool, or ASP. Volumes and ASPs are virtual disks created out of software that rides on top of real disk drives.) The Shark’s Peer-to-Peer Remote Copy function, which costs from $33,600 to $270,000, depending on the size of the Shark array, is similar to SRDF. These two Shark functions are available to OS/390, UNIX, and Windows NT servers, but not to AS/400 shops.
That’s right. The main external storage array that is the heart of IBM’s SAN initiative for the AS/400 doesn’t provide the advanced and competitive remote copy and snapshot functions that AS/400 customers would love to offload from their AS/400 servers to their storage servers. IBM says that its Business Partners offer a rich heritage of high-availability software and that this is the path of the future. Maybe Lakeview, Vision, or DataMirror will see a business opportunity and start porting their code to run in the Shark’s controller card rather than on the AS/400 proper, thereby saving you processing power for real work. Then again, maybe they won’t. No matter what they do, IBM has made it clear that the AS/400 is not going to be put on a level playing field with S/390, UNIX, or NT. If this doesn’t make a whole lot of sense to you, join the club. Then complain to IBM.
Does It Matter?
IBM’s disk marketeers in Rochester say that for the foreseeable future, the vast majority of AS/400 customers will be able to get by with plain old direct-attached SCSI disks. What everyone missed back in June when IBM was touting its AS/400 SAN strategy and future projects is that this strategy really comes down to creating a Fibre Channel IOP card for AS/400s and tweaking OS/400 to support it so you can attach Shark arrays to them. Nothing more. IBM’s been talking about this for years, and this is not a SAN strategy. When AS/400s can use all the functions in the Shark array that UNIX and NT servers can, or are provided equivalents by AS/400 high-availability partners, maybe then IBM can call it a strategy. Not until.
That doesn’t mean that Shark and Fibre Channel will not be important advancements for high-end AS/400 shops. There are two main benefits to the forthcoming Fibre Channel support over the current UltraSCSI interconnection technology used in AS/400 servers. The first is speed. One of the big bottlenecks in AS/400 servers is I/O
bandwidth. Even if IBM can build a supercomputer-class backplane within the Northstar servers (as it has), UltraSCSI, which runs at a peak 40 megabytes per second (MBps) in burst mode, just can’t get enough data to the backplane in the ever-shortening times that faster and faster processors require. Fibre Channel links run at 100 MBps, and duplex links are expected at 200 MBps by the time the AS/400 supports Fibre Channel. With 800-MHz I-Stars coming in the second half of 2000, when IBM expects to deliver Fibre Channel support on the AS/400, I/O bandwidth will definitely be an issue. The other benefit of Fibre Channel peripheral attachment to AS/400s is distance. SCSI connections are limited to 25 meters, while OptiConnect fiber optic system bus interconnections on big AS/400 servers are limited to 500 meters. But Fibre Channel links can put peripherals (or mirrored servers) as much as 10 kilometers away from their production servers.
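To put those numbers in context, here is a minimal sketch comparing how long a given chunk of data takes to cross each link at its rated peak speed (the link speeds and distances come from the paragraph above; the 100-GB batch is a hypothetical figure for illustration, and real sustained throughput would be lower):

```python
# How long does it take to push a batch of data across each interconnect
# at its rated peak speed? Real sustained throughput is lower, but the
# ratios show why faster processors and backplanes gain little if the
# I/O links can't keep them fed.

PEAK_SPEEDS_MBPS = {
    "UltraSCSI (burst)": 40,
    "Fibre Channel": 100,
    "Fibre Channel, duplex": 200,
}

BATCH_GB = 100  # hypothetical nightly batch, for illustration only

for link, mbps in PEAK_SPEEDS_MBPS.items():
    minutes = (BATCH_GB * 1000) / mbps / 60
    print(f"{link:>22}: ~{minutes:.0f} minutes to move {BATCH_GB} GB")

# Distance limits cited above: SCSI about 25 m, OptiConnect about 500 m,
# Fibre Channel up to 10 km between server and peripherals.
```

At peak rates, the same 100 GB that ties up an UltraSCSI link for roughly 40 minutes moves in about 17 minutes over a single Fibre Channel link and about 8 minutes over a duplex link.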