What's old is new, baby. Everything comes back into fashion. Or, as one of the networks put it in a catchphrase: It's new to you!
At least that seems to be what the IT industry is doing these days. An old saw goes something like this: "All the algorithms were written in the '60s, and we're just re-implementing them." While that's a bit of an exaggeration, there is a serious kernel of truth underlying it. For example, let's take "multi-tiered architectures." While all the rage nowadays, especially with the heavy emphasis on Web services, in reality this particular construct has been around almost as long as computers themselves. Of course, we didn't call it "multi-tiered architecture"; we called it by simpler, gentler terms such as "client/server" or "peer-to-peer." And it was a lot harder, too; you had to decide which communication protocol to use and which hardware to support, and then come up with your own message structure. And if the two machines weren't physically connected, one had to call the other on the telephone using this strange device called a "modem." None of this Internet and sockets and XML stuff. Yeah, I know: "Back when I was a boy, we walked five miles to school...uphill...both ways!"
But seriously, my point in all of this is that lately we've been seeing a change in computing: a trend back toward an older construct that seemed to have vanished forever.
Software as a Service
So on to the topic of today's missive: Software as a Service (SaaS). Here is a perfect example of taking something old and making it new by simply slapping a new label on it.
I mean, look at it. What are we talking about with SaaS? We're talking about using someone else's hardware and software to perform your IT functions. Well, if you weren't programming back in the '70s and '80s, you might not recognize this particular concept, but back then it was all the rage. We called it a "Service Bureau," and most companies that weren't at the Fortune 500 level used it (as did many who were). The reason was quite simple: Few small businesses could afford the hardware. I don't recall the going rate for a System/3 Model 15D with a 1502 line printer, but I have to guess it was close to $100,000, and back in 1970 that was some real dough (about half a million in today's dollars).
So to have one of these newfangled computing engines, first you had to buy a hugely expensive machine that required its own power (many of these behemoths required their own air conditioners). Next, you had to find people to program it. Remember that computer programming was not a hot topic at that time, and in fact few colleges had any sort of Computer Science (CS) curriculum. Heck, in the early 1980s, the primary CS course at Illinois Institute of Technology consisted of writing FORTRAN on punched cards. And after all that expense, you still had to design and program your system, which involved somehow getting requirements from the user and translating them into something programmers could understand. As difficult as that task is to this day, imagine what a nightmare it was back then!
So instead, it was simply more cost-effective to box up your (often hand-written) paper transactions and send them to a Service Bureau, which would load up your programs, keypunch your data into your files, run your reports, and then box them up and ship them back to you. It wasn't exactly instantaneous, but it was often overnight and definitely far less expensive than the alternative.
Online Computing
One innovation that prolonged the useful life of the Service Bureau was the concept of online computing. Terminals set up at the customer's site connected remotely to the central computer in the Service Bureau. These terminals could be used to enter data and, more importantly, to get online information immediately. A whole new era of online, interactive computing came into play. But it was still all running on a single, centralized computer. And this particular model worked wonderfully through the '80s and '90s.
Viva la Revolution
Then came the revolution of the so-called personal computer, or PC, an almost inevitable extension of Moore's Law, which roughly states that the number of transistors on a chip (and with it raw computing power) doubles every two years. Once the PC was introduced, and with it the concept of cheap, readily available computing power, the scene changed seemingly forever. Not only did the hardware and software get less expensive, but at roughly the same time, languages started to show up that appealed to the new phenomenon of "non-technical" programmers. This didn't just mean programmers who weren't classically trained in Computer Science courses; the midrange world in particular is filled with programmers who originally came from the business side of the world. This is why many RPG and COBOL programmers are so effective; they are their own business analysts. No, when I say "non-technical programmers," I mean people who don't really understand how computers work but who have managed to teach themselves enough of a programming tool like Visual Basic to be able to access data and get usable information to show up. I'm not casting aspersions here; this is the same person who creates the incredibly complex spreadsheet that everybody uses but nobody can modify.
And while Moore's Law began to drop computers, especially IBM's midrange computers, into the price range of many businesses, the PC market began to expand, first with the Lotus products and then with the Microsoft suite (yes, there were others, but more people remember Lotus 1-2-3 than VisiCalc). Combined with the explosion of programming tools for these new PCs, what ended up happening in offices around the world was the development of what I like to call the "bipolar computer user." These people spent half their time on green-screen applications, inputting and searching for business data, and the other half of their time punching that same data into ever-more-graphical applications on their ever-more-powerful workstations.
DP Is in the House! DP Is in the House!
More and more, the workstations needed real-time access to the data, and as prices continued to drop, this eventually led more companies to set up in-house Data Processing departments, typically running midrange computers. I grew up in the ERP sector, and the growth of the platform within that market was phenomenal; it seemed that every manufacturer or distributor in the country needed an ERP system, and IBM midrange hardware was the hardware of choice: first the System/3x line and later the AS/400.
This period, from the mid-1980s to the late 1990s, was the heyday of the midrange computer and saw the rise of a wide range of companies dedicated to the platform, from small entrepreneurial companies to consulting firms to giant midrange software developers nearing a billion dollars a year in revenue, such as System Software Associates. Semi-custom business applications were the theme of the day, and an entire industry grew up around these newly centralized businesses.
The Real Revolution
But while the midrange platform with its signature text-based screen (lovingly dubbed the green-screen, even though it was often amber or later full color) raised the productivity of companies around the world, that upstart PC was also doing its best to get a seat at the table. At first something of a novelty, the PC was used primarily as a remote terminal with a little bit more intelligence. You could write applications, but the requirement of getting data from the central computer meant that you needed "communications programmers" with "client/server" experience, and those were few and far between.
But eventually the PCs themselves became powerful enough to store and process the data for an entire business. This brought about what I consider to be the real revolution in computing. First, companies had a centralized business application computer with a powerful, business-oriented operating system surrounded by satellite workstations with less-capable but better-looking graphical tools. Next, those satellite workstations got together on a local area network (LAN) to share files, using a beefed-up PC as the file server. And then, as these servers got more powerful and less expensive and as more and more people were able to use the simple programming languages available, these servers began to usurp the role of the midrange business application server. Simple applications such as spreadsheets and database queries began to proliferate and be shared.
Of course, users being users, they demanded more and more out of their applications. As certain dedicated applications (notably Microsoft Office, the Lotus products, and the various Integrated Development Environments) got better and better graphically, users came to expect this level of interface from all their applications, including the ones that used to run perfectly well using a green-screen. And as some vendors spent a ton of money on the graphical applications, they of course made a point of saying that it was a differentiator, that the graphics in and of themselves made the applications better, and that green-screens were bad. And while plenty of users were just fine with the green-screens and in fact knew that the screens worked as well as, if not better than, the newfangled GUI versions, that didn't matter. Right or wrong, GUI was here to stay.
As this trend continued, it became clear that the applications would need to be a bit more sophisticated than simple spreadsheets. Users quickly outgrew the office automation tools and demanded that their workflow be at least as integrated as their original green-screen applications, and while you might be able to let users download data into a spreadsheet, letting them upload it back into the system without some very careful editing was a recipe for disaster.
The Thick Client
For a short period in the mid-'90s, we saw the thick client. A thick client was an application that typically required most of the power of the workstation and was able to communicate with the host to get data and then present it graphically to the user. All of the business logic, from the workflow to the validation, was done on the workstation. This made for some very cool graphical applications but also uncovered some interesting flaws in the architecture.
The two most noticeable issues with the thick client were the problems with deployment (just trying to keep Client Access in sync on 50 PCs was tough; keeping a massive custom multi-application graphical suite up and working was nearly impossible) and the fact that no two applications looked the same. One application needed a right-click; another needed a left-click; a third needed a couple of keystrokes. And sometimes this was true for different screens within the same application!
These weaknesses have relegated the client/server model to a niche role, although a few applications still use the technique. Screen scrapers are one example, since client/server is pretty much all they do: they convert the 5250 data stream to a graphical interface, and if they do anything more intelligent than mechanically converting the data stream to HTML, they need some sort of thick client to contain and process the logic. But in general, it's been found that keeping business logic in a central repository is a much better solution; hence the demise of the thick client.
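For a concrete picture of what "mechanically converting the data stream to HTML" means, here's a deliberately simplified sketch in Python. A real screen scraper parses the actual 5250 data stream coming from the host; this toy version assumes the panel's fields have already been extracted into a simple list, and the field names and values are made up purely for illustration.

```python
# Toy illustration of mechanical screen scraping: take the fields that were
# displayed on a green-screen panel and re-emit them as an HTML table.
# Real products parse the 5250 data stream itself; these hard-coded fields
# are hypothetical stand-ins for that step.

fields = [
    # (label, value, input_capable)
    ("Customer number", "10420", False),
    ("Name", "ACME WIDGETS", False),
    ("Credit limit", "25000", True),
]

def screen_to_html(fields):
    """Map display-only fields to plain cells and input-capable fields to <input> tags."""
    rows = []
    for label, value, input_capable in fields:
        if input_capable:
            cell = '<input name="{0}" value="{1}">'.format(label, value)
        else:
            cell = value
        rows.append("<tr><td>{0}</td><td>{1}</td></tr>".format(label, cell))
    return "<table>\n" + "\n".join(rows) + "\n</table>"

print(screen_to_html(fields))
```

Anything smarter than this one-for-one mapping (workflow, validation, combining screens) is exactly the logic that ended up living in a thick client.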
But even though the thick client experiment hasn't been particularly successful, two requirements arose from this period. First, it was clear that we needed a common way to access data. Writing a new custom protocol every time you needed to communicate with another server created a lot of duplicated effort. As it turned out, the SQL Call Level Interface (CLI) was available, and Microsoft released its implementation of it as Open Database Connectivity (ODBC). For better or worse, databases were now pretty much open to anyone with a user ID and a copy of Visual Basic. Second, it became obvious that we needed a common user interface that everyone could agree on. The good news was that Tim Berners-Lee had just announced the concept of the World Wide Web, and the idea of the browser as the standard business application interface was all but inevitable, although you might not have thought so at the time.
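To give a sense of just how open "open to anyone with a user ID" really was, here's a minimal sketch of the kind of ad hoc query ODBC made possible. It's written in Python with the pyodbc driver rather than Visual Basic, and the DSN name, credentials, table, and column names are all made up for illustration.

```python
# A minimal sketch of ad hoc ODBC access: once databases spoke a common
# call level interface, pulling business data off the central system no
# longer required a custom protocol or a host-side program.
# The DSN, user ID, password, table, and columns below are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=ERPDATA;UID=someuser;PWD=secret")
cursor = conn.cursor()

# Grab a little data straight from the central database.
cursor.execute(
    "SELECT custno, custname, balance FROM customers WHERE balance > ?", 1000
)
for custno, custname, balance in cursor.fetchall():
    print(custno, custname, balance)

conn.close()
```

Convenient, certainly, but it's also exactly the sort of unmediated access that made the "anyone with a user ID" part a mixed blessing.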
The Next Trend
Anyway, what happened next was that application requirements continued to grow. And while the new tools made it easy to put together applications, they weren't necessarily that good at making enterprise-level applications with the sort of scalability, reliability, and auditing that modern users required. It could be done, but it required a lot more horsepower than could be easily justified on a desktop. So the applications began to migrate back to a server.
However, the server was no longer the midrange but was instead a big Windows box. More and more, we saw applications being developed for Windows machines. Some were thick client applications written in Visual Basic using SQL Server as a database; others made use of the browser but only through the Microsoft IIS Web server. At this point, glaring deficiencies in the Windows architecture became more apparent, particularly in security, scalability, and reliability. You can argue all you want about how far Microsoft has come in those areas, but in the late '90s and the early part of this decade, Windows was not secure, did not scale, and was not reliable.
So now you needed legions of servers for reliability and user load. This was the dawn of the "server farm" and all that it entailed (and that still didn't address security). And what has been the logical extension of this trend? Blade servers! How many high-powered chunks of silicon can you slam into a small room? I don't know about you, but I find it somewhat disconcerting when the limiting factor of an architecture is how much the air conditioning is going to cost.
What's Old Is New!
This brings us finally back to my original statement about Software as a Service. What's old is definitely new. Because right now we're witnessing a turning point in which companies are starting to throw out their massive in-house infrastructure of dozens or hundreds of servers and all the associated maintenance, licensing, support, and utility expenses. And what do they want to replace it with? Companies that provide software, both canned and customized, on a monthly fee and/or per-user basis. The data is stored on the provider's hardware, and said hardware and the programs running on it are completely maintained by the provider.
Employees of the client company will use the browser as a remote access device to input data and either inquire into the database or generate reports in various formats. At this point, the only difference between SaaS and the old Service Bureau is that you'll be using a browser on your PC instead of a 5250 Model 12, and you'll get your reports emailed to you instead of printed, burst, boxed, and shipped.
Coming Soon to an Application Near You
So what will happen? Well, first of all, what has gone on to this point didn't happen overnight. Evolution is continuing, and we have yet to see the end of it. For example, I don't think anyone can quite envision the effects of personal communication on our future application designs. Let's be honest: I don't think many people realized that in 15 short years the World Wide Web would become a required component of everyday business. I do know that text messaging is a fundamental concept to folks a bit younger than I am and that while I don't like instant messaging, in a very short time my aversion may result in my being considered as eccentric as someone who won't use a telephone or an automobile today (hmmm... Joe Pluta, Online Amish). At the same time, I think it's going to be a little while, at least, before every employee needs cell phone video conferencing. So I think there will be change; I just don't expect anything to happen immediately.
But I do know this: The return of the concept of centralized, multi-tenant computing (whatever its nom du jour—whether it's application service provider, on-demand computing, or Software as a Service) will require a machine designed from the ground up to handle hundreds or thousands of users simultaneously, with persistent connections and a super-fast integrated database. One with built-in languages designed to quickly develop powerful rules-based business logic and designed to interface to all popular computing environments, including Java/J2EE and Linux, not just Windows. One with a powerful, integrated, multi-language IDE with cross-platform debugging capabilities.
And I just happen to know where to find one....
Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. and has been extending the IBM midrange since the days of the IBM System/3. Joe has used WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books including E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country. You can reach him at