The question of how to run fast in a slow world is still appropriate to ask about the AS/400. Machines keep getting faster, but we keep filling them up with more applications, more sophisticated uses, more volume, more overhead, and more whatever.
We've all heard the announcement from IBM. RISC is here, and the price performance sounds good. But for many of us, it's just another thing to throw at the door to keep the wolves away for a while. It won't be long before even a shiny new black-box, RISC-based AS/400 gets busy. So what's the answer? Well, maybe it's time to sit back and think about what you are doing and how you could do it smarter from a performance viewpoint.
When the AS/400 architecture (first available on S/38 in 1980) started to be installed by lots of users, it became quite clear that the way to make the architecture run fast was to write big programs that opened up the required files and kept them open. To run fast, you had to do as much work in a single program as possible. It was also quite clear that the architecture required a much bigger engine than the early S/38 models offered.
Get Ready to Run
Times have changed. The hardware has caught up, and the emphasis on a single large program has been minimized to some degree with both hardware and software alternatives. However, the basic architectural comment remains true. Opening files, executing CL commands such as Override Database File (OVRDBF), calling programs: all of the "get ready to run" stuff is still relatively slow on the AS/400. There are plenty of things you should do only once if you want your application to run fast.
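For example, the classic way to pay the open cost only once is a setup program that runs at the start of the job; later programs in the same job then share the already-open data path. Here is a minimal CL sketch, with hypothetical file and library names:

   PGM
      /* SHARE(*YES) lets every later open in this job reuse */
      /* the open data path instead of building a new one    */
      OVRDBF FILE(ORDERS) TOFILE(MYLIB/ORDERS) SHARE(*YES)
      OPNDBF FILE(MYLIB/ORDERS) OPTION(*ALL)
   ENDPGM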
The Integrated Language Environment (ILE) will probably improve CALL performance in some cases. However, the concept of writing modules that get bound together is foreign to many AS/400 programmers. ILE includes the concept of service programs, which allow programs to be bound at execution time, but that binding adds overhead. For example, you wouldn't want to crank up an activation group and do a lot of binding for a single transaction.
Perhaps one of the best internal designs developed for the AS/400 is its ability to handle lots of jobs that need to run briefly and then go back to sleep again. This is very typical of interactive use. The concept of the Process Access Group (PAG) helps make this work effectively on the AS/400, yet the PAG is still a mystery to many programmers. The PAG is essentially the place where the system stashes the stuff that is unique to a job. Programs are shared by different users (read-only code), as are database pages and indexes, but the identification of which program is running, what instruction it is on, and what its variables are is unique to a job.
The system handles much of the PAG on a mass transfer basis (mostly a "whoosh"), which means a big glob of the PAG comes quickly into main storage when you need it (see Figure 1).
This technique for the PAG requires lots of main memory, which was certainly not available in the early days of the S/38. Most of the other transfers in and out of main storage are done in much smaller increments, such as page sizes. This is particularly true of the "get ready to run" stuff.
So, because the system handles the PAG very efficiently but does not do an efficient job on the "get ready to run" stuff, the natural conclusion is to get everything you ever wanted to do up and ready in the PAG. That way you do it only once.
But another "gotcha" appears: you can develop a huge PAG and kill the system performance trying to move the PAG in and out of the main storage. Performance is like a balloon. If you squeeze it in one place, it tends to pop out in another place. The trick is to have it pop out where it doesn't hurt you.
Some AS/400 programmers don't know how to write a solution in which everything is open and ready to go. Some know how to do it, but don't because of the "huge PAG syndrome." As applications get more complex (and users wear more hats because they need to interrupt what they are doing to do something different), the pressure on the system to do more "get ready to run" stuff keeps increasing. The net is that most systems spend too much time doing "get ready to run" stuff and not enough time on the "do the transaction" stuff.
So, What's the Answer?
There is never one answer on performance, but let me review two underused functions that have been around for a long time and still don't get enough play.
This isn't a full course on performance, but try to keep a few things in mind as you consider the conclusions:
• To run fast in a slow world, you have to minimize the number of times the "get ready to run" stuff is performed by the application (open files, initiate programs, run override commands). The ideal solution is once a day.
• The system is efficient with the PAG and is able to handle many jobs becoming active for short periods of time.
So here goes with two of my favorite solutions.
Group Jobs (or Workstation Key)
Yes, I know it's been around awhile and it sounds old hat, but think about group jobs for a moment. They offer the opportunity to split the PAG into multiple chunks (a separate PAG for each group job) so you can afford to keep more programs and files open. The user selects an option or presses the Attention key to get a menu of available jobs. (For more information, see "Getting Started with Group Jobs," elsewhere in this issue of MC.)
The system operates efficiently in terms of suspending one group job and invoking another one. It's the same code that "whooshes" the PAG in and out. The user can go right to the function that is needed and is ready to go with open files, programs already initiated, and overrides set. You do have to pay a bigger price the first time, but after that, it's fat city.
The important question to ask here is, "Can the system switch to another group job faster than a single job could end its current function and do the 'get ready to run' stuff for the new function?" Remember that switching jobs is a strong suit of the architecture, and "get ready to run" is not.
A design for group jobs can vary significantly from simple to very complex. It's easy to get the users confused about where they are with a lot of options and Attention key menus. A simple approach may be more effective.
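If you want to see the mechanics, here is a minimal CL sketch of making the current job a group job and transferring to a second one. The group job names ORDENT and INQUIRY and the programs ATNMENU and INQPGM are hypothetical; CHGGRPA, SETATNPGM, and TFRGRPJOB are the standard commands involved.

   PGM
      /* Make the current interactive job a group job so it  */
      /* can be suspended and resumed rather than ended and  */
      /* restarted                                           */
      CHGGRPA GRPJOB(ORDENT)

      /* Optionally hook the Attention key to a menu program */
      SETATNPGM PGM(MYLIB/ATNMENU) SET(*ON)

      /* Transfer to a second group job. The first time, the */
      /* system starts it by calling INQPGM, which does all  */
      /* of its "get ready to run" work; on later transfers, */
      /* the job's PAG simply "whooshes" back in with its    */
      /* files still open.                                   */
      TFRGRPJOB GRPJOB(INQUIRY) INLGRPPGM(MYLIB/INQPGM)
   ENDPGM

Transferring back with TFRGRPJOB GRPJOB(*PRV) resumes the first job right where it left off.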
If you've never tried group jobs, you can do something very simple with the TAA Tool called ATNPGM. This tool is in both QUSRTOOL and the TAA Productivity Tools product. It lets you flip-flop between two group jobs. Give it a try and convince yourself that the system can make the switch very quickly.
A similar alternative exists for some workstation devices (see Figure 2). These devices allow a configuration that tells the system there is more than one device and allows a key to switch to another job. This is a slightly faster transfer, but it is more limited in number than group jobs.
Data Queues
I know you've heard this one before too, but the number of people who use data queues is still fairly small. Data queues offer the fastest method of communicating between two jobs. It's a way to offload work from one job and get it performed in another job. The traditional method of doing this is with the Submit Job (SBMJOB) command. But, if you want to talk about overhead, SBMJOB has a lot of it. The system has a lot to do to establish a job. Then, your application does the "get ready to run" stuff and, finally, you get to process. This cycle is repeated for every batch job.
With data queues, you can start a never-ending batch job that does the system "start a job" function and the application "get ready to run" stuff once. Then, the job waits for an entry to appear on a data queue. The simplest form of a data queue job is a one-way street in which you just keep sending requests without getting any feedback to the original application (same concept as a batch job). Figure 3 shows this in more detail.
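Here is a minimal CL sketch of such a never-ending server job, assuming the queue was created with CRTDTAQ DTAQ(MYLIB/REQQ) MAXLEN(128); the queue, library, and worker program names are hypothetical. It calls the standard QRCVDTAQ API with a negative wait time, which tells the system to wait forever:

   PGM
      DCL VAR(&DTAQ) TYPE(*CHAR) LEN(10) VALUE('REQQ')
      DCL VAR(&LIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0)
      DCL VAR(&DATA) TYPE(*CHAR) LEN(128)
      DCL VAR(&WAIT) TYPE(*DEC)  LEN(5 0) VALUE(-1)

      /* Do the "get ready to run" stuff here, once: open    */
      /* files, run overrides, call programs that stay active */

      /* Sleep until the next request arrives on the queue   */
 LOOP: CALL PGM(QRCVDTAQ) PARM(&DTAQ &LIB &LEN &DATA &WAIT)

      /* Process the entry with everything already open      */
      CALL PGM(MYLIB/PRCREQ) PARM(&DATA)
      GOTO CMDLBL(LOOP)
   ENDPGM

The interactive job feeds it with a matching CALL PGM(QSNDDTAQ) PARM(&DTAQ &LIB &LEN &DATA). The system's "start a job" cost and the application's opens and overrides are paid only once, no matter how many requests follow.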
The more complicated form is the one in which the application wants to hear back whether the function worked or failed. This normally involves a second data queue or one for each job that is communicating to the service job.
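One common arrangement, sketched below with hypothetical names, is for each requester to own a private reply queue and to carry that queue's name in the request entry itself, so the server knows where to answer:

   PGM
      DCL VAR(&REQQ)   TYPE(*CHAR) LEN(10) VALUE('REQQ')
      DCL VAR(&RPLYQ)  TYPE(*CHAR) LEN(10) VALUE('RPLYQ01')
      DCL VAR(&LIB)    TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL VAR(&SNDLEN) TYPE(*DEC)  LEN(5 0) VALUE(128)
      DCL VAR(&RCVLEN) TYPE(*DEC)  LEN(5 0)
      DCL VAR(&REQ)    TYPE(*CHAR) LEN(128)
      DCL VAR(&RPLY)   TYPE(*CHAR) LEN(128)
      DCL VAR(&WAIT)   TYPE(*DEC)  LEN(5 0) VALUE(60)

      /* First 10 bytes of the request name the reply queue  */
      CHGVAR VAR(&REQ) VALUE(&RPLYQ *CAT 'ORDER12345')
      CALL PGM(QSNDDTAQ) PARM(&REQQ &LIB &SNDLEN &REQ)

      /* Wait up to 60 seconds for the answer; &RCVLEN comes */
      /* back as 0 if the wait timed out                     */
      CALL PGM(QRCVDTAQ) PARM(&RPLYQ &LIB &RCVLEN &RPLY &WAIT)
   ENDPGM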
If your application has some things it can do asynchronously, a data queue can be an excellent choice. You can offload work from your interactive application to a single job that is dedicated to performing a service function for you or multiple users. A single job dedicated to a function with its files open and ready to go can turn out a lot of work.
A Matter of System Direction
Good old group jobs and data queues don't sound very exciting, but they can do a lot for you. Both functions provide a way to get an application solution dedicated to solving a problem rather than mostly dedicated to causing a lot of overhead. Do the "get ready to run" stuff once and let the system concentrate on doing your application function.
When you think about performance design, you also have to consider the direction the system is likely to go. If you look at the past, you can tell that the direction is toward bigger main memory and faster CPUs. Sure, disk processing has gotten faster, but not by the same magnitude that main memory sizes and raw CPU horsepower have increased. That plays right into one of the strong suits of the AS/400: the ability to handle lots of PAGs and move them in and out of main storage quickly.
Jim Sloan is president of Jim Sloan, Inc., a consulting company. Now a retired IBMer, Sloan was a software planner on the S/38 when it began as a piece of paper. He also worked on the planning and early releases of the AS/400. In addition, Jim wrote the TAA tools that exist in QUSRTOOL and is now the owner of the TAA Productivity Tools product. He has been a speaker at COMMON for many years.