A well-running AS/400 is an AS/400 that is configured (or tuned) to support a particular job mix. We generally describe a job mix in broad terms, like what percentage of interactive and batch jobs the computer is running. Unlike other computers, the AS/400 does equally well running batch or interactive jobs. But these job types are different by nature, so the AS/400 relies on you, the customer, to make adjustments favoring one job type over another as the need arises. By definition, a job mix changes, and, as it does, the customer needs to make slight adjustments to favor the new mix.
In a typical shop, this means tuning the AS/400 early in the morning to favor interactive processing and then retuning it in the evening for nightly batch processing. This isn't difficult, but it is repetitious: the same two changes are made every day. The smart shop manager makes life easier by automating repetitious computer tasks.
Once you get the concept, you'll find you can write programs to make some really complex changes to your AS/400 (and unmake them just as easily). In this article, I'll take you from the basics to the advanced techniques of automated tuning.
I have just a couple of things you should be aware of before we start. The tools used will be CL programs, so you should probably be at home with them. Also, there's no magic here. In a nutshell, we're automating what you would do manually. The implication, then, is that you are knowledgeable enough about system tuning to do these things manually. You may want to review several SYSOP columns I have written in the past year about setting up and controlling subsystems, especially their memory and activity levels (see "SYSOP: Memory Pools," MC, May 1998, and "SYSOP: Activity Levels," MC, June 1998).
Another caveat is that these are just examples. Every AS/400 is different, and every application mix is different. You'll have to take the substance of this article and figure out how to apply it to your shop.
Enough said. Let's dive in. I'll give three examples: a basic, an intermediate, and an advanced technique.
Basic
The classic example of tuning is interactive adjustments during the day and batch adjustments at night. Here's what you really do. Figure 1 shows the result of the Work with Subsystems (WRKSBS) command on a live (although small) AS/400. It shows that subsystem QBATCH uses two memory pools, 2 (or *BASE) and 3. Subsystem QINTER uses memory pools 2 and 4.
Figure 2 shows the result of the Work with System Status (WRKSYSSTS) command on the same system. Here we see that memory pool 3 contains 1,308 KB, and memory pool 4 contains 6,548 KB. For an initial pass at an interactive/batch shift, I just use my total user memory and perform a 90/10 split on it; I'll refine the split later as I see how it works. For now, the two memory pools have a total of 7,856 KB, so I'll use 7,070 KB (7,856 x .9) and 786 KB (7,856 x .1).
That's memory. Also note from Figure 2 the activity levels for each: QBATCH has 4, while QINTER has 6. That's not too bad for supporting interactive work, but I manipulate the activity levels along with memory.
During the daytime, you probably want to favor the interactive users. To do so, assume that batch jobs can be allowed to run a little longer; no one's waiting for them. In other words, to keep response times under two seconds, accept that some large reports will take 10 to 15 minutes to run instead of five. That's rule number one of system configuration: really great performance in one area is paid for by not-so-great performance in another.
The daytime program, let's call it DAYSET, will want to have 7,070 KB of storage and, say, an activity level of 8 in QINTER's memory pool. Let's take QBATCH's memory pool's activity level from 4 to 2 and its memory to 786 KB. The command to do this is Change Subsystem Description (CHGSBSD), because the memory pools on this machine are private. I know they are private, because they are named 3 and 4. If they were named *BATCH or *INTERACT, I would use the Change Shared Pool (CHGSHRPOOL) command to change memory and activity levels. Figure 3 has the CL program DAYSET that will make those changes.
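Figure 3 is not reproduced in this excerpt, so here is a minimal sketch of the kind of CL DAYSET could contain. The subsystem pool ID of 2 is an assumption; it is whatever entry the private pool occupies in each subsystem description, which you can check with the Display Subsystem Description (DSPSBSD) command.

/* DAYSET (sketch): favor interactive during the day */
PGM
/* Give QINTER's private pool 7,070 KB and an activity level of 8 */
CHGSBSD SBSD(QINTER) POOLS((2 7070 8))
/* Shrink QBATCH's private pool to 786 KB and an activity level of 2 */
CHGSBSD SBSD(QBATCH) POOLS((2 786 2))
ENDPGM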
Write and compile this program, but don't run it yet. Let's move on to the nighttime adjustment to favor batch processing. I'll start by visualizing my shop around 7:00 p.m. Everyone has gone home, and the place is deserted. I have two night shift operators who will leave as soon as they finish nightly processing. My job is to help them get out early while supporting the one or two interactive users still on. The first thing I'll do is steal memory from QINTER and give it to QBATCH. In this case, I'll just reverse the 90/10 split, give QBATCH an activity level of 4, and take QINTER down to a level of 2.
I want to take from system pool 4 (interactive) and give to system pool 3 (batch), but I can't do so directly. What I'll do is take away from pool 4. When I do that, the memory I take away will automatically go into *BASE (system pool 2). Then, I'll increase pool 3, and the increase will come out of *BASE.
I'll name the program NGTSET. A copy of its source is in Figure 4. Now, I have to worry about actually running these (and making sure they run). I like things to happen automatically (mistakes don't happen as often that way). I'll start by submitting NGTSET to start at 18:55 on the current day with the Submit Job (SBMJOB) command. Nightly processing starts at 19:00, and NGTSET runs in less than a second, so that should be all right.
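Figure 4 is likewise not shown here. A sketch of NGTSET, mirroring DAYSET with the split reversed (same assumption about the subsystem pool IDs), might look like this:

/* NGTSET (sketch): favor batch overnight */
PGM
/* Shrink QINTER's private pool first; the released memory goes to *BASE */
CHGSBSD SBSD(QINTER) POOLS((2 786 2))
/* Grow QBATCH's private pool; the increase comes out of *BASE */
CHGSBSD SBSD(QBATCH) POOLS((2 7070 4))
ENDPGM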
The batch procedures are pretty typical in that they are all bundled into a CL program. I want the program DAYSET to happen as soon as processing finishes, so I'll embed a CALL to it at the bottom of that program. I also want to reschedule NGTSET for the next evening. I'll do that by embedding SBMJOB with a Schedule time (SCDTIME) parameter set to 18:55 at the end of the batch processing CL program.
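Put together, the tail end of the nightly batch CL program might look something like this sketch; the job name and job queue name NGTSET are assumptions that match the dedicated subsystem described next:

/* End of the nightly batch processing CL program (sketch) */
/* Retune for daytime interactive work */
CALL PGM(DAYSET)
/* Reschedule NGTSET for 18:55 the coming evening */
SBMJOB CMD(CALL PGM(NGTSET)) JOB(NGTSET) JOBQ(NGTSET) SCDDATE(*CURRENT) SCDTIME(185500)
ENDPGM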
One final consideration about running NGTSET: I may not want it to run in QBATCH at 18:55. A long batch report may already be running in QBATCH when NGTSET tries to start, preventing NGTSET from running. For this reason, I will set up a dedicated subsystem called NGTSET, the same name as the program. The only thing that runs in this subsystem is program NGTSET. When I want NGTSET to run, it will run without contention.
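Creating that dedicated subsystem is a one-time setup. Here is a minimal sketch, assuming a job queue of the same name and a routing entry that runs everything through *BASE (a job that finishes in under a second needs no pool of its own):

/* One-time setup for the dedicated NGTSET subsystem (sketch) */
CRTSBSD SBSD(NGTSET) POOLS((1 *BASE)) TEXT('Runs program NGTSET only')
CRTJOBQ JOBQ(NGTSET) TEXT('Job queue for NGTSET')
ADDJOBQE SBSD(NGTSET) JOBQ(NGTSET) MAXACT(1)
ADDRTGE SBSD(NGTSET) SEQNBR(10) CMPVAL(*ANY) PGM(QSYS/QCMD) POOLID(1)
STRSBS SBSD(NGTSET)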
Before I move on, consider this scenario. I made decisions based on the nature of the shop, and you have to do the same. Are your daily batch processing requirements all that low (or can they take a lower priority)? Are all your users off when batch processing starts? Hospitals and police stations need 24-hour interactive availability, so you can't take away QINTER. But you may be able to isolate the evening interactive users into their own subsystem and just keep that one up with enough resources to support sub-two-second response time.
Another consideration is the nature of your batch processing. Is it nicely wrapped up in a CLP so that, when the CLP ends, batch processing is complete? I was in a shop that had the controlling CLP submit about 100 batch jobs and then finish. I couldn't take away batch resources and give them to interactive when the CLP finished, because those 100 batch jobs were still running. In that shop, I instead submitted the DAYSET program to start at 6:00 a.m.
By the way, in that case, I could never be sure that the nightly batch jobs would be done at 6:00 a.m., so I submitted a series of programs that would steal memory and activity levels from batch and give them to interactive every 15 minutes. The users weren't shift-based, and interactive users streamed in steadily from 6:00 a.m. till 9:00 a.m. By making an incremental transition from batch to interactive, the arriving interactive users had resources, and the remaining batch programs kept resources as long as they could. The point: Know your shop, its people, and your application when you design these tools.
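Rather than a series of submitted programs, the same effect could come from one looping program. Here is a sketch with an assumed subsystem pool ID and an arbitrary 523 KB slice per pass; twelve 15-minute passes between 6:00 a.m. and 9:00 a.m. approximate the full daytime split:

/* MORNSET (sketch): shift memory from batch to interactive in steps */
PGM
DCL VAR(&PASS) TYPE(*DEC) LEN(3 0) VALUE(0)
DCL VAR(&INTMEM) TYPE(*DEC) LEN(7 0) VALUE(786)
DCL VAR(&BCHMEM) TYPE(*DEC) LEN(7 0) VALUE(7070)
LOOP: IF COND(&PASS *LT 12) THEN(DO)
CHGVAR VAR(&BCHMEM) VALUE(&BCHMEM - 523)
CHGVAR VAR(&INTMEM) VALUE(&INTMEM + 523)
/* Shrink batch first so the released memory is in *BASE for QINTER */
CHGSBSD SBSD(QBATCH) POOLS((2 &BCHMEM 2))
CHGSBSD SBSD(QINTER) POOLS((2 &INTMEM 6))
CHGVAR VAR(&PASS) VALUE(&PASS + 1)
DLYJOB DLY(900) /* Wait 15 minutes */
GOTO CMDLBL(LOOP)
ENDDO
ENDPGM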
Intermediate
What I described has worked great for me in many shops. After a settling-in period and some adjustments, the AS/400 displays a new perkiness. However, that technique is very day-to-day oriented. I needed more when it came to monthly batch processing. Here's what I came up with.
First, the shop: Monthly processing would take place over the weekend. It started at 5:00 p.m. Friday, when all users were kicked off the system (excuse me, were asked to sign off). It continued around the clock until the wee hours of Monday morning.
During this time, I needed lots of batch processing power and minimal interactive. In fact, the only interactive support I needed to provide was for one support programmer who might be asked to fix a problem.
Just throwing a lot of memory into batch didn't really cut it in this case. I've advised you to know your application, and, in this case, I didn't. Of course, no one did. It was huge and undocumented. So I set about charting its programs in a flowchart, paying particular attention to programs that were dependent on other programs and programs that could run independently of everything else. I ended up with about 10 programs that had to run early on in the process and were single-threaded; that is, one didn't start until the previous one ended.
For those 10 programs, I could do what I did in the previous example: just throw everything into QBATCH and let them run. However, I needed something else for the 100 programs that could run independently.
I asked my staff to classify them as to their runtimes. This didn't have to be exact. I just started by identifying the ones that ran in less than five minutes, then the ones that took more than 10 hours to run. I made up arbitrary runtime classes between those two extremes and ended up with six classes of independent programs.
Before I ever got to month-end, I created six special month-end batch subsystems, called MONTHEND1 through MONTHEND6. Remember, subsystem descriptions have no impact on the system. It's only when you start them that they consume resources.
Each subsystem I created was dedicated to supporting a class of batch programs. MONTHEND1 handled the shortest-running programs, and MONTHEND6 handled the longest-running programs. Each subsystem could support only two active jobs (MAXJOBS(2) in the subsystem description) and had an activity level of 2.
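Creating these is the same kind of one-time setup as the dedicated subsystem earlier. A sketch for the first of the six follows (repeat with the other names); the 1,135 KB pool size anticipates the memory split worked out below and would normally be adjusted with CHGSBSD when month-end actually starts:

/* One-time creation of a month-end subsystem (sketch; repeat for 2 through 6) */
CRTSBSD SBSD(MONTHEND1) POOLS((1 1135 2)) MAXJOBS(2) TEXT('Month-end shortest-running jobs')
CRTJOBQ JOBQ(MONTHEND1) TEXT('Month-end job queue 1')
ADDJOBQE SBSD(MONTHEND1) JOBQ(MONTHEND1) MAXACT(2)
ADDRTGE SBSD(MONTHEND1) SEQNBR(10) CMPVAL(*ANY) PGM(QSYS/QCMD) POOLID(1)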
That made a total of 12 batch programs that could be present on the system at any one time. Rather than having lots of batch programs languishing for resources (and thrashing), I wanted them to have lots of resources so the jobs could complete (and get out of the system) as quickly as possible.
I embedded a program in the month-end CLP that ran as soon as the last of the 10 dependent programs completed. This program started the six month-end subsystems by issuing the Start Subsystem (STRSBS) command.
I changed the rest of the month-end CLP to submit all 100 batch programs (they had been coded to run single-thread, one after the other) into their respective subsystems. The command to do this is SBMJOB, and Figure 5 has a sample of it.
By the way, QBATCH was reduced to sleep mode when this happened: Its memory went down to 256 KB (the minimum), and the activity level went down to 1. Obviously, we weren't going to be doing a lot in QBATCH while the six other subsystems processed month-end. Taking the 7,070 KB for QBATCH and releasing all but 256 KB of it to *BASE left me with 6,814 KB to spread to the six new subsystems. In other words, I had 1,135 KB to give to each of the six subsystems.
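In CL, that hand-off might look like this sketch, assuming QBATCH's private pool is pool 2 of its description and each MONTHEND subsystem's private pool is pool 1, as in the creation sketch above:

/* Park QBATCH and hand its memory to the month-end subsystems (sketch) */
CHGSBSD SBSD(QBATCH) POOLS((2 256 1))
CHGSBSD SBSD(MONTHEND1) POOLS((1 1135 2))
/* ...repeat for MONTHEND2 through MONTHEND6 */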
The first thing I noticed was that the AS/400 CPU light was pegged to a steady red. That was OK, because the second thing I noticed was that all the programs in subsystem MONTHEND1 (the shortest-running ones) finished in an hour and a half. I ended subsystem MONTHEND1 (which automatically returned its memory to *BASE), then increased the memory in MONTHEND2 through MONTHEND6 by 227 KB (1,135 KB divided by 5).
MONTHEND2 finished next. I ended it and increased MONTHEND3 through MONTHEND6 by 284 KB (1,135 KB divided by 4), and so on until MONTHEND6 (the longest-running jobs) was the only subsystem running on the system. Only two jobs were running in it, and they had all the system's batch resources. When MONTHEND6 finished, I ended it and returned QBATCH to its original 7,070 KB and activity level of 4.
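The step taken at each stage can be scripted the same way. Here is a sketch of the first hand-off, using the arithmetic above (1,135 + 227 = 1,362 KB per remaining pool):

/* When MONTHEND1 finishes: end it and spread its memory (sketch) */
ENDSBS SBS(MONTHEND1) OPTION(*IMMED) /* Its memory returns to *BASE */
CHGSBSD SBSD(MONTHEND2) POOLS((1 1362 2))
CHGSBSD SBSD(MONTHEND3) POOLS((1 1362 2))
CHGSBSD SBSD(MONTHEND4) POOLS((1 1362 2))
CHGSBSD SBSD(MONTHEND5) POOLS((1 1362 2))
CHGSBSD SBSD(MONTHEND6) POOLS((1 1362 2))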
The result? Month-end finished by Sunday morning at 8:00 a.m. I shuffled the operators' schedules around so no one had to work graveyard (they were happy with that) and had them monitor and manipulate the six special subsystems as they finished.
Advanced
Every once in a while, you get a real challenge to your career. Here's one I had and how I solved it using a version of the intermediate solution I just talked about.
A holding company did all the processing for its chain of 20 facilities. The holding company was located on the West Coast, and the remote sites were spread out all over the United States, so the holding company had some scheduling leverage from the time-zone differences.
However, the problem I was called in on was that the nightly processing took around 23 hours! There were no happy campers on this job.
The methods I just talked about wouldn't work, because most of this customer's processing had to be single-threaded. One job had to finish before another one started. It looked like there was no way to beat this one, but here's what I did.
I created the same MONTHEND1 through MONTHEND6 subsystems I did in the previous example, but I also kept QBATCH alive and strong as a controlling subsystem. Instead of single-threading six programs, I submitted them all to the MONTHEND* subsystems. But remember, the main CL program was still running in QBATCH and could not be allowed to continue until all six programs finished.
I fixed that by modifying the main CL program to use a data area that holds a numeric value. Each of the six programs had been called directly, but I put them into their own CL programs so they, too, could handle the data area. Before I submitted the six, I had the main CL program update the data area with the number 6. As each program finished, its CL would access the same data area and subtract 1 from it. The main program (still in QBATCH) would retrieve the data area with the Retrieve Data Area (RTVDTAARA) command and, if it wasn't equal to 0, go into a three-minute sleep courtesy of the Delay Job (DLYJOB) command. When the data area was equal to 0, the program would allow processing to continue. Figures 6 and 7 contain samples of the main CL program and one of the submitted CL programs.
By submitting the jobs into their own subsystems and allocating resources to them, I let all the jobs finish very quickly. By using the data area, I retained control of the processing. It's the best of both worlds.
The result in this case is that processing completes by 8:00 a.m. every morning. That's not as good as 6:00 a.m.; early morning interactive users are still impacted. But it is a lot better than what was going on before.
No Limits
The lesson here is that you can automate just about anything you can do manually on an AS/400. First, learn what tuning is all about and try those skills out. Once you learn the intricacies of system tuning, code the same commands into a CL program and let it do your work for you.
Complex transactions can also be coded. First, figure out what to do manually, then do it in a program.
PGM
/* The return variable must match the data area's type and length */
DCL VAR(&NUMPGM) TYPE(*DEC) LEN(5 0)
/* Start the six month-end subsystems (MONTHEND1 through MONTHEND5 omitted here) */
STRSBS SBSD(MONTHEND6)
/* Set initial value of data area to 6 */
CHGDTAARA DTAARA(MTHENDBIG6) VALUE(6)
/* Submit one program to each month-end subsystem's job queue */
SBMJOB CMD(CALL PGM(MEPRG1)) JOBQ(MONTHEND1)
SBMJOB CMD(CALL PGM(MEPRG2)) JOBQ(MONTHEND2)
SBMJOB CMD(CALL PGM(MEPRG3)) JOBQ(MONTHEND3)
SBMJOB CMD(CALL PGM(MEPRG4)) JOBQ(MONTHEND4)
SBMJOB CMD(CALL PGM(MEPRG5)) JOBQ(MONTHEND5)
SBMJOB CMD(CALL PGM(MEPRG6)) JOBQ(MONTHEND6)
/* Check the value in data area MTHENDBIG6; loop until all six have finished */
CHK: RTVDTAARA DTAARA(MTHENDBIG6) RTNVAR(&NUMPGM)
IF COND(&NUMPGM *NE 0) THEN(DO)
DLYJOB DLY(180)
GOTO CMDLBL(CHK)
ENDDO
/* */
/* Rest of the M.E. programs */
/* */
END:
ENDPGM
Figure 6: Main CL program to control submitted jobs
Figure 7: Sample of one submitted job that updates a data area as it completes
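The body of Figure 7 is not reproduced in this excerpt. Here is a sketch of what one of those submitted CL wrappers might look like; the called program name MEWORK1 is hypothetical, and the exclusive allocation around the retrieve-and-update is an added safeguard so that two wrappers finishing at the same moment cannot lose a count:

/* Sketch of one submitted wrapper: run its program, then count down */
PGM
DCL VAR(&NUMPGM) TYPE(*DEC) LEN(5 0) /* Must match the data area */
/* Run this subsystem's month-end work (hypothetical program name) */
CALL PGM(MEWORK1)
/* Lock the data area so concurrent updates are not lost */
ALCOBJ OBJ((MTHENDBIG6 *DTAARA *EXCL)) WAIT(60)
RTVDTAARA DTAARA(MTHENDBIG6) RTNVAR(&NUMPGM)
CHGVAR VAR(&NUMPGM) VALUE(&NUMPGM - 1)
CHGDTAARA DTAARA(MTHENDBIG6) VALUE(&NUMPGM)
DLCOBJ OBJ((MTHENDBIG6 *DTAARA *EXCL))
ENDPGM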