
AS/400 Basics: Job and Output Queues


JOB AND OUTPUT QUEUES

If you converted to the AS/400 from the System/36, you probably gave only a passing thought to job and output queues. After all, on the System/36 these things are pretty simple: put jobs on the job queue, get printouts from the output queue. Sure, you can hold, release, and cancel a submitted job or a print file, and all in all, it doesn't seem overly complicated. That is, until you install your AS/400.

The "problem," if we can call it that, is that the AS/400 allows you to have as many job and output queues as you want. Indeed, the shipped system includes several queues already set up. So instead of controlling the simple S/36 queues--after all, how involved can it be to control two queues?--you must be prepared to control any number of queues. Multiple queues also increase the number of "interested parties," so to speak. From the user who submits a job, to the operator controlling the work flow on the system, to the programmers deciding the processing, to the person responsible for system configuration, to the operator again for the output, and finally back to the user, there are simply more interactions to contend with because of the multiplicity of queues.

Follow It Through

To discuss job queues and output queues, we'll take a look at how a job moves through the system. This little overview will give you some idea of the complexities and possibilities of "work management." Needless to say, it's more involved than on the S/36.

A job queue is a type of AS/400 object. You can create a job queue with the CRTJOBQ (Create Job Queue) command. You can globally control the job queue with the HLDJOBQ (Hold Job Queue), RLSJOBQ (Release Job Queue) and CLRJOBQ (Clear Job Queue) commands. Jobs are placed on a job queue with the SBMJOB (Submit Job) command. You can specify which job queue is to hold the submitted job with this command. Once a job is on a job queue, you can hold it (HLDJOB), release it (RLSJOB) or cancel it (ENDJOB).
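To make that lifecycle concrete, here is a sketch of those commands in sequence. The library MYLIB, queue NIGHTJOBQ, and program POSTGL are made-up names for illustration:

```
/* Create a job queue */
CRTJOBQ JOBQ(MYLIB/NIGHTJOBQ) TEXT('Nightly batch work')

/* Submit a job, naming the queue explicitly */
SBMJOB  CMD(CALL PGM(MYLIB/POSTGL)) JOB(POSTGL) +
          JOBQ(MYLIB/NIGHTJOBQ)

/* Global control of the whole queue */
HLDJOBQ JOBQ(MYLIB/NIGHTJOBQ)   /* stop jobs from starting   */
RLSJOBQ JOBQ(MYLIB/NIGHTJOBQ)   /* let them start again      */
CLRJOBQ JOBQ(MYLIB/NIGHTJOBQ)   /* remove all waiting jobs   */
```

Individual jobs on the queue are then managed with HLDJOB, RLSJOB, and ENDJOB, qualified by job number, user, and job name.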

So now we know that we can control a job and a job queue. How does a submitted job actually "get done?"

Managing Work

To continue our job trace, we must examine the complicated, controversial and crucial subject of work management. Complicated, because there are dozens of factors involved (for example, see the chart on page 2-2 of the Work Management Guide, SC21-8078). Controversial, because this is one of those configuration subjects on which everybody has an opinion and a war story. Crucial, because misunderstanding and poor configuration of work management objects will have as great an impact on system performance as removing memory boards.

Fortunately, there really isn't a lot you have to do to start using your AS/400 right out of the box. IBM provides a complete, working configuration; all you have to do is add your devices. The part of the system that controls how things get done is called a "subsystem," which is an object type in its own right (called a subsystem description, *SBSD). Broadly speaking, a subsystem describes the amount of main storage that it wants and whether or not that storage can be shared with other subsystems. A subsystem description also tells the system the priority level that jobs running within the subsystem will use. And apropos of our topic, a subsystem description names the job queues that can supply it with work to do.

Usually, the first step after creating a job queue is to assign it to a subsystem with the ADDJOBQE (Add Job Queue Entry) command. You can add as many job queue entries as you want to a particular subsystem description. If you wanted to, you could have multiple job queues supplying jobs to a subsystem simultaneously (kind of like running a lot of S/36 EVOKEs). Worse yet, you can specify that multiple jobs are allowed to start from each job queue. So picture the scenario: a batch subsystem, with three job queue entries, each allowing two jobs at a time to be active from the job queue. This means that you could have six jobs active at once in a subsystem. You can also have as many subsystems as you want active at once, all of them with their own job queue entry schemes. It is possible to, quite innocently, bring the machine to an absolute standstill.
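A job queue entry of the kind described above might look like this. The queue name is hypothetical; QGPL/QBATCH is the IBM-supplied batch subsystem description, and MAXACT is the parameter that caps how many jobs can be active from this queue at once:

```
/* Attach the queue to the batch subsystem, allowing at most  */
/* two jobs from it to run concurrently                       */
ADDJOBQE SBSD(QGPL/QBATCH) JOBQ(MYLIB/NIGHTJOBQ) +
           MAXACT(2) SEQNBR(20)
```

The SEQNBR value controls the order in which the subsystem looks at its job queues when it has capacity for more work, so a lower-numbered entry is drained first.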

There are many theories about how this sort of thing should be handled. The elegant, mathematically based theory is called "queuing theory," and pokes into interesting, practical problems. For example, if you are in a grocery store, is the checkout service quicker if there are four clerks supplied by only one line, or if there is a separate line for each clerk? The other theory about how to handle multiple queues is the "seat of the pants" theory. I know, from personal observation, that the SOTP theory is closely aligned with the principle of physics which states that an observer, by the act of observing, alters the behavior of the act being observed. I have proved this many times: at one o'clock in the morning, wanting a machine to work through the thirty-seven jobs remaining in one job queue, I have started new subsystems, moved some jobs to other job queues, and started as many batch jobs running at a time as I thought I could get away with. Everything ran faster, and took less time than the stodgy, one-job-at-a-time method. Or so I thought.

But you should not take the subject of work management and concurrent processing lightly. You will become particularly sensitized to the subtleties of work management if you experiment during the daytime, when you have many interactive users. They will complain about response time. My "gut feeling," to put it simply, is that until the AS/400 becomes a multiprocessor computer, you will probably find that you get the best overall response time, for all jobs, by limiting the number of batch jobs that execute concurrently.

Implications

Well then, why has IBM given us all of these opportunities to create multiple job queues (sources of work) and subsystems (work processors), if all we can do is get into trouble?

I feel that there are many options available, because IBM wants us to be able to have complete control over how and when work gets done. Rather than constrain us to the simple S/36 one-job-queue--one-output-queue scheme (with no real control of how the system processes work, beyond a simple priority scheme and caching), IBM allows us to set up pretty much any configuration we believe gives us the most control of our system. Granted, many shops will never need to venture beyond the default configuration. The system performs quite well by submitting all batch jobs to the default QBATCH subsystem, letting them run one at a time. But there are situations where it is useful to submit jobs to another job queue, accumulating work to be done, rather than scheduling that work by operator or programmer control. For example, during month or quarter end time, it might be useful to submit all end-time processing and reports to a separate job queue, accumulating them for later processing. That way, you can continue to run your day-to-day work through the standard job queue; end-time jobs can be placed into another job queue. The other job queue can be put on hold, or associated with another active or inactive subsystem.

The "small print" at this point is, how do you put a job onto one job queue or another? As you might imagine, you can move a job from one job queue to another, with the CHGJOB (Change Job) command. I must mention that this is one of the few instances where I prefer using the IBM-supplied control display to hard-keying the command. That is, the WRKJOBQ (Work with Job Queue) command displays a list of jobs in a job queue, with number options to change, cancel, hold and release entries (jobs) in the job queue.
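For the record, here is what the hard-keyed version of that move looks like. The job qualifier (number/user/name) and queue names are examples:

```
/* Move a job that is still waiting on a queue to another queue */
CHGJOB JOB(123456/QPGMR/POSTGL) JOBQ(MYLIB/MONTHENDQ)

/* Or manage the entries interactively */
WRKJOBQ JOBQ(MYLIB/NIGHTJOBQ)
```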

Obviously, you won't use the CHGJOB or WRKJOBQ commands as a general practice to move submitted jobs from one queue to the one you really wanted them to go to. No, the solution is to tell the system which job queue you want when you submit the job. Using the SBMJOB (Submit Job) command, there are two ways you can do this.

The first and simplest method of directing a job to a specific job queue is to simply specify the job queue in the JOBQ parameter of the SBMJOB command. You can either hard-code this parameter, or if the SBMJOB is in a CL program, use CL variables to assign the JOBQ values. This method is OK, but is specific to each usage of the SBMJOB command.
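A sketch of both variations of the first method, inside a CL program. All of the object names here are hypothetical:

```
PGM
DCL        VAR(&JOBQ) TYPE(*CHAR) LEN(10)

/* Hard-coded job queue */
SBMJOB     CMD(CALL PGM(MYLIB/RPTPGM)) JOB(RPTPGM) +
             JOBQ(MYLIB/NIGHTJOBQ)

/* Job queue assigned through a CL variable */
CHGVAR     VAR(&JOBQ) VALUE('MONTHENDQ')
SBMJOB     CMD(CALL PGM(MYLIB/RPTPGM)) JOB(RPTPGM) +
             JOBQ(MYLIB/&JOBQ)
ENDPGM
```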

The second, more general (and more involved) method is to use the JOBD (job description) parameter with SBMJOB. A "job description" is another type of object. With a job description, you can specify the job queue to use, the priority of the submitted job or the spooled output, the initial library list to use with the job, the logging level for the job log (which is very useful when you need to debug a batch job), the output queue to use, whether or not the job should be held on the job queue, and the switch settings.

Job descriptions are the preferred method for specifying parameters for a submitted job, especially if you expect to run the submitted job on a regular basis. The reason for preferring job descriptions is that most of the options that you are interested in are in one place. For example, if you need to run batch accounting reports, you can create a job description called ACCOUNTING and put everything together in one place: the job queue, the library list, and the output queue. Later, if you want to alter any of those selections, you simply change the job description. The next submission of a job that specifies the altered job description will use the newly altered specifications of the job description. This method is simpler than hard coding the parameters on the SBMJOB command, as you would have to research and change all of the SBMJOBs if you make a change.
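Continuing the accounting example, a job description gathering those options in one place might be created like this (library and queue names are invented for the sketch):

```
/* One object holds the queue, library list, output queue, */
/* and logging level for all accounting batch work         */
CRTJOBD JOBD(MYLIB/ACCOUNTING) JOBQ(MYLIB/ACCTJOBQ) +
          OUTQ(MYLIB/ACCTOUTQ) INLLIBL(ACCTLIB QGPL QTEMP) +
          LOG(4 00 *SECLVL) TEXT('Batch accounting jobs')

/* Every submission then needs only the description name */
SBMJOB  CMD(CALL PGM(ACCTLIB/GLRPT)) JOBD(MYLIB/ACCOUNTING)
```

Change the job description once, and every subsequent submission that names it picks up the new settings.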

In the Middle

So we now have a job, submitted to a job queue connected to an active subsystem. The job is running. At this point, you can hold the job, cancel it, transfer it to another subsystem, or just wait for it to finish. You can use the WRKJOB (Work with Job) command to observe it while it runs; you may have recourse to this command if you suspect the batch job is looping. One of the more useful options I have found on the WRKJOB display is the option to display the list of open files. That list shows the number of I/O operations against a file; if you are processing a large master file and have an idea of the number of records in it, you can monitor the progress of the job by seeing how many records it has processed. Kind of a "poor man's" way of estimating how long a job will take.
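The open-file option mentioned above can also be reached directly. Assuming a job qualified as in the earlier examples, something like the following brings up the open-file list with its I/O counts:

```
/* Show the files the running job has open, with I/O counts */
WRKJOB JOB(123456/QPGMR/POSTGL) OPTION(*OPNF)
```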

But for now, we'll leave the jobs chugging away, and look instead at what happens at the other end: your spooled output.

Many to Many

As with job queues, you can have many output queues. An output queue is a type of object; you make new output queues with the CRTOUTQ (Create Output Queue) command. If you don't create any output queues, the system puts your spool files into the supplied queues. As with job queues, you can hold, release or clear the entire queue, and hold, release or delete individual entries within the queue. You can also move an entry from one queue to another. I use the IBM-supplied WRKOUTQ (Work with Output Queue) display to display entries in an output queue, then type in numbered options to control or dispose of entries in the queue.
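The output queue commands parallel the job queue commands almost exactly. A sketch, with invented names:

```
/* Create and work with an output queue */
CRTOUTQ OUTQ(MYLIB/REPORTS) TEXT('Accumulated batch reports')
WRKOUTQ OUTQ(MYLIB/REPORTS)

/* Queue-level control */
HLDOUTQ OUTQ(MYLIB/REPORTS)
RLSOUTQ OUTQ(MYLIB/REPORTS)
CLROUTQ OUTQ(MYLIB/REPORTS)
```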

The "processor" of an output queue, so to speak, is called a "writer." When dealing with printed output, a writer, for all practical purposes, is the physical printer itself. You use the STRPRTWTR (Start Print Writer) command to tell a printer to start printing, using the spool files from one output queue. You can only have one output queue "feeding" a writer at one time. If you have another output queue accumulating reports, you can either move those entries to the "active" output queue, or end the writer with the ENDWTR command and use STRPRTWTR again to start it printing from the other output queue. You can have as many print writers active as you want (and your budget can afford), each servicing an output queue.
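Switching a writer from one queue to another, as described above, looks roughly like this. PRT01 is a hypothetical printer device name:

```
/* Start printer PRT01 servicing one output queue */
STRPRTWTR DEV(PRT01) OUTQ(MYLIB/REPORTS)

/* End the writer, then restart it against another queue */
ENDWTR    WTR(PRT01) OPTION(*CNTRLD)
STRPRTWTR DEV(PRT01) OUTQ(MYLIB/CHECKS)
```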

All of this is probably more similar to the S/36 usage than it appears at first glance. I personally like the idea of being able to control separate output queues, rather than mixing it all up in one as on the S/36.

Direction, Direction

All right, how do you get a report into a particular output queue? We're talking about within a program, not by having the operator manually move it.

There are many ways to do this. I will probably miss a few, but will tell you some of the ways you will encounter. The first method is to specify the output queue on the SBMJOB command. You can do this either with the OUTQ parameter on SBMJOB, or the OUTQ parameter on a job description used with SBMJOB. In the absence of any further specification, the OUTQ that you select is used for all spool files produced in the job.
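The submission-time version of this is a single parameter. With hypothetical names:

```
/* All spool files produced by this job land on MYLIB/REPORTS, */
/* absent any later override within the job                    */
SBMJOB CMD(CALL PGM(MYLIB/GLRPT)) JOB(GLRPT) +
         JOBQ(MYLIB/NIGHTJOBQ) OUTQ(MYLIB/REPORTS)
```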

Within the job, you can alter the output queue while the job is running. You can use the CHGJOB (Change Job) command to effect a global change. That command lets you name an output queue, which would then be in effect for all subsequent spool files in the job. Another technique is to use the OVRPRTF (Override Printer File) command. To use this command, you supply the name of the printer file in the program and specify the output queue to use. Using this technique, you can temporarily change the output queue, by using OVRPRTF, then using CALL, then DLTOVR to delete the override for the printer file. Be aware that the override remains in effect until it is deleted, overridden again, or until the job ends.
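The override-then-call-then-delete sequence is worth seeing in one piece. QSYSPRT stands in for whatever printer file the called program uses; the program name is invented:

```
/* Temporarily redirect this printer file's output */
OVRPRTF FILE(QSYSPRT) OUTQ(MYLIB/CHECKS)
CALL    PGM(MYLIB/PRTCHECKS)
DLTOVR  FILE(QSYSPRT)
```

Without the DLTOVR, the override would stay in effect for any later use of QSYSPRT in the job, which is exactly the surprise the text warns about.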

Another technique to direct spooled file output is "prejob." That is, you can specify an output queue to use when you create a printer file, with the CRTPRTF (Create Printer File) or CHGPRTF (Change Printer File) command. This would probably be useful if you wanted to direct non-standard forms types to a specific printer (via a specific output queue); for example, direct the output of printer file CHECKS to output queue CHECKS. Even if you specify an output queue on the CRTPRTF or CHGPRTF commands, you can still use OVRPRTF to direct the output to another output queue.
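The "prejob" version from the CHECKS example is one command against the printer file object itself:

```
/* Every open of printer file CHECKS now defaults to  */
/* output queue CHECKS, before any job ever runs      */
CHGPRTF FILE(MYLIB/CHECKS) OUTQ(MYLIB/CHECKS)
```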

Performance?

Having many printers going at once does not have nearly as much impact on performance as having many batch jobs working at once. As shipped, the system supplies the QSPL subsystem description; the priority given to jobs (writers) running within this subsystem is less than the priority for interactive jobs. Also, printers don't take as much processing to keep busy as batch or interactive jobs. The AS/400 seems to send a pretty good-sized buffer to a printer; you can observe this by issuing an ENDWTR (End Writer) command with the *IMMED option to an active printer. Many people have sorely complained about the run-on that happens with an immediate cancel, pointedly reminding IBM that on the S/36, an immediate cancel was, well, immediate. My understanding is that this has been corrected in release 3.0.

What Should You Do?

If you are just getting your AS/400, or will be getting one in the future, you should think about how you want the system configured ("soft" configured, that is). If you are really new to the AS/400, just coming from the S/36 for example, you should probably keep things just as they are until you get really familiar with what's going on.

I mention this because of personal experience. When the System/38 was new, we were coming from the System/34. The configuration options for the S/38 and AS/400 are virtually identical. We configured subsystems, and job descriptions, and job queues and output queues. We created classes and execution priorities, and read late into the night about routing steps. We were intrigued by transferring jobs from one subsystem to another. We really fooled around in two megs of memory.

The system still ran pretty slow, though. Even though we attended IBM seminars on performance tuning, and carefully programmed the "native" way, it just didn't go much faster.

One day, we got tired of the whole thing, and put everything back to the way it was when the system was shipped. That is, back to the default IBM configuration. My recollection is that it ran as well in the default (simple) configuration, as in our complicated configuration. The main benefit is that we made ourselves stop fiddling with things that, although interesting, didn't help us. We had more time for programming once we let the system manage itself.

And I still feel pretty much that way now. A lot of the design of the AS/400, and the S/38 before it, couldn't really be put to practical use until the advent of massive amounts of main storage, and perhaps soon, multiprocessor models. So if you are new, or even if you've had the machine for a while and find that you're spending more time playing than producing, let the machine run itself. You need to spend your time planning how work is introduced to the system, which is primarily an issue of how many and what kind of subsystems (batch or interactive) you configure. If you configure more than one batch subsystem, give serious thought to how you will let work enter the subsystems: will you allow more than one batch job at a time in the system? In each subsystem? From multiple job queues?

Above all, try different things. One of the advantages of the AS/400, which has the mainframe guys casting long and envious looks, is that you can very easily change the system, even while it is running other jobs. Just be sure that the changes and configuration of job and output queues that you create make sense for your purposes.
