AS/400 Basics: Program-to-Program Calls


Program-to-Program Calls

Programmers new to the AS/400, especially those from the System/36 world, are sometimes confounded by all of the features of AS/400 programming. In addition to Control Language and new database concepts, the capability of one program calling another is a programming construct that must be understood. Using the manuals, it is quite easy to understand the mechanics, but some of the concepts can only be learned through experience. This month, we will review program-to-program calls and some opinions that I have formed about this feature of AS/400 programming. The discussion is in terms of RPG, but the concepts are universal for any AS/400 language.

Simply Put

The simplest way to think of one program calling another is to think of RPG subroutines. Usually, a subroutine is written to perform a specific function within a program. Carefully coded subroutines can sometimes be used in multiple programs so that you don't have to keep rewriting the same function. Although some programmers insist otherwise, it has been fairly well accepted in the last two decades that programming with subroutines can make a program easier to understand, modify and extend.

The idea of program-to-program calls builds on this concept in a rather exciting way. Instead of your program executing a subroutine, the program calls another program. Just as a subroutine can modify variables, so can the called program.
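As a sketch, the difference in RPG looks like this (program and field names are hypothetical, and fixed-format columns are approximate):

```
      * Invoking a subroutine within the same program:
     C                     EXSR CUSSCH
      *
      * Invoking a separate program instead, passing a parameter.
      * CUSSCH is a hypothetical search program; CUSNO is a
      * 7-digit numeric field it can both read and change.
     C                     CALL 'CUSSCH'
     C                     PARM           CUSNO   70
```

When the called program changes CUSNO and returns, the caller sees the new value, just as it would after a subroutine.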

Value for Effort

The value of coding a program-to-program call rather than a subroutine is that the called program has a life of its own, apart from the caller. A subroutine is part of the program that contains it, and is subject to the limitations of that program. For example, in a very large RPG program, you might exceed the number of files, tables, or arrays allowed. In order to keep the functionality that you want, you need some way to split out parts of the program. The program-to-program call technique is ideal for this situation.

Another advantage of calling programs rather than subroutines is that the functionality provided by the called program can change without necessitating a recompile. More to the point, called programs can be thought of as the mythical "black boxes" that we hear about. The idea of a black box is that we provide certain specific inputs and can expect certain specific outputs. Exactly how the outputs are produced is of no concern. In fact, we may never learn exactly how a black box does its job. Examples of this on the AS/400 are as near as your terminal. Type in any command. Here we have a well-defined interface, with specific inputs, and based on those inputs, we can expect certain results. How those results are produced doesn't really matter.

You might think that it makes no difference whether a function is included within a program or called. I have generally found it much easier to program and subsequently modify programs if the "grand events" are written as separate programs. For example, in an order processing application, it would be sensible to code the customer, inventory and order searches as separate programs. The actual order entry program itself can then call those other programs as needed.

What is the advantage of that approach? Primarily, it causes you to think of each program in terms of the problem that it solves. The customer search function should simply provide a search, probably returning the customer number to the caller. It really makes no sense to include the inventory search in the same program as the customer search, so that should be made into another program. The value of this approach is that the single-purpose programs will be easier to maintain than the composite program. Also, when you need to change either search, you don't affect the other. This is especially advantageous in RPG, due to the lack of "local variables" in the language. My experience has been that adding new functions to RPG programs usually results in bugs being introduced to previously working functions. I am much happier when I can add a new function through a call to another program, since I feel more confident that the side effects are minimized.

How Much?

One question that you will surely have is, "How far do I carry this?" To what level should you go, splitting out functions into callable programs? For example, should you create a separate program to edit or calculate dates, another to perform file I/O, and another to control everything by issuing multiple calls?

This is perhaps the most difficult part of program-to-program calls. The decision as to how granular your programs will be affects the design of the programs, the ease of coding them, and the ability to test and correct them. Also, your future interaction with the code in terms of modification, correction, and extension is influenced by your choices. Finally, there may be an impact on system performance.

Working backward through that list of worries, we can dispense with the performance concern rather quickly. For the most part, I don't worry too much about it. I would much rather trade ease of programming and modification for a performance hit, which may or may not be there. I mention this because in the early days of the System/38 we were subjected to much contradictory advice from IBM about what was the favored technique. At one presentation, we were told to split everything into modules as small as would be practical. A few months later, word was that keeping things together in bulky programs was more "efficient." The upshot was that we decided nobody really knew what worked best, so we decided to do what was best for us.

What has seemed to work best for me over the years is splitting to the level of a main screen. For example, for a customer file maintenance program, I would prefer putting the search into a separate program from the name and address entry screen. My reasoning is that these are two distinct functions, and I can foresee instances where I want to use the search, but not the detail display. Also, when I must change the detail display, the effect is confined to that display, and should not have any effect on the operation of the search. This is just one example; another would be a purchasing or order entry application, in which you would split the header entry from the detail entry.

On the S/36, those programs would have been written as one monster program. You have probably worked on that program: remember the one that used all ninety-nine indicators, the one everybody was afraid to touch? Well, I'm here to tell you that you do not have to do such things ever again, and the sooner you get away from it, the better off you will be.

Now, obviously if you go splitting out things that once were together, you will find that you must repeat quite a bit of code in each program. For example, an order header might require date editing, and the detail entry might also use date editing. So you will need to include the date edit subroutine in each program. Should you worry about this?

I don't think so. If you examine the routines that are going to become common to multiple programs, you can direct some attention to standardizing and optimizing them. You can then package those routines into /COPY members (or the equivalent in whatever language you use), and then not worry too much about those functions in the future.
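For example, a standardized date-edit subroutine kept in its own source member can be pulled into any program at compile time with the /COPY directive. The library, source file, and member names here are hypothetical:

```
      * Pull in the common date-edit subroutine at compile time
     C/COPY MYLIB/QRPGSRC,DATEDT
```

If the routine ever needs to change, you change the one member and recompile the programs that use it.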

Isn't that bad, though, having the same code in lots of places? After all, if you have five main programs in an application, and four of the five need date edits, then you have four times as much code for that function, compared to putting all five main functions into one program. Well, so what? Some of the most common routines you will need, such as date editing, are really quite trivial in the scheme of things. Squeezing a few bytes with these will probably never yield any detectable performance benefit.

Don't get the wrong idea; I am not advocating profligacy with storage. It is just that some things are best done in the simplest possible way, which can be summarized as "code it and forget it." So where should you be looking if you are splitting out functions but trying to have some awareness of how that will affect the system?

There are a couple of factors at work here. Those are the manner in which you leave a called program, and the manner in which you reference files among programs.

Easy Come, Easy Go

If you have had the chance to observe a "native" AS/400 application, one that has programs calling programs, you have probably noticed that the first time you call a program, the delay is quite noticeable. Subsequent calls to that program usually go much more quickly.

The reason for this phenomenon is that there is a significant amount of work the system must do on the first call to a program. In addition to locating and loading the program, the system must also determine the files being used and create open data paths (ODPs) for those files, initialize work areas, verify authorizations, and on and on.

There are basically two controlled ways that you can return from a called program: either by terminating the called program, or by simply returning without terminating. In RPG terms, you can either return with LR set on, or you can return and not be at LR. If you terminate and return, the system must go through the exercise again on a subsequent call. You will notice the delay again, even if you use the program over and over.

So returning without terminating seems to be the obvious way to go. However, you must be aware that on subsequent calls, things in the program are exactly as they were when you left them. That is, no initializations are done unless you specifically code them.
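In RPG terms, the two exits look something like this (a sketch; columns are approximate):

```
      * Return and terminate: the next call pays the full
      * startup cost of locating, loading, and opening again
     C                     SETON                     LR
     C                     RETRN
      *
      * Return without terminating: the program stays active,
      * so variables and open files are exactly as you left
      * them on the next call
     C                     RETRN
```

With the second form, any re-initialization on re-entry is your responsibility; nothing is cleared for you.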

The difference between these two techniques is similar to the difference between staying at the best hotel in the city or staying at the budget motel. In the fine hotel, you expect to be greeted and fussed over by the doorman and elevator operator and other staff; at day's end when you return to go to your room, the process is orderly and controlled, and you have the assurance that your room has been made up. When you return to the budget motel, you pretty much let yourself in without interference, and may find your room in the same condition that you left it. So the tradeoff is performance (getting to your room without delay) for expectations (expecting your room to be perfect as opposed to the way you left it).

Generally, it is not a problem to simply return without termination. You can always include an initialization routine to "clear up" things when you come back into a program. Most of the time, you won't have any problems with this, but it is something that you should look at when you are getting difficult bugs.

Sharing

In a program-to-program application, one of the biggest performance boosters comes about by sharing open data paths. For example, our suite of five programs might use a total of twenty files, fifteen of which are used in multiple programs. Sharing a file is a technique by which you tell the system that you want it to work through the overhead of opening the file only once in your application. After the first open, the next program that uses that file can simply attach to the prior open.
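The request for a shared open is made in CL, with an override before the programs are called. This is a sketch; CUSMAS and ORDENT are hypothetical file and program names:

```
/* Ask for one shared open data path for the file */
OVRDBF     FILE(CUSMAS) SHARE(*YES)
OPNDBF     FILE(CUSMAS) OPTION(*ALL)
CALL       PGM(ORDENT)
CLOF       OPNID(CUSMAS)
DLTOVR     FILE(CUSMAS)
```

Every program called after the OPNDBF that uses CUSMAS attaches to the existing open data path instead of creating its own.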

Sharing files helps most noticeably when you have large applications with a large number of possible called programs. For example, an application might include thirty interactive programs, of which maybe a dozen are always called, and the remainder are called as needed. If the crucial dozen use the files as shared opens, the others will load more quickly when they are first called. If you return without termination, and then recall the program, you get the same effect as a shared open, since one of the consequences of returning without termination is that open files remain open. So sharing opens is most noticeable on the first call to a program, and does not really speed things on subsequent calls.

Does it Hurt to MRT?

Hot-shot S/36 programmers are probably yawning by now, having figured that a lot of this sounds like MRT programming. It's true; in a lot of ways it is. The difference is that the system now takes care of the details for you, so you can write your program without regard for the number of simultaneous users.

It turns out that the system knows how to split programs into the "common" and the "per user" parts. If your order entry program is being used by a dozen data entry people, you do not necessarily have an even dozen programs loaded. During compilation, the system determines the parts of your programs which must be unique for each user (their variables and work areas), and determines which parts of the code can be shared by all users. So without any extra effort on your part, your users are actually sharing parts of the same program. I have always found this to be more of a pleasing theoretical concept than something that I could readily observe, but I believe. So the long answer is, no, you do not and should not do anything special to your programs, such as try to replicate MRT logic. Write as if you were writing SRTs, and let the system worry about the rest.

To and Fro

Another idea that you will want to investigate is just how, exactly, you effect this program-to-program communication. Most S/36 programmers would use the LDA to pass data from one program to another. When the amount of data to be passed was too great, there was always the simple expedient of writing to a file, then letting the next program read that in.

Those techniques are still available with the AS/400, but the preferred method is to define a parameter list. A parameter list is simply a list of fields to which you want the calling program to have access. If you call a program and pass it a parameter, then change that parameter within the called program, the change is "known" to the caller when you return to it. This is in contrast to some PC languages, in which a copy of the variable is passed, but changes affect only the copy. The rule is easy to live with if you are aware of it.

The main rule about parameter lists is so ridiculously simple that I am astonished when other programmers (and I) don't get it right. That is, the number and types of parameters must be in agreement between caller and called. For example, if you have three parameters, "A," a character field length 10, "B," a character field length 1, and "C," a numeric 5.0 field, then you must reference those fields in that order with those definitions in the caller and the called. The names of the fields do not have to be the same, just the attribute (character or numeric) and the length.
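A sketch of matched lists (names and columns approximate): the caller passes three fields, and the called program receives them through its *ENTRY PLIST in the same order, with the same attributes and lengths, even though the names differ:

```
      * In the calling program:
     C                     CALL 'CUSUPD'
     C                     PARM           A      10
     C                     PARM           B       1
     C                     PARM           C       50
      *
      * In the called program (hypothetically CUSUPD):
     C           *ENTRY    PLIST
     C                     PARM           CUST   10
     C                     PARM           FLAG    1
     C                     PARM           AMT     50
```

A is received as CUST, B as FLAG, and C as AMT; position, type, and length are what matter, not the names.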

I suppose the reason this causes problems is because a change in one parameter list might not occur in another. I get around this by simply creating an external data structure that contains all of the parameters, then referring to the data structure name in the parameter list. If I make a change to the list, I simply recompile all of the programs using the list; I do not have to go into each program and manually change the list. Think about it.
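A sketch of that technique (file and field names hypothetical, columns approximate): the parameter layout lives in an externally described file, each program pulls it in as a data structure, and the PLIST passes the whole structure as one parameter:

```
      * Data structure described externally by the hypothetical
      * file PARMREC, which defines all of the parameter fields
     IPARMS   E DSPARMREC
      * One PARM carries the entire structure; change PARMREC,
      * recompile every program that uses it, and caller and
      * called stay in agreement automatically
     C           *ENTRY    PLIST
     C                     PARM           PARMS
```

The layout is defined in exactly one place, so the lists can never drift apart.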

Getting Started

So how do you get started with programs calling programs? If you have just brought over a mass of programs from the S/36 to the AS/400, you might be eager to get into "native" mode. I guess the most likely places to start looking are in the monster programs. See if you have searches coded into those programs; you may be able to split out a search and then call that rather than include it with other things. Or you might see if you have some unique calculations that are repeated in a number of programs. By splitting that routine out, you can make changes in only one place in the future, and breathe easier knowing that you have covered all of its usages.

One of the nice features of the S/36 environment is that you can use the RPG CALL opcode there, so you can start taking steps without having to go the full route all at once. If all of your programming has been with RPG II, it might be hard at first to know what to do. But take some time to study and implement a program-to-program call application, because once you do, the benefits will be obvious. And you will never want to be without it again.
