
Are You Keeping Up With CL?


Let's review the last few releases and see if you know what CL can offer.

 

While Control Language (CL) has been around for over 30 years, my impression is that the last six years have seen more enhancements to CL than all of the preceding years combined. In fact, I'm surprised CL wasn't renamed to CL II at some point in the last six years! This article reviews many of the program-development-oriented CL enhancements that have become available to you in recent releases. While many new and enhanced CL commands in every release are related to running or configuring the system, this article focuses on the commands that target application development—that is, the commands that are used only within a CL program.

V5R3

CL's evolution, to my way of thinking, really got its start with V5R3. Prior to V5R3, when you needed to run a command (or set of commands) repeatedly, you were pretty much forced to use that bane of structured programming: the GOTO command. V5R3 enabled you to eliminate many situations where you previously would have used a GOTO (note that I say "many" rather than "all," as the GOTO command continues to be necessary for program-level MONMSG commands that do not simply ignore the message being monitored). V5R3 introduced the structured programming constructs of repetition (DOFOR, DOWHILE, and DOUNTIL) along with enhanced selection (SELECT, WHEN, and OTHERWISE to complement the previously supported IF and ELSE commands).

 

The DOFOR command, documented here, allows you to run a group of commands zero or more times based on a loop control variable. For instance, the following command sequence will, if the values of variables &X and &Y are 1 and 5, respectively, run the set of commands found between the DOFOR and ENDDO commands five times—that is, assuming that the commands being run within the DOFOR loop do not either 1) modify the values of &LOOP_CTL or &Y or 2) run commands such as LEAVE, which was also new in V5R3 and will be discussed shortly.

 

DOFOR  VAR(&LOOP_CTL) FROM(&X) TO(&Y) BY(1)
       (some commands to run)
ENDDO

 

The rules of the DOFOR command are straightforward. The variable used for the VAR parameter must be declared as a signed or unsigned integer (a new data type that was also introduced with V5R3); the FROM and TO parameters can be specified as literal values, signed or unsigned integer variables, or expressions evaluating to an integer value; and the BY parameter can be a literal value or a signed or unsigned integer variable. The BY parameter is also optional with a default of 1. So, in the earlier example, I could have omitted the BY keyword.
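For instance, the following sketch takes advantage of the BY default and uses an expression for the TO value (the variable names here are placeholders of my own choosing):

DCL    VAR(&I)   TYPE(*INT) LEN(4)
DCL    VAR(&MAX) TYPE(*INT) LEN(4) VALUE(10)

DOFOR  VAR(&I) FROM(1) TO(&MAX - 1)   /* BY(1) is assumed when omitted */
       (some commands to run)
ENDDO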

 

The DOWHILE command, documented here, allows you to run a group of commands zero or more times based on whether or not a condition is true. For instance, the command sequence below will run the set of commands found between the DOWHILE and ENDDO commands zero or more times based on the value of variable &X being less than the value of variable &Y.

 

DOWHILE COND(&X *LT &Y)
        (some commands to run)
ENDDO

 

If the initial value of &X is greater than or equal to &Y, then the set of commands will not be run at all. If &X is less than &Y, then the set of commands will run until &X is no longer less than &Y. This condition (&X being greater than or equal to &Y), depending on the commands being run within the DOWHILE command set, may never happen, leaving the program in an endless loop (which may or may not be exactly what you want for a given situation). The condition being tested can be an expression as simple as COND(*NOT &IN03), to run the set of commands until indicator 3 (perhaps tied to F3 on a display file) is "on", or reasonably complex, as in COND((&A *EQ &B) *OR (&X *NE &Y) *OR ((&A *GE &Y) *AND (&B *LE &X))).
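As a quick sketch of the simple form, assuming a display file whose F3 key sets indicator 3 (the file and record format names here are placeholders, not from any particular application):

DCLF    FILE(MYLIB/MYDSPF)
DOWHILE COND(*NOT &IN03)        /* Loop until the user presses F3     */
        SNDRCVF RCDFMT(PROMPT)  /* Show the screen and wait for input */
ENDDO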

 

The DOUNTIL command, documented here, allows you to run a group of commands one or more times based on whether or not a condition is true. For instance, the following command sequence will run the set of commands found between the DOUNTIL and ENDDO commands one or more times, repeating until the value of variable &X is greater than or equal to the value of variable &Y.

 

DOUNTIL COND(&X *GE &Y)
        (some commands to run)
ENDDO

 

The key distinction between the DOUNTIL and the DOWHILE is that the condition to be evaluated, in the case of DOUNTIL, is tested after the set of commands has been run. That is, even if the initial value of &X is greater than or equal to &Y, the set of commands associated with the DOUNTIL will run at least once. Only after the command set has been run is the test for &X being greater than or equal to &Y performed. As with the DOWHILE command, the expression being tested can be very simple or quite complex, depending on your application needs.
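The at-least-once behavior makes DOUNTIL a natural fit for retry logic, where you always want to make at least one attempt. Here is a minimal sketch of my own invention (the object name and wait time are illustrative):

DCL     VAR(&DONE) TYPE(*LGL)
DOUNTIL COND(&DONE)
        CHGVAR VAR(&DONE) VALUE('1')
        ALCOBJ OBJ((MYLIB/CUSTOMER *FILE *EXCL)) WAIT(30)
        MONMSG MSGID(CPF1002) EXEC(CHGVAR VAR(&DONE) VALUE('0')) /* Lock failed; try again */
ENDDO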

 

Associated with all three of these DOxxx commands (DOFOR, DOWHILE, DOUNTIL) are the LEAVE and ITERATE commands. The LEAVE command, documented here, allows you to immediately end the processing of the command set associated with the DOFOR, DOWHILE, or DOUNTIL command. When the LEAVE command is run, your program will resume at the command following the associated ENDDO command. The ITERATE command, documented here, allows you to skip the remaining commands associated with the DOFOR, DOWHILE, or DOUNTIL command. When the ITERATE command is run, your program will resume at the associated ENDDO command and cause the condition specified with the DOFOR, DOWHILE, or DOUNTIL command to be re-evaluated.
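As a sketch of both commands in a typical read loop, assuming the file has already been declared with DCLF (the &SKIP test is a placeholder of mine):

DOWHILE COND('1')                          /* Loop "forever"; LEAVE provides the exit */
        RCVF
        MONMSG MSGID(CPF0864) EXEC(LEAVE)  /* End of file: resume after ENDDO         */
        IF COND(&SKIP) THEN(ITERATE)       /* Skip this record; read the next one     */
        (some commands to process the record)
ENDDO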

 

Both the LEAVE and the ITERATE commands have an optional parameter, Command label (CMDLBL). The default value for the CMDLBL parameter is *CURRENT, meaning that the current (or innermost) DOFOR, DOWHILE, or DOUNTIL command set is the command group to be left (LEAVE) or re-evaluated (ITERATE). Using the CMDLBL keyword, you can cause the LEAVE or ITERATE command to reference a DOxxx command set other than the current one. To a structured programming purist, this capability to "skip over" intervening DOxxx command sets might be seen as violating the principle of one exit point per programming construct. But I have to admit I find it mighty handy when handling situations such as the user pressing F12 (Return) on a pop-up window when I'm umpteen DO levels deep into the program, and I don't want to key a bunch of IF checks for indicator 12 being "on" within each of the intervening DOxxx command sets.
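Here is a minimal sketch of a labeled LEAVE, assuming a display file where F3 sets indicator 3 and F12 sets indicator 12 (the labels and record format names are mine):

MAIN:   DOWHILE COND(*NOT &IN03)
           SNDRCVF RCDFMT(MAINSCR)
WINDOW:    DOWHILE COND(*NOT &IN12)
              SNDRCVF RCDFMT(POPUP)
              IF COND(&IN12) THEN(LEAVE CMDLBL(MAIN))  /* F12 exits both loops at once */
              (some commands to process the window)
           ENDDO
        ENDDO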

 

Before leaving the various DO-related commands, you might be wondering if there are any limits on nesting the DO, DOFOR, DOWHILE, and DOUNTIL commands. There are: you can nest up to 25 levels of embedded DO groups, a number hopefully sufficient for most of your applications.

 

In addition to providing support for DOFOR, DOWHILE, and DOUNTIL, V5R3 also introduced the SELECT command. SELECT, documented here and my personal favorite of the V5R3 structured programming commands, allows you to define a series of mutually exclusive conditions to be tested for (and the command(s) to be run when a given test is evaluated as true) using the WHEN command documented here. Details concerning the SELECT command (and associated WHEN, OTHERWISE, and ENDSELECT commands) can be found in my article "Still Programming Like You Did with V1?" 
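To give a flavor of the construct, here is a minimal sketch (the option values and program names are placeholders of my own invention):

SELECT
WHEN       COND(&OPTION *EQ '1') THEN(CALL PGM(ADDREC))
WHEN       COND(&OPTION *EQ '2') THEN(CALL PGM(CHGREC))
WHEN       COND(&OPTION *EQ '4') THEN(DO)
              CALL PGM(CONFIRM)
              CALL PGM(DLTREC)
           ENDDO
OTHERWISE  CMD(SNDPGMMSG MSG('Option not recognized') MSGTYPE(*DIAG))
ENDSELECT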

 

As with the DOxxx commands, SELECTs can be nested up to 25 levels deep. And while not available with V5R3, the CRTBNDCL and CRTCLPGM commands were enhanced in 7.1 with support for the special value *DOSLTLVL (Do/Select level) when using the OPTION keyword. Using this special value, your compile listing can include columns providing you with the DOxxx and SELECT nesting levels.
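For instance, a compile along these lines (the program and source file names are placeholders) adds the nesting-level columns to the listing:

CRTBNDCL   PGM(MYLIB/MYPGM) SRCFILE(MYLIB/QCLSRC) OPTION(*DOSLTLVL)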

 

In addition to the DOxxx and SELECT capabilities introduced with V5R3, there were also enhancements made in the area of file support and data types. One historical restriction of CL, when working with database files—and a restriction that often heavily influenced application design—was related to the maximum of one file per program. With V5R3, CL was enhanced to support up to five files per program. This enhancement was made using the Open file identifier (OPNID) parameter of the Declare File (DCLF) command, which is documented here. Using the OPNID keyword, you can now work directly with up to five files, with the files being five instances of the same file, five different files, or any combination of the same and different files. More information on how the OPNID parameter is used can be found in the article "What's New with Files?"
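A minimal sketch of two declared files follows (file names and identifiers are mine). Note that with OPNID, the record format fields come into the program prefixed with the open identifier and an underscore, so a NAME field read through OPNID(CUST) arrives as &CUST_NAME:

DCLF       FILE(MYLIB/CUSTOMER) OPNID(CUST)
DCLF       FILE(MYLIB/ORDERS)   OPNID(ORD)

RCVF       OPNID(CUST)   /* Reads into the &CUST_xxx variables */
RCVF       OPNID(ORD)    /* Reads into the &ORD_xxx variables  */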

 

One consideration that continued to exist with database files and CL in V5R3 was the inability to read from a file once an end-of-file condition was met. This limitation, however, was addressed in V6R1 with the new Close database file (CLOSE) command, documented here and also discussed in the previously referenced "What's New with Files?" article. Using the CLOSE command (and the OPNID associated with the file using the DCLF command of the previous paragraph), you can now close a database file when end-of-file is reached and, on the next Receive file (RCVF) reference to the file, have the system automatically re-open the file and read the first record.
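As a sketch of a two-pass program using this support (names are of my own choosing):

DCL        VAR(&EOF) TYPE(*LGL)
DCLF       FILE(MYLIB/CUSTOMER) OPNID(CUST)

/* First pass */
DOUNTIL    COND(&EOF)
           RCVF OPNID(CUST)
           MONMSG MSGID(CPF0864) EXEC(DO)
              CHGVAR VAR(&EOF) VALUE('1')
              CLOSE  OPNID(CUST)   /* Close so the file can be processed again */
           ENDDO
ENDDO

/* Second pass: this RCVF re-opens the file and reads the first record */
CHGVAR     VAR(&EOF) VALUE('0')
RCVF       OPNID(CUST)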

 

From a data type point of view, CL was also enhanced to directly support integer, often referred to as binary, fields. The Declare CL variable (DCL) command, documented here, provided direct support for signed and unsigned 2- and 4-byte integer variables with zero decimal positions. Many, though not all, programs previously using the %binary built-in to work with integer values could now remove the use of the built-in. Two V5R4 enhancements (which will be reviewed shortly), when used in conjunction with the V5R3 integer support, essentially eliminate the need for the %binary built-in. In V7R1, CL integer support was further improved with the ability to declare 8-byte integers in ILE CL programs.
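For instance (the variable names are mine):

DCL        VAR(&COUNT) TYPE(*INT)  LEN(4)   /* Signed 4-byte integer   */
DCL        VAR(&FLAGS) TYPE(*UINT) LEN(2)   /* Unsigned 2-byte integer */

CHGVAR     VAR(&COUNT) VALUE(&COUNT + 1)    /* Plain arithmetic; no %binary needed */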

V5R4

Looking first at new programming constructs available to you, CL in V5R4 provides support for subroutines with the following commands:

 

  • Subroutine (SUBR), documented here, to define the start of a subroutine
  • End subroutine (ENDSUBR), documented here, to define the end of a subroutine
  • Call subroutine (CALLSUBR), documented here, to pass control to a subroutine
  • Return from subroutine (RTNSUBR), documented here, to return control to the caller of the subroutine (the ENDSUBR command implicitly also returns control to the caller of the subroutine)
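As a minimal sketch of the flow (the subroutine name and its contents are placeholders of mine):

PGM
           (some commands to run)
           CALLSUBR SUBR(CLEANUP)    /* Pass control to the subroutine */
           RETURN

SUBR       SUBR(CLEANUP)
           DLTOVR FILE(*ALL)
           MONMSG MSGID(CPF0000)     /* Ignore errors if no overrides exist */
ENDSUBR
ENDPGM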

 

For additional information, the article "Still Copying Code Within a CL Program?" reviews CL subroutine support and points out a few areas where CL subroutines provide flexibility not found with subroutines in languages such as RPG.

 

In addition to support for subroutines, CL also introduced new types of storage for variables. One type of storage, which nearly everyone will enjoy using, is *DEFINED storage. Defined storage allows you to declare variables where the variables are defined as being part of (or a subset of) another variable. In other languages, this capability exists in the forms of RPG data structures, COBOL level numbers, and C structs. Using this support, you can—if, for instance, your CL program needs to work with a parameter that contains multiple pieces of information—do away with built-ins such as %sst and simply declare the individual fields you need to work with as being defined subsets of the larger parameter variable. An example of this, with a CL program being called by a system exit with a parameter describing a spooled report and the CL program then calling a system API to move the spooled report, can be found in "Add Your Own Options to the IBM WRKOUTQ Command." Along similar lines, if your CL program needs to work with various pieces of information in a data area such as the LDA, you can now map those subfields directly as shown in the article "Reduce Those Annoying Substring Operations."

 

To implement defined storage, two new parameters were added to the Declare CL variable (DCL) command, documented here: Storage (STG) and Defined on variable (DEFVAR). For the STG parameter, you use the special value *DEFINED; the DEFVAR keyword then identifies the variable you are redefining. The following DCL commands, for instance, define a 20-byte variable &QualName that stores a qualified object name and then two 10-byte defined variables representing the actual object name and library name.

 

Dcl   &QualName  Type(*Char)  Len(20)
  Dcl   &ObjName   Type(*Char)  Len(10)  Stg(*Defined)  DefVar(&QualName 1)
  Dcl   &ObjLib    Type(*Char)  Len(10)  Stg(*Defined)  DefVar(&QualName 11)

 

The DEFVAR parameter is used to specify the CL variable you are redefining and the starting location within that variable for the newly defined variable. The length of the declared CL variable is specified using the LEN parameter. In the above example, CL variable &ObjName is being declared as a 10-byte character field, starting in the first byte of the &QualName variable and extending through the first 10 bytes of &QualName.

 

Another set of enhancements to the DCL command is related to support for CL pointers. In addition to previous TYPE special values such as *CHAR, *DEC, and *LGL, you can now also specify TYPE(*PTR) to declare a CL variable as a pointer. And in addition to the *DEFINED special value for the STG parameter of DCL that we just reviewed, you can also use STG(*BASED). *BASED storage allows you to indicate that the storage associated with a CL variable is whatever storage a given pointer is addressing (as opposed to hard-coding a CL variable name such as we did with the DEFVAR keyword). Associated with this support are two new built-ins: %address, documented here, and %offset, documented here. These built-ins allow you to change or test an address stored in a pointer and change or test the offset value of a pointer, respectively. I will not go into any great detail, in this article anyway, concerning pointers as roughly 99.99 percent of CL developers do not have a need for pointers. But you can find several articles of mine that make use of pointers within CL programs, with one such article being "Going Where No Substring (%SST) Operation Can Go."
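For the curious, though, here is a minimal sketch of based storage (the variable names are mine):

DCL        VAR(&PTR)    TYPE(*PTR)
DCL        VAR(&BUFFER) TYPE(*CHAR) LEN(100)
DCL        VAR(&FLD)    TYPE(*CHAR) LEN(10) STG(*BASED) BASPTR(&PTR)

CHGVAR     VAR(&PTR) VALUE(%ADDRESS(&BUFFER))            /* &FLD now maps bytes 1-10 of &BUFFER  */
CHGVAR     VAR(%OFFSET(&PTR)) VALUE(%OFFSET(&PTR) + 10)  /* &FLD now maps bytes 11-20 of &BUFFER */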

 

These two enhancements, defined storage and based storage, when combined with the V5R3 support for the integer data type, essentially remove any need for you to use the %bin built-in when working with binary data within your CL programs.

 

Binary data support was further enhanced, though, with V5R4. In previous releases, when you declared a file using the DCLF command and the file contained binary-encoded fields, CL would map the binary data to *DEC fields within your program. In V5R4, the DCLF command was enhanced with the Declare binary fields (DCLBINFLD) keyword, allowing you to bypass this mapping of binary data to packed decimal when the binary field is defined with length of 9 or less and 0 decimal positions. For compatibility, the default for DCLBINFLD is *DEC, but you can specify *INT to leave the database field as binary.
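For instance (the file name is a placeholder):

DCLF       FILE(MYLIB/TRANSFILE) DCLBINFLD(*INT)   /* Binary fields become *INT, not *DEC */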

V6R1

CL programming tools continued to be enhanced in V6R1 with support for the Include CL source (INCLUDE) command, documented here. The INCLUDE command copies CL source from the specified source file member into the source currently being compiled. The included source can be declare commands such as DCLs defining a common data structure used as a parameter between programs, shared logic such as subroutines, or just regular CL commands such as CRTPF or CPYF. The INCLUDE command does have a limitation in V6R1 worth pointing out: you cannot nest INCLUDEs—that is, INCLUDE a source file that in turn contains an INCLUDE command. This restriction is removed in 7.1, however.
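A one-line sketch (the member and source file names are mine):

INCLUDE    SRCMBR(STDDCLS) SRCFILE(MYLIB/QCLSRC)   /* Copy common declares in at compile time */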

 

As mentioned previously in this article, V6R1 also introduced the CLOSE command when working with database files. The ability to close a file and have the system automatically re-open the file on the next read operation really simplifies the writing of CL programs needing to process the contents of a file with multiple sweeps through the data.

 

Another productivity boost found in V6R1 is the Declare processing options (DCLPRCOPT) command, documented here. Though this command was initially introduced with V5R4, there was only one keyword supported in the V5R4 release. That keyword, SUBRSTACK, allowed you to specify how many nested subroutines you wanted a CL program to support. The default is 99, but you can specify a nesting level between 20 and 9999.

 

The DCLPRCOPT command allows you to specify compiler options as part of the CL source program, removing the need for you to remember what options to use when compiling your programs. V6R1 introduced 12 additional options, with LOG, USRPRF, AUT, and BNDDIR being a sample of what can now be specified within the source program.
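A sketch using a few of those keywords (the values shown are purely illustrative):

DCLPRCOPT  SUBRSTACK(50) LOG(*YES) USRPRF(*OWNER) +
             AUT(*EXCLUDE) BNDDIR(MYLIB/MYBNDDIR)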

 

For a more complete review of the V6R1 CL enhancements, and there are several beyond what I have mentioned, refer to the article "V6R1 CL: The Story Continues."

7.1

CL continued to be enhanced with 7.1. These enhancements included the ability to use the Retrieve CL Source (RTVCLSRC) command with ILE CL programs (an OPM capability that I know many missed when ILE was introduced), encrypt debug views within a CL program, and support 8-byte integer values. For a more complete review of the V7R1 CL enhancements, refer to the article "V7R1 CL: Something for Everyone" (as I'm way over the suggested size for this article and really need to stop).

Continual Enhancements

CL has seen quite a bit of activity in the last few releases, a view you will hopefully agree with after reading this review. My hope is that, in reading this article, you have discovered a few items that you can use in your shop to improve your programming productivity. I know from experience that it's sometimes difficult to discover what all is actually in a new release of the i operating system.

CL Questions?

Wondering how to accomplish a function in CL? Send your CL-related questions to me.


 

 

Bruce Vining

Bruce Vining is president and co-founder of Bruce Vining Services, LLC, a firm providing contract programming and consulting services to the System i community. He began his career in 1979 as an IBM Systems Engineer in St. Louis, Missouri, and then transferred to Rochester, Minnesota, in 1985, where he continues to reside. From 1992 until leaving IBM in 2007, Bruce was a member of the System Design Control Group responsible for OS/400 and i5/OS areas such as System APIs, Globalization, and Software Serviceability. He is also the designer of Control Language for Files (CLF) and a frequent speaker and writer.


MC Press books written by Bruce Vining available now on the MC Press Bookstore.

IBM System i APIs at Work
Leverage the power of APIs with this definitive resource.
List Price $89.95

Now On Sale
