Practical RPG: Data Structure I/O Is Finally Practical!


I like using data structures for I/O, I really do, but this latest PTF finally makes it practical for non-trivial applications.

 

I'm not just an RPG programmer. I'm a Java programmer, and I've always loved the concept of having multiple instances of a business object in memory. In many cases, this is as simple as just being able to have multiple records. I like having, say, the original of a customer master and the updated version. I can compare the two to see if changes were made. Or maybe I have an array of children for a parent. Unfortunately, when I actually wanted to use the data structure to maintain a physical file, RPG fell a little short. But those days are gone!
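
To put that in concrete terms, here's a minimal sketch of the idea, using the LIKEREC syntax I'll explain in a moment; the key field, CRDLIMIT, and newLimit are made-up names, and iCustNo and newLimit are assumed to be defined elsewhere. Two qualified data structures are built from the same customer master format, one holding the record as read and one holding the edited copy, so the two can be compared to detect changes.

dcl-f CUSTMAST keyed;
dcl-ds original likerec('CUSTMASTFM' : *input);
dcl-ds updated likerec('CUSTMASTFM' : *input);

chain (iCustNo) CUSTMAST original;   // fetch the record as it stands
updated = original;                  // work on a private copy
updated.CRDLIMIT = newLimit;         // change a field (hypothetical name)
if updated <> original;              // did anything actually change?
  // log the change, update the file, and so on
endif;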

 

A Quick Primer on Data Structure I/O

In order to use a data structure for record-level access (RLA), you have to define it based on a file. More importantly, you have to define not only the file (or more specifically, the record format) that the data structure is based on, but also the usage. This is because you can create record formats in which some fields are input- or output-only. I'm not going to go into the details of the circumstances under which this occurs (think JOIN logical files for one case), but it's worth understanding how it affects the syntax of the RPG code. Here are the free-form definitions for the file and an input data structure:

 

dcl-f CUSTMAST keyed;

dcl-ds dsCUSTMAST extname('CUSTMAST' : *INPUT) end-ds;

 

Very simple, and with it I can now read a record from the CUSTMAST file into the data structure dsCUSTMAST:

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

 

Excellent! There are a few gotchas here, though. Let me show you an alternate syntax for defining the data structure:

 

dcl-ds dsCUSTMAST likerec('CUSTMASTFM' : *INPUT);

 

With this definition, you can then execute the same I/O operation. You'll note that you have to specify the record format name rather than the file name. A more subtle difference is that the LIKEREC keyword creates a qualified data structure, one in which you must specify the data structure name when accessing a field, like this: dsCUSTMAST.CUSTNO. The default for the EXTNAME keyword is to create a non-qualified data structure, although you can override that behavior by simply adding the QUALIFIED keyword to the definition. The following two lines are equivalent:

 

dcl-ds dsCUSTMAST likerec('CUSTMASTFM' : *INPUT);

dcl-ds dsCUSTMAST extname('CUSTMAST' : *INPUT) qualified end-ds;

 

Not a big deal, especially since I'm talking about having multiple data structures with the same format; to do that, I have to use qualified data structures anyway. The only added requirement with LIKEREC is that you have to have a file specification in your program, but you will anyway if you're doing record-level access. So do you want to use a single syntax for all situations (EXTNAME), or do you want to use a syntax specifically geared toward RLA for your RLA programs? The choice is yours, and neither affects the issue at hand.
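
Either way, the qualification difference shows up when you reference the subfields. Here's a quick sketch (CUSTNAME and the work variable are made-up names): with the default EXTNAME definition the subfields arrive as ordinary program fields, while the LIKEREC (or QUALIFIED) form requires the data structure name as a prefix.

dcl-ds dsCUST1 extname('CUSTMAST' : *input) end-ds;   // non-qualified by default
dcl-ds dsCUST2 likerec('CUSTMASTFM' : *input);        // qualified automatically

dcl-s workName char(30);

workName = CUSTNAME;           // non-qualified: the subfield is a plain field
workName = dsCUST2.CUSTNAME;   // qualified: prefix with the data structure name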

 

Where Is the Issue?

The problem is actually pretty simple. Let's write a small program that updates a single record in a file. What the program does is irrelevant; we just want to see the steps we have to go through. In this case, let's bump a usage counter.

 

dcl-f CUSTMAST keyed usage(*update);

dcl-ds dsCUSTMAST likerec('CUSTMASTFM':*input);

 

dcl-pi *n;

iCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

dsCUSTMAST.USAGE += 1;

update CUSTMASTFM dsCUSTMAST;

 

return;

 

This is certainly overkill, but it does a great job of illustrating the concept. We've created a data structure based on the customer master record and defined it for the input layout. (As it turns out, you can leave off the second parameter of the LIKEREC keyword and it defaults to *INPUT.) "Wait," you say, "we're using that data structure to update a file!" Yes, we are. But a data structure defined as *INPUT can indeed be used with the UPDATE operation. This is where it gets a little tricky. You can use a data structure of type *INPUT for input (READ/CHAIN) and for update (UPDATE). You cannot, though, use that same data structure for output. Here's a program in which we clone a customer record:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds dsCUSTMAST likerec('CUSTMASTFM':*input);

 

dcl-pi *n;

iCustNo like(CUSTNO);

oCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

dsCUSTMAST.CUSTNO = oCustNo;

dsCUSTMAST.USAGE = 0;

write CUSTMASTFM dsCUSTMAST;

 

return;

 

This, sadly, does not compile. But, you say, what if you use *ALL rather than *INPUT? Well, unfortunately data structures defined that way don't work for either the READ or the WRITE opcodes. And until now, the way around the issue was kludgy at best:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds iCUSTMAST likerec('CUSTMASTFM':*input);

dcl-ds oCUSTMAST likerec('CUSTMASTFM':*output);

 

dcl-pi *n;

iCustNo like(CUSTNO);

oCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST iCUSTMAST;

oCUSTMAST = iCUSTMAST;

oCUSTMAST.CUSTNO = oCustNo;

oCUSTMAST.USAGE = 0;

write CUSTMASTFM oCUSTMAST;

 

return;

 

Note the line of code where I set oCUSTMAST equal to iCUSTMAST. That assignment is a simple byte-for-byte copy, so it will work in most circumstances, but if the input buffer layout actually differs from the output buffer layout, you have a potential problem. Cases where this happens are few and far between, although one simple example is a logical file in which you create a computed field such as a substring; those fields are input-only. EVAL-CORR can be used to alleviate this situation, but that seems like a lot of overhead to me.
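
For reference, here's roughly what the EVAL-CORR alternative looks like, using the same names as the clone program above (a sketch only): EVAL-CORR copies subfields with matching names, so an input-only field with no counterpart in the output buffer is simply skipped instead of throwing the layouts out of alignment.

chain (iCustNo) CUSTMAST iCUSTMAST;
eval-corr oCUSTMAST = iCUSTMAST;   // copy only the subfields the two layouts share
oCUSTMAST.CUSTNO = oCustNo;
oCUSTMAST.USAGE = 0;
write CUSTMASTFM oCUSTMAST;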

 

The Fix Is In!

You can read the official announcement on IBM's website. The short version is that all you need is a PTF, and data structure I/O becomes much easier for RLA programs. Specifically, you can now use the *ALL parameter on either the LIKEREC or EXTNAME keyword to define a data structure that can be used on any valid I/O operation for the file. This allows our original clone program to compile with just one change:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds dsCUSTMAST likerec('CUSTMASTFM':*all);

(…)

 

Now the program compiles and runs, and we have an easier way to write our programs. There's a second change to the syntax: if you don't specify the second parameter of a LIKEREC keyword at all and the compiler sees that the input and output buffers are the same, then it allows you to use that data structure for both input and output operations. Effectively, it changes the default in that situation from *INPUT to *ALL, although I'm not sure I would continue to think of it that way. Personally, I think I'll probably just go with LIKEREC without a second parameter for the majority of my RLA programs.
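
If I'm reading that correctly, the clone program could then be written with no usage parameter on LIKEREC at all, something like the sketch below (this assumes CUSTMAST has no input-only or output-only fields, so the buffers match):

dcl-f CUSTMAST keyed usage(*input:*output);
dcl-ds dsCUSTMAST likerec('CUSTMASTFM');   // no second parameter

chain (iCustNo) CUSTMAST dsCUSTMAST;       // input...
dsCUSTMAST.CUSTNO = oCustNo;
dsCUSTMAST.USAGE = 0;
write CUSTMASTFM dsCUSTMAST;               // ...and output with the same data structure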

 

Note that this PTF also adds some support for longer field names: you can decide, via a keyword on the file specification, whether to use the standard 10-character field names in your program or the longer SQL names. It's a welcome addition and one I'll spend a little more time on in a future article.
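
I haven't dug into that support yet, but my understanding is that the keyword in question is ALIAS; if so, something along these lines should work, where CUSTOMER_NAME is a hypothetical long SQL name for the short customer-name field:

dcl-f CUSTMAST keyed alias;                         // request the long (alias) field names
dcl-ds dsCUSTMAST likerec('CUSTMASTFM' : *input);
dcl-s workName varchar(50);

chain (iCustNo) CUSTMAST dsCUSTMAST;
if %found(CUSTMAST);
  workName = dsCUSTMAST.CUSTOMER_NAME;              // long SQL name instead of the short name
endif;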

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.

