Practical RPG: Data Structure I/O Is Finally Practical!


I like using data structures for I/O, I really do, but this latest PTF finally makes it practical for non-trivial applications.

 

I'm not just an RPG programmer. I'm a Java programmer, and I've always loved the concept of having multiple instances of a business object in memory. In many cases, this is as simple as just being able to have multiple records. I like having, say, the original of a customer master and the updated version. I can compare the two to see if changes were made. Or maybe I have an array of children for a parent. Unfortunately, when I actually wanted to use the data structure to maintain a physical file, RPG fell a little short. But those days are gone!

 

A Quick Primer on Data Structure I/O

In order to use a data structure for record-level access (RLA), you have to define it based on a file. More importantly, you have to define not only the file (or more specifically, the record format) that the data structure is based on, but also the usage. This is because you can create record formats in which some fields are input- or output-only. I'm not going to go into the details on the circumstances under which this occurs (think JOIN logical files for one case), but understand how that might affect the syntax of the RPG code. Here's the free-form definition for an input data structure:

 

dcl-f CUSTMAST keyed;

dcl-ds dsCUSTMAST extname('CUSTMAST' : *INPUT) end-ds;

 

Very simple, and with it I can now read the CUSTMAST file into the data structure dsCUSTMAST:

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

 

Excellent! There are a few gotchas here, though. Let me show you an alternate syntax for defining the data structure:

 

dcl-ds dsCUSTMAST likerec(CUSTMASTFM : *INPUT);

 

With this definition, you can then execute the same I/O operation. You'll note that you have to specify the record format name rather than the file name. A more subtle difference is that the LIKEREC keyword creates a qualified data structure, one in which you must specify the data structure name when accessing a field, like this: dsCUSTMAST.CUSTNO. The default for the EXTNAME keyword is to create a non-qualified data structure, although you can override that behavior by simply adding the QUALIFIED keyword to the definition. The following two lines are equivalent:

 

dcl-ds dsCUSTMAST likerec(CUSTMASTFM : *INPUT);

dcl-ds dsCUSTMAST extname('CUSTMAST' : *INPUT) qualified end-ds;

 

Not a big deal, especially since I'm talking about having multiple data structures with the same format. In order to do that, I have to use qualified data structures anyway. The only added complexity about LIKEREC is that you have to have a file specification in your program, but you will if you're doing record-level access. So do you want to use a single syntax for all situations (EXTNAME), or do you want to use a syntax specifically geared toward RLA for your RLA programs? The choice is yours, and neither affects the issue at hand.

 

Where Is the Issue?

The problem is actually pretty simple. Let's write a simple program that updates a single record in a file. What it does is absolutely irrelevant; we just want to see the steps we have to go through. In this case, let's update a usage counter.

 

dcl-f CUSTMAST keyed usage(*update);

dcl-ds dsCUSTMAST likerec(CUSTMASTFM:*input);

 

dcl-pi *n;

iCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

dsCUSTMAST.USAGE += 1;

update CUSTMASTFM dsCUSTMAST;

 

return;

 

This is certainly overkill, but it does a great job of illustrating the concept. We've created a data structure based on the customer master record and defined it for the input layout. (As it turns out, you can leave off the second parameter of the LIKEREC keyword and it defaults to *INPUT.) "Wait," you say, "we're using that data structure to update a file!" Yes, we are. But as it turns out, a data structure defined as *INPUT can be used with the UPDATE operation. This is where it gets a little tricky. You can use a data structure of type *INPUT for input (READ/CHAIN) and for update (UPDATE). You cannot, though, use that same data structure for output. Here's a program in which we clone a customer record:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds dsCUSTMAST likerec(CUSTMASTFM:*input);

 

dcl-pi *n;

iCustNo like(CUSTNO);

oCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST dsCUSTMAST;

dsCUSTMAST.CUSTNO = oCustNo;

dsCUSTMAST.USAGE = 0;

write CUSTMASTFM dsCUSTMAST;

 

return;

 

This, sadly, does not compile. But, you say, what if you use *ALL rather than *INPUT? Well, unfortunately data structures defined that way don't work for either the READ or the WRITE opcodes. And until now, the way around the issue was kludgy at best:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds iCUSTMAST likerec(CUSTMASTFM:*input);

dcl-ds oCUSTMAST likerec(CUSTMASTFM:*output);

 

dcl-pi *n;

iCustNo like(CUSTNO);

oCustNo like(CUSTNO);

end-pi;

 

chain (iCustNo) CUSTMAST iCUSTMAST;

oCUSTMAST = iCUSTMAST;

oCUSTMAST.CUSTNO = oCustNo;

oCUSTMAST.USAGE = 0;

write CUSTMASTFM oCUSTMAST;

 

return;

 

Note the line of code where I set oCUSTMAST equal to iCUSTMAST. This will work in most circumstances, but theoretically if the input buffer is indeed different from the output buffer, then you have a potential problem. Cases where this happens are few and far between, although one simple case is a logical file in which you create a computed field such as a substring. Those fields are input-only. EVAL-CORR can be used to alleviate this situation, but that seems like a lot of overhead to me.
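
For the record, here's what that copy looks like with EVAL-CORR. It moves only the subfields whose names match in both data structures, so an input-only field that has no counterpart in the output layout is simply skipped rather than throwing the rest of the record out of alignment:

// Copy matching subfields by name instead of copying the raw bytes
eval-corr oCUSTMAST = iCUSTMAST;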

 

The Fix Is In!

You can read the official announcement on IBM's website. The short version is that all you need is a PTF, and data structure I/O becomes easier for RLA programs. Specifically, you can now use the *ALL parameter on either the LIKEREC or EXTNAME keyword to define data structures that can be used on any valid I/O operation for a file. This allows our original clone program to compile with just one change:

 

dcl-f CUSTMAST keyed usage(*input:*output);

dcl-ds dsCUSTMAST likerec(CUSTMASTFM:*all);

(…)

 

Now the program compiles and runs, and we have an easier way to write our programs. There's a second change to the syntax: if you don't specify the second parameter of a LIKEREC keyword at all and the compiler sees that the input and output buffers are the same, then it allows you to use that data structure for both input and output operations. Effectively, it changes the default in that situation from *INPUT to *ALL, although I'm not sure I would continue to think of it that way. Personally, I think I'll probably just go with LIKEREC without a second parameter for the majority of my RLA programs.
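
To make that concrete, here's a minimal sketch of the usage-counter program rewritten with LIKEREC and no second parameter (it assumes the PTF is applied and that CUSTMASTFM's input and output buffers are identical):

dcl-f CUSTMAST keyed usage(*update);
dcl-ds dsCUSTMAST likerec(CUSTMASTFM);

dcl-pi *n;
  iCustNo like(CUSTNO);
end-pi;

// One data structure now serves for both the read and the update
chain (iCustNo) CUSTMAST dsCUSTMAST;
if %found(CUSTMAST);
  dsCUSTMAST.USAGE += 1;
  update CUSTMASTFM dsCUSTMAST;
endif;

return;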

 

Note that this PTF also supplies some support for longer field names; you're able to decide via a keyword on a file specification whether to use the standard 10-character field names in your program or the longer SQL names. It's a welcome addition and one I'll spend a little more time on in a future article.
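
As a teaser, here's a hypothetical sketch of that long-name support; the ALIAS keyword and the long field name shown are assumptions for illustration, not something spelled out here:

// Assumption: ALIAS on the file declaration asks the compiler to use the
// alternate (long) field names, which the LIKEREC data structure picks up
dcl-f CUSTMAST keyed alias;
dcl-ds dsCUSTMAST likerec(CUSTMASTFM);

// e.g., dsCUSTMAST.CUSTOMER_NUMBER rather than dsCUSTMAST.CUSTNO
// (CUSTOMER_NUMBER is an illustrative name, not from the article)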

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


