Practical RPG: AI and IBM i

How today’s Large Language Models (LLMs) can—and can’t—help you code

AI is everywhere, it seems. Or at least advertisers would like you to think so. It’s the “new and improved” of the last decade or so, a phrase that’s overused and sometimes misunderstood. For example, the chatbot on your local home improvement store’s website isn’t really AI; it’s just a well-written FAQ interface. In an attempt to make a buck off the bandwagon, vendors are relabeling much of what we used to call “expert systems” as AI. However, there’s a big difference between a logic tree, however sophisticated, and actual Machine Learning (ML). The biggest difference is that true AI can be wrong, and therein lies the danger for programmers. But let’s take a look at this in a little more detail.

AI, ML, and LLM

We don’t have time for a full treatment of all the terms associated with AI, but these three in particular are important to understanding what AI tools may be able to provide us as we develop code. Let’s start with AI. First, what it’s not: it’s not Skynet or Wintermute. It’s not a self-aware consciousness. The formal definition of AI is a computer system that can perform tasks that typically require human intelligence. AI would be able to get past those popups that ask you to click on all the streetlights. It’s a very broad term, but one specific subfield that’s important to our discussion is Natural Language Processing, or NLP. This concept is the foundation of things like ChatGPT and will come up again shortly.

The next term is Machine Learning, or ML. ML is where we first get an inkling of what AI might be able to do for us. It’s where computer actions diverge from the specific linear algorithms we program every day to something that changes based on data. And I don’t mean just a case statement that gives you a tax rate based on a state. Instead, when an ML system tries to determine how to respond to a request, its answer will change over time based on the data it can access to make its response. The machine “learns.” The problem with that, from a programming standpoint, is that your code shouldn’t change from day to day. Even if you learn a new way to do something, you shouldn’t immediately start using it in all your code; in most shops it’s more important to have standards than to have the latest techniques. The other problem with ML is that the computer doesn’t know when it’s wrong, and we’ll see how that can be an issue in a moment.
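
For reference, here’s a minimal free-form RPG sketch of the kind of static decision tree I mean. The field names and rates are made up for illustration; the point is that the answer for a given state never changes unless a programmer changes the code, which is exactly where ML differs.

Dcl-S state   Char(2);
Dcl-S taxRate Packed(5:3);

// A hard-coded logic tree: the same input always produces the same output
Select;
   When state = 'IL';
      taxRate = 6.250;
   When state = 'WI';
      taxRate = 5.000;
   Other;
      taxRate = 0;
EndSl;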

But first, let’s tackle the third acronym, LLM. LLM stands for Large Language Model, and it is a subset of the NLP concept I mentioned earlier. An LLM is entirely about allowing people to ask questions using normal language and then responding in kind. One way to look at it is that the LLM is the UI for your Machine Learning system. You can ask it a question, even with misspellings, and the LLM will figure out what you meant to ask, push that to the actual ML piece, get a response, and then format it back in a way that you can hopefully understand. This is an absolutely monumental undertaking, and it’s what really makes something like ChatGPT useful. It’s nothing short of amazing even before you consider that it will do it in pretty much any language. Just for fun, I asked ChatGPT “Wie is Joe Pluta” and got a nice summary of my work in Dutch. On other occasions, though, it said it didn’t know who I am, and that’s one of the issues with ChatGPT and ML models in general, especially unsupervised ones.

How Does This Affect My RPG?

It’s time to get down to brass tacks. There are really only two ways to use an LLM to help you code: ask questions of an existing LLM or build your own. Can the current LLMs help you code? Let’s see. I’ll use the most popular free LLM out there, ChatGPT. I asked it a simple question: given two files, CUSTOMER and TASKS, both keyed by the field CUSNUM, write RPG code to read every record in the CUSTOMER file and, if there is no matching record in the TASKS file, set the status to C. I got this:

// Define the files
Dcl-F Customer Usage(*Update) Keyed;
Dcl-F Tasks Usage(*Input) Keyed;

// Define data structures for records
Dcl-Ds CustomerRecord;
   CUSNUM Char(10);
   STATUS Char(1);
End-Ds;

Dcl-Ds TasksRecord;
   CUSNUM Char(10);
End-Ds;

// Main logic
// Read the first record from the Customer file
Read Customer;

// Loop through all Customer records
DOW NOT %EOF(Customer);

   // Chain to the Tasks file using the CUSNUM from CustomerRecord
   Chain (CustomerRecord.CUSNUM) Tasks;

   // If no matching Task record is found, set the STATUS field to 'C'
   If %Eof(Tasks);
       CustomerRecord.STATUS = 'C';
       Update Customer;
   EndIf;

   // Read the next record from the Customer file
   Read Customer;

ENDDO;

// Close files
*INLR = *ON;

At first glance, this seems reasonable, and that’s exactly what’s so dangerous. The good news is that it won’t compile, but the bad news is that it could really confuse someone who isn’t familiar with the language. Let’s quickly go through the issues here:

  1. Someone taught the LLM about data structures but not how to use them in I/O.
  2. The data structure CustomerRecord is never populated.
  3. The program references CustomerRecord.CUSNUM, but the data structure isn’t qualified.
  4. The program does a CHAIN but then tests %Eof.

The errors are pretty obvious to someone with RPG programming experience, but they could cause no end of issues for a novice. Now, the LLM can be trained. In fact, the first time I asked the question, ChatGPT used While and EndWhile. When I explained that RPG doesn’t have a While opcode, it responded with “You are absolutely correct” and proceeded to tell me about DOW and ENDDO and gave me the updated code above. Subsequent attempts have since used the correct looping opcodes. After the above response, I said that the %EOF built-in function is not appropriate for CHAIN and got another “Oops” from ChatGPT and new code. I suspect that in the future it will know better. But that doesn’t negate the fact that if you didn’t know the code was wrong, you’d probably spend a lot of time trying to get that code to work.
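
For comparison, here’s one way the same logic could be written so that it compiles and behaves as requested. This is a sketch, not the only correct answer: it assumes CUSTOMER and TASKS are externally described files whose record formats already contain CUSNUM and STATUS, and it uses a hypothetical record format name CUSREC for the update. Adjust the names to match your own database.

Dcl-F Customer Usage(*Update) Keyed;   // record format fields (CUSNUM, STATUS) come from the external description
Dcl-F Tasks    Usage(*Input)  Keyed;   // no data structures needed

Read Customer;                         // READ populates the input fields directly
Dow Not %Eof(Customer);
   Chain (CUSNUM) Tasks;               // CHAIN sets %Found, not %Eof
   If Not %Found(Tasks);
      STATUS = 'C';
      Update CUSREC;                   // UPDATE names the record format; CUSREC is a placeholder
   EndIf;
   Read Customer;
Enddo;

*Inlr = *On;

If you don’t actually need any data from the TASKS record, a SETLL followed by %Equal would check for the key’s existence without reading the record; the CHAIN simply keeps the example closer to ChatGPT’s version.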

So I Should Build My Own LLM?

Yeah, not so fast. From what I’ve been able to ascertain, building an LLM from scratch is a daunting task requiring a large staff and huge amounts of processing power and data storage. But I’m still researching the options. There may be a hybrid approach where you start with a pretrained LLM and add your own data to it. I’m also trying to understand the differences between supervised and unsupervised Machine Learning to see whether that might offer a different approach; writing RPG code is a very narrow domain, and supervised ML models seem to lend themselves to a smaller problem set. In any case, please spend some time with ChatGPT and see what you can use it for, with the obvious caveat that what it tells you may not be exactly accurate. Have fun!

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


