How today’s Large Language Models (LLMs) can—and can’t—help you code
AI is everywhere, it seems. Or at least advertisers would like you to think so. It’s the “new and improved” of the last decade or so, a phrase that’s overused and sometimes misunderstood. For example, the chatbot on your local home improvement store’s website isn’t really AI; it’s just a well-written FAQ interface. Trying to make a buck off the bandwagon, vendors are relabeling much of what we used to call “expert systems” as AI. However, there’s a big difference between a logic tree, however sophisticated, and actual Machine Learning (ML). The biggest difference is that true AI can be wrong, and therein lies the danger for programmers. But let’s take a look at this in a little more detail.
AI, ML, and LLM
We don’t have the time for a full treatment of all of the terms associated with AI, but these three in particular are important to understanding what AI tools may be able to provide us as we develop code. Let’s start with AI. First, what it’s not: it’s not Skynet or Wintermute. It’s not a self-aware consciousness. The formal definition of AI is a computer system that can perform tasks that typically require human intelligence. AI would be able to get past those popups that ask you to click on all the streetlights. It’s a very broad term, but one specific subfield that’s important to our discussion is Natural Language Processing, or NLP. This concept is the foundation of things like ChatGPT and will come up again shortly.
The next term is Machine Learning, or ML. ML is where we first get an inkling of what AI might be able to do for us. It’s where computer actions diverge from the specific linear algorithms we program every day to something that changes based on data. And I don’t mean just a case statement that gives you a tax rate based on a state. Instead, when an ML system tries to determine how to respond to a request, its answer will change over time based on the data it can access to make its response. The machine “learns.” The problem, from a programming standpoint, is that your code shouldn’t change from day to day. Even if you learn a new way to do something, you shouldn’t immediately start using it in all your code; in most shops it’s more important to have standards than to have the latest techniques. The other problem with ML is that the computer doesn’t know when it’s wrong, and we’ll see how that can be an issue in a moment.
But first, let’s tackle the third acronym, LLM. LLM stands for Large Language Model, and it is a subset of the NLP concept I mentioned earlier. LLM is entirely about allowing people to ask questions using normal language, and then responding in kind. One way to look at it is that the LLM is the UI for your Machine Learning system. You can ask it a question, even with misspellings, and the LLM will figure out what you meant to ask, push that to the actual ML piece, get a response, and then format it back in a way that you can hopefully understand. This is an absolutely monumental undertaking, and it’s what really makes something like ChatGPT useful. It’s nothing short of amazing even before you consider that it will do it in pretty much any language. Just for fun, I asked ChatGPT “Wie is Joe Pluta” and got a nice summary of my work in Dutch. On other occasions, though, it has said it didn’t know who I am, and that inconsistency is one of the issues with ChatGPT and ML models in general, especially unsupervised ones.
How Does This Affect My RPG?
It’s time to get down to brass tacks. There are really only two ways to use an LLM to help you code: ask questions of an existing LLM or build your own. Can the current LLMs help you code? Let’s see. I’ll use the most popular free LLM out there, ChatGPT. I asked it a simple question: given two files, CUSTOMER and TASKS, both keyed by the field CUSNUM, write RPG code to read every record in the CUSTOMER file, and if there is no matching record in the TASKS file, set the status to C. Here’s what I got:
// Define the files
Dcl-F Customer Usage(*Update) Keyed;
Dcl-F Tasks Usage(*Input) Keyed;

// Define data structures for records
Dcl-Ds CustomerRecord;
  CUSNUM Char(10);
  STATUS Char(1);
End-Ds;

Dcl-Ds TasksRecord;
  CUSNUM Char(10);
End-Ds;

// Main logic
// Read the first record from the Customer file
Read Customer;

// Loop through all Customer records
DOW NOT %EOF(Customer);
  // Chain to the Tasks file using the CUSNUM from CustomerRecord
  Chain (CustomerRecord.CUSNUM) Tasks;

  // If no matching Task record is found, set the STATUS field to 'C'
  If %Eof(Tasks);
    CustomerRecord.STATUS = 'C';
    Update Customer;
  EndIf;

  // Read the next record from the Customer file
  Read Customer;
ENDDO;

// Close files
*INLR = *ON;
At first glance, this seems reasonable, and that’s exactly what’s so dangerous. The good news is that it won’t compile, but the bad news is that it could really confuse someone who isn’t familiar with the language. Let’s quickly go through the issues here:
- Someone taught the LLM about data structures but not how to use them in I/O.
- The data structure CustomerRecord is never populated; the Read doesn’t put anything into it.
- The program references CustomerRecord.CUSNUM, but the data structure isn’t declared with the Qualified keyword, so the dotted syntax won’t compile.
- The program does a CHAIN but then tests %Eof; CHAIN sets %Found, not %Eof.
The errors are pretty obvious to someone with RPG programming experience, but they could cause no end of issues for a novice. Now, the LLM can be trained. In fact, the first time I asked the question, ChatGPT used While and EndWhile. When I explained that RPG doesn’t have a While opcode, it responded with “You are absolutely correct” and proceeded to tell me about DOW and ENDDO, giving me the updated code above. Subsequent asks have used the correct looping opcodes. After the above response, I pointed out that the %EOF built-in function is not appropriate for CHAIN and got another “Oops” from ChatGPT and new code. I suspect that in the future it will know better. But that doesn’t negate the fact that if you didn’t know the code was wrong, you’d probably spend a lot of time unable to get it to work.
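For reference, here’s one way a working version might look. This is a sketch, not the one true answer: it assumes the record format names are CUSREC and TASKREC (the original prompt never specified them, and yours will differ), and it lets RPG’s externally described file support load the fields directly rather than using unqualified data structures.

```rpgle
**free
// Assumed record format names: CUSREC (in CUSTOMER) and TASKREC (in TASKS).
// Your actual DDS/DDL format names may differ.
dcl-f CUSTOMER usage(*update) keyed;
dcl-f TASKS usage(*input) keyed;

read CUSREC;                    // fields load straight from the record format
dow not %eof(CUSTOMER);
   chain (CUSNUM) TASKREC;      // CHAIN sets %Found, not %Eof
   if not %found(TASKS);
      STATUS = 'C';
      update CUSREC;            // UPDATE takes the record format name
   endif;
   read CUSREC;
enddo;

*inlr = *on;
return;
```

Note that the generated code’s Update Customer would only happen to work if the record format were named the same as the file, which is exactly the kind of detail a compiler, not a chatbot, should be confirming for you.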
So I Should Build My Own LLM?
Yeah, not so fast. From what I’ve been able to ascertain, building an LLM from scratch is a daunting task requiring a large staff and huge amounts of processing power and data storage. But I’m still researching the options. There may be a hybrid approach where you start with a pretrained LLM and add your own data to it. I’m also trying to understand the differences between supervised and unsupervised Machine Learning to see if that might offer a different approach; writing RPG code is a very narrow problem domain, and supervised ML models seem to lend themselves to smaller problem sets. In any case, please spend some time with ChatGPT and see what you can use it for, with the obvious caveat that what it tells you may not be exactly accurate. Have fun!
MC Press Online