We've all had to process a folder full of files from a PC or a UNIX machine; today we learn how to do it programmatically.
If you're like me, you strongly believe that the IBM i is the best choice as your business integration hub. It talks just about any language, can handle any kind of data, and has unparalleled reliability. In fact, it's a perfect interpreter between the other systems in your infrastructure. One of the things that IBM has focused on over the years is a fantastic capability to support stream files. Whether it's the UNIX-like capabilities of QShell, the stream file capabilities of commands like CPYFRMIMPF, or the ability to write C programs to directly access the files, there isn't a stream file requirement that can't be met by the IBM i.
Let's Start at the Top
So let's start at the top. One of the most basic things you'll ever have to do is to read through a folder and process the files within it. Like so many things on the IBM i, you have a number of ways of doing this. Which one you use depends entirely on the business problem at hand. For example, if all you want is a list of file names, you can just use QShell. You can actually use the ls command to list a folder directly into a database member. The technique isn't elegant, but it is simple and functional, properties I appreciate greatly. But if you're going to have to do anything other than just stuff the name of the file into another command, you'll probably want to dig into the IFS functions.
In my case, I'm using some code that I've developed, modified, redeveloped, forgotten, found again, and repurposed over the years. I use the IFS functions available in the C runtime library, which I was happy to find out is now automatically available when you compile an RPG program. In years past, you had to make sure to include the binding directory QC2LE during your compile, but that now seems to be a standard. Just one less thing to remember.
So today we're going to just open the directory and read through it. Functionally, the program won't do much more than find the files that match a simple filter and hand them off for processing. Here's the procedure interface:
dcl-pi *n;
iDir char(32) const;
iFilter char(32) const;
end-pi;
This is my setup. I'm going to get two things: a directory name and a filter. Please note a couple of obvious limitations here. First, both input parameters are fixed-length char(32) fields, which means I can easily call this program from a command line, but it also means a path longer than 32 characters won't fit. Second, as you'll see, the fixed length means I'll have to use %trim later when passing these values to the APIs. This is important: the routines we use aren't at all forgiving of embedded or even trailing spaces. Remember to trim!
So the next thing I do is define the data structures that I'll be using. One data structure represents a directory entry. It's sort of like the catalog entry; it basically has the name and a couple of salient characteristics but little else. The next structure is the status data structure, and it contains a lot more: size, owner, timestamps, pretty much everything you need to know about the file itself. You can see the structures below along with the layouts I use.
In my program:
dcl-ds dsDirEnt likeds(IFS_DirEnt) based(pDirEnt);
dcl-ds dsStat likeds(IFS_Status);
In a separate copybook:
dcl-ds IFS_DirEnt qualified template;
reserved1 char(16);
reserved2 uns(10);
fileno uns(10);
reclen uns(10);
reserved3 int(10);
reserved4 char(8);
nlsinfo char(12);
namelen uns(10);
name char(640);
end-ds;
dcl-ds IFS_Status qualified template;
mode uns(10);
ino uns(10);
nlink int(5);
pad char(2);
uid uns(10);
gid uns(10);
size int(10);
atime int(10);
mtime int(10);
ctime int(10);
dev uns(10);
blksize uns(10);
alcsize uns(10);
objtype char(12);
codepage uns(5);
reserved1 char(62);
inogenid uns(10);
end-ds;
These two structures provide all the information you need. These are my own reformatted versions of the IBM-supplied data structures. You can use theirs; they are in the file SYSSTAT in QRPGLESRC in library QSYSINC. There are definitely pros and cons to using the IBM structures, and don't let my preferences get in your way. I'm just a little happier this way (although I have also been considering changing from this format to one using the POS keyword; this would allow me to not have to define the unused portions of the data structure—perhaps another day).
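Just to show what that might look like, here's a minimal sketch of the POS-keyword alternative, with the offsets worked out from the layout above. Only the subfields this particular program touches are defined, and you'd use it in place of (not alongside) the full version:
dcl-ds IFS_Status qualified template len(128);
mode uns(10) pos(1); // object type and permission bits
size int(10) pos(21); // file size in bytes
mtime int(10) pos(29); // last-modification time
end-ds;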
Next, I need to define the prototypes for the APIs I'll be calling. These definitions are in the same copybooks as the structures above.
dcl-pr IFS_opendir pointer extproc('opendir');
dirname pointer value options(*string);
end-pr;
dcl-pr IFS_readdir pointer extproc('readdir');
pDir pointer value;
end-pr;
dcl-pr IFS_closedir int(10) extproc('closedir');
pDir pointer value;
end-pr;
dcl-pr IFS_stat int(10) extproc('stat');
pPath pointer value options(*string);
pBuf pointer value;
end-pr;
dcl-c C_MODE_DIRMASK x'0001F000';
dcl-c C_MODE_DIRVAL x'00004000';
I have my own prototypes for all of these routines. You may notice that, like my data structure definitions, my prototypes also start with the characters "IFS_". It's my own way of keeping everything together. The routines above break into two sets: the first three let you open, read, and close a directory, while the fourth returns the status (really the attributes) of a specific file. The two constants at the bottom, C_MODE_DIRMASK and C_MODE_DIRVAL, come into play a little later when we test whether an entry is a directory.
Using the Routines
Now it's time to actually use these routines to process a directory.
dcl-s hDir pointer;
dcl-s wFile varchar(640); // holds the trimmed file name from each entry
// Open directory, exit on error
hDir = IFS_opendir(%trim(iDir));
if hDir = *null;
show('opendir: ' + %str(strerror(errno)));
return;
endif;
So, think of a directory as a file containing a list of all the files in the directory (a concept that isn't very far from the truth). First you open it, and then you read it. These routines work using something called a "handle," which is fairly common in C library routines. You attempt to open the directory, and if the open is successful, the opendir function returns a handle to the directory. If it fails, the handle comes back as null. This is what we see in the handful of lines of code above.
Note that, if an error occurs, I call a routine strerror that actually extracts the specific error. It's a common technique used by many functions in the C runtime library; I'll go into it in more detail in a subsequent article. But for now, you could just put out a message that says an error occurred on the open (usually this is because you specified a directory that doesn't exist). Now we can process the directory.
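For reference, here is a hedged sketch of the declarations that error line assumes. The __errno and strerror prototypes follow the usual ILE C runtime bindings, and show() is just a stand-in for whatever message mechanism you prefer (DSPLY, a message subfile, and so on):
// errno lives in C storage; __errno() returns a pointer to it.
dcl-pr getErrno pointer extproc('__errno');
end-pr;
// strerror() converts an errno value into a null-terminated message.
dcl-pr strerror pointer extproc('strerror');
errNum int(10) value;
end-pr;
dcl-s pErrno pointer;
dcl-s errno int(10) based(pErrno);
// show() is a placeholder for your own message routine.
dcl-pr show;
msg varchar(256) const;
end-pr;
// Somewhere before the first error check, anchor the basing pointer:
pErrno = getErrno();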
// Spin through folder
dou (pDirEnt = *null);
// Get next directory entry, exit on null
pDirEnt = IFS_readdir(hDir);
if pDirEnt = *null;
leave;
endif;
Here is a typical DO loop. We "do" until we reach an end condition, but the real exit happens at the top of the loop rather than in the DOU test. I use this style all the time because it affords me a bit more flexibility with ITER, as we'll see in a moment. Inside the loop, I call IFS_readdir, which returns a pointer to a directory entry. If the pointer is null, there are no more entries and I LEAVE the loop. Otherwise, my directory structure (dsDirEnt, defined above and based on pDirEnt) now maps the next entry in the folder.
// Skip subdirectories
wFile = %trim(%str(%addr(dsDirEnt.name)));
IFS_stat( %trim(iDir) + wFile : %addr(dsStat)); // iDir must end with '/'
if %bitand(dsStat.mode:C_MODE_DIRMASK) = C_MODE_DIRVAL;
iter;
endif;
This is why I like the top-loaded exit condition. First, I build the full path by concatenating the trimmed directory name with the file name pulled from the directory entry and call stat to fill the status data structure. Then I use the %bitand function to see whether the file is a directory or not. If you're unfamiliar with the concept, do a little reading on bit masking. Basically, what this code does is isolate the object-type bits in the mode field and compare them to the value that means "directory." If the entry is a directory, I use the ITER opcode to jump back to the top of the loop and get the next file.
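To make the masking concrete, here's a quick worked example using a made-up mode value (x'000041FF' is a typical mode for a directory with rwxrwxrwx permissions):
// mode x'000041FF' (x'4000' = directory type bits, x'01FF' = rwxrwxrwx)
// C_MODE_DIRMASK x'0001F000' keeps only the object-type bits
// %bitand(mode : C_MODE_DIRMASK) = x'00004000'
// x'00004000' = C_MODE_DIRVAL, so this entry is a directory and we ITER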
// Check for match and process
if %scan( %trim(iFilter): wFile) > 0;
ProcessFile( iDir: wFile);
endif;
Now that I have a file, I check whether it passes my filter. The test above is very arbitrary: I just see whether the value passed as the filter appears anywhere in the file name. In practice, the test can be as complex as needed; this is just an example.
enddo;
IFS_closedir(hDir);
return;
I hope the rest of this is pretty self-explanatory. When I fall out of the loop (due to the LEAVE opcode up at the top), I close the directory handle and exit the program. That's really all there is to it.
Of course, there's still a whole bunch left that has to do with actually processing the file, and I'll get to that shortly.
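In the meantime, if you want to stub the program out, a hypothetical prototype for that ProcessFile call might look something like this (the parameter names and types are my assumption, mirroring the variables used in the loop):
// Hypothetical prototype; the body that actually reads the stream file
// is a topic for the follow-up article.
dcl-pr ProcessFile;
iDir char(32) const; // folder name, as passed to the program
iFile varchar(640) const; // file name pulled from the directory entry
end-pr;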
A Warning on Proliferation
I have to mention one potential problem. The IBM i can get bogged down when there are too many stream files, especially in a single folder. I last ran into the issue a few years ago on a V6R1 system; the magic number was 16383 files in one folder. Personally, I don't recommend having that many files in a folder anyway. You can also have issues with user profiles, because every stream file is an object that must be owned and managed. If you start hitting performance issues, you may need to address your architecture. I usually do it by pulling files off of the IFS and into a CLOB file. You can refer to Part 1 and Part 2 of my article on BLOBs, CLOBs, and XML for information on how to move from the IFS to DB2 to control your stream files.
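Just as one hedged illustration (and not necessarily the technique those articles describe), DB2 for i's GET_CLOB_FROM_FILE scalar function can pull a stream file into a CLOB column from embedded SQL. The table, columns, and path below are all made up, and the statement has to run under commitment control:
// STMFARC, FILENAME, CONTENTS, and the path are hypothetical names.
exec sql
insert into STMFARC (FILENAME, CONTENTS)
values('order123.xml', GET_CLOB_FROM_FILE('/orders/order123.xml'));
exec sql commit;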
The IFS is a great addition to the IBM i, and the better you can process those files, the more impact the platform can make on your business systems.