Getting Gold out of Journal Records


Journaling is a facility that extends the AS/400's recovery capabilities beyond what can be obtained through backup/restore commands. What most people don't realize is that journaling is also a treasure chest of information about your system, users, files, and applications.

By accessing journal records directly, you can put them to use in myriad ways, many more than the system-supplied Apply Journal Changes (APYJRNCHG) and Remove Journal Changes (RMVJRNCHG) commands allow.

If you're journaling files presently and have never "played" with the journals, use the Display Journal (DSPJRN) command online and see what they look like. If you have a journal named JOURNAL, here's what the DSPJRN command might look like:

DSPJRN JRN(JOURNAL)

Figure 1 contains an example of the screen you may get. Reading from the left, you'll see sequence numbers, entry codes, and then entry types. Figures 2 and 3 contain the valid journal codes and entry types; Figure 3 also indicates which entry types go with which codes.

The heart of accessing journal entries programmatically is the Display Journal (DSPJRN) command. It's also not a bad command for getting familiar with journal entries if you're new to the subject. Prompt DSPJRN from your AS/400 workstation, and page down to the last page of the command. You'll see the Output parameter with a default value of asterisk (*). You can replace this value with *PRINT or *OUTFILE to send the output to a printer or a database file.

When you are ready to build a file to process with a program, specify *OUTFILE and press Enter. The system will give you another set of parameters in which you name the output file to receive the data and some attributes for it. Figure 4 shows the procedure for specifying an outfile (the lines I changed are marked in the figure).

You can request different levels of journal information through the Outfile format (OUTFILFMT) parameter. The default, *TYPE1, gives the smallest amount of information; the other values, *TYPE2 and *TYPE3, each give more. Personally, everything I need to know is in the *TYPE1 format, and that's what I'll focus on in this article. The fields of the other formats are listed in Appendix 1.2.3 of the OS/400 Backup and Recovery-Advanced V3R7 manual.

The important field in Figure 4 is Field data format, the first element of the Entry data length (ENTDTALEN) parameter. Notice I put in *CALC.
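
If you'd rather not prompt your way through it, the whole procedure collapses into a single command. Here's a minimal sketch (the journal, library, and file names are my examples, not requirements); I'll add selection parameters to it shortly:

DSPJRN JRN(JOURNAL) OUTPUT(*OUTFILE) +
  OUTFILE(MYLIB/MDJRNFLE) ENTDTALEN(*CALC)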

Now, back up a minute, and I'll give you a couple of rules I always use. One, for consistency, I always send the output from the DSPJRN command to the same-named output file, MDJRNFLE. Two, before running DSPJRN and outputting another batch of journal entries to it, I delete the file if it is still hanging around the system from the last time I wrote to it. Here is why: Most AS/400 database files have fixed-length records. Journal records themselves are variable length. The DSPJRN command, using the Outfile parameter, creates a file that is part fixed, part variable.

Let me explain that. The journal entry is a record. It has a fixed set of fields followed by a variable field that can contain a one-character indicator, a system message of any length, or the image of the record it is reporting as changed. The record size within the output file where the journal entries are put is always "fixed" at the longest journal record length, so the result is really a fixed-record file. That means every time the output file is created, it can be created with a different length.

The journal entry's first 16 fields (refer to Figure 5) are fixed; the eighteenth field (JOESD) is the last one and the one that has a variable length.

Getting back to Field data format on the DSPJRN command: Its default is *OUTFILFMT, which gives you a
fixed record of 256 bytes. The database records (the before and after images) kept within the journal
records will be truncated to 138 bytes. If your records are longer than that (most files are), you'll
lose some data. It's better to use the *CALC value for the Field data format parameter; the resulting
file will be large enough to hold the largest records.

Even if you do specify *CALC, sending the DSPJRN output to a file that already exists can leave you with a truncation problem anyway, because the existing file keeps its old record length. So delete the file you intend to name as the DSPJRN outfile before issuing the command, and you'll be OK.

At this point, you have a database file containing journal entries that you can read and do something
with. To summarize the process:
o Delete the outfile if it exists. (I reuse the same name. It simplifies DASD maintenance for me.)

o Run the DSPJRN command. Prompt it, and use as many qualifiers as you can on the second screen to limit the selection. On the last screen, specify the output as *OUTFILE, press Enter, specify the name of the output file, and specify its entry data length as *CALC. (A CL sketch of both steps follows this list.)
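
Here's what those two steps look like in a small CL program. This is a sketch under my own naming assumptions (JOURNAL, MYLIB, FILE1, and MDJRNFLE are examples); trim or extend the selection parameters to suit:

             PGM
/* Step 1: Delete last time's outfile; ignore "not found."  */
             DLTF       FILE(MYLIB/MDJRNFLE)
             MONMSG     MSGID(CPF2105)
/* Step 2: Dump the selected entries. ENTDTALEN(*CALC)      */
/* sizes the outfile so the longest record image fits.      */
             DSPJRN     JRN(JOURNAL) FILE((MYLIB/FILE1)) +
                          JRNCDE((R)) OUTPUT(*OUTFILE) +
                          OUTFILE(MYLIB/MDJRNFLE) +
                          ENTDTALEN(*CALC)
             ENDPGM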

After those two steps, you have a database file of journal entries you can access with a query or a
high-level language (HLL) program.

There is one "gotcha" with having your query or program access the contents of the captured application
database record. It is raw data held in a field called JOESD; the journal doesn't differentiate
database fields. Typically, I'll include a data structure within an HLL program that I can move this
field into to parse out its fields.
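
Here's a minimal sketch of that technique in RPG. MDJRNFLE is my DSPJRN outfile; the data structure subfields are hypothetical, so substitute your own file's record layout. The test on JOENTT keeps just the update after-images (entry type UP):

FMDJRNFLEIF  E                    DISK
I* Program-described overlay for the captured record image
IORDIMG      DS
I                                        1   6 ORDNO
I                                        7  13 CUSTNO
I                                       14  20 QTY
C                     READ MDJRNFLE                 90
C           *IN90     DOWEQ'0'
C           JOENTT    IFEQ 'UP'
C                     MOVELJOESD     ORDIMG
C* ...ORDNO, CUSTNO, and QTY are now usable fields...
C                     ENDIF
C                     READ MDJRNFLE                 90
C                     ENDDO
C                     SETON                     LR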

Why would you want to read the journal entries? I have two reasons: I read them when a file is corrupted beyond what a simple Apply Journal Changes (APYJRNCHG) or Remove Journal Changes (RMVJRNCHG) command can repair, and I read them to analyze application functionality.

How can a database file get that corrupted? Lots of ways, but let me give you one from real life. A
company is doing a Year 2000 conversion on a large application. At some point, many file structures are
converted (as the internal date fields grow from 6 to 8 bytes). If the company had an undetected file
problem prior to the conversion and realized it after the conversion, using journal entries in their
traditional sense wouldn't work; the journal entries appropriate to the problem wouldn't match the
file's current record structure. The only way to fix that file is with a custom program that reads and
applies journal entries.

Studying journal entries is also a handy way of learning the AS/400, because journal entries follow the
system (or work management), not the application. You'll notice the journal entries stack up
differently from how you may think your application works. If you ever do read the journal
programmatically and have your own program use its records, you'd better know how the AS/400 works. Let's
use the journal to explore work management on the AS/400.

I've written a small application I'll call "typical." It opens three files. The first and third files
are populated with identical records, and the second file is empty. The program reads a record from the
first file, changes a field, and updates the record. Then, it adds the record to the second file. It
finishes by finding an identical record in the third file and deleting it. True, it's not too
realistic, but it gives us a good example of lots of functions. Journaling is on, capturing before and
after images for all three files.

So we have an RPG cycle that will do the following:
o Read sequentially and update a record from file 1
o Write that record to file 2
o Read by key and delete a record from file 3

There are 100 records in file 1, so the program will go through all three steps 100 times. Note the order in which the functions are performed in the program: update, add, delete. (A skeleton of such a program appears below.)
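
For reference, a skeleton of a program like the one described might look like this in RPG. The file, record format, and field names (FILE1/REC1 and so on) are stand-ins, and KEY3 stands for whatever key list fits file 3; this is a sketch, not the actual test program:

FFILE1   UF  E                    DISK
FFILE2   O   E                    DISK
FFILE3   UF  E           K        DISK
C                     READ FILE1                    90
C           *IN90     DOWEQ'0'
C* Change a field and update the file 1 record
C                     MOVE 'X'       FLDA
C                     UPDATREC1
C* Add the same record to file 2
C                     WRITEREC2
C* Find the matching record in file 3 and delete it
C           KEY3      CHAINFILE3                99
C           *IN99     IFEQ '0'
C                     DELETREC3
C                     ENDIF
C                     READ FILE1                    90
C                     ENDDO
C                     SETON                     LR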

Figure 6 shows the journal entries (with journal code "R"). Note that file reads do not appear in the journal. Entries 12 and 13 show the first record update. Entry 14 shows the first record delete. But remember the application's construction: it wrote a record between the update and the delete, and that record doesn't appear in the journal.

If you keep reading, however, you'll see an update at entries 111 and 112, followed by a write (actually a put, or PT) on entry 113. This is followed by more writes until entry 148, where we see a record deleted again.

This is what I mean about journaling following the system, not the application. The AS/400 automatically blocks records when it can. Record blocking isn't new or unique to the AS/400; what is unique is automatic blocking. This is evident in the journal, because the journal logs transactions only when they occur at the database level; it doesn't care about the buffers or program logic.

Here's how our application works. Updated records are not blocked. File 1's updated records went out to the database, and therefore into the journal, in real time. When records are read by key and deleted (which is kind of an update), blocking is also turned off. The deletes in file 3 fall into this category, so its database functions and journaling are also real time.

Writing new records to a file is an activity that can easily be blocked. The system does this for file
2's WRITEs. When records are blocked, they are stored in a buffer temporarily until they are written to
the database in a bunch. In the example shown, the buffer records were pushed to the database when the
buffer area filled up.

You can even figure out from the journal how big the buffer was. I know from the Display File Description (DSPFD) command that each record in file 2 is 117 bytes. I know from the journal entries that 34 records were written as a group. If I multiply 34 by 117, I get 3,978: almost a 4KB buffer.
You can (and should) always block records in the files you process, especially those you process sequentially. Do this with the Override Database File (OVRDBF) command. Within that command, however, is the Force write ratio (FRCRATIO) parameter. When you are journaling a file, leave this parameter at its default, *NONE.
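
As a sketch, an override like this one (the file name is an example) turns on sequential-only blocking and leaves the force write ratio alone:

/* Buffer up to 100 records per physical write. FRCRATIO is  */
/* left at its default, *NONE, because the file is journaled. */
OVRDBF FILE(FILE2) SEQONLY(*YES 100)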

So maybe you can see that journal entries are a little different from your application, depending on
what's going on with the system. Let's get really weird and see what happens when we turn on commitment
control and apply it to the same program.

Then, let's go one step further to really analyze things. I'll modify my program to commit every transaction but one; that one, it will roll back.

From the results (Figure 7), we can see that when a transaction is committed, all file updates,
deletes, and adds that belong to a transaction are pushed to the database, even though automatic
blocking is technically still in effect.

Let's look at entries 15 through 19, because they are typical of this application's committed
(successful) transactions. Entries 15 and 16 record the update to a file 1 record. The application next
writes a record to file 2, but that record is put in the buffer, so the written record doesn't show in
the journal. The application then reads a record from file 3 and deletes it. That occurs on entry 17.
Now, the application encounters the COMIT op code. Any database records currently held in buffers are
forced to the database, so we see a Put or Write (PT) for file 2 on entry 18, followed by entry 19,
which shows a COMIT (CM) operation was encountered in the program.

Let's look at what happens when we have an unsuccessful transaction and try to roll it back with the
ROLBK op code (refer to entries 30 through 36).

Entries 30 through 32 are fairly normal. A file 1 record is updated, and a file 3 record is deleted. No
write appears yet because it is in the buffer, waiting for either the buffer to fill or a COMIT verb to
force it out. The program encounters the ROLBK.

ROLBK is more exciting than COMIT. Remember, COMIT only pushed buffered records to the database and logged a journal entry. ROLBK, however, actively uses the journal entries. It starts reading entries from the current one (number 32) back to the previous boundary.

A boundary would be the last time a COMIT, ROLBK, or logical unit of work (LUW) was encountered. In
this case, entry number 26 contained a COMIT, so it becomes the boundary the ROLBK operation will stop
at.

The ROLBK logic reads entry 32, sees it is a record delete, and creates a contra-entry to put the deleted record back. (A contra-entry is a term borrowed from accounting, meaning the appropriate opposite function is invoked. The contra-entry for a delete record is a write record.) The entry type is UR, which is used only by ROLBK; it carries the after image, the record image being written back to the database. That becomes entry 33.

Let's try that another way. ROLBK reads entry 32. Entry 32 is a delete of a record (DL). It contains everything the rollback needs to undelete the record: the file/library name and the record image (contained in field JOESD). The rollback logic takes that information and writes the record back to the file, then creates another journal entry indicating the action it took. This entry, number 33 in the example, is identical to the entry it reverses, 32, except its entry type is UR.

The rollback continues in this manner with the record pair 31/30, the after and before images for the original update. Like the delete, each is assigned its own contra-type code (UP-BR, UB-UR), and the database and journal are updated. After the system does the operation in entry 35, the database has been restored to the point it was at prior to the start of this transaction.

The ROLBK instruction encountered in the program becomes entry number 36. If the rollback undeleted and unupdated (to coin a term) the records the program deleted and updated, what happened to the write? Simple: it never got out of the buffer, so the system just lost it. It never made it to the database in the first place.

That brings up one sad issue with commitment control: too many AS/400 programmers don't know how to use it. The committing points should be on transaction boundaries. A transaction boundary can be two things:
o In interactive programs, it is all the screens needed to enter something. It doesn't matter if the user is using one screen to enter a new vendor or 10 screens to do order entry. From the time he or she starts on an initial screen to the time that screen appears again, ready for another vendor or order or whatever, that's one transaction.

o In batch programs, a transaction boundary is everything from the time a record is read from the primary file (the main one the batch program is processing) through all the secondary file processing until the next primary file record is read. (See the RPG sketch after this list.)
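
In RPG terms, the batch case looks something like this sketch. PRIMARY and the secondary-file operations are placeholders; the point is that COMIT executes once per primary-file record, not once per program:

C                     READ PRIMARY                  90
C           *IN90     DOWEQ'0'
C* ...all secondary-file updates, writes, and deletes
C*    for this one transaction go here...
C* One primary record = one transaction: commit it
C                     COMIT
C                     READ PRIMARY                  90
C                     ENDDO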

I see way too many AS/400 programmers who don't recognize proper boundaries. For whatever reason, they commit on a program boundary, not a transaction boundary: when the program ends, it either commits everything or rolls back everything.

I won't get into judgment here, but let's look at what happens when we do that in our program. I'll
move the commit verb to the end of the program, run it once, replace the commit verb with a rollback,
and run it again.

Look at each resulting journal. The commit journal looks like the original journal from before we had commitment control: we update and delete over and over, interrupting every so often to write 34 records out of the buffer. The only difference is the commit (CM) journal entry at the end. When the program rolls back, however, it's got a small problem. The written records have been forced from the buffer to the database in groups of 34, so the rollback logic can't simply ignore them in the buffer; it must deal with them as database records. What we end up with is a journal that looks something like Figure 8.

Delete Record (DR) is the rollback contra-entry for a put, or write (PT). Commitment control based on a program boundary has the effect of allowing blocked records, so the program runs just as fast when COMIT is used. However, a ROLBK encountered on a program boundary causes a real performance problem: every transaction made since the program started must be backed out.

Note one thing about working with journal entries of committed records: the cycle ID. Every COMIT, ROLBK, or LUW defines a cycle. In the correct use of commit/rollback, each transaction created one cycle, while the program-based commit/rollback had only one cycle ID. (LUW stands for logical unit of work. I won't get into details here; I mention it only for the sake of completeness.)

If you ever work with journal entries from a committed program, sort the entry file into cycles and
work with each set of transactions as a group. You may want to build tables within your program so you
can work out all the codes and contra-codes within a cycle before hitting the database.
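
One way to do the grouping is with OPNQRYF: key the outfile by commit cycle ID and sequence number, and let your program process each cycle as a unit. A sketch, assuming the TYPE1 outfile's JOCCID (commit cycle ID) and JOSEQN (sequence number) field names; CYCLPGM is a placeholder for your own HLL program:

OVRDBF  FILE(MDJRNFLE) SHARE(*YES)
OPNQRYF FILE((MDJRNFLE)) KEYFLD((JOCCID) (JOSEQN))
CALL    PGM(CYCLPGM)
CLOF    OPNID(MDJRNFLE)
DLTOVR  FILE(MDJRNFLE)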

We've used all the journal record entry types except for PX. PX is a write using a direct relative record number (RRN). Instead of writing a record to the next available slot at the end of the file, PX writes it using the record's RRN. If you write to relative record number 5, that record goes into slot number 5.

You can process a database this way in RPG, but it is very rarely used. It is used more commonly by
work management when writing records to a file that has been changed to allow the reuse of deleted
records.

V3R1 gave us the capability of designating files as being capable of reusing deleted records. Before
V3R1, every time you deleted a record, its space in the file was left open. Until a file was
reorganized or copied on top of itself, or cleared, those deleted records took up as much physical
space as they did before they were deleted.

Since V3R1, the Create Physical File (CRTPF) and Change Physical File (CHGPF) commands allow us to
designate a file's deleted record space as available for new records. With journaling, you can get a
glimpse of how work management does this. A PX entry type means that it found a deleted record slot and
forced a new record into that slot using an RRN. If the deleted record takes the number 5 slot in the
file, the new record gets plugged into that slot with the PX write. The field JOCTRR in the journal
entry record will contain the actual RRN the system used.
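
Designating the file is a one-line change; here's a sketch with placeholder names (the same parameter exists on CRTPF):

CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)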

The problem you may have if you're working with journal entries on your own is whether to take PX at its literal value and access the record through its relative position. Frankly, I hate using the RRN, even if the journal used it.

Here's why I feel that way. When I use journal records to fix a file, it's usually because a serious problem has existed for some time. (For fresh, simple database problems, I would just remove the journal entries with the RMVJRNCHG command.) While a record may have been added to the file at a particular RRN, it may have been deleted later, and an entirely different record could have been written into that same spot. My contra-operation to a PX is a delete (DL). If I take the journal's RRN literally, without checking first, I'll delete a perfectly good record.

I have my own personal rules about using PX. If the file is keyed uniquely, I'll ignore the RRN and
just use the record's key fields (from the record contents held in field JOESD) and rely on normal
database access methods.

However, if the file is unkeyed or keyed with duplicate keys allowed, I'll use RRN, but I'll do a
direct read and compare some basic field values to make sure I'm at least in the ball park before doing
anything to it.

Here's a quick primer to help you access database records by RRN in RPG. On the file specification, the file must be fully procedural, and the K record address type must be taken out. In the calculation specifications, use the CHAIN op code to position the file pointer to the record you want to access. Factor 1 for this operation should contain a numeric literal or a numeric field; either way, it is the RRN value CHAIN will use to find the record you want. If you want to point to the fifth record in the file, either of these will work:

C           5         CHAINMSTFLE               01

C                     Z-ADD5         RECNO   60
C           RECNO     CHAINMSTFLE               01
After the file's pointer is successfully positioned (in the example, indicator 01 is off), you can
UPDAT or DELET the record. You can WRITE to the file anytime.

If you are going to WRITE records to a file using RRN yourself (taking that control from the system), you're a glutton for punishment, but here's what you do:
Set up the file specification as file type O (output); you can't put in the F for fully procedural. Leave off the K record address type. In the continuation (K) area, put in the RECNO keyword and the name of the field you'll be using as the record number. Here's an output-only file specification:

FTSTJRN1 O   E                    DISK                      KRECNO RECNO A
Here's the code that will WRITE a record (presumably after you've filled its fields) directly to record
number 5:

C                     Z-ADD5         RECNO
C                     WRITETJR1                     02
C           *IN02     IFEQ '1'
C* ...record was active, not deleted; handle that here...
C                     ENDIF
Be careful with this. You'd better know that record 5 is a deleted record; otherwise, the WRITE will
result in a "duplicate key" error message. You can get around this by specifying an indicator in
columns 56-57 (02 in the example) and checking its status after the WRITE.

I focused on journal code "R" in this article because its entries are representative of how you access and use journal entries. If you keep in mind the skills I've presented here and refer to the possible journal entries listed in Figure 3, you'll get a sense of how you can utilize journal entries for yourself.

You can sweep journal entries for security problems (who accessed a particular record in a file, and what did he or she do with it?) or for any of those mysteries that sometimes happen in the computer room (who IPLed the system last night?). I've even used the journal for improving complex applications (try using it to see how often files are opened or closed in a typical session; you may be surprised).

We journal everything anyway, so we even use it for simple tasks like determining which users are working on the weekend: who, when, how long, and so on. Their managers love it. In IT, we use the user access data to document computer demand for each day of the month. That way, we make knowledgeable decisions about when to take the computer down for service to have the least impact on our users.

While saying that, I realize that I'm working on a machine that carries lots of DASD, so I have to make a disclaimer here: journaling can be expensive in terms of DASD, and it can also cause your applications to take a real performance hit. Although journaling is nice, be careful to think it through before you start. Try to come up with a journaling strategy that matches your environment to your company's needs.

Mike Dawson is a technical editor for Midrange Computing. He is also the author of The AS/400 Owner's
Manual, published by Midrange Computing, and The AS/400 System's Programming Guide, published by
McGraw-Hill. He can be reached at 602-788-4105.

OS/400 Backup and Recovery-Advanced V3R7 (SC41-4305-01, CD-ROM QBJALF01)

Figure 1: Sample screen from DSPJRN command

Figure 2: Journal code summary

Figure 3: Journal entry types

Figure 4: Specifying an outfile for the DSPJRN command
(the changed fields are marked)

Figure 5: Journal record layout

Figure 6: Journal entries with journal code "R"

Figure 7: Journal entries after ROLBK

Figure 8: Journal with ROLBK at the end of the program