When you add a field to a file, you do not have to recompile all programs that access the data.
By Sam Lennon
Traditionally, if you add a field to a physical file, you have to recompile all programs that use that physical file definition. Generally, you also need to recompile all programs that use logical files over the physical. But with some planning, you can avoid many of these recompiles, thereby shortening and simplifying the installation, and all without level checks.
Perhaps you're in the position where you need to add a field to a heavily used file, one that has been around for years. You know many programs reference this file. You could create a new file with the same key to hold just the new data, but this is a high-activity file, and you know that, for performance, the right thing to do is to add the field to the existing file. So you bite the bullet, crank up your favorite cross-reference tool (if you have one), and are horrified to find that on top of the 13 programs you need to modify to use the new field, and the three you need to write, you are also going to have to recompile 317 programs and re-create 64 queries. There may even be some non-iSeries uses that you don't know about. (While these numbers are fictional, I have experienced such a project.)
Even if you have a smart change-management system that will automatically recompile the programs for you, the installation will take longer and have a higher level of risk. IT management, correctly, likes to minimize risk.
If you are in a situation like this, which may not be of your making, it is because the programs and queries are tightly dependent on the physical data layout. To solve the problem, or to avoid it in new applications, the view of the data that a program sees must be divorced from the physical file layout.
Logical files, used correctly, can add a layer of abstraction that helps minimize recompiles. Once in place, a logical file layer means that when you add a new field, the only existing programs or queries that need to be touched are those that will actually use the new field.
Why Are Recompiles Needed?
Everyone knows the answer: to avoid level checks.
Each record format in an externally defined file has a format level identifier. When an RPG or COBOL program uses a format, the compiler saves the current format level identifier in the program. When the program runs and the file is opened, the system compares the file's format level identifier against the one saved at compile time; if they differ, it sends a CPF4131 escape message (the dreaded level check) to the program. Similar processing occurs with Query/400 query definitions. You can suppress runtime level checking and thus avoid level checks, but doing so is generally considered a bad practice, so the solution for a level check is to recompile the program or to open and save the query definition.
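If you want to see the identifiers involved, DSPPGMREF displays the format level identifiers a program recorded at compile time, and DSPFD displays the identifiers currently on the file. For example (ORDRPT is a hypothetical program name, MYLIB a placeholder library):

DSPPGMREF PGM(MYLIB/ORDRPT) OUTPUT(*)
DSPFD FILE(MYLIB/ORDLINP) TYPE(*RCDFMT)

If the identifier recorded in the program differs from the one on the file, a level check is waiting to happen.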
The Crux of the Problem
The way we create logical files is the main reason we need to re-create programs and query definitions. When coding the DDS for a logical file, all the fields of the physical file are automatically included if you don't code any field names. This is the quickest and easiest way to define a logical file. So if your physical file is named ORDLINP and is coded like this...
A          R ORDLINF
A            ORDNUM         7P 0
A            ORDLIN         3P 0
A            SKU           11P 0
A            QTYORD         5P 0
A            QTYRCV         5P 0
A            LASTRCV         L
...then in most cases, the logical file, ORDLIN01, is coded like this:
A          UNIQUE
A          R ORDLINF                  PFILE(ORDLINP)
A          K ORDNUM
A          K ORDLIN
All the fields of ORDLINP are copied into ORDLIN01, and the two files are tightly coupled. If you add a field to ORDLINP, you must re-create the physical file, which you can do in one of two ways:
- By deleting ORDLIN01, saving the ORDLINP data, recompiling ORDLINP, copying the data back, and finally re-creating ORDLIN01
- By using CHGPF and specifying the new DDS
Either way, ORDLIN01 gets a new format level identifier, so you must re-create all programs or query definitions that reference ORDLIN01 as well as those that reference ORDLINP.
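For reference, the CHGPF route looks something like this; the system copies the data and re-creates the dependent logicals for you (MYLIB and the source file name are placeholders):

CHGPF FILE(MYLIB/ORDLINP) SRCFILE(MYLIB/QDDSSRC) SRCMBR(ORDLINP)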
A Better Way
Minimizing recompiles and object re-creation depends on a few rules that require a small amount of up-front work and a significant commitment to follow the rules. Here they are, with the reasoning behind each rule.
Rule #1: When you code a logical file, always explicitly list the fields. This is the little bit of extra work that is needed up front and is the most tempting rule to ignore. ORDLIN01 would be correctly coded like this:
A          UNIQUE
A          R ORDLINF                  PFILE(ORDLINP)
A            ORDNUM
A            ORDLIN
A            SKU
A            QTYORD
A            QTYRCV
A            LASTRCV
A          K ORDNUM
A          K ORDLIN
Now if we add a field to the physical file, we have the option of not adding that field to existing logicals. Unchanged logicals will not get a new format level identifier and thus won't cause level checks.
Rule #2: Do not reference the physical file anywhere. Instead, always use a logical, including in applications that write new records to the file. This ensures that no application is dependent on the physical file definition. In fact, you can add a field to the physical without re-creating any objects. (Kent Milligan of IBM says "The one case that is not recommended is on SQL statements since that will force the usage of CQE (Classic Query Engine) instead of SQE. In that case, the SQL statement should reference the PF directly or they should consider creating an SQL view.")
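For SQL work, the view Milligan mentions gives you the same layer of abstraction. A minimal sketch, assuming a view name of ORDLINV and a library of MYLIB:

CREATE VIEW MYLIB/ORDLINV AS
SELECT ORDNUM, ORDLIN, SKU, QTYORD, QTYRCV, LASTRCV
FROM MYLIB/ORDLINP

Like a logical file with an explicit field list, the view is unaffected when a field is later added to ORDLINP.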
Rule #3: When using embedded SQL, do not use the "SELECT * FROM ..." construct, where "*" means all fields. Instead, specify the fields you need explicitly, even if you are using all the fields. This will make the application not only independent of field additions to the physical file, but also independent of field additions to the logical file (unless, of course, the application needs the new field; in that case, you will have to re-create the application anyway). Selecting only the fields you need is also more efficient.
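For example, instead of the fragile form

SELECT * FROM ORDLIN01

code something like this, where the field list and host variable are illustrative:

SELECT ORDNUM, ORDLIN, SKU, QTYORD
FROM ORDLIN01
WHERE ORDNUM = :ordnum

The explicit form keeps working, untouched, when fields are later added to ORDLIN01 or ORDLINP.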
Rule #4: Do not code any keys on the physical file. This encourages enforcement of rule #2, since traditional I/O nearly always uses a key. (Note that SQL will happily accept the physical file, keyed or not. This is not a problem, but it will likely generate "noise" in your research and your cross-reference tool.)
Rule #5: When you add a new field to the physical file, consider specifying a default value for new date, time, and timestamp fields. Remember: all existing programs that add data records will be doing so through a logical (rule #2) and will not reference your new field, so the default value will be used. Unless coded otherwise in the DDS, these field values default to the current date and time, which is likely to cause confusion. Ideally, you would specify DFT(*NULL), but this might not play well with older code, so something like DFT('0001-01-01') for date fields, DFT('00.00.00') for time fields, or DFT('0001-01-01-00.00.00.000000') for timestamp fields may be more suitable.
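In DDS, the three defaults look like this (the field names are illustrative):

A            DLYDATE         L         DFT('0001-01-01')
A            DLYTIME         T         DFT('00.00.00')
A            DLYSTAMP        Z         DFT('0001-01-01-00.00.00.000000')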
Example: Logical File Layer Exists
Let's see how it works. Suppose the ORDLINP file (DDS above) exists and was created about 15 years ago. A user department manager has asked that, when an order is delayed, we keep a status of the reason it was delayed and the date the delay occurred. The delay status is to be a single-character field, where a blank means no delay has occurred and anything else is a delay reason code. The support manager signs off on the change request since it looks easy. (Bear in mind this is a contrived file that would have many more fields in real life.)
You are assigned to make the change, and you find there are 330 programs and 64 query definitions that reference the file. Re-creating nearly 400 objects could make for a long and risky install. There are 23 logicals over ORDLINP, but fortunately, the logical file layer is in place and no objects directly reference the physical file.
The new ORDLINP will look like this, with fields DELAYSTS and DELAYDTE added at the end:
A          R ORDLINF
A            ORDNUM         7P 0
A            ORDLIN         3P 0
A            SKU           11P 0
A            QTYORD         5P 0
A            QTYRCV         5P 0
A            LASTRCV         L
A            DELAYSTS       1
A            DELAYDTE        L         DFT('0001-01-01')
After your analysis, you know you need to write two new programs to maintain the two new delay fields. Both will access the data using SKU and LASTRCV as the key. There are two existing logicals with these fields as the key. You also need to change three existing programs, all of which access the file by ORDNUM and ORDLIN, and there are several logicals with this key.
This means that just two logical files will need to contain the new fields. The trick is making the decision: do you want to change an existing logical or add another logical?
Consider the two new programs first. Suppose ORDLIN02 has the correct key (SKU and LASTRCV) for the new programs, and 17 programs and 5 queries already use it. ORDLIN05 also has the right key plus a couple of additional non-key fields, and 45 programs and 8 queries use it. You have several choices:
- Add the new fields to ORDLIN02 and re-create the 17 programs and 5 queries that use it. This seems reasonable if ORDLIN02 already has, explicitly coded in the DDS, all the fields the two maintenance programs will need. You will also need to make sure there are no collisions between the new field names and existing variable names in the 17 programs you will have to recompile.
- ORDLIN05 is probably newer and has more fields explicitly coded, so it might be a better choice to change, but the downside is the re-creation of the 45 programs and 8 queries.
- Create another logical, ORDLIN24, with the needed fields and keyed by SKU and LASTRCV, as sketched below. The downside is that there will be another object on the system, but there should be no space or runtime overhead for the new logical because it will share the access path of either ORDLIN02 or ORDLIN05. The upside is that you won't have to change an existing logical and re-create its dependent objects.
The third choice is the quickest install with the least risk.
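A sketch of what ORDLIN24 might look like; the exact field list would come from your analysis of the two new programs:

A          R ORDLINF                  PFILE(ORDLINP)
A            ORDNUM
A            ORDLIN
A            SKU
A            QTYORD
A            QTYRCV
A            LASTRCV
A            DELAYSTS
A            DELAYDTE
A          K SKU
A          K LASTRCV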
You will need to go through a similar exercise for the three existing programs that need to be changed. Find which logicals they use, see how many other programs and queries use those logicals, and weigh the cost, risk, and effort of recompiling those programs against creating another logical.
Whichever choice you make, it will be a quicker and safer installation than having to re-create almost 400 objects.
Example: No Logical File Layer Exists
What do you do if no logical file layer exists and you have to make the same change as in the previous example, but you want a shorter, less-risky install than one that touches nearly 400 objects? The good news is that you can do it incrementally and have a logical file layer in place when you're finished. I am a fan of incremental installs, where possible.
The first step is to get rid of all references to the physical file. Create a logical that explicitly defines all the fields in the physical, with the same keys if the physical is keyed. Then change all the references to the physical to use the new logical. You can do this one object at a time or in groups sized to your comfort level.
Next, one at a time, change each logical to explicitly define all fields. This coding is simple, largely cut and paste. The format level identifier should not change, and you can easily check this with the DSPFD command. Existing programs and queries should not notice that the file has changed, but run a simple regression test if you're nervous.
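A quick way to check, run before and after the change (the library name is a placeholder):

DSPFD FILE(MYLIB/ORDLIN01) TYPE(*RCDFMT)

The format level identifier shown should be identical in both runs.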
Alternatively, you can make the change to each logical even more granular. One logical at a time, create a new logical with explicitly defined fields and the same keys. The new logical should share the access path of the original, so there should be no system overhead. Then, one by one, change dependent programs and queries to use the new logical. After some time in production, the old logical should show no use and can be deleted. Be aware that when you do the delete, an index build will probably be triggered on the new logical, which up to that point had been sharing the access path of the old logical.
Now you have a logical file layer in place and can proceed accordingly.
Conclusion
By making all references to your data through logical files, you can add fields to your physical database without having to re-create programs or queries. This makes for a shorter install that is less risky. Remember, management likes to minimize risk.
Notes
PF-38 or LF-38 Files: Be cautious if you have PF-38 or LF-38 files that were created a long time ago. There was a period when the logic that creates the format level identifier on System/38 files was incorrect. Simply re-creating such a file with the same DDS today generates a new, correct format level identifier. A colleague and I raised this issue with IBM several years ago, but there is no fix, and there probably can't be, since the old IBM code had a bug. We had no choice but to re-create the programs that used the file.
If you have any doubts, use DSPFD to check the format level identifier in the old and new versions of the file. You are looking for the identifier column in output like this:
Record Format List
                        Record     Format Level
Format      Fields      Length     Identifier
ORDLINF          6          28     2B1E6E5BB3280
Alternatively, you can run DSPFD to an outfile, like this:
DSPFD FILE(yourfile) TYPE(*RCDFMT) OUTPUT(*OUTFILE) OUTFILE(QTEMP/somefile)
You will find the format level identifier in field RFID.
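With outfiles captured for the old and new versions of the file, a simple query lines the identifiers up (the model file for TYPE(*RCDFMT) output, QAFDRFMT in QSYS if memory serves, documents the full field list):

SELECT RFID FROM QTEMP/somefile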
Shared Access Paths: When talking about an additional logical file sharing an access path with an existing logical, I have used the word "should." There are occasional situations where access path sharing may not take place. Run DSPFD over the new logical; if the path is shared, you should see output like this:
Implicit access path sharing  . . . . . . :   Yes
Access path journaled . . . . . . . . . . :   No
Number of unique partial key values . . . :
  Key field 1 . . . . . . . . . . . . . . :   2
  Key fields 1 - 2  . . . . . . . . . . . :   7
File owning access path . . . . . . . . . :   LENNONS1/ORDLINL
If it isn't shared, consult the IBM documentation on "Using existing access paths" in the V5R3 Information Center.