Does the program-conversion requirement in V6R1 mean you should put off your move to V6R1? That depends....
A lot of issues have been brought to light regarding the latest release of i5/OS, perhaps none bigger than the object conversion issue. I've heard incredible amounts of static on the mailing lists about this, and it's perhaps the single most talked-about topic regarding the upgrade to V6R1. But is it really that much of an issue? I contend that, depending on who you are, there may be much bigger fish to fry.
The Elephant in the Room
But let's start with the elephant in the room: namely, object conversion. The short version is simple: Under certain conditions, programs that used to run on V5R4 (and earlier releases) will no longer work on V6R1. This is the first time this has happened since the platform was introduced, and it's causing a furor. It's quite a bit more complicated than that, however (and it's also not the first time something like this has happened, but I'll touch on that later as well).
The medium-length version is that V6R1 requires programs to be converted. An IBM-supplied command will do the conversion for you; it just requires you to allocate the time for the conversion in your upgrade plans. IBM supplies a command, ANZOBJCVN, that will help you estimate the required time. I'll talk more about that later.
The problem is that certain programs won't convert. Specifically, programs that were compiled for V4R5 and prior (including any program containing modules compiled for V4R5 and prior) and had observability removed will not convert. They are missing essential information required by the conversion tool.
The furor primarily stems from the perception that IBM is only telling people about this now, and V6R1 is about to be released into the wild. To be more accurate, IBM has been telling people about the concept of encapsulation since V5R1, and in fact all programs compiled for V5R1 or later have the appropriate information. And IBM has been clearly telling people that the programs need to be re-encapsulated since last summer (right about the time that V5R2 was no longer officially supported, an event whose importance will become clear in a moment).
The Long Version
OK, here's the long version. When you compile a program, it has lots of internal gook that is used for various system functions. How much gook you have depends on a number of things, such as debugging level or profiling. But in general, the default state for programs prior to V4R5 was to have something called "observability" included in the program object. In ILE, this information is stored at the module level and was included in the program (or service program) when the program was bound.
Now, a clever programmer could remove this observability using either the CHGPGM or CHGMOD command. This was done for various reasons, ranging from smaller object size to reduced ability to reverse-engineer programs.
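For concreteness, stripping observability looked something like this (library, program, and module names here are placeholders, not anything from a real system):

```
CHGPGM PGM(MYLIB/MYPGM) RMVOBS(*ALL)      /* OPM or bound ILE program */
CHGMOD MODULE(MYLIB/MYMOD) RMVOBS(*ALL)   /* individual ILE module    */
```

Run against an object compiled for V4R5 or earlier, commands like these are exactly what created the unconvertible objects discussed below.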
However, one of the key features of observability is that it contains enough information for i5/OS to be able to convert a program as circumstances warrant. The last time we had a major conversion was the CISC-to-RISC days; many of us older fogies still remember seeing a system message the first time an old CISC program was run, as the operating system converted it to RISC architecture.
Well, there is enough change in the program model for V6R1 that a conversion is necessary again. And IBM saw this coming; as of V5R1, any time you removed observability on a program, the information was not physically removed but was instead hidden from application-level scrutiny. In fact, the help text for the RMVOBS keyword on the CHGMOD and CHGPGM commands tells you this. Below is the relevant text from the CHGPGM command:
Some programs retain unobservable creation data even when observable creation data is removed. OPM programs created for release V5R1M0 or later (TGTRLS parameter when the program was created) will always contain creation data even when *ALL observability is removed. ILE programs created only from modules created for release V5R1M0 or later will always contain creation data even when *ALL observability or *CRTDTA observability is removed.
What this means is that since V5R1 IBM has built in a safeguard to prevent you from removing the information crucial to an upgrade. Note, though, that this behavior depends on the TGTRLS parameter. That is, if you compile your program or module for TGTRLS(V4R5) or earlier, you can still circumvent this safeguard and thus create objects that cannot be converted.
And that, dear reader, is why the V5R2 end-of-service date is so important. With IBM's policy of only allowing TGTRLS to specify as far back as two releases, V5R2 was the last release on which you could create an object targeted for V4R5. As of release V5R3, the furthest back you can compile is TGTRLS(V5R1), which includes the creation data safeguard.
So now that V5R2 has gone out of service (back in April of 2007, nearly a year ago), no programs compiled on an up-to-date release will fail conversion. In fact, it will now take quite the perfect storm of circumstances to create a program that will not convert. It's not impossible, but it does require some work:
- An OPM program, or one or more modules of an ILE program, must be compiled at target release V4R5 or earlier.
- This means they must be compiled on an out-of-service machine running on release V5R2 or earlier.
- Observability must have been removed.
And even if that does occur, you can still fix the problem by recompiling the offending module or program. So you're still OK unless you have no source. And even then, you're only in trouble if you're also attempting to upgrade to the latest release of the operating software.
So Who Does This Really Affect?
Well, let me go down the list.
1. Software vendors. No reason their stuff shouldn't work, since they should have compiled once in the last year on an in-service machine. Remember, if you compile everything on a V5R3 or later machine, there is no way to create programs that do not convert. (That "everything" qualifier is a tricky one, though, and you'll see why shortly.)
2. Shops on up-to-date vendor software. They should be fine, and if they're not, they have reason to really complain to their vendors (see #1).
3. Shops running software with observability. These folks are fine.
4. Shops with home-grown software with source. They should be fine as well once they recompile everything. In the worst case, they'll have to recompile programs that they haven't compiled in years. Again, the "everything" piece can be a gotcha.
5. Shops running software without source on back releases. These are "stabilized" systems; it ain't broke, and they ain't fixing it. These folks are fine, because they aren't upgrading to V6R1.
6. Shops running old software without source and without observability but on up-to-date releases.
So of all the segments above, the only folks with problems are shops that want to upgrade to newer releases of the software yet they're running old software without source. These folks are definitely in trouble, and if you fit that particular profile, you had best start doing some research.
Typically, this happens when someone has old software that's no longer on maintenance, or the company they bought it from is no longer around. It happens more than you might think; there are folks still running S/38 code out there. And those folks are now at a serious decision point: resubscribe to maintenance (if possible), rewrite the old pieces, stabilize at V5R4, or move the affected features to another platform. None of these are particularly appealing options, but that's the crossroads we're at. However, as I've said, this is not a huge issue for most shops. Again, unless you fit #6 on the list above, you are in good shape.
Other Issues
Honestly, I think there has been way too much hype on the program conversion issue. Unless you fit the profile above, it's a complete non-issue. Other compatibility issues exist in V6R1 that might bite you hard.
JVM Defaults to Not Adopting Authority
This won't affect those people who have switched from the "classic" JVM to the new IBM Technology JVM, but if you're still using the classic version and you have created your Java programs to adopt authority, then these will stop working with V6R1. The only way to get them to work is to install a special PRPQ, and IBM highly discourages even that, because the PRPQ will be withdrawn in "a future release." The point is that an application that used to work in V5R4 may now start getting authority problems in V6R1.
SQL Call Level Interface (CLI) API
If you use the SQL CLI APIs, you may be querying the returned data to determine the type for a column. IBM has changed the values for two types, bringing them in line with the standard ODBC/CLI constants: binary fields changed from 96 to -2, while varying binary types changed from 97 to -3. If your code tests for the old values, you have to change it!
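The CLI itself is a C API, but the fix is just a two-entry mapping; here is a sketch in Python of the kind of normalization a caller might add (the helper name is mine, not IBM's):

```python
# Map the pre-V6R1 DB2 for i CLI type codes for binary columns to the
# values returned on V6R1 and later (which match the standard ODBC/CLI
# SQL_BINARY and SQL_VARBINARY constants).
OLD_TO_NEW = {
    96: -2,  # BINARY: 96 before V6R1, -2 afterward
    97: -3,  # VARBINARY: 97 before V6R1, -3 afterward
}

def normalize_cli_type(code):
    """Return the V6R1 type code for a column, whether old or new."""
    return OLD_TO_NEW.get(code, code)
```

Centralizing the check in one helper like this means the rest of the code never has to care which release produced the value.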
FTP Client Defaults to Extended Passive Mode
This is a good thing if your FTP server supports it; EPSV mode can help resolve some communications issues, especially when dealing with firewalls and SSL. But if your FTP server doesn't understand EPSV mode, suddenly you may start seeing errors on FTP scripts that used to work just fine.
CLOSQLCSR Defaults to *ENDMOD
This change probably won't affect the majority of users, and even for those it does affect, it is easy to fix by simply specifying the appropriate SQL option. However, the fact that the cursor is closing at the end of the module could cause some hard-to-diagnose errors or even performance problems.
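If the new default does bite you, the fix is simply to state the option you want at compile time rather than relying on the default; for instance (object and source names are placeholders):

```
CRTSQLRPGI OBJ(MYLIB/MYPGM) SRCFILE(MYLIB/QRPGLESRC) CLOSQLCSR(*ENDACTGRP)
```

Specifying the value explicitly also documents the intended cursor lifetime for the next programmer.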
New Syntax for JOIN with Using Clause
Because of a change that makes SQL syntax more standard, previously working code will now cause an error. Before, you would specify the following:
select t1.a2 from t1 join t2 using (a2,a3)
However, since a2 is in the using clause, it must be the same in both files, so you really don't need to qualify it. And in fact, in V6R1 you cannot qualify it. Instead, you must code the statement this way:
select a2 from t1 join t2 using (a2,a3)
Any code using the previous syntax must be modified.
CCSID on Java Runtime.exec()
This could really burn you. i5/OS is an EBCDIC environment, and Java is a Unicode environment, and PASE is UNIX and thus essentially an ASCII environment. Because of this, running Java programs in the PASE environment could require special consideration of CCSID issues. Sometimes, you had to write code to convert the stream files; other times, you could just pass the stream to another program. But no matter what, you had to be careful and consistent. With the new release, IBM has made it easier for new code because the child process will output its text using the file.encoding attribute of the JVM. And while this does indeed make it a bit less complex to write new code, any existing code depending on the old behavior must be rewritten. There is a compatibility flag, but that then makes new code use the old complex rules.
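The underlying mismatch is easy to demonstrate even off-platform. Python ships a codec for CCSID 37 (the common US EBCDIC code page), so here is a quick sketch of why a byte stream read with the wrong encoding looks like garbage:

```python
# CCSID 37 is US EBCDIC; "cp037" is Python's name for that code page.
text = "HELLO"
ebcdic = text.encode("cp037")       # bytes as an EBCDIC job would write them
garbled = ebcdic.decode("latin-1")  # misread as ASCII/Latin-1: gibberish
recovered = ebcdic.decode("cp037")  # decoded with the correct CCSID

print(recovered == text)  # True
print(garbled == text)    # False
```

This is exactly the kind of mismatch the old hand-written conversion code guarded against, and exactly what breaks if that code keeps converting streams the new JVM behavior has already converted.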
And on and On...
This is by no means an exhaustive list, either! To get more information, you should get the complete list from IBM. The selection above is simply meant to point out that V6R1, like any new release, has a wide variety of issues that need to be reviewed and addressed, and that in that context, the fact that you might need to recompile your programs is really not that big of a deal.
Back to ANZOBJCVN
OK, so how big of a deal is it? Well, IBM provides a command that helps you answer that question. The command, ANZOBJCVN, is available via PTF. On my V5R4 machine, the PTF number is SI28415. On a machine with Internet connectivity, installing the new command is a snap. First, I ran SNDPTFORD, in my case for PTF number SI28415. In my joblog I saw the following messages:
PTF 5722SS1-SI28415 V5R4M0 received and stored in library QGPL.
Cover letter has been copied to file QAPZCOVER member QSI28415.
PTF 5722SS1-SI28410 V5R4M0 received and stored in library QGPL.
Cover letter has been copied to file QAPZCOVER member QSI28410.
PTF 5722SS1-SI26706 V5R4M0 received and stored in library QGPL.
Cover letter has been copied to file QAPZCOVER member QSI26706.
PTF 5722SS1-SI25502 V5R4M0 received and stored in library QGPL.
Then I did a LODPTF *SERVICE, followed by an APYPTF. This added a command ANZOBJCVN into QSYS, which I was then able to run. The command has several options, the two primary ones being OPTION(*COLLECT) and OPTION(*REPORT). The first option goes out and looks at the objects on your machine, while the second prints a report based on what has been collected.
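On a connected machine, the whole sequence described above boils down to a handful of commands (the PTF number is the V5R4 one mentioned earlier; check for the number matching your release):

```
SNDPTFORD PTFID((SI28415))                            /* order and receive   */
LODPTF LICPGM(5722SS1) DEV(*SERVICE) SELECT(SI28415)  /* load from *SERVICE  */
APYPTF LICPGM(5722SS1) SELECT(SI28415)                /* apply the PTF       */
ANZOBJCVN OPTION(*COLLECT)                            /* scan your objects   */
ANZOBJCVN OPTION(*REPORT)                             /* report the findings */
```

Run the *COLLECT step first; the *REPORT step only summarizes what a prior collection gathered.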
I won't go into a long description of how to use the tool; it's all spelled out in detail in the IBM Informational APAR II14306. However, I can tell you something I found very interesting.
I ran the tool on my PSC/400 product, a product that I upgrade on a regular basis and that has been under constant development over the past several years. It took about 10 minutes to download and install the PTF and another 10 minutes to run the program on my libraries. And I found that several of the objects in PSC/400 failed!
This was rather amazing to me. I had fully expected zero errors, since I regularly recompiled and did nothing out of the ordinary. And yet, there they were. And so I went in and looked at the programs in question and I found the issue.
It seems that I have a utility routine that I have used for sending error messages for about a decade, and it hasn't changed in nearly five years. In fact, the routine has become so ubiquitous in all of my code that I moved it into a separate utility library; that library contains cross-product code that is used by not only my commercial products, PSC/400 and CPYSPLFPDF, but also my internal utilities and even code I write for clients. And while I regularly recompile my PSC/400 code, I haven't recompiled that one utility routine since September of 2003, with a target release of V4R5.
And so, since I regularly remove observability for my programs, any program that had that module statically bound would and did fail. I don't want to debate the pros and cons of having utility modules or of static binding; I just thought this would be a good real-world example of how even the best-intentioned system could cause problems with the new V6R1 requirements.
It was quite easy to fix my programs. A quick recompile and rebind and everything was good. Of course, had I not had the source, I'd have been in a pickle. So I guess the moral of this particular story is that it's a good idea to get the tool as soon as possible and run it, just to see what's lurking. But also, please don't get blinded by this one small issue and forget to check all the other things in the memo to users. You might be surprised by what you find.
MC Press Online