IBM continues to add services to DB2, many of which provide programmatic access to system information.
A couple of years ago, I wrote an article in which I introduced some DB2 services. This was early in the development cycle for these wonderful additions to the IBM i toolkit. In the ensuing years, that particular pouch of tools has grown into an entire rolling tool chest. More and more collections of important system data have been surfaced by DB2 services, and today I'll present a couple that I find extremely useful.
Why Are These Services So Important?
Before we delve into the specifics, I think it's important to understand why the very concept of a DB2 service is so important. Why go through the trouble of learning a new technique? Some (although not all!) of these services mirror data that can be materialized using CL commands. For example, I'll be showing you a DB2 service that allows direct access to journal entries, a task you can also accomplish using the DSPJRN command to an output file. However, by placing this same access within a service, now you can embed the access directly in an RPG program without having to go through the traditional exercise of creating a work file and then reading through it. Yes, you'll need to use embedded SQL syntax, but I hope you're already using that today; if not, some of these services might be enough to give you a reason to learn it!
Add to this ease of use the fact that many of these services don't even have a direct CL counterpart. The QHST log service I'll present in a moment can be emulated only by executing DSPLOG QHST to print and then scanning through the resulting spooled file. This is definitely not a future-proof solution! So let's take a look at some of the newest services.
Let's Get Historical!
One of my regular tasks involves checking the history log, QHST. I can't tell you how many times I've needed to spin through the history log in an attempt to track down certain events. To do that, I have to execute DSPLOG QHST, try to limit the scope using parameters to get near where I want, and then page back and forth. It gets more difficult when I'm looking for multiple events. Let's say, for example, I want to find the start and end dates and times for jobs run by a specific user (a typical example would be an external database access). Since I can't use the DSPLOG parameters to filter by user, I instead have to use the technique I alluded to earlier: dump the log to a spooled file, copy the spooled file to a database file, and then finally scan through that database file. It works, but it's neither elegant nor future-proof, since the layout of the printed report may change in the next release.
Note: DSPLOG's first parameter is the log, which must be entered as QHST. Did IBM intend to have more? Enquiring minds want to know!
Anyway, let's take a look at a way to use the DB2 service associated with the history log, which has the appropriate name of history_log_info. With history_log_info, I can get a list of messages from the history log and manipulate them as needed. In this case, I want to get a list of all job start (CPF1124) and job completion (CPF1164) messages for a specific user. That user is QTFTP, the default user profile for Trivial File Transfer Protocol (TFTP) requests.
with t1 as
(select * from
table(history_log_info('2018-06-01-00.00.00.000000')) hli)
, t2 as (select * from t1 where MESSAGE_ID = 'CPF1124')
, t3 as (select * from t1 where MESSAGE_ID = 'CPF1164')
select FROM_JOB, t2.MESSAGE_TIMESTAMP, t3.MESSAGE_TIMESTAMP,
int(t3.MESSAGE_TIMESTAMP - t2.MESSAGE_TIMESTAMP) Duration
from t2 join t3 using (FROM_JOB) where t2.FROM_USER = 'QTFTP'
The only really tricky part is the initial subselect, in which we access the history_log_info table function. This table function is designed to return a table, and by putting that within the TABLE clause and performing a SELECT on it, we now have a subselect like any other subselect. The history_log_info function takes parameters in various ways (which we'll see in more detail in the second example), but in its simplest form, the first parameter is just a timestamp that identifies the earliest entry to process. So in simplest terms, the T1 common table expression (CTE) consists of all entries since the beginning of June 1.
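Incidentally, history_log_info can also bracket the scan on both ends. Here's a minimal sketch, assuming your release supports the optional ending-timestamp second parameter (the FETCH FIRST clause is there only to keep the test output small):
-- History log entries for June 1 only, limited to ten rows for a quick test
select MESSAGE_ID, MESSAGE_TIMESTAMP, FROM_USER, FROM_JOB
from table(history_log_info('2018-06-01-00.00.00.000000',
'2018-06-02-00.00.00.000000')) hli
fetch first 10 rows only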
Other than the table access, the rest of the SQL is pretty straightforward. We create two additional CTEs: T2 for the job start messages and T3 for the job completion messages. Finally, we join those together showing job name, start, end, and duration for every job for user QTFTP. The result looks like this:
FROM_JOB START_TIME END_TIME DURATION
169703/QTFTP/QTTFT00017 2018-06-02-23.37.14.417034 2018-06-03-01.37.14.611797 20,000
169704/QTFTP/QTTFT00004 2018-06-02-23.37.14.421874 2018-06-03-02.07.14.679087 23,000
The DURATION column is simply the raw timestamp difference, which DB2 returns as a decimal number in yyyymmddhhmmss form; since the date portions here cancel out, 20,000 just means 02:00:00. So the first job lasted for two hours, the second for two hours and 30 minutes. Nothing to it!
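If you'd rather not decode that packed duration, you can have SQL estimate the elapsed time in a single unit instead. This is just a sketch of an alternate final SELECT for the query above, swapping in the standard TIMESTAMPDIFF function (the interval code 4 requests minutes):
-- Same join, but report the duration as minutes rather than a yyyymmddhhmmss value
select FROM_JOB, t2.MESSAGE_TIMESTAMP, t3.MESSAGE_TIMESTAMP,
timestampdiff(4, char(t3.MESSAGE_TIMESTAMP - t2.MESSAGE_TIMESTAMP)) Minutes
from t2 join t3 using (FROM_JOB) where t2.FROM_USER = 'QTFTP'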
Journal Mining
I also do a lot of journal mining. Journal mining involves locating and analyzing entries in a journal for forensic examination. This might be done to help understand a business process better or to identify a program that is updating the database in an unexpected manner. Either way, it involves looking at specific data. One of my most common techniques is to send marker entries to the journal so that I can later retrieve the entries that fall between them. For example, I'll start by executing a command to send an entry to the journal:
SNDJRNE JRN(JRNLIB/MYJRN) ENTDTA('TRANSACTION BOUNDARY - START 01')
This puts an entry into the journal. I then execute whichever function it is I'm analyzing. Then I send another entry:
SNDJRNE JRN(JRNLIB/MYJRN) ENTDTA('TRANSACTION BOUNDARY - END 01')
Now I just want to review all the entries between those two. In order to do that, I need to get the beginning and ending sequence numbers, which I do with this query:
select ENTRY_TIMESTAMP, SEQUENCE_NUMBER, char(ENTRY_DATA, 100) Data
from table(display_journal('JRNLIB','MYJRN',
JOURNAL_ENTRY_TYPES => '00',
STARTING_TIMESTAMP => '2018-06-02-09.00.00.000000') ) a
Note that, once again, we have a table function (in this case, it's display_journal), which we put inside a TABLE clause. Here we don't need a CTE because we can apply all the selection criteria within the call to display_journal. I used a specific syntax in which I combined positional parameters and keyword parameters. The first two parameters are the journal library and name, so I was able to supply those without keywords. However, a very nifty syntactical feature of table functions is that you can specify parameters by keyword. In this case, I added keywords to limit the selection to only my type 00 journal entries and to start at 9:00 a.m. on June 2. The result looks like this:
ENTRY_TIMESTAMP SEQUENCE_NUMBER DATA
2018-06-03-15.29.02.675152 26,059,704 TRANSACTION BOUNDARY - START 01
2018-06-03-15.29.23.544480 26,059,801 TRANSACTION BOUNDARY - END 01
Now, in order to see what happened, I only need to look at the entries between 26,059,704 and 26,059,801. Not surprisingly, I can specify those values on a second call to the table function and further limit the scope with more parameters, perhaps to a single file or a single program.
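A follow-up query might look something like the sketch below. The sequence bounds are simply the two values captured above, and I'm applying them in the WHERE clause; the extra columns shown (JOURNAL_CODE, JOURNAL_ENTRY_TYPE, and OBJECT) are ones the service returns that I typically want when reviewing entries, though you should confirm the column list on your release.
-- Review everything recorded between the two marker entries
select ENTRY_TIMESTAMP, SEQUENCE_NUMBER, JOURNAL_CODE, JOURNAL_ENTRY_TYPE,
OBJECT, char(ENTRY_DATA, 100) Data
from table(display_journal('JRNLIB','MYJRN',
STARTING_TIMESTAMP => '2018-06-02-09.00.00.000000') ) a
where SEQUENCE_NUMBER between 26059704 and 26059801
order by SEQUENCE_NUMBER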
The nice thing about this technique is that I can run this inside a program. I could theoretically run a SELECT statement like this every night to review all the changes to a specific file. I could review the data and notify someone if a database anomaly is identified. As I noted earlier, I could have done this with DSPJRN to an outfile, but this is so much easier, especially when it comes to automating repetitive queries.
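As a rough sketch of that nightly review, the query below asks display_journal for only record-level entries (journal code R) from the last 24 hours and then narrows to a single file by matching on the OBJECT column. The file name MYFILE is just a placeholder, and the LIKE test is a loose match against the qualified object name; your release may also offer object-filter parameters on the service itself, which would be a cleaner choice.
-- Record-level changes touching MYFILE (placeholder name) in the last 24 hours
select ENTRY_TIMESTAMP, SEQUENCE_NUMBER, JOURNAL_ENTRY_TYPE,
OBJECT, char(ENTRY_DATA, 100) Data
from table(display_journal('JRNLIB','MYJRN',
JOURNAL_CODES => 'R',
STARTING_TIMESTAMP => current timestamp - 24 hours) ) a
where OBJECT like '%MYFILE%'
order by SEQUENCE_NUMBER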
So this article just brings to light a couple more DB2 services provided by the developers at IBM. I think the one thing I'd recommend to everyone is to keep an eye out from release to release and look for more of these great services!