OS/400 does many great things behind the scenes to make our jobs as information professionals easier. One of those things is making query processes run faster. The OS/400 query optimizer selects the most efficient technique for retrieving the data requested by third-party query products and more familiar IBM products, such as Query/400, OPNQRYF, and SQL/400.
An SQL statement such as SELECT specifies only what data the user wants. The optimizer determines how to retrieve that data; in other words, it chooses the access path to the data. The query optimizer uses cost estimation, access plan validation, join optimization, and grouping optimization to perform this complex task. All of these are covered in the IBM manual DB2 for OS/400 SQL Programming. In this article, I will briefly cover cost estimation as it relates to the query optimizer.
Cost estimation is used in all queries or SQL statements. By performing calculations to estimate the implementation cost, the optimizer can predict the best access method. The total cost of an access method is the sum of the following:
o Start-up costs
o Costs associated with the optimize parameter (*FIRSTIO, *ALLIO, or *MINWAIT)
o Costs of any access paths created, the expected number of page faults to read the records, and the expected number of records to process
Each of these costs is covered in more detail in the IBM manual OS/400 DB2/400 Database Programming.
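You can see the optimize parameter at work on the Open Query File (OPNQRYF) command. Here is a minimal sketch, assuming a hypothetical customer master file CUSMST with a customer number field CUSNUM:

    OPNQRYF    FILE((MYLIB/CUSMST)) +
               QRYSLT('CUSNUM *EQ 12345') +
               OPTIMIZE(*FIRSTIO) /* Favor returning the first records quickly */

*FIRSTIO tells the optimizer to minimize the time needed to return the first buffer of records, *ALLIO optimizes for reading the entire result, and *MINWAIT minimizes delays when reading records.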
The first model the optimizer uses is reading rows directly from the file. This method is most likely used when a query is performed on a file without any sequencing or record selection. For example, if you run a query on a file to display all of its data, the optimizer will bypass any existing access paths and read the data directly from the physical file.
The second model used is reading rows through an existing access path. For example, say you have a customer master file with a logical built over it keyed by the customer number. In your query or SQL, you are either selecting by the customer number only or sorting the data returned by customer number only. The existing access path created by the logical may be used.
The third model used is creating a temporary access path directly from the physical file. Using the customer file mentioned above, suppose you specify selection or sorting by customer name. Assuming there is no logical with the customer name as the leftmost key, a new access path may be created.
The fourth model used is creating a temporary access path from an existing access path. Again, using the same customer file with a logical built over the customer number, let's say you want to select records by customer number and customer ZIP code. The access path already in place by the customer number may be used as a base to create the new access path. This dramatically reduces the time needed to create the access path compared to creating an entirely new one.
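To make these four models concrete, here is a hedged sketch in SQL. The file CUSMST and its fields CUSNUM, CUSNAM, and CUSZIP are hypothetical stand-ins for the customer master file described above, with a logical keyed by CUSNUM:

    -- Model 1: no selection or sequencing; rows are read directly from the file
    SELECT * FROM CUSMST

    -- Model 2: sequencing by customer number; the existing keyed logical may be used
    SELECT * FROM CUSMST ORDER BY CUSNUM

    -- Model 3: no leftmost key over customer name; a temporary access path is built
    SELECT * FROM CUSMST ORDER BY CUSNAM

    -- Model 4: selection by customer number and ZIP code; the existing CUSNUM
    -- access path may serve as the base for the new temporary access path
    SELECT * FROM CUSMST WHERE CUSNUM > 5000 AND CUSZIP = '56001'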
We've all experienced it at one time or another. You're busy working, and, all of a sudden, the machine seems to come to a screeching halt. After using the Work with Active Jobs (WRKACTJOB) command, you discover a user running a query and consuming most of the system resources. You decide to wait it out, but after a while, nothing has changed. You call the user responsible only to discover that the query is not doing what he intended or that it could have easily been run later in the day when the system isn't needed for everyday processes.
Ending a long-running query abnormally is an option, but it's also a waste of system resources. On top of that, some query operations cannot be interrupted by the End Request option because they have created temporary keyed access paths or are using a column function without a GROUP BY clause.
Fortunately, there is a solution. The DB2 for OS/400 Predictive Query Governor can stop a query in its tracks if the query's estimated runtime is excessive. The governor does not act while a query is running; instead, it acts before a query is run. The governor does this by comparing the estimated time to run the query against a predefined time limit.
The time limit is specified in seconds with the Query time limit (QRYTIMLMT) parameter on the Change Query Attributes (CHGQRYA) command. When a user runs a query, the optimizer estimates its runtime. If the estimated time is less than or equal to the time limit, the query runs. If the estimated time exceeds the predefined time limit, a message is sent to the user stating that the query request will exceed the user-defined limit. The user then has the choice to end the query before it is run or to continue and run it anyway.
The CHGQRYA command affects the job specified in the JOB parameter. It will stay in effect until the job or session is ended or the time limit is changed again. This provides great flexibility as this limit can be changed depending on a number of factors, such as time of day or availability of system resources.
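For example, to cap the current job at an estimated ten minutes per query (a sketch; the right limit is whatever suits your shop):

    CHGQRYA    JOB(*) QRYTIMLMT(600)

JOB(*) applies the limit to the current job, and QRYTIMLMT(*NOMAX) removes the limit again.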
It is important to keep in mind that the time estimate produced by the optimizer is just that: an estimate. The actual runtime could differ from the estimated time, but in most cases, the two values should be fairly close.
In most shops, producing reports is a major part of the software package: selecting and formatting accumulated data into a concise, readable format that suits the user's needs. Whether the software is "canned" or "homegrown," Query and SQL are usually underused for creating reports, for one reason or another. One factor may be the unknown impact on system resources. Relying on the optimizer or governor to deal with these unknowns is only part of the picture. Understanding how you, as an IT professional, can help Query or SQL produce better response times is another.
Query processing involves three steps: 1) validating the query and calculating the best possible method for retrieving the data requested; 2) performing the input and output operations for this data; and 3) presenting the data in the requested format. All three of these stages can be enhanced if you know a few of the basics of optimization.
A keyed sequence access path tells the system in what order to retrieve records from the database file. Access paths are mainly created using the Create Logical File (CRTLF) command. An access path is also generated when you run a query and the optimizer decides that this is the most efficient way to retrieve the records. The access path created by the query is only temporary and is not saved after the query is run. If you have a query that is run often, or many queries that use the same access path, creating a permanent access path with the CRTLF command might be a good idea. Without access paths, a query must read every record to see whether it fits the criteria specified by the user.
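Creating that permanent access path is just a matter of compiling a keyed logical file. A minimal sketch, assuming a hypothetical DDS source member CUSMSTL1 that keys the customer master by customer number:

    CRTLF      FILE(MYLIB/CUSMSTL1) +
               SRCFILE(MYLIB/QDDSSRC) +
               SRCMBR(CUSMSTL1) /* DDS keys the logical by customer number */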
Another factor one must weigh is the burden of creating permanent access paths. When a permanent access path is created, any change to a field in a database file results in updating all access paths where the updated field is a key. Obviously, the more access paths you have, the greater the performance hit when updating a file. Restore times are also affected by the number of access paths, as they must be re-created during a restore.
Using existing access paths is one way to help query performance. Consider the file in Figure 1. Four access paths are built over the file. Suppose we are asked to create a query that sorts the file by fields A and B, and we specify this in our query. When we run this query, only the access paths created by FILE2LF and FILE4LF are considered during optimization, because their leftmost keys match the sequence we have requested. It would be inefficient for the query to read records from the access path created by FILE1LF or FILE3LF and sort again by the additional keys.
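In SQL terms, the request looks like the statement below; FILE1 and fields A and B are taken from the Figure 1 discussion:

    -- Only FILE2LF and FILE4LF, whose leftmost keys are A and B, are candidates
    SELECT * FROM FILE1 ORDER BY A, B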
Sorting a query using existing access paths is one way to speed up the process. Another way is selecting records using fields that match the key fields of an existing access path. This works in the same way that we might use the Read Equal (READE) operation in RPG. Instead of reading the entire file and testing a value against the corresponding field from the file, we would perform a Set Lower Limits (SETLL) operation using the key value and then use READE with the key value to select the records we want. Using more key fields as selection criteria will improve performance even more, as query will have fewer records to sort through.
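A hedged example of keyed selection, again using the hypothetical CUSMST file:

    -- Selection on the key field lets the access path position directly to the
    -- matching records, much like SETLL followed by READE in RPG
    SELECT * FROM CUSMST WHERE CUSNUM = 12345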
Sometimes we need to join files in a query or SQL. An example might be a listing of customer orders. For this listing, the user will most likely ask us to provide the customer name along with, or in place of, the customer number. Good database design tells us not to include the customer name in the order header file, so to get the name, we must join the order header file with the customer master file.
It is a good idea to have an access path built over the secondary file used in the query. For example, if we join the two files mentioned above by customer number and use the customer master file as the secondary file, having an existing access path built over the customer number in the customer master file would be ideal. As with sorting or selecting, including more join selection tests will significantly reduce the amount of I/O time required to run the query.
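A sketch of such a join, with hypothetical order header (ORDHDR) and customer master (CUSMST) files; an access path over CUSMST keyed by CUSNUM would serve the join test:

    SELECT OH.ORDNUM, CM.CUSNAM
      FROM ORDHDR OH, CUSMST CM
     WHERE OH.CUSNUM = CM.CUSNUM  -- join test served by the keyed access path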
Another consideration when joining files is the order of the files. Whenever possible, try to use the smaller of the files as the primary file. In most cases, the file with the fewest records should be the primary file. If you are going to select only a few records from a large file, however, that large file could be used as the primary file, since selection from the primary file will most likely take place first.
If sequencing of the data must be specified on a query using joined files, record selection and join selection tests become increasingly important. The fewer records that query selects, the fewer records that query must place in a temporary file and sort.
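Extending the hypothetical join above, adding a record selection test trims the set of records that must be written to a temporary file and sorted:

    SELECT OH.ORDNUM, CM.CUSNAM
      FROM ORDHDR OH, CUSMST CM
     WHERE OH.CUSNUM = CM.CUSNUM
       AND OH.ORDDAT >= 19980101  -- hypothetical date field and selection test
     ORDER BY CM.CUSNAM           -- fewer selected records means a smaller sort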
As you can see, the AS/400 helps us help ourselves. Query and (especially) SQL are being used more and more in everyday applications. With a little thought and an understanding of how the AS/400 handles different situations, these powerful tools can help cut development time. Used correctly, Query for simple reports or SQL for record selection will give back some of the valuable time we all seem to be losing each day.
Bradley V. Stone is a programmer/analyst for Taylor Corporation in Mankato, Minnesota. He can be reached at
DB2 for OS/400 SQL Programming (SC41-4611, CD-ROM QBJAQ801)
OS/400 DB2/400 Database Programming (SC41-3701, CD-ROM QBKAUC00)