Raise or lower the priority of a workload as needed to meet user-specified goals.
This article is an excerpt from chapter 6, “Performance and Tuning,” of DB2 11 for z/OS Database Administration: Certification Study Guide (Exam 312).
Using z/OS Workload Manager (WLM), you can set z/OS performance options for DB2. With WLM, you define performance goals and assign a business importance to each goal. The goals for work are defined in business terms, and the system decides how much resource, such as CPU and storage, to give to the work to meet its goal.
WLM controls the dispatching priority based on the user-supplied goals. It raises or lowers the priority as needed to meet the specified goal. Thus, you need not fine-tune the exact priorities of every piece of work in the system and can focus instead on business objectives.
The three kinds of goals are as follows:
- Response time: How quickly the work is to be processed
- Execution velocity: How fast the work should be run when ready, without being delayed for processor, storage, I/O access, and queue delay
- Discretionary: A category for low-priority work with no performance goals
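WLM computes execution velocity from its sampling data as velocity = (using samples ÷ (using samples + delay samples)) × 100. For example, work that was sampled using the processor 60 times and delayed 40 times achieved a velocity of 60, so a velocity goal of 60 means the work should be delayed no more than about 40 percent of the time it is ready to run.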
Response times are appropriate goals for user applications, such as DB2 QMF users running under TSO address space goals or CICS users running under CICS workload goals. Response-time goals can also be set for distributed users.
For the DB2 address spaces themselves, velocity goals are more appropriate. Only a small amount of the work done in DB2 is counted toward this velocity goal; most of the work performed in DB2 is attributed to the goals of the users who requested it.
Workload Manager
With z/OS Workload Manager, work is assigned to a service class. For each service class, you define both the desired performance goal and the business importance of meeting that goal.
You can define transactional work in a z/OS system to run with either transaction response time goals or velocity goals. Generally, transaction response time goals are recommended because they provide more management control. However, velocity goals work well in many installations for workloads such as CICS, IMS, and WebSphere. The WLM service definition must be built with knowledge of all the workloads that run in the LPAR and in the Parallel Sysplex.
You might need to adjust the recommendations for DB2 workloads so they are correctly incorporated into the overall WLM policy design. To account for DB2 address spaces in a WLM service definition, apply the following recommendations:
- Place the IRLM address space (IRLMPROC) in the SYSSTC service class. IRLM manages locks and latches and must run at a high dispatching priority so that it experiences little or no CPU delay.
- Place the following address spaces, which are critical for efficient system operation, in the same user-defined service class. Define this class with aggressive performance goals and a high importance:
  - ssnmMSTR contains the DB2 system monitor task and requires an aggressive WLM goal so it can monitor CPU stalls and virtual storage constraints.
  - ssnmDBM1 manages DB2 threads and is used for system services such as page set opening. In data sharing environments, this address space is critical for global operations such as P-lock negotiations, notifications, and global commands.
  - ssnmDIST and WLM-managed stored procedure address spaces run only the DB2 service tasks and work for DB2 that is not attributable to a single user. These address spaces typically place a minimal CPU load on the system. However, they do require minimal CPU delay to ensure good system-wide performance and to avoid queuing of threads. DDF workloads, with their higher CPU demands, are controlled by the WLM service class definitions for the DDF enclave workloads. Similarly, the higher CPU demands for processing stored procedures are controlled by the WLM service class definitions for the workloads that call the stored procedures.
The velocity goal of these address spaces must be high enough that new work and new connections can be started and placed into their own goal or the goal of the caller.
You should specify a high importance for the DB2 workloads, typically importance 1. In every case, define the service class for the DB2 address spaces with an understanding of the overall WLM service definition in use. Use this information as a guide in setting the DB2 portion of the overall WLM policy. To make certain that the DB2 environment receives the CPU service that it requires for a well-running system, take the following actions:
- Ensure that the communications address spaces, such as TCP/IP and VTAM, are defined in SYSSTC.
- If necessary to meet performance objectives, consider designating certain service classes as CPU critical. Doing so provides long-term CPU protection of critical work. Designating a service class as CPU critical ensures that less important work generally has a lower dispatch priority than work that is marked CPU critical. This protection can be valuable for work that is extremely CPU-sensitive. Generally, having an appropriately set goal for the DB2 service class is all that is required to ensure a well-running system. However, CPU critical protection is available if needed.
- Set the velocity goal for the DB2 address spaces to be among the highest for the current LPAR. The intent is to define the DB2 address spaces so that they experience little CPU delay. These address spaces must be defined with a WLM velocity goal.
- Typically, DBAs place these address spaces together into the same user-defined service class. This user-defined service class need not be dedicated to these address spaces. Because they must be defined with a velocity goal, it is important to set an appropriately high goal for this workload. Velocity goals depend on the overall LPAR logical CP configuration and on the amount of processor capacity allocated to the LPAR. LPARs that have fewer logical CPs can attain only lower velocities, whereas LPARs that have more logical CPs can attain higher velocities.
- For LPARs that are constrained by a lack of CPU resources and have lower-priority work that uses DB2, use blocked workload support. This support is most important in an environment where work that uses DB2 resources runs constrained at low priorities. The low-priority constrained work might hold DB2 resources required by other, more important, workloads in the system or Sysplex.
- For environments where low dispatch priority DB2 workloads run in periods of CPU constraint, it might prove beneficial to help blocked workloads more frequently. Changing the blocked interval (BLWLINTHD in IEAOPTxx) from its default of 20 seconds to a value of 5 to 6 seconds might provide better overall system throughput at high utilizations. However, long-term reliance on blocked workload support to accommodate CPU-saturated systems is not recommended.
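As a sketch only, the relevant IEAOPTxx statements look like the following; BLWLINTHD and BLWLTRPCT are the documented keywords, but the values shown are examples to be validated for your environment:
BLWLINTHD=5
BLWLTRPCT=5
BLWLINTHD sets the number of seconds work can be blocked before WLM considers promoting it, and BLWLTRPCT caps, in tenths of a percent, how much of the LPAR's CPU capacity can be spent on such promotions. After the member is updated, it can be activated with the SET OPT=xx console command.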
Workloads that are misclassified in WLM might receive insufficient CPU. Accounting reports often show this as NOT ACCOUNTED FOR time. (Accounting reports are described further in this chapter.)
Storage and Performance
You can use several techniques to minimize DB2's use of storage and thereby improve performance for all users of memory. The next sections explore those methods.
Minimizing Storage Usage
The amount of real storage must often be close to the amount of virtual storage. Real storage refers to the processor storage where program instructions reside while they are executing. It also refers to where data is held—for example, data in DB2 buffer pools that has not been paged out to auxiliary storage, the EDM pools, and the sort pool. To be used, data must either reside in or be moved into processor storage or processor special registers. The maximum amount of real storage that one DB2 subsystem can use is the real storage of the processor, although other limitations might be encountered first.
A large capacity for buffers in real storage, combined with write-avoidance and sequential-access techniques, allows applications to avoid a substantial amount of read and write I/O by combining single accesses into sequential access, so that the disk devices are used more effectively.
Virtual storage is auxiliary storage space that can be regarded as addressable storage because virtual addresses are mapped to real addresses. Proper tuning of the buffer pools, EDM pools, RID pools, and sort pools can improve the response time and throughput for applications and provide optimum resource utilization. Using data compression can also improve buffer pool hit ratios and reduce table space I/O rates.
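As a simple illustration (the database and table space names here are placeholders), compression is turned on at the table space level and takes effect for existing data when the table space is reorganized:
ALTER TABLESPACE MYDB.MYTS COMPRESS YES;
A subsequent REORG TABLESPACE builds the compression dictionary and compresses the rows; because more rows then fit on each page, more rows fit in the same number of buffers, which is how compression improves the buffer pool hit ratio and reduces I/O.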
To minimize the amount of storage that DB2 uses, follow these recommendations:
- Use less buffer pool storage. Using fewer and smaller buffer pools reduces the amount of real storage space DB2 requires. Buffer pool size can also affect the number of I/O operations performed; the smaller the buffer pool, the more I/O operations needed. Also, some SQL operations, such as joins, can create result rows that do not fit on a 4 KB page and therefore need a buffer pool with a larger page size for their work files.
- Commit frequently to minimize the storage required for locks.
- Improve the performance for sorting. The highest performance sort is the one that is avoided. However, because you cannot avoid all sorting, make sorting as efficient as possible.
- Provide for pooled threads. Distributed threads that are allowed to be pooled use less storage than inactive database access threads; on a per-connection basis, the savings are even greater.
- Ensure that the ECSA size is adequate. The extended common service area (ECSA) is a system area that DB2 shares with other programs. A shortage of ECSA at the system level leads to use of the common service area (CSA) below the 16 MB line.
- Ensure that EDM pool space is being used efficiently.
- Use the long-term page fix option for I/O-intensive buffer pools. Specify PGFIX(YES) for buffer pools with a high I/O rate, that is, a high number of pages read or written (an example follows this list).
- Specify an appropriate value for the MAXKEEPD subsystem parameter. A larger value might improve the performance of applications that are bound with the KEEPDYNAMIC(YES) option, but it also keeps SQL statement storage allocated when it is not in use. On systems with real-storage constraints, minimizing the value of MAXKEEPD might reduce storage use.
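For example, long-term page fixing is enabled with the -ALTER BUFFERPOOL command (BP2 is simply an illustrative buffer pool name):
-ALTER BUFFERPOOL(BP2) PGFIX(YES)
The attribute takes effect the next time the buffer pool is allocated, and it removes the cost of fixing and freeing pages around every I/O. Because the fixed pages permanently occupy real storage, reserve PGFIX(YES) for pools with high I/O rates on systems that have sufficient real storage.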
Buffer Pools
Buffer pools are areas of virtual storage that temporarily store pages of table spaces or indexes. When a program accesses a row of a table, DB2 places the page containing that row in a buffer. When a program changes a row of a table, DB2 must write the data in the buffer back to disk (eventually), normally either at a DB2 system checkpoint or at a write threshold. Each write threshold is either a vertical threshold at the page set level or a horizontal threshold at the buffer pool level.
The way buffer pools work is fairly straightforward by design, but tuning these simple operations can make a big difference to application performance. The data manager issues getpage requests to the buffer manager, which might be able to satisfy the request from the buffer pool instead of having to retrieve the page from disk. We often trade CPU for I/O to manage our buffer pools efficiently. Buffer pools are defined at the subsystem level, but individual buffer pools should be designed and used at the granularity of objects, and sometimes of individual applications.
DB2 buffer pool management by design provides the ability to alter and display buffer pool information dynamically, without requiring a recycle of the DB2 subsystem. This feature improves availability by allowing you to create new buffer pools when necessary as well as dynamically change and delete buffer pools.
Although you set the initial buffer pool definitions during installation or migration, they are often hard to configure at that time because the access patterns of the applications that will use the objects are usually not yet known. Regardless of the installation settings, however, you can use the -ALTER BUFFERPOOL command at any time after installation to add or delete buffer pools, resize buffer pools, or change thresholds, as the example that follows shows.
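As an illustration, and with a hypothetical buffer pool name and example values, one command can resize a pool and adjust several of its thresholds while DB2 stays up:
-ALTER BUFFERPOOL(BP1) VPSIZE(50000) VPSEQT(80) DWQT(30) VDWQT(5,0)
VPSIZE sets the number of buffers, VPSEQT the sequential steal threshold, DWQT the horizontal (buffer pool level) deferred write threshold, and VDWQT the vertical (page set level) deferred write threshold.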
-DISPLAY BUFFERPOOL Command
In several cases, you can tune buffer pools effectively by using the -DISPLAY BUFFERPOOL command. When a tool is not available for tuning, the following steps help you tune buffer pools:
- Use the command, and view the statistics.
- Make changes (e.g., to thresholds, sizes, or object placement).
- Use the command again during processing, and view the statistics.
- Compare the statistics to measure the effect of the changes.
The output from the -DISPLAY BUFFERPOOL command contains valuable information, such as prefetch activity (sequential, list, and dynamic requests), pages read, prefetch I/O, prefetch disablement (no buffer, no engine), and I/O and getpage counts. The incremental detail display shifts the time frame whenever a new display is performed.
The DISPLAY command can return several different types of detail at varying degrees of granularity. Here are two examples that are useful for tuning applications with possible I/O problems.
The following DISPLAY command will produce a report showing all active buffer pools in the subsystem with getpage and I/O counts:
-DISPLAY BUFFERPOOL(ACTIVE) DETAIL(INTERVAL)
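One common way to read this report (the numbers below are made up for illustration) is to compute a hit ratio from the getpage and page-read counters: hit ratio = (getpages - pages read from disk) ÷ getpages × 100. For example, 100,000 getpages with 2,000 synchronous reads and 8,000 pages read by prefetch yield (100,000 - 10,000) ÷ 100,000 = 90 percent. A falling hit ratio on a heavily used pool is usually the signal to increase VPSIZE or to move objects to a different pool.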
The next example's output shows the details, including I/O, for the active objects in each active buffer pool:
-DISPLAY BUFFERPOOL(ACTIVE) LSTATS(ACTIVE) DETAIL