
AS/400 JDBC Performance in Overdrive


People often come to me for help with application design. Usually they have a pretty good idea of what they want to do and a vague idea of how they want to do it, and they often have arbitrary performance goals. Getting people to reconsider their performance goals is usually the hardest thing to do. Often it is easier to convince someone that his application should accomplish a different task than it is to convince him he doesn’t need performance levels as demanding as he thinks.

Remember that you can always get the performance level you need over time by building an application that is functional, scalable, and maintainable. Don't be too concerned that you plan to have it rolled out to 10,000 people within five years if it will be used by only 1,000 people by next year. Build the right infrastructure and performance tune later; your requirements are going to change. The application is going to be used in different ways than you can envision up front. And you probably will have replaced or augmented some, or all, of the involved hardware within five years.

I have watched whole projects fail because too much time was spent on performance too early. The application functionality simply never got completed or got so complex that no one could finish it. One of the keys to ensuring optimal performance of your application is to put performance in perspective. There is a time and a place for performance optimization. Developers of the most successful projects recognize that functionality comes first and performance comes second. Take a look at how to achieve some of the most demanding performance goals.

The Right Tools Are Critical

For optimal Java Database Connectivity (JDBC) performance, it is critical that you keep up with current releases and that you apply PTFs as they are made available. IBM is always finding ways to make JDBC perform faster. A good example is the block insert enhancement PTF just completed for V4R5. (There is more detail on this PTF later in the article; it was just wrapping up at press time and should be available by the time you read this.) In internal testing, this PTF made common batch updates perform three times faster while putting considerably less stress on the garbage collector. (Keep in mind that this test compared the new code, in debug views, with the old, fully optimized code.) The benefit of applying PTFs as they become available is that you get performance enhancements without waiting for the next major OS/400 release. I save major releases for new versions of the specifications, support for new APIs, and other stuff like that.

Analyze What Is Being Done

So, once you are staying current and getting down to serious JDBC work, the next thing you need to do is invest some time in understanding the work flows of your application. Nothing is done by magic. Understanding how your application work flows relate to JDBC, and how JDBC requests relate to the underlying database work that has to be accomplished, is an important part of predicting how to lay out functionality for best performance.

The first place you want to focus your attention is on your application. Evaluate what work you need to accomplish. What is an acceptable response time? How much of the work has to be done in the critical path? Can background threads do some of the work? What are you willing to trade off, in terms of cache size, for application performance? What are the common cases, and what are the extreme cases? Are there common pieces of work that can be factored out and reused? You need to evaluate all of these questions and exercise a bit of creativity in your application layout.

The second place you want to focus your attention is on the layout of your JDBC calls. Don't focus here first, because you are only interested in the JDBC calls in your code's "hot spots." There is no point in optimizing a piece of code that is not performance-critical. You can count on many of the common JDBC functions to be very efficient, but there are a couple of common things to look out for.

Simple Fixes

These are some simple performance fixes you can implement. One such fix, when dealing with ResultSets, is to use the version of the getXXX method that takes an integer instead of the version that takes a String. There are two reasons for this. The first is that, under the covers, the database understands only numeric columns. Therefore, the ResultSet.getInt(String colName) method does the exact same work that the ResultSet.getInt(int colIndex) method does, but first it has to resolve the colName into a colIndex value. Obviously, the version taking a String can never perform as well as the version taking an int. The second reason is that this type of processing is often a heavily used piece of code. The statement may be created and executed only once, but there could be hundreds of rows fetched and thousands of data values retrieved. In this case, a little overhead paid many times can really add up.
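As a quick illustration (the table and column names here are hypothetical), the index-based calls in this fetch loop avoid a name-to-index lookup on every row:

// Fetch by column index; no column name resolution per value.
Statement s = c.createStatement();
ResultSet rs = s.executeQuery("SELECT CUSTNO, CUSTNAME FROM CUSTOMERS");
while (rs.next()) {
    int custNo = rs.getInt(1);      // instead of rs.getInt("CUSTNO")
    String name = rs.getString(2);  // instead of rs.getString("CUSTNAME")
}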

Another performance fix is selecting only the specific columns you will use in queries instead of taking the SELECT * approach. The reason for this is that there is a fair amount of hidden overhead in having the JDBC driver deal with unused columns. Even though your application never retrieves those columns, the JDBC driver still has to fetch them from the database. That is wasted data movement and wasted space in your application.
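For example (again with hypothetical names), prefer the first query to the second when the application needs only two columns:

// Moves only the data the application will actually use.
ResultSet rs = s.executeQuery("SELECT CUSTNO, CUSTNAME FROM CUSTOMERS");

// Moves every column in the table, used or not.
ResultSet rs2 = s.executeQuery("SELECT * FROM CUSTOMERS");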

Yet another performance fix is using PreparedStatement objects instead of Statement objects wherever possible. This seems like a simple tip, but it is common to see applications not do this. You’ll improve performance even if you only use the prepared statement twice. Prepared statements are so important that you should structure your programs around using them. There are few applications that can’t be designed to use prepared statements.
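A minimal sketch of the pattern, using a hypothetical CUSTOMERS table: prepare the statement once, then bind new parameter values and re-execute it as many times as needed.

// Prepare once; the SQL is parsed and optimized a single time.
PreparedStatement ps = c.prepareStatement(
    "UPDATE CUSTOMERS SET STATUS = ? WHERE CUSTNO = ?");

// Execute many times, paying only for the parameter binding.
for (int custNo = 1; custNo <= 100; custNo++) {
    ps.setString(1, "ACTIVE");
    ps.setInt(2, custNo);
    ps.executeUpdate();
}
ps.close();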

Caching to the Extreme

When a resource is necessary but expensive to create, the first thing that should pop into your mind is caching, or pooling. If a resource is expensive to create, you want to create as few of them as possible and reuse the ones you have already created. This is the principle behind connection pools. Database connections are one of the most expensive resources to create. If you create a database connection every time you need to do database work, your application will not be successful.

Fortunately, database connections are also about the most reusable objects around. It is fairly easy to build a pool of database connections, to synchronize the adding of connections to the pool and the retrieval of connections from the pool, and to modify your app to use the pool instead of creating connections. I will not cover this in depth, but, if you are looking for a simple sample implementation of a connection pool or for more information on the subject, you can download my Spring COMMON presentation package (www.as400.ibm.com/developer/jdbc/index.html). In that package, I implement a connection pool and demonstrate a time test of how much more efficient it is to implement a pool than to create individual connections.
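To give you the flavor of it, here is a minimal sketch (not the COMMON implementation itself) of a synchronized pool that creates its connections up front; the class name and sizing policy are my own invention:

import java.sql.*;
import java.util.Vector;

public class SimpleConnectionPool {

    private Vector free = new Vector();

    // Create all of the pool's connections up front.
    public SimpleConnectionPool(String url, int size) throws SQLException {
        for (int i = 0; i < size; i++) {
            free.addElement(DriverManager.getConnection(url));
        }
    }

    // Hand out a pooled connection; callers must return it when done.
    public synchronized Connection getConnection() {
        if (free.isEmpty()) {
            return null; // a real pool would wait, or grow itself
        }
        Connection c = (Connection) free.lastElement();
        free.removeElementAt(free.size() - 1);
        return c;
    }

    // Return a connection to the pool instead of closing it.
    public synchronized void returnConnection(Connection c) {
        free.addElement(c);
    }
}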

Note that connection pooling, while not hard to implement, is also often not necessary to implement. Many products and frameworks provide built-in connection pools that you can take advantage of instead of rolling your own. Products like WebSphere have an advanced connection pool framework, which you can use with only minor modifications to your application code. Consult the documentation for your specific application server or framework to see if there is built-in support.

Taking it one step further, the statement objects under a connection can also be pooled. While I say "statements" here, the pool is really more likely to consist of prepared statements or callable statements, which provide the pool with greater flexibility through the use of parameters. For example, say you wrote a real estate directory application (a series of servlets and JavaServer Pages) whose front page listed the properties that a particular agent was responsible for. From there, the agent could select a property and see a list of details about that particular property. The SQL statement for every agent that signs on will vary by just one value; it would be something like this:

SELECT * FROM PROPERTIES WHERE AGENT_NO = ?

Given this fact, it probably doesn’t make good sense to pool the connection and recreate the statement for every execution. Why not pool the connection and the prepared statement together? Assuming you wrote a very quick pool retrieval method, you would just have to set one parameter and execute the statement. See Figure 1 for the time measurements I recently achieved when I modified a piece of code to go from no pooling to connection pooling and then to statement pooling. A simple statement pool implementation is also a part of the COMMON presentation package and can be downloaded at the URL listed earlier. Of note here is that, once your pool is populated with the statements that are going to be used (something that can be done during startup or in the background before needed), you are basically doing the equivalent of static SQL.
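As a rough sketch of the idea (the COMMON package contains a fuller implementation), a statement pool can be as simple as a per-connection cache keyed by SQL text; the class and method names here are hypothetical:

import java.sql.*;
import java.util.Hashtable;

public class StatementCache {

    private Connection c;
    private Hashtable statements = new Hashtable();

    public StatementCache(Connection c) {
        this.c = c;
    }

    // Return the PreparedStatement for this SQL text, preparing it
    // only the first time it is requested.
    public synchronized PreparedStatement get(String sql) throws SQLException {
        PreparedStatement ps = (PreparedStatement) statements.get(sql);
        if (ps == null) {
            ps = c.prepareStatement(sql);
            statements.put(sql, ps);
        }
        return ps;
    }
}

With a cache like this in place, serving an agent collapses to a get("SELECT * FROM PROPERTIES WHERE AGENT_NO = ?") call, one setInt, and an executeQuery.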

Can you take it further? Sure. You can pool result sets, right? Well, I wouldn't suggest it, for a couple of reasons. The first is that, once you have a result set, you have state that is hard to get rid of. For example, if someone uses a result set and reads the first five rows before putting it back into the pool, what happens to the next person to use that result set? He reads the sixth row. You can use scrollable cursors to handle that by positioning to the first row and starting to read again, but scrollable cursors mess up the result set's ability to block fetch data. Block fetch is a term for how the JDBC driver fetches data from the database. JDBC forces the user to fetch one row at a time from the database by way of the next() method. Under the covers, the JDBC driver fetches many rows at one time and keeps track of the user's position in the block itself. It does this automatically, unless you use scrollable result sets; then it switches back to fetching a single row at a time as needed.
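For reference, scrollability is requested when the statement is created (this is the standard JDBC 2.0 createStatement overload); the forward-only default is what lets the driver block fetch:

// Default: forward-only, so the driver can block fetch under the covers.
Statement s1 = c.createStatement();

// Scrollable: allows repositioning (for example, rs.first()), but the
// driver falls back to fetching one row at a time.
Statement s2 = c.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                 ResultSet.CONCUR_READ_ONLY);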

There is a solution for situations in which the data can be cached as a group for reuse: The data can be "disconnected" from the database. For example, say you have an employee file with 1,000 employee records in it, you use this information to regularly update screens in a human resources application, and the data is generally static (that is, you are not adding new employees every couple of hours). In that case, the whole employee table can be read out of the database into an object structure. Once the data is in an external structure of some sort, no database resources are required to use the data; the data can simply be used by the application.

While this might seem like a weird thing to do, the situations in which it could be used are quite common. Further, there are a number of advantages to this disconnection of the data. The first advantage is that database resources are not needed, so your limited database resources can be used for other work. A second advantage is that little overhead for the structures is involved, making it possible for you to use your data in new ways, like downloading it to a PDA and taking it with you. A third advantage is that, if the data for the application is “read only,” it can be used by multiple threads at the same time, which can significantly increase performance.

The JDBC 2.0 Optional Package provides a framework for an object called the RowSet, which provides the capabilities I've described. Sun Microsystems has made available an early release of three different implementations of the RowSet interface. The CachedRowSet is the functional version of the technique I have described. The CachedRowSet even allows you to update the disconnected data and synchronize it back into the database later. If you are interested in investigating this technology further, I encourage you to download and experiment with the early release (http://java.sun.com/products/jdbc/); these row set implementations will be a standard part of Java Development Kit (JDK) 1.4.
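A minimal sketch of the disconnect-and-use pattern follows. The package name (sun.jdbc.rowset) and the populate method are from the early-access release and may well change by JDK 1.4; the EMPLOYEES table and EMPNAME column are hypothetical.

import java.sql.*;
import sun.jdbc.rowset.CachedRowSet; // early-access package name

try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    Statement s = c.createStatement();
    ResultSet rs = s.executeQuery("SELECT * FROM EMPLOYEES");

    // Copy all of the rows into the cached row set...
    CachedRowSet crs = new CachedRowSet();
    crs.populate(rs);

    // ...and release every database resource.
    rs.close();
    s.close();
    c.close();

    // The data is still fully usable, with no connection held.
    while (crs.next()) {
        System.out.println(crs.getString("EMPNAME"));
    }
} catch (Exception e) {
    e.printStackTrace();
}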

Batch Mode Operations

You may be aware that one of the goals of JDBC 2.0 was to provide mechanisms by which applications could perform faster than in the past. (For a basic overview of batch mode processing, read the article "Test Driving JDBC 2.0 with jt400" on the AS/400 NetJava Expert Web site, at www.midrangecomputing.com/anje/article.cfm?id=159&md=19992.) One of the additions made to increase performance was batch mode operations. You may also have noticed that batch mode operations do not perform faster than operations done without batch mode (actually, they perform marginally slower). A new PTF for V4R5 will change that, though, at least under certain conditions.

The problems with providing an optimized batch mode were many. First and foremost, you can only take advantage of functions that exist in the system. The AS/400 database does not have a generic batch mode that can be used to increase performance for all operations, but the database does provide for a blocked insert, which is significantly faster than doing inserts one at a time. These are the requirements for a blocked insert:

• The operation must be an insert

• The operation must use parameters (prepared statements only)

• The parameters must be lined up, end to end (the JDBC driver handles this internally)

Fortunately, the population of tables with a lot of data values is the most common use of batch updates. Figure 2 shows an example of a batch (admittedly, a small one) that could take advantage of the new blocked insert support. As stated earlier, batch operations that are blocked inserts will show very noticeable performance improvements with this new support. Look to the AS/400 Native JDBC Web site for more detailed information on this new feature.

Avoiding Expensive Data Translations


It is really cool how JDBC will take care of formatting various data types into other data types for you. This keeps your application code simple and elegant, but again, nothing ever happens without a cost. Many of the JDBC data manipulations are cheap, but some are expensive. You can expect the common data type conversions to be highly optimized. If you question whether you are doing a common data type conversion, I suggest you write a little program to test the performance of the various options. Figure 3 is a snippet of code showing what this performance test might look like.

Other than writing your own performance test, here are some tips that you might find useful:

• Expect working with BigDecimals to be slow. Where possible, use BIGINTs, DOUBLEs, FLOATs, and other data types in the place of BigDecimals.

• Expect to pay a modest performance penalty for doing anything with ResultSet.getObject or PreparedStatement.setObject. These methods first figure out the data type of the object passed in, then they get the base data type from the object. Finally, they just do the same work that would have happened if you had passed the base type in the first place. (See the sketch after this list.)

• Expect Date, Time, and Timestamp operations to be slow. They involve object creations and string manipulations. If your dates are just for viewing, store them in the database in CHAR fields and get and set them as strings.

• Consider using a BIGINT column and setLong/getLong if precision is your rationale for using NUMERIC or DECIMAL data and you are targeting V4R5.

• Expect setString of a NUMERIC or DECIMAL field to perform significantly better than in the past. This should help many AS/400 developers, as these data types are quite common.
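To make the getObject/setObject tip concrete, here is a minimal contrast (the parameter index and value are arbitrary); both calls store the same value, but the typed setter skips the type inspection:

// Typed setter: goes straight to the int path.
ps.setInt(1, 42);

// setObject: inspects the wrapper, extracts the int, and only then
// does the work setInt would have done immediately.
ps.setObject(1, new Integer(42));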

The real danger with data translations is that they are found in either ResultSet.getXXX() method calls or PreparedStatement.setXXX() calls. Both of these types of operations are routinely found in the critical paths of an application and are sometimes called hundreds of times a second. So you want to ensure that performance-critical applications use these calls efficiently.

Don’t Use Escape Syntax and Turn It Off

JDBC drivers are supposed to support escape syntax by default. Huh? What's that? To achieve a higher degree of compatibility, JDBC drivers are supposed to be able to modify SQL statements passed to them to change them to the format that a particular database wants. This syntax is denoted by curly braces ({}) in SQL strings that are passed to the driver. You can use the escape syntax to implement various SQL features that are not as standard as they should be, and each JDBC driver will modify the SQL string to the syntax of the particular database it runs on. This is a good idea on the surface, but few people ever use it. If you don't intend to use escape syntax processing, turn it off so the JDBC driver is not parsing all of your SQL strings. You can do this through a connection attribute; the code to accomplish this task would look like this:

DriverManager.getConnection(
    "jdbc:db2:localhost;do escape processing=false");

If you do use escape syntax processing, you might want to consider eliminating it. The code required to handle escape processing is expensive, as it involves heavy string manipulation of the input SQL text. Anything that does heavy string manipulation is going to cause lots of garbage collection. Of course, an application that makes wise use of caching, as discussed earlier, will not be generating enormous numbers of statements.
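The standard JDBC API also lets you turn escape processing off for an individual statement, which is handy when you cannot change the connection URL; this is java.sql.Statement.setEscapeProcessing, part of the core interface:

Statement s = c.createStatement();
s.setEscapeProcessing(false); // the driver passes SQL text through unparsed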

Send Me Your Tips

I hope the tips presented in this article prove useful to you as you continue on your journey through Java and JDBC development. If you have tips you have picked up along the way, please email them to me. I always enjoy learning new tips on Java and JDBC performance.

REFERENCES AND RELATED MATERIALS

• AS/400 Native JDBC page: www.as400.ibm.com/developer/jdbc/index.html
• AS/400 Toolbox JDBC page: www.as400.ibm.com/toolbox/
• Sun Microsystems' JDBC page: http://java.sun.com/products/jdbc/

Base unit of work (100 inserts and 100 queries):

  No pooling        42.867 seconds
  Connection pool   24.315 seconds
  Statement pool     1.367 seconds

Figure 1: Pooling connections and statements can make a big difference in performance.

// A batch that can use the new optimized batch support.
try {
    // Obtain a database connection.
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");

    // Turn off autocommit for batch processing.
    c.setAutoCommit(false);

    // Prepare a statement to insert into a database table.
    PreparedStatement ps =
        c.prepareStatement("insert into mytable values(?)");

    // Set up a batch to insert 100 rows into the table.
    for (int i = 1; i <= 100; i++) {
        ps.setInt(1, i);
        ps.addBatch();
    }

    // Execute the batch.
    ps.executeBatch();

    // Commit the inserted rows, since autocommit is off.
    c.commit();

    // Clean up the database resources.
    ps.close();
    c.close();
} catch (Exception e) {
    System.out.println("Error!!!");
    e.printStackTrace();
}

Figure 2: This batch request takes advantage of the new batch update support.

// An example of testing the performance of an operation.
try {
    // Keep track of the time that the operation starts.
    java.util.Date start = new java.util.Date();

    // Obtain a database connection (the operation being timed).
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");

    // Keep track of the time that the operation ended.
    java.util.Date end = new java.util.Date();

    // Display the time needed to complete the operation.
    long time = end.getTime() - start.getTime();
    System.out.println("Operation running time: " + time);

    // Clean up.
    c.close();
} catch (Exception e) {
    System.out.println("Error!!!");
    e.printStackTrace();
}

Figure 3: This is a simple way to get an idea of the relative performance of JDBC operations.

