People often come to me for help with application design. Usually they have a pretty good idea of what they want to do, a vague idea of how they want to do it, and arbitrary performance goals. Getting people to reconsider those performance goals is usually the hardest part. It is often easier to convince someone that his application should accomplish a different task than to convince him that he doesn't need performance levels as demanding as he thinks.
Remember that you can always reach the performance level you need over time by first building an application that is functional, scalable, and maintainable. Don't be too concerned that you plan to roll it out to 10,000 people within five years if only 1,000 people will be using it by next year. Build the right infrastructure and tune performance later; your requirements are going to change, the application is going to be used in ways you cannot envision up front, and you will probably have replaced or augmented some, or all, of the involved hardware within five years. I have watched whole projects fail because too much time was spent on performance too early: The application functionality simply never got completed, or it got so complex that no one could finish it.
One of the keys to ensuring optimal performance of your application is to put performance in perspective. There is a time and a place for performance optimization. Developers of the most successful projects recognize that functionality comes first and performance comes second. With that said, take a look at how to achieve some of the most demanding performance goals.
The Right Tools Are Critical
For optimal Java Database Connectivity (JDBC) performance, it is critical that you keep up with current releases and apply PTFs as they become available. IBM is always finding ways to make JDBC perform faster. A good example is the block insert enhancement PTF just completed for V4R5. (There is more detail on this PTF later in the article; it was just wrapping up at press time and should be available by the time you read this.) In internal testing, this PTF made common batch updates perform three times faster while putting considerably less stress on the garbage collector. (Keep in mind that this test compared the new code built with debug views against the old, fully optimized code.) The benefit of applying PTFs as they become available is that you get performance enhancements without waiting for the next major OS/400 release. I save major releases for new versions of the specifications, support for new APIs, and other things like that.
Analyze What Is Being Done
So, once you are staying current and getting down to serious JDBC work, the next thing to do is invest some time in understanding the work flows of your application. Nothing happens by magic. Understanding how your application's work flows relate to JDBC, and how JDBC requests relate to the underlying database work that has to be accomplished, is an important part of deciding how to lay out functionality for the best performance.
The first place you want to focus your attention is on your application. Evaluate what work you need to accomplish. What is an acceptable response time? How much of the work has to be done in the critical path? Can background threads do some of the work? What are you willing to trade off, in terms of cache size, for application performance? What are the common cases, and what are the extreme cases? Are there common pieces of work that can be factored out and reused? You need to evaluate all of these questions and exercise a bit of creativity in your application layout.
The second place you want to focus your attention is on the layout of your JDBC calls. Don't focus here first, because you are only interested in the JDBC calls in your code's hot spots; there is no point in optimizing a piece of code that is not performance-critical. You can count on many of the common JDBC functions to be very efficient, but there are a couple of common things to look out for.
Simple Fixes
These are some simple performance fixes you can implement. One such fix, when dealing with ResultSets, is to use the getXXX methods that take an integer column index instead of the versions that take a String column name. There are two reasons for this. The first is that, under the covers, the database only understands numeric columns. Therefore, the ResultSet.getInt(String colName) method does the exact same work that the ResultSet.getInt(int colIndex) method does, but it first has to resolve colName into a column index. Obviously, the version taking a String can never perform as well as the version taking an int. The second reason is that this type of processing is often in heavily used code. The statement is created only once and executed only once, but there could be hundreds of rows fetched and thousands of data values retrieved. In that case, a little overhead paid many times over can really add up.
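To make the difference concrete, here is a minimal sketch. The EMPLOYEES table and its EMP_ID column are hypothetical, but the pattern applies to any result set processed in a loop:

// Hypothetical example: EMPLOYEES is an assumed table with an INTEGER column EMP_ID.
try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    Statement s = c.createStatement();
    ResultSet rs = s.executeQuery("SELECT EMP_ID FROM EMPLOYEES");
    while (rs.next()) {
        // Preferred: retrieve by column index; no name lookup on each row.
        int id = rs.getInt(1);
        // Slower: rs.getInt("EMP_ID") must first resolve the name to an
        // index on every call before doing exactly the same work.
        System.out.println(id);
    }
    rs.close();
    s.close();
    c.close();
} catch (Exception e) {
    e.printStackTrace();
}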
Another performance fix is to select the specific columns you will use in queries instead of taking the SELECT * approach. The reason is that there is a fair amount of hidden overhead in having the JDBC driver deal with unused columns. Even though your application never retrieves them, the JDBC driver still has to fetch them from the database. That is wasted data movement and wasted space in your application.
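As a quick illustration (the column names here are hypothetical), the query below moves only the data the application actually reads, where SELECT * would drag every column across:

// Hypothetical example: select only the columns the application uses.
try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    Statement s = c.createStatement();
    // Instead of "SELECT * FROM PROPERTIES", name the columns you need.
    ResultSet rs = s.executeQuery(
        "SELECT PROPERTY_ID, ADDRESS, PRICE FROM PROPERTIES");
    while (rs.next()) {
        System.out.println(rs.getString(2));
    }
    rs.close();
    s.close();
    c.close();
} catch (Exception e) {
    e.printStackTrace();
}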
Yet another performance fix is to use PreparedStatement objects instead of Statement objects wherever possible. This seems like a simple tip, but it is common to see applications that don't do this. You'll improve performance even if you use the prepared statement only twice. Prepared statements are so important that you should structure your programs around using them; there are few applications that can't be designed to use prepared statements.
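The pattern looks like this; a minimal sketch, with the table and column names assumed for illustration:

// Hypothetical example: prepare once, execute many times.
try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    PreparedStatement ps =
        c.prepareStatement("UPDATE MYTABLE SET QUANTITY = ? WHERE ID = ?");
    for (int i = 1; i <= 100; i++) {
        // Only the parameter values change; the statement is not re-prepared.
        ps.setInt(1, i * 10);
        ps.setInt(2, i);
        ps.executeUpdate();
    }
    ps.close();
    c.close();
} catch (Exception e) {
    e.printStackTrace();
}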
Caching to the Extreme
When a resource is necessary but expensive to create, the first thing that should pop into your mind is caching, or pooling. If a resource is expensive to create, you want to create as few of them as possible and reuse the ones you have already created. This is the principle
behind connection pools. Database connections are one of the most expensive resources to create. If you create a database connection every time you need to do database work, your application will not be successful.
Fortunately, database connections are also about the most reusable objects around. It is fairly easy to build a pool of database connections, to synchronize the adding of connections to the pool and the retrieval of connections from the pool, and to modify your app to use the pool instead of creating connections. I will not cover this in depth, but, if you are looking for a simple sample implementation of a connection pool or for more information on the subject, you can download my Spring COMMON presentation package (www.as400.ibm.com/developer/jdbc/index.html). In that package, I implement a connection pool and demonstrate a time test of how much more efficient it is to implement a pool than to create individual connections.
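For illustration only, here is a bare-bones sketch of the idea (it is not the COMMON package implementation, and it omits sizing, shutdown, and error handling):

import java.sql.*;

// A minimal connection pool sketch.
class SimpleConnectionPool {
    private final java.util.LinkedList free = new java.util.LinkedList();
    private final String url;

    public SimpleConnectionPool(String url, int initialSize) throws SQLException {
        this.url = url;
        // Create the connections once, up front.
        for (int i = 0; i < initialSize; i++) {
            free.add(DriverManager.getConnection(url));
        }
    }

    // Hand out a pooled connection; create a new one only if the pool is empty.
    public synchronized Connection getConnection() throws SQLException {
        if (free.isEmpty()) {
            return DriverManager.getConnection(url);
        }
        return (Connection) free.removeFirst();
    }

    // Return the connection to the pool instead of closing it.
    public synchronized void returnConnection(Connection c) {
        free.addLast(c);
    }
}

An application would create one pool at startup (for example, new SimpleConnectionPool("jdbc:db2:localhost", 5)) and then call getConnection() and returnConnection() around each unit of work.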
Note that, while connection pooling is not hard to implement, you often do not need to implement it yourself. Many products and frameworks provide built-in connection pools that you can take advantage of instead of rolling your own. Products like WebSphere have an advanced connection pool framework that you can use with only minor modifications to your application code. Consult the documentation for your specific application server or framework to see whether there is built-in support.
Taking it one step further, the statement objects under a connection can also be pooled. While I say statements here, the pool is really more likely to consist of prepared statements or callable statements, which give the pool greater flexibility through the use of parameters. For example, say you wrote a real estate directory application as a series of servlets and JavaServer Pages, and its front page listed the properties a particular agent was responsible for. From there, the agent could select a property and see a list of details about that particular property. The SQL statement for every agent that signs on would vary by just one value; it would be something like this:
SELECT * FROM PROPERTIES WHERE AGENT_NO = ?
Given this fact, it probably doesn't make good sense to pool the connection but re-create the statement for every execution. Why not pool the connection and the prepared statement together? Assuming you wrote a very quick pool retrieval method, you would just have to set one parameter and execute the statement. See Figure 1 for the time measurements I recently achieved when I modified a piece of code to go from no pooling to connection pooling and then to statement pooling. A simple statement pool implementation is also part of the COMMON presentation package and can be downloaded at the URL listed earlier. Of note here is that, once your pool is populated with the statements that are going to be used (something that can be done during startup or in the background before they are needed), you are basically getting the equivalent of static SQL.
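Here is a hypothetical sketch of what a pooled entry might look like: each entry pairs a connection with the already-prepared agent query, so servicing a request is just a matter of setting the parameter and executing. The class and method names are invented for illustration:

import java.sql.*;

// Hypothetical sketch: a pool entry that keeps a connection and its
// prepared agent query together so neither is re-created per request.
class AgentQueryEntry {
    private final Connection connection;
    private final PreparedStatement agentQuery;

    public AgentQueryEntry(String url) throws SQLException {
        connection = DriverManager.getConnection(url);
        agentQuery = connection.prepareStatement(
            "SELECT * FROM PROPERTIES WHERE AGENT_NO = ?");
    }

    // Run the prepared query for one agent; only the parameter changes.
    public ResultSet propertiesForAgent(int agentNumber) throws SQLException {
        agentQuery.setInt(1, agentNumber);
        return agentQuery.executeQuery();
    }
}

A pool of these entries can be managed exactly like the connection pool sketched earlier.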
Can you take it further? Sure. You can pool result sets, right? Well, I wouldn't suggest it, for a couple of reasons. The first is that, once you have a result set, you have state that is hard to get rid of. For example, if someone uses a result set and reads the first five rows before putting it back into the pool, what happens to the next person to use that result set? He starts reading at the sixth row. You could use scrollable cursors to handle that by repositioning to the first row before reading again, but scrollable cursors defeat the result set's ability to block fetch data. Block fetch refers to how the JDBC driver fetches data from the database. JDBC forces the user to fetch one row at a time by way of the next() method, but, under the covers, the driver fetches many rows at once and keeps track of the user's position in the block itself. It does this automatically unless you use scrollable result sets, in which case it falls back to fetching a single row at a time as needed.
There is a solution for situations in which the data can be cached as a group for reuse: The data can be disconnected from the database. For example, suppose you have an employee file with 1,000 employee records in it, you use this information to regularly update screens in a human resources application, and the data is generally static (that is, you are not adding new employees every couple of hours). The whole employee table can be read out of the database into an object structure. Once the data is in an external structure of some sort, no database resources are required to use it; the data can simply be used by the application.
While this might seem like a strange thing to do, the situations in which it can be used are quite common, and there are a number of advantages to disconnecting the data. The first advantage is that database resources are not needed, so your limited database resources can be used for other work. A second advantage is that the structures involve little overhead, making it possible to use your data in new ways, like downloading it to a PDA and taking it with you. A third advantage is that, if the data is read-only, it can be used by multiple threads at the same time, which can significantly increase performance.
The JDBC 2.0 Optional Package provides a framework for an object called a RowSet, which provides the capabilities I've described. Sun Microsystems has made available an early release of three different implementations of the RowSet interface. The CachedRowSet is the functional version of the technique I have described; it even allows you to update the disconnected data and synchronize it back into the database later. If you are interested in investigating this technology further, I encourage you to download and experiment with the early release (http://java.sun.com/products/jdbc/); these row set implementations will be a standard part of Java Development Kit (JDK) 1.4.
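As a sketch of the disconnection technique, here is what using a CachedRowSet looks like. Note that this sketch uses the standardized javax.sql.rowset package that the early-access classes eventually became, and the EMPLOYEES table and its columns are hypothetical:

import java.sql.*;
import javax.sql.rowset.*;

// Sketch of reading a table into a disconnected CachedRowSet.
try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    Statement s = c.createStatement();
    ResultSet rs = s.executeQuery("SELECT EMP_ID, NAME FROM EMPLOYEES");

    // Copy the data into the row set, then release all database resources.
    CachedRowSet employees = RowSetProvider.newFactory().createCachedRowSet();
    employees.populate(rs);
    rs.close();
    s.close();
    c.close();

    // From here on, the application works with the data without holding a connection.
    while (employees.next()) {
        System.out.println(employees.getString(2));
    }
} catch (Exception e) {
    e.printStackTrace();
}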
Batch Mode Operations
You may be aware that one of the goals of JDBC 2.0 was to provide mechanisms by which applications could perform faster than in the past. (For a basic overview of batch mode processing, read the article "Test Driving JDBC 2.0 with jt400" on the AS/400 NetJava Expert Web site, at www.midrangecomputing.com/anje/article.cfm?id=159&md=19992.) One of the additions made to increase performance was batch mode operations. You may also have noticed that batch mode operations do not perform faster than operations done without batch mode (actually, they perform marginally slower). There is a new PTF for V4R5 that will change that, though, at least under certain conditions.
The problems with providing an optimized batch mode were many. First and foremost, you can only take advantage of functions that exist in the system. The AS/400 database does not have a generic batch mode that can be used to increase performance for all operations, but the database does provide for a blocked insert, which is significantly faster than doing inserts one at a time. These are the requirements for a blocked insert:
The operation must be an insert
The operation must use parameters (prepared statements only)
The parameters must be lined up, end to end (the JDBC driver handles this internally)
Fortunately, the population of tables with a lot of data values is the most common use of batch updates. Figure 2 shows an example of a batch (admittedly, a small one) that could take advantage of the new blocked insert support. As stated earlier, batch operations that are blocked inserts will show very noticeable performance improvements with this new support. Look to the AS/400 Native JDBC Web site to provide more detailed information on this new feature.
Avoiding Expensive Data Translations
It is really cool how JDBC takes care of converting various data types into other data types for you. This keeps your application code simple and elegant, but, again, nothing ever happens without a cost. Many of the JDBC data manipulations are cheap, but some are expensive. You can expect the common data type conversions to be highly optimized. If you are not sure whether you are doing a common data type conversion, I suggest you write a little program to test the performance of the various options. Figure 3 is a snippet of code showing what this performance test might look like.
Other than writing your own performance test, here are some tips that you might find useful:
Expect working with BigDecimals to be slow. Where possible, use BIGINTs, DOUBLEs, FLOATs, and other data types in the place of BigDecimals.
Expect to pay a modest performance penalty for doing anything with ResultSet.getObject or PreparedStatement.setObject. These methods first figure out the data type of the object passed in, then they get the base data type from the object. Finally, they just do the same work that would have happened if you had passed the base type in the first place.
Expect Date, Time, and Timestamp operations to be slow. They involve object creations and string manipulations. If your dates are just for viewing, store them in the database in CHAR fields and get and set them as strings.
Consider using a BIGINT column and setLong/getLong if precision is your rationale for using NUMERIC or DECIMAL data and you are targeting V4R5.
Expect setString of a NUMERIC or DECIMAL field to perform significantly better than in the past. This should help many AS/400 developers, as these data types are quite common.
The real danger with data translations is that they occur in ResultSet.getXXX() and PreparedStatement.setXXX() calls. Both types of operations are routinely found in the critical paths of an application and are sometimes called hundreds of times a second, so you want to ensure that performance-critical applications use these calls efficiently.
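For example, in a hot loop over a numeric column (the SALES table and its AMOUNT column below are hypothetical), choosing the cheaper getter can matter:

// Hypothetical example: summing a DECIMAL column in a hot loop.
try {
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");
    Statement s = c.createStatement();
    ResultSet rs = s.executeQuery("SELECT AMOUNT FROM SALES");
    double total = 0;
    while (rs.next()) {
        // Cheaper: the driver hands back a primitive double.
        total += rs.getDouble(1);
        // More expensive: rs.getBigDecimal(1) creates a new object
        // for every row retrieved.
    }
    System.out.println("Total: " + total);
    rs.close();
    s.close();
    c.close();
} catch (Exception e) {
    e.printStackTrace();
}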
Don't Use Escape Syntax? Turn It Off
JDBC drivers are supposed to support escape syntax by default. What's that? To achieve a higher degree of compatibility, JDBC drivers are supposed to be able to modify the SQL statements passed to them into the format that a particular database wants. This syntax is denoted by curly braces ({}) in the SQL strings passed to the driver. You can use escape syntax to access various SQL features that are not as standard as they should be, and each JDBC driver will modify the SQL string to the syntax of the particular database it runs on. This is a good idea on the surface, but few people ever use it. If you don't intend to use escape syntax processing, turn it off so the JDBC driver is not parsing all of your SQL strings. You can do this through a connection attribute; the code to accomplish this task would look like this:
DriverManager.getConnection(
    "jdbc:db2:localhost;do escape processing=false");
If you do use escape syntax processing, you might want to consider eliminating it. The code required to handle escape processing is expensive, as it involves heavy string manipulation of the input SQL text. Anything that does heavy string manipulation is going
to cause lots of garbage collection. Of course, an application that makes wise use of caching, as discussed earlier, will not be generating enormous numbers of statements.
Send Me Your Tips
I hope the tips presented in this article prove useful to you as you continue on your journey through Java and JDBC development. If you have tips you have picked up along the way, please email them to me. I always enjoy learning new tips on Java and JDBC performance.
REFERENCES AND RELATED MATERIALS
AS/400 Native JDBC page: www.as400.ibm.com/developer/jdbc/index.html
AS/400 Toolbox JDBC page: www.as400.ibm.com/toolbox/
Sun Microsystems JDBC page: http://java.sun.com/products/jdbc/
Base unit of work (100 inserts and 100 queries) with no pooling: 42.867 seconds
Base unit of work using a connection pool: 24.315 seconds
Base unit of work using a statement pool: 1.367 seconds
Figure 1: Pooling connections and statements can make a big difference in performance.
// A batch that can use the new optimized batch support.
try {
    // Obtain a database connection.
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");

    // Turn off autocommit for batch processing.
    c.setAutoCommit(false);

    // Prepare a statement to insert into a database table.
    PreparedStatement ps =
        c.prepareStatement("insert into mytable values(?)");

    // Set up a batch to insert 100 rows into the table.
    for (int i = 1; i <= 100; i++) {
        ps.setInt(1, i);
        ps.addBatch();
    }

    // Execute the batch and commit the work.
    ps.executeBatch();
    c.commit();

    // Clean up the database resources.
    ps.close();
    c.close();
} catch (Exception e) {
    System.out.println("Error!!!");
    e.printStackTrace();
}
Figure 2: This batch request takes advantage of the new batch update support.
// An example of testing the performance of an operation.
try {
    // Keep track of the time that the operation starts.
    java.util.Date start = new java.util.Date();

    // Obtain a database connection (the operation being timed).
    Connection c = DriverManager.getConnection("jdbc:db2:localhost");

    // Keep track of the time that the operation ended.
    java.util.Date end = new java.util.Date();

    // Display the time needed to complete the operation.
    long time = end.getTime() - start.getTime();
    System.out.println("Operation running time: " + time);

    // Clean up.
    c.close();
} catch (Exception e) {
    System.out.println("Error!!!");
    e.printStackTrace();
}
Figure 3: This is a simple way to get an idea of the relative performance of JDBC operations.