"The golden rule of DB2 is never share a resource across threads (connection, statement, any resource, etc.)."
—IBMer Tony Cairns
Working in technology means playing the role of lifelong learner. I wouldn't have it any other way, because the constant learning is one of the reasons I like this field. It's almost like putting together a never-ending puzzle, yet still having the satisfaction of completing incremental puzzles along the way (i.e., projects involving technology).
Of course, not every day in tech is roses. Such was the case this past week as I was implementing a Node.js customer project with a fairly simple feature set: authenticate with an IBM i profile, search a table, and display the results. The technology stack uses the Hapi.js server-side framework and the Node.js DB2 asynchronous driver to communicate with DB2 for i. Up to that point, I had mostly used the synchronous driver that IBM originally released. The reality of the synchronous driver is that it inherently keeps you from stepping on your own toes: because it doesn't allow parallel processing within the same process (aka IBM i job), there is no threat of multiple threads infringing on each other (aka producing unexpected results). In switching to the asynchronous driver, that threat is very real, and, as I found out, it's highly likely that you will have issues.
You can see some of the above storyline played out in a Node.js iToolkit repo conversation I had with IBM PASE expert Tony Cairns.
The long and short of it is that I wasn't using the Node.js DB2 asynchronous driver correctly! [Aaron looks around to see if anybody objects to him airing his ignorance.] In my defense (and probably that of everybody else currently using the new async driver), the documentation is currently incorrect and not very descriptive about how to "do things right." The premise of this article is best stated by Tony in the aforementioned repo conversation: "The golden rule of DB2 is never share a resource across threads (connection, statement, any resource, etc.)." I was definitely sharing both connections and statements across threads. In that conversation, Tony mentioned his Node Bears tutorial, a code base that focuses on the idiosyncrasies of interacting with an asynchronous database adapter. It's a good read for anyone venturing into Node.js on IBM i.
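To make the "toe stepping" concrete, here's a boiled-down sketch (not the actual project code) of what I was effectively doing wrong: one connection and one statement object shared by two queries that run concurrently.

// DON'T DO THIS: a single connection and statement shared across
// concurrently executing queries (hypothetical sketch, not project code).
var db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a');
var conn = new db.dbconn();
conn.conn("*LOCAL");
var stmt = new db.dbstmt(conn);

// Both exec() calls run asynchronously against the SAME statement object,
// so the underlying threads can collide and produce unexpected results.
stmt.exec("SELECT LSTNAM FROM QIWS.QCUSTCDT LIMIT 2", function (result1) {
  console.log(JSON.stringify(result1));
});
stmt.exec("SELECT CUSNUM FROM QIWS.QCUSTCDT LIMIT 2", function (result2) {
  console.log(JSON.stringify(result2));
});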
In the Node Bears tutorial, Tony introduced a connection pooling methodology that keeps both connections and statements from being shared across threads. The model implementation (BEARS schema and BEAR table) was included in the single database pool file, which wouldn't work for future uses, so I endeavored to break the code apart and make it more generic, something many people could use in many ways in the future.
Below is my refactoring of Tony's Node Bears connection pool. Place it in a file named db2ipool.js. This is the entirety of what's necessary to do primitive connection pooling. I say "primitive" because it doesn't include things like garbage collection for stale connections or named private connections that could be used for more stateful work. It is also hard-coded to the driver path for the most recent version of Node.js on IBM i, V6.9.1.
// db2ipool.js - primitive connection pooling for DB2 for i
// (refactored from Tony Cairns' Node Bears tutorial)

// Conn wraps a single DB2 connection and its current statement object.
var Conn = function(idx, database) {
  this.db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a');
  this.conn = new this.db.dbconn();
  this.conn.conn(database);
  this.inuse = false;
  this.idx = idx;
  this.stmt = new this.db.dbstmt(this.conn);
};

// Discard the current statement and create a fresh one,
// keeping the connection checked out for further use.
Conn.prototype.free = function() {
  var newstmt = new this.db.dbstmt(this.conn);
  if (this.stmt) {
    delete this.stmt;
  }
  this.stmt = newstmt;
};

// Discard the current statement and return the connection to the pool.
Conn.prototype.detach = function() {
  var newstmt = new this.db.dbstmt(this.conn);
  if (this.stmt) {
    delete this.stmt;
  }
  this.stmt = newstmt;
  this.inuse = false;
};

Conn.prototype.getInUse = function() {
  return this.inuse;
};

Conn.prototype.setInUse = function() {
  this.inuse = true;
};

// Connection pool
// =============================================================================
var Pool = function(opt) {
  opt = opt || {};
  this.pool = [];
  this.pmax = 0;
  this.pool_conn_incr_size = 8;
  this.database = opt.database || "*LOCAL";
};

// Hand an available connection to the callback, growing the pool if needed.
Pool.prototype.attach = function(callback) {
  var valid_conn = false;
  while (!valid_conn) {
    // find available connection
    for (var i = 0; i < this.pmax; i++) {
      var inuse = this.pool[i].getInUse();
      if (!inuse) {
        this.pool[i].setInUse();
        callback(this.pool[i]);
        return;
      }
    }
    // expand the connection pool
    var j = this.pmax;
    for (var i = 0; i < this.pool_conn_incr_size; i++) {
      this.pool[j] = new Conn(j, this.database);
      j++;
    }
    this.pmax += this.pool_conn_incr_size;
  }
};

// Convenience wrapper: attach, run one SQL statement, hand back the
// result, and detach the connection, all in one call.
Pool.prototype.easy = function(sql, callback) {
  this.attach( (conn) => {
    conn.stmt.exec(sql, function (query) {
      callback(query);
      conn.detach();
    });
  });
};

exports.Pool = Pool;
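Before moving on, one detail worth calling out in the code above: free() and detach() both swap in a fresh statement object, but only detach() returns the connection to the pool. That means a program could hold one pooled connection across several sequential statements and detach only when finished. Here's a rough sketch of that pattern (the second query and the options object are purely illustrative; database defaults to *LOCAL anyway).

var db = require('./db2ipool');
var pool = new db.Pool({ database: "*LOCAL" });

pool.attach( (conn) => {
  conn.stmt.exec("SELECT LSTNAM FROM QIWS.QCUSTCDT LIMIT 2", function (first) {
    console.log(`first: ${JSON.stringify(first)}`);
    conn.free(); // fresh statement, same connection, still checked out
    conn.stmt.exec("SELECT CUSNUM FROM QIWS.QCUSTCDT LIMIT 2", function (second) {
      console.log(`second: ${JSON.stringify(second)}`);
      conn.detach(); // now return the connection to the pool
    });
  });
});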
Sometimes it's best to detail the use case scenario before diving into the underlying code, so let's do that. Below is the file pooltest.js, which can be used to test the above db2ipool.js connection pool. For the sake of this example, place it in the same directory as db2ipool.js.
// pooltest.js - exercise the db2ipool.js connection pool
var db = require('./db2ipool');
var pool = new db.Pool();
var sql = "SELECT LSTNAM FROM QIWS.QCUSTCDT LIMIT 2";

// Manual approach: attach, run the SQL, then detach.
pool.attach( (conn) => {
  conn.stmt.exec(sql, function (query) {
    conn.detach();
    console.log(`query1: ${JSON.stringify(query,null,2)}`);
  });
});

// Wrapper approach: pool.easy handles attach/detach for us.
pool.easy(sql, (query) => {
  console.log(`easy: ${JSON.stringify(query,null,2)}`);
});
Now let's talk through some of the features. First, we require the db2ipool.js module and then instantiate a new pool. At this point, there are two approaches to interacting with the connection pool, through either pool.attach or pool.easy. I originally only had pool.attach but then realized the majority of my queries would entail the exact same code; specifically, obtaining the connection, running my SQL, receiving the result, and then detaching from the connection. That's why I created the pool.easy wrapper, so my code could be cleaner.
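Because each pool.easy call attaches its own connection and statement, multiple queries can be fired off concurrently without sharing any DB2 resources across threads. For example, here's a small hypothetical addition to pooltest.js (the column list is just for illustration):

// Each easy() call checks out its own connection/statement, so these
// three queries can run concurrently without sharing any DB2 resources.
["LSTNAM", "CUSNUM", "CITY"].forEach( (column) => {
  pool.easy(`SELECT ${column} FROM QIWS.QCUSTCDT LIMIT 2`, (query) => {
    console.log(`${column}: ${JSON.stringify(query)}`);
  });
});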
Digging into pool.attach, we can see it has a while loop that will work to find a valid connection. If none are found, it will create more, in batches of eight (8). Once a connection is obtained from the pool, it will be passed back to our program via the following line of code.
callback(this.pool[i])
At this point, you have a sequestered SQL connection and statement that will be used for the duration of a single SQL transaction and thereby eliminate the chance of "toe stepping." Take this code for a test drive on your free Litmis Spaces environment and let me know what you think.
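To tie this back to the customer project that started all of this, here's roughly how the pool slots into a Hapi.js route handler. This is a simplified sketch, not the project's actual code: it assumes the pre-v17 Hapi reply interface, a made-up /customers route, and the QIWS.QCUSTCDT sample table standing in for the real search.

var Hapi = require('hapi');
var db = require('./db2ipool');
var pool = new db.Pool();

var server = new Hapi.Server();
server.connection({ port: 8000 });

// Each request grabs its own pooled connection/statement via pool.easy,
// so concurrent requests never share DB2 resources across threads.
server.route({
  method: 'GET',
  path: '/customers',
  handler: function (request, reply) {
    pool.easy("SELECT LSTNAM FROM QIWS.QCUSTCDT LIMIT 10", (query) => {
      reply(query);
    });
  }
});

server.start( (err) => {
  if (err) { throw err; }
  console.log(`Server running at ${server.info.uri}`);
});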
In the future, I plan on putting this connection pooling into its own repo and also making an npm package out of it so it can be easily installed into your Node.js application. That will be good content for my next article!
If you have any questions or comments, then please comment below or email me at