XML has become the standard for exchanging complex data, and the i is quickly becoming the standard platform for processing it.
Passing business data from one machine to another has always been one of the more difficult issues in application design, and XML has become the most common standard for doing it. XML has a number of features that make it very powerful for data interchange but that don't mesh with the fixed records and fields of traditional i languages. However, each release of i provides new capabilities that make it easier for i programmers to work in the XML world. In this article, we'll examine some of the features that have been added in recent releases, up to and including 6.1.
In earlier times, it was all about size. Whether it was storing data on expensive disk drives or sending it over slow communication lines, space was at a premium. The easiest way to minimize space requirements was to format the data in flat files with predefined layouts. Numeric data in particular was stored in binary format, and the data itself--whether stored on disk or transmitted over the wire--carried no description of its own layout: no metadata, as such information came to be known.
Removing the Ties That Bind
This sort of hard-coded data definition has a number of benefits, not only in storage space, but also in processing speed. If you can define a buffer of contiguous fields in your program and simply overlay data read from disk or from a communication line directly into that buffer, then obviously that's the best performance you can achieve. If, on the other hand, you have to parse the incoming data and validate each field before being able to use it in your program, your processing requirements obviously increase, in some cases significantly.
However, a significant downside to such tight coupling exists: Everybody has to agree beforehand on the storage format. And while such agreement is easy to achieve when one RPG program calls another RPG program on the same machine, it is a little more difficult to come by when one side is a Java application or a .NET program and the other is RPG on the i. It's often quite a feat to convert data from one programming environment into data understood by another. As an example, packed decimal fields, a mainstay in all i languages, do not exist in most other environments. In fact, one of the most important pieces of the IBM Toolbox for Java is a set of classes that translate between fundamental RPG and Java data types (including data structures).
So as long as we insist on transmitting data between layers in a binary format, there will be data translation issues. What is needed instead is a standard platform-independent data representation. And while they are wildly divergent in both origin and use, the two constructs that best fit that definition today are SQL result sets and XML documents.
I won't spend a whole lot of time on result sets here. The only reason I bring them up is that, when they are appropriate, they can eliminate a lot of the problems associated with binding, and anybody looking to connect two layers of an application should always check to see if SQL is a good fit. Their primary advantages are embedded metadata and standard, well-known protocols for accessing the data. On the negative side, they are rather bulky. Also, result sets require an active connection to the database and so can't be persisted (saved to disk); thus, they don't lend themselves well to asynchronous tasks. (Note: There are ways around these difficulties, including RowSets in Java and other techniques, but that's not the focus of this article.)
Result sets grew out of the concept of an RDBMS, so they're closely linked to the concept of database tables: specifically, retrieving rows of data from those tables. The tables themselves are already defined, so an external definition of the data isn't a high priority. The inclusion of metadata in the result set makes it a little easier to handle ad hoc requests, but generally the data contained within a result set represents rows of fields from an existing table.
With XML, it's a little different. The data in an XML document often has no relationship to existing persistent data. XML is close in spirit to Electronic Data Interchange (EDI) documents, which are intended to be standardized messages sent from one computer to another. The typical process would be for one computer to fill in the fields of an EDI document from fields in its database and then transmit the EDI document to another computer, which would then apply that data to its own, often quite different, database.
This is where the similarities end. EDI documents are typically flat files, with records having varying layouts depending on the record type, which is usually found in the first few characters of the record. Older formats are entirely fixed-length, while the newer ANSI standards allow delimited records (often fields are delimited by asterisks and records by tildes). XML documents, on the other hand, store data in a tagged format. Let's compare the two with a simple example.
An EDI segment containing name and address information might look like this:
N1*Pluta Brothers Design, Inc.*542 E. Cunningham Dr.~
Notice that the data in the EDI document is positional and delimited. I'm not an EDI expert, so I don't know what happens when the data contains a delimiter character: if, for example, the business name has an asterisk in it, I don't know how you represent it in EDI.
XML is a tagged representation. The same information in an XML document might look like this:
<COMPANY>
  <NAME>Pluta Brothers Design, Inc.</NAME>
  <ADDRESS1>542 E. Cunningham Dr.</ADDRESS1>
</COMPANY>
XML has specific rules for handling special characters--an ampersand in a company name, for example, is simply written as the entity &amp;--and it is built from the ground up on Unicode, so there's almost nothing you can't store in an XML document. Not only that, but XML documents also have external definitions that can be used to verify the data in a document before sending it to someone else. While such a definition can't check for logical data errors, such as invalid customer numbers, it can check to make sure that a customer number has been included if required and even do basic syntax checking on the data. This editing was originally done through something called a Document Type Definition (DTD), which has since been superseded by the XML Schema Definition (XSD), or "schema" for short. Here is one possible schema for my XML example above:
<xs:element name="COMPANY">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="NAME" type="xs:string"/>
      <xs:element name="ADDRESS1" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
Note I said "one possible schema." That's because there are many, many ways to specify data in an XML document. You can specify attributes on the individual data fields, you can group multiple fields into complex types, you can define the sequence of fields and whether they are optional, and so on.
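For instance, if we used the attribute style rather than child elements, both the document and the schema would change. One possible sketch of an attribute-based schema for the same data might look like this:

<xs:element name="COMPANY">
  <xs:complexType>
    <xs:attribute name="NAME" type="xs:string" use="required"/>
    <xs:attribute name="ADDRESS1" type="xs:string"/>
  </xs:complexType>
</xs:element>

That version would describe a document such as <COMPANY NAME="Pluta Brothers Design, Inc." ADDRESS1="542 E. Cunningham Dr."/>, with the NAME attribute required and ADDRESS1 optional.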
How Does That Relate to i?
OK, that's all fine and good. XML lends itself to formatting and transmitting data from one machine to another by removing the positional requirements of flat files or delimited records. Data is stored in tagged documents, which also provide standardized editing and validation. However, the further we progress down this road, the further we get from the fixed-length structures and record-level processing prevalent in traditional i languages such as RPG and COBOL.
Let's face it: neither language is particularly adept at string handling. I've written XML parsers and generators in RPG, and while the %trim BIF makes generators pretty simple (there's a sketch below), parsers are no walk in the park even if you take a number of shortcuts and remove some of the weirder capabilities (such as sending binary data in a tagged field). Until recently, the only other option was to use a parser written in another language. A while ago, Scott Klement wrote an interface to the Expat XML parser, an open-source parser written in C. Scott went to the trouble of porting that parser to the i and then writing interface layers in both C and RPG. Another option is to use one of the powerful Java tools, such as the Xerces parser or Sun's JAXB binding framework, though either of those obviously requires some level of Java knowledge. Beyond that, you really didn't have a lot of options.
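To give you a feel for the hand-rolled approach, here's a minimal generator sketch. The field names are made up for illustration, and a production version would also need to escape special characters such as the ampersand:

     D name            s             50a
     D name            s             50a
     D address1        s             50a
     D xml             s           1024a   varying
      /free
        name = 'Pluta Brothers Design, Inc.';
        address1 = '542 E. Cunningham Dr.';

        // Build the document with simple concatenation;
        // %trim strips the blanks from the fixed-length fields.
        xml = '<COMPANY>'
            + '<NAME>' + %trim(name) + '</NAME>'
            + '<ADDRESS1>' + %trim(address1) + '</ADDRESS1>'
            + '</COMPANY>';
      /end-free

Generating really is that simple; it's the parsing direction that hurts.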
Over the past several releases, this has changed significantly. While COBOL had the initial advantage, I think the fact that RPG now supports both SAX-style and DOM-style parsing gives it a leg up. I won't go into a long dissertation about DOM vs. SAX processing; the short version is that with a SAX-type parser, you don't call the parser to look for data in the document. Instead, the parser runs through the document and calls you for each event (such as data or a beginning or ending tag). With a DOM parser, you can look for only those elements you need, whereas with a SAX parser, you have to at least acknowledge all the possible events and ignore the ones you don't want. DOM is more flexible, while SAX is typically faster.
V5R3
V5R3 saw the first foray into HLL support for XML; in this case, the XML PARSE statement was added to COBOL. XML PARSE is an event-driven parser similar in spirit to SAX-style parsers such as Expat. You define the name of a procedure that will be invoked for each event generated while the XML document is parsed, and the information about each event is passed in several special registers (XML-CODE, XML-EVENT, XML-TEXT, and XML-NTEXT).
You code a simple XML PARSE statement, which directs processing to your handler procedure. That procedure is usually a set of EVALUATE/WHEN constructs, with ultimately one branch for each field in the document. One of the biggest issues I have with SAX-based parsers is that they require a different coding philosophy (or, at the least, some redundant trapping) to handle the following cases:
<LINE>
  <ITEM>ABC1234</ITEM>
  <QUANTITY>12</QUANTITY>
</LINE>
<LINE ITEM="ABC1234" QUANTITY="12" />
The first case is a fully tagged example; every child element gets its own tag. The second format uses attributes rather than child tags and significantly reduces the amount of redundant data. However, SAX-based parsers deliver attributes during the processing of the parent tag (in this case, the LINE tag), so the attribute form requires extra code in the LINE handling. And if you wish to support both syntaxes, the logic for each field must be duplicated: once for the attribute events and once for the child-element events.
SAX-based parsing can get a bit involved. However, at least the parser is now native to COBOL; that gives COBOL the edge, at least in this release.
V5R4
This release brought XML to RPG in the form of the XML-SAX and XML-INTO statements. The XML-SAX statement provides support similar to that in COBOL, in which a procedure is invoked for each event.
XML-SAX %HANDLER(handlerProc : commArea) %XML(xmlDoc : options)
The handlerProc parameter identifies the procedure to be invoked, while the commArea parameter identifies a variable (typically a data structure) used to communicate between the mainline and the handler. The xmlDoc parameter can specify either a field containing the XML or, via the doc=file option, the path of a stream file in the IFS. The options parameter is a string of name=value pairs that controls many of the other processing options for the operation.
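To make that concrete, here's a skeleton of a handler procedure. This is only a sketch--the names handleEvent, info, and so on are my own--but the parameter list and event constants are the ones the parser expects. Note the separate attribute events: that's exactly the duplication issue described earlier.

     D info            ds                  qualified
     D   name                        50a   varying
     D   address1                    50a   varying

     D handleEvent     pr            10i 0
     D   comm                              likeds(info)
     D   event                       10i 0 value
     D   string                        *   value
     D   stringLen                   20i 0 value
     D   exceptionId                 10i 0 value

     P handleEvent     b
     D handleEvent     pi            10i 0
     D   comm                              likeds(info)
     D   event                       10i 0 value
     D   string                        *   value
     D   stringLen                   20i 0 value
     D   exceptionId                 10i 0 value

     D chars           s          65535a   based(string)
     D value           s          65535a   varying
      /free
        select;
        when event = *XML_START_ELEMENT;
          // string points to the element name
        when event = *XML_CHARS;
          // element content: capture it for the END_ELEMENT event
          if stringLen > 0;
            value = %subst(chars : 1 : stringLen);
          endif;
        when event = *XML_ATTR_NAME;
          // attribute name (the attribute-style syntax arrives here)
        when event = *XML_ATTR_CHARS;
          // attribute value
        when event = *XML_END_ELEMENT;
          // move the captured value into the right subfield of comm
        endsl;
        return 0;   // zero tells the parser to keep going
      /end-free
     P handleEvent     e

The mainline then kicks things off with something like xml-sax %handler(handleEvent : info) %xml(xmlDoc : 'doc=string');.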
XML-INTO supports a more DOM-style processing of the data: you define a structure ahead of time, and the parser stores the data from the XML document directly into it. Depending on the options you select, this allows you to read all or only portions of an XML document with a very simple syntax:
XML-INTO variable %XML(xmlDoc : options)
The various components of this statement specify the actual work to be done. The variable is typically a data structure, although it can be a simple variable or an array of variables or structures. This is a powerful operation that in many cases provides everything needed to get data out of an XML document in a form a standard RPG program can use.
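For instance, loading my COMPANY document from earlier into a data structure might look like this. This is a sketch: it assumes the document is already in the variable xmlDoc, and the case=any option lets the lowercase RPG names match the uppercase tags.

     D company         ds                  qualified
     D   name                        50a   varying
     D   address1                    50a   varying
     D xmlDoc          s          65535a   varying
      /free
        // Subfield names must match the element names;
        // case=any makes that match case-insensitive.
        xml-into company %xml(xmlDoc : 'case=any');
        // company.name and company.address1 are now filled in
      /end-free

Compare that to the dozens of lines a SAX handler needs for the same two fields.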
At the same time, COBOL gained the XML GENERATE statement, which allows COBOL programs to generate XML documents. This is increasingly important in the SOA world as Web services become a standard communication technique: a Web service typically consists of an XML request document and an XML response document, so programs that wish to participate in the Web services world need to both parse and generate documents.
Please note that, as far as I can tell, generation of XML in COBOL is limited to fully tagged data, which means that even the simplest records will expand quite a bit. So at this point, COBOL has lost its lead in parsing, but it still has an edge in that it alone has generation capabilities, albeit not the most efficient kind.
Finally, 6.1
The problem here has been that even with all the enhancements to the language, RPG has still been dogged by field-size limitations that, while perfectly acceptable in the fixed-length world, caused a lot of problems as we moved into the new stream style of XML-driven code.
The ubiquitous 64KB limit was more than enough for most purposes in RPG. Heck, remember that once upon a time one of the leading figures in the IT industry supposedly said no computer would ever need more than 640KB of memory total. That being the case, limiting a field to 64KB isn't such a stretch, especially since 64KB is the range of a 16-bit integer and so allows lengths and offsets to be stored in two bytes.
And back in the day of record-based logic, where you read in a single record and processed it, no field needed to be longer than that. It isn't until you start dealing with entire transactions as atomic units that this becomes an issue. Now that programmers are sending entire orders, or even batches of orders, as a single XML message, the buffer that holds the XML document needs to be large enough to hold the entire transaction. As RPG matures from a record-based business language into a more general-purpose processing language, that 64KB limitation becomes onerous. The workaround, using a user space, is just that: a workaround. It allows a larger document to be processed, but it's not very elegant.
And then, finally, in 6.1 (and that is the correct syntax now; technically, there is no V6R1 of i5/OS; we are now working with IBM i 6.1), IBM increased the limit for character fields to 16MB. This may seem a fairly trivial change, but it actually required a lot of work on the part of the compiler teams; you can scan the mailing lists for discussions on how best to represent the VARYING keyword for both backward compatibility and forward movement. In any case, the 16MB limit ought to be enough for most standard documents; anything over 16MB probably needs to be processed in pieces anyway.
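In practice, the change shows up in the definition specs. Here's a sketch of a 6.1-style declaration (the field name is my own):

      * A varying-length field big enough for a 16MB document.
      * VARYING(4) uses a four-byte length prefix, since two bytes
      * can only describe lengths up to 65,535; the LEN keyword
      * holds the length itself, which no longer fits in the
      * traditional length columns of the D-spec.
     D xmlDoc          s               a   varying(4) len(16000000)

A field declared this way can be handed straight to %XML on an XML-INTO or XML-SAX operation, so a whole multi-megabyte order can finally live in an ordinary RPG variable.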
But then again, where have we heard that before?