
Is There an AS/400 in Your Application's Future?

Commentary

Editor’s Note: Is there an AS/400 in your application’s future? This question was recently asked in Midrange Computing’s forum. Joe Pluta’s response was so eloquent that we felt we should publish it. Joe believes that in order to see the future, you have to study the past. Here’s Joe’s response to that telling question.

I believe the next generation of software is currently under development. Application design is cyclical in nature, with the type of applications dependent upon the relationship between presentation, back-end processing, and bandwidth. In the earliest days of batch processing, there was so little back-end processing power that you needed to submit jobs in batch in as close to machine-readable form as you could. Remember punched cards? It was your job to make sure the computer understood those cards, and only then would the computer spew out data in a semi-readable format. Cards to print, that was pretty much the cycle, with both the card reader and the printer being slave units to the CPU (back when CPU really meant Central Processing Unit).

Service Bureau

The next cycle was the service bureau. Clients of a service bureau sent in large amounts of paper-based data, such as invoices and payments, which were keypunched via a 3741 or similar batch input device. This data updated batch files, which in turn were used to spew out reports. The reports were packaged and then shipped off to users. You might be able to call this the first foray into distributed processing, although it really was little more than the old punch card and printer interface with a UPS truck in between.

Until this time, there were only three types of programs: the batch data-entry program, the batch processing program, and the report print program. There were occasional combinations of these three, but the life cycle of the computer pretty much consisted of getting data, putting it into a file, reformatting the file, and printing it. This was where RPG (and the infamous cycle) had its beginnings.


Online Transaction Processing

Next, you began to see the concept of online processing. While I’m no expert on CICS, I expect it was very similar to the NEP-MRT processing of the original System/3 boxes running CCP. Users in a remote location could enter data on a data-entry terminal and call up inquiries. At this point, users began to standardize on the 24x80 screen (anybody care to enlighten me as to why it became 24x80?) and a block-mode user interface. Note that at this point, serious distributed processing was being done, even if it didn’t look like it. The 3270 and 5250 terminals were, for their day, pretty powerful computers in their own right, capable of handling not only communications but also screen formatting and user input. More importantly, though, these terminals were dedicated user interfaces, predefined to support a specific protocol.

The introduction of online processing allowed two new program types: the online inquiry and possibly the biggest quantum leap in business applications, online transaction processing (OLTP). While an online inquiry was little more than a report being sent to a screen, the OLTP program was a whole new beast. Now you had requirements for record locking, concurrent updates, sessions and states, error processing, and the whole host of things that come from a distributed processing environment.
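To make the record-locking requirement concrete, here is a minimal latter-day sketch in Java using JDBC (not period code; the table and column names are hypothetical). Two users posting against the same record must be serialized inside a transaction, or one update silently overwrites the other:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the record-locking problem OLTP introduced. Table and
// column names (CUSTMAST, CUSTID, BALANCE) are hypothetical.
public class OltpUpdate {
    public static void postPayment(Connection conn, int customerId, double amount)
            throws SQLException {
        conn.setAutoCommit(false); // one transaction per business message
        try (PreparedStatement lock = conn.prepareStatement(
                "SELECT BALANCE FROM CUSTMAST WHERE CUSTID = ? FOR UPDATE")) {
            lock.setInt(1, customerId);
            try (ResultSet rs = lock.executeQuery()) {
                if (!rs.next()) throw new SQLException("No such customer");
                double balance = rs.getDouble("BALANCE");
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE CUSTMAST SET BALANCE = ? WHERE CUSTID = ?")) {
                    upd.setDouble(1, balance - amount);
                    upd.setInt(2, customerId);
                    upd.executeUpdate();
                }
            }
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback(); // error processing: never leave a half-posted transaction
            throw e;
        }
    }
}

SELECT ... FOR UPDATE is the pessimistic approach; optimistic schemes instead compare a version number or timestamp at update time. Either way, it's a class of problem that simply didn't exist in the batch world.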

Computer to Computer

The next cycle came with the advent of peer-to-peer processing. At this time, computers began to speak to one another rather than only to human beings. This development spawned the whole raft of communications protocols such as async, bisync, and SDLC. Consequently, users began to run into issues of compatibility—communication protocols had to match between computers, as did the data formats. Async devices could not communicate with bisync devices. Checksums protected data communications differently than cyclic redundancy checks. And even if the computers managed to establish a communications link, there was no guarantee that the data would match. This was when ASCII was 7-bit and EBCDIC was 8-bit. ASCII was the character format of choice for asynchronous communication, which used start and stop bits and was primarily for low-speed connections (I’m talking slow here...110 baud—that is, 110 signaling events per second, or, in current lingo, roughly 110 bps). EBCDIC was used on high-speed bisync communication. Bisync was high-speed because it didn’t require start and stop bits; however, you couldn’t have gaps in the transmission and had to insert TTD (temporary text delay) characters if you got behind—that made for some interesting communications programming.
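As an aside on why checksums and CRCs protect data differently, here is a small illustrative Java sketch (my own, not from that era): a simple additive checksum cannot tell two transposed bytes apart, while a CRC of the family used by bisync-era protocols can:

// Why a simple additive checksum and a CRC protect data differently:
// the checksum misses reordered bytes; the CRC does not.
public class ErrorChecks {
    // Simple 8-bit additive checksum (the async-era style of check).
    static int checksum(byte[] data) {
        int sum = 0;
        for (byte b : data) sum = (sum + (b & 0xFF)) & 0xFF;
        return sum;
    }

    // Bitwise CRC-16 (polynomial x^16 + x^15 + x^2 + 1, reflected as 0xA001),
    // the polynomial family used by bisync-era protocols.
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF);
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 1) != 0) ? (crc >>> 1) ^ 0xA001 : crc >>> 1;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        byte[] a = "AB".getBytes();
        byte[] b = "BA".getBytes();
        // Same checksum, different CRC: transposed bytes slip past a checksum.
        System.out.printf("checksum: %02X vs %02X%n", checksum(a), checksum(b));
        System.out.printf("crc16:    %04X vs %04X%n", crc16(a), crc16(b));
    }
}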

But peer-to-peer processing was still batch mode: You received a batch of data from another computer (such as a list of price updates or transactions), and you posted that data to your files. OLTP, on the other hand, was still using a captive user interface, and so you programmed monolithic transaction-processing programs that assumed you had control over the user interface. More importantly, these two programming tasks were seen as distinct and different: batch processing and interactive processing. You still see the remnants of that concept in the iSeries with its horrendous penalty for interactive processing. iSeries servers increase sharply in price when you add interactive CPW. (CPW is short for Commercial Processing Workload, IBM’s unit of measure for processing power, and is the yardstick the company uses to price its machines. Machines with an interactive CPW of zero are much less expensive than models with a high interactive CPW.)

Client/Server

This first generation of computer-to-computer processing was done without graphics, without PCs, and without the Internet. However, the next cycle was beginning to form: client/server processing. With client/server processing, small, dedicated computers were used to perform offline tasks that weren’t suited to the batch-processing characteristics of the mainframe. These tasks ranged from CAD/CAM to Computer Integrated Manufacturing (CIM) to electronic data interchange (EDI) to any of a number of other tasks wherein a dedicated microprocessor-based client machine performed a specific function and interacted with the host machine. At first, this communication was pretty much batch. The client device might download some information, do its work, and then upload a batch of information back to the host.

Note that the keyword is batch. Even though you were now connecting small computers (the term PC hadn’t yet come into common use) to midranges, you primarily thought of communication between two machines as a batch process. A few visionaries were beginning to throw around terms like n-tier architectures, but, for the most part, these concepts were more blue-sky than concrete.

There were, however, a few companies that were beginning to understand the concept of client/server transaction architecture. With this architecture, a message was sent from one machine to another with all the information required to perform a request. The machine sending the request was a client, and the machine processing the request was a server. Now, I don’t know about the non-ERP, non-midrange market, but, for me, this message-based architecture was a direct response to the challenge of creating graphical applications that could use AS/400 data. Users had a lot of knowledge about transaction processing in RPG and were learning about developing graphical applications in C and later in C++, but there was no good way to connect the two.

Harvesting PCs for Data

My first practical application for such a system was at International Harvester. I helped design a system where PCs installed at gas stations around the country could dial into IBM Portable PCs at a central office. These PCs were, in turn, hooked up through 5250 emulation to a very simple transaction-processing program on the host. This program put out a screen with a single 256-byte input field and a matching 256-byte output field. The Portable PC accepted a message from the PC at the gas station, typed the message into the input field, and “pressed” the Enter key. The message might have been a credit card authorization or a master file update—it didn’t matter. The midrange (in this case, a System/38) processed the message and sent back a response, either an OK or an error. This response would, in turn, be sent back down to the PC, which would display the results to the user.
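In today’s terms, the protocol looked something like the following Java sketch (a reconstruction for illustration, not the original System/38 code; the transaction codes and field layout are hypothetical). Every request and response travels as a single fixed-length field, so one dumb pipe carries any transaction:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of a fixed-length message protocol: every request and
// response is one 256-byte field. Codes and layout are hypothetical.
public class FixedMessage {
    static final int FIELD_LEN = 256;

    // Pack a transaction code plus payload into a blank-padded 256-byte field.
    static byte[] pack(String txnCode, String payload) {
        byte[] field = new byte[FIELD_LEN];
        Arrays.fill(field, (byte) ' ');
        byte[] src = (txnCode + payload).getBytes(StandardCharsets.US_ASCII);
        System.arraycopy(src, 0, field, 0, Math.min(src.length, FIELD_LEN));
        return field;
    }

    // The host side: route on the transaction code, answer in the same shape.
    static byte[] process(byte[] request) {
        String txnCode = new String(request, 0, 4, StandardCharsets.US_ASCII);
        switch (txnCode) {
            case "AUTH": return pack("OK  ", "credit approved");
            case "MUPD": return pack("OK  ", "master file updated");
            default:     return pack("ERR ", "unknown transaction " + txnCode);
        }
    }

    public static void main(String[] args) {
        byte[] response = process(pack("AUTH", "4111111111111111 0024.50"));
        System.out.println(new String(response, StandardCharsets.US_ASCII).trim());
    }
}

The point of the fixed shape is that the transport (in our case, a screen field and the Enter key) never has to know what the message means.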

This technology looked (to me, anyway) to be the ultimate way to design systems. Small application programs could be designed that would run on different machines, and those programs would run together seamlessly by passing messages back and forth. The location of the programs would be arbitrary, and a fairly simple routing architecture would move the data around. Systems could be designed to be extensible, flexible, and self-balancing (for example, a print server with nothing to do could volunteer to do a graphics conversion to help out an overloaded graphics server).
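A minimal Java sketch of that routing idea (my own illustration; all names are hypothetical): servers register for the message types they can serve, and an idle server can volunteer for work outside its usual role:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a self-balancing message router. Names are hypothetical.
public class MessageRouter {
    interface Server {
        String handle(String payload);
        boolean isIdle();
    }

    private final Map<String, List<Server>> routes = new HashMap<>();

    // A server registers for a message type; an idle print server could
    // also register for "GRAPHICS_CONVERT" to help a busy graphics server.
    void register(String messageType, Server server) {
        routes.computeIfAbsent(messageType, k -> new ArrayList<>()).add(server);
    }

    String route(String messageType, String payload) {
        List<Server> candidates = routes.getOrDefault(messageType, List.of());
        // Prefer an idle volunteer; fall back to any registered server.
        return candidates.stream()
                .filter(Server::isIdle)
                .findFirst()
                .or(() -> candidates.stream().findFirst())
                .map(s -> s.handle(payload))
                .orElse("ERR no server for " + messageType);
    }
}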

Some of us in the development community began to move in that direction, and at System Software Associates, I designed a very sophisticated set of client/server APIs that allowed just that sort of extensible, flexible, and self-balancing design. The design was primarily used to connect graphical client programs to RPG servers, but the interesting thing was that it could also use the client/server architecture to connect programs on the same box, creating zones of control where user interface programs never saw databases and vice versa. This development was leading to a wonderfully distributed programming environment.

Tech Confusion

However, a slew of technologies (most importantly ODBC) came along that allowed direct access to data on a host machine. Screen scrapers also began to proliferate, and it became quite the rage to slap a graphical front-end on a 5250 data stream, thereby relieving the software houses of what was perceived as an insurmountable job: re-implementing existing monolithic architectures as a true client/server design. To the company’s credit, SSA attempted a true re-implementation on one project, but thanks to incredibly poor management decisions, the project failed spectacularly, and from its ashes rose the ill-fated move to UNIX.

However, the move to writing complex applications on the PC that could directly access data on the AS/400 (or another host) had gained momentum, and you saw the tide turning away from message-based client/server processing and back to monolithic program design, where the user interface and the database access were inextricably linked. And, unfortunately, the tools for the PC that allowed rapid application development also generated this kind of monolithic code. Worse, the business logic had now migrated to the PC, bringing with it problems of application integrity. But that change was of no matter to the designers and users who simply wanted more features, more functions, and integrity be damned. (Why do I say that? Well, the bugs that are found and tolerated in today’s typical desktop application would have been cause to delay release to clients or even cancel a project in the 1980s.) And so the cycle swung, until quite recently.

Message-based Middleware

Now, the tide seems to have cautiously moved back in the direction of message-based client/server processing. The advent of object-oriented programming languages has given more weight to the concept of separating user interface from business logic. Java’s Swing user interface embraces at its core the Model-View-Controller (MVC) design philosophy, which elegantly supports complete separation of data and presentation. Client/server programs can easily be written for the thick-client architecture, where the client program is primarily a presentation device that communicates via messages with a server in the background.
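Here is a minimal Java sketch of that MVC separation (illustrative only; the names are hypothetical): the model knows nothing about presentation, and a Swing label is just one of any number of views observing it:

import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of MVC separation: the model holds data and notifies views;
// it has no idea whether a view is Swing, console, or anything else.
public class BalanceModel {
    private double balance;
    private final List<Consumer<Double>> views = new ArrayList<>();

    public void addView(Consumer<Double> view) { views.add(view); }

    public void setBalance(double newBalance) {
        balance = newBalance;
        views.forEach(v -> v.accept(balance)); // notify every registered view
    }

    public static void main(String[] args) {
        BalanceModel model = new BalanceModel();
        JLabel label = new JLabel();
        // A Swing view and a console view observe the same model.
        model.addView(b -> SwingUtilities.invokeLater(
                () -> label.setText(String.format("Balance: %.2f", b))));
        model.addView(b -> System.out.printf("Balance: %.2f%n", b));
        model.setBalance(125.00);
    }
}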

But a conflict is being waged at the Web-interface level between those who want to design Web applications that talk to the screen as if they were old-fashioned OLTP programs and those who want a more message-based design that can be created using servlets and back-end servers. The HTTP protocol is the next evolution of the 3270/5250-style block mode of user interface, where a message is sent from the user to a processing program. This lends itself quite well to the send-a-screen-of-data/get-a-response type of programming you’re used to in the monolithic midrange school of design. But this type of programming is ultimately a very poor design because it assumes the characteristics of the user interface and doesn’t provide the hooks for any other interface (i.e., a thick client or a cell phone). Instead, in my opinion, the best architecture consists of message-based servers that are independent of the user interface. First, you design a message-based business protocol. Then, if you want to implement a browser interface, you create what I call smart widgets that can present a request message to the user and gather the request data, or present the contents of a response message. Why do this? Because the same servers that support this protocol for the browser can also be used by thick clients, for B2B processing, or by any new interface that needs to be supported.
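A minimal Java sketch of what such a message-based business protocol might look like (my illustration; all names and message layouts are hypothetical). The same server answers a browser widget, a thick client, or a B2B feed, because none of them leak into the protocol:

// Sketch of an interface-independent business protocol: plain
// request/response messages with no UI types anywhere in them.
public class ProtocolSketch {
    record ItemRequest(String itemNumber) {}
    record ItemResponse(String itemNumber, String description, double price) {}

    interface ItemServer {
        ItemResponse lookup(ItemRequest request);
    }

    // Any front end is just a thin adapter onto the same server.
    static String asHtml(ItemResponse r) {          // browser "smart widget"
        return "<td>" + r.description() + "</td><td>" + r.price() + "</td>";
    }

    static String asEdiSegment(ItemResponse r) {    // B2B feed
        return "LIN*" + r.itemNumber() + "*" + r.price();
    }

    public static void main(String[] args) {
        ItemServer server = req ->
                new ItemResponse(req.itemNumber(), "Widget, 2-inch", 4.75);
        ItemResponse r = server.lookup(new ItemRequest("A1001"));
        System.out.println(asHtml(r));
        System.out.println(asEdiSegment(r));
    }
}

Swap the lambda for an RPG-backed server and neither adapter changes; that is the whole argument.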

There’s an AS/400 in Joe’s Future

So what’s the point of this long-winded missive? The fact is that there is still an AS/400 in my future for application development. Why? The AS/400 is still the most productive business logic development platform on the market, bar none. The AS/400’s security, ease of operation, scalability, and the many other features that made it a great platform for monolithic ERP applications translate doubly well to the server environment. While I’m all for those users who would like to create a standard graphical interface for the AS/400, my position is that there already is one, and it’s called a browser, or a PC, or a cell phone.


AS/400 developers should concentrate on building a suite of powerful, message-based server applications that can be used by any sort of client and let the desktop folks fight it out over who has the prettiest graphics. I don’t want the browser wars to affect how I develop my business logic. I want to be immune from user interface requirements, and a message-based business application architecture does just that. This type of architecture is the best way to make use of and extend the viability of the midrange platform.


Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


