Tangled Technology


When I first started programming, I was the quintessential whiz-bang programmer. I rang every new bell and blew every new whistle, and I managed to shoehorn every new feature into whatever application I was writing, whether it needed it or not.

This was bad enough in the days of assembly language or C, when you were pretty much limited to a single language, or even early versions of RPG, when there were only so many ways of doing things. But nowadays, technology is bolting ahead by leaps and bounds, and it's hard to tell which new tools will help and which are hype, especially when what works for me may not necessarily work for you. Not only that, but some technologies just don't work well together and require a little extra coddling to co-exist.

In this article, I'll outline some fundamentals for identifying whether a technology is a good strategic fit for your company, and then I'll show you a couple of case studies to illustrate why certain technologies may not fit without some assistance.

The Latest Is Not Necessarily the Greatest

Just because something is new does not mean that it is improved. We've seen many situations where that has been made clear. To identify whether a technology fits your organization, you need to address several key factors:

  • Is this technology stable?
  • Is this technology compatible with the rest of my environment?
  • Does this technology make me more dependent or less?

These questions don't address the issue of whether the technology helps you achieve your business goals and is cost-effective for your particular environment; I consider these issues to be business-oriented, and this article is focused entirely on the technology.

Is This Technology Stable?

This is a tough question, and one you won't always get right, but you must at least attempt to make the right call. In my mind, calling a technology "stable" requires that it meet a number of criteria, not all of which are obvious at first glance.

For example, is the technology going to continue to be supported? Back in the day, that meant determining whether the vendor would still be around in five years. These days, the concept is more elusive, especially when dealing with open-source technologies. In the short-attention-span world of public consensus, today's Cadillac is tomorrow's Edsel.

Technologies die out for different reasons. Two typical reasons are over- and under-engineering. Examples include Enterprise JavaBeans (EJBs) and Struts. EJBs are already being recognized as a solution without a problem, especially in business applications. The overhead associated with EJBs and their lack of essential business capabilities make them unsuitable for all but the strictest Java-centric environments. Struts, on the other hand, is an example of an under-engineered idea. Sort of an "alpha" release of a really neat idea, Struts is now all but orphaned even by its creator and is well on its way to being eclipsed by the newer JavaServer Faces (JSF) standard.

Other issues associated with stability include whether a given technology is an accepted standard, whether the community that maintains it has a good vision, and whether the technology actually performs acceptably under your production workload.

Is This Technology Compatible with the Rest of My Environment?

This is not just a question of whether a given technology will actually work. It's also a question of whether you need the technology, both short- and long-term. An example would be Wireless Application Protocol/Wireless Markup Language (WAP/WML). While there is nothing intrinsically wrong with the WAP/WML protocol, it's really not necessary for most people. Unless you need to use a specific wireless device that happens to conform to the WAP standard, WAP is simply not required. If the idea is to access your Web site from a cell phone, then standard HTML is probably fine, especially as the capabilities of cell phones continue to progress. I'd be willing to bet that there are companies that over-invested in WAP/WML because it looked like a good idea at the time, but in the long run it really wasn't needed--it wasn't compatible with their strategic direction.

Does This Technology Make Me More Dependent or Less?

This is the issue I get the most pushback on these days, and there's a valid reason for that pushback. My question boils down to this: Once you start using this technology, are you then tied to it? More importantly, are you "black box dependent" upon it? By "black box dependent," I mean are you no longer capable of actually debugging the code you are putting into production? I liken many of the newer tools, especially the Web-enabling middleware, to the electronic ignition in cars. Prior to the introduction of the computer, a backyard mechanic could usually diagnose a car's problem and fix it without too much trouble. As cars became more computerized, with more sensors and electronics, it became all but impossible for someone with a screwdriver and a timing light to figure out what was wrong. Instead, only the computer knows what's going on and why the engine is behaving poorly. Today, the first thing that happens at the repair shop is that the mechanic hooks the car up to an on-board diagnostics (OBD) reader, which simply displays the diagnostic codes identifying whatever the computer thinks is the problem. Your options are basically limited to replacing whatever the computer tells you to replace.

We're fast approaching just that level of automation with some of the products and technologies being handed to us. Some, like Struts, simply provide another layer on top of something we could already do ourselves. While an easier syntax is not necessarily a bad thing, if you are locked into that specific syntax and it doesn't provide what you need somewhere down the road, you've painted yourself into a corner. This is especially true for those orphaned technologies that are no longer actively supported.

Another area of programming automation comes from tools like WDSc. While I think I've been pretty clear on how much I love WDSc, it's also crucial to recognize that some of its features can be misused. The wizards in particular are prone to abuse. It's one thing to use a wizard to generate some code that I can look at, learn from, and adapt for use in my environment. It's another thing entirely to depend on the wizard for production-quality code. And I fear that more and more we're starting to see IT departments do just that: use a wizard to generate a prototype and then shove that prototype directly into production. That raises some serious red flags in my opinion. An example might help show why.

Case Study: The Web Service Wizard--Good or Evil?

Okay, that's a bit of an overstatement of the issue. The wizard itself is neither good nor evil, but it can certainly be misused. I had a real-life situation in which I was guilty of misusing the wizard, so I can attest to the potential danger of the approach.

Black Box or Black Hole?

In this case, I wanted to expose a service to one of my clients. This service allowed them to send me an XML file, which I would then process to generate and send back various objects, including some nicely formatted JSP files. From an initial review, this looked like an absolutely perfect use of a Web Service. The client would simply attach to a Web Service on my machine and transmit the XML file, and I would then return the objects appropriately encapsulated in another XML file. Nothing to it!

At first, this seemed fine. Although I ran into a few issues during the initial prototyping, most of them could be attributed to my own inexperience in developing Web Services. The WDSc wizards worked pretty much flawlessly, allowing me to generate a "HelloWorld" Web Service, which I was subsequently able to expand into a more full-featured utility. After some initial testing, I was ready to go to market, so to speak.

Unfortunately, that didn't happen. When you package a Web Service for deployment, you simply send some descriptive information (essentially the WSDL file) to your consumer, who then uses it to build a Web Services client. All of this is under the covers, and very little is really exposed to the programmer. And while this can be seen as a good thing, it has one drawback: It's a black box. And when something goes wrong in a black box, there's very little anyone can do to fix it. In this case, a problem with firewalls arose. But since the person in charge of firewalls didn't understand Web Services, and since I only had the black box capabilities of the generated code, there was precious little that could be done to resolve the problem.
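To see just how little is exposed, consider what the consumer side looks like. This column predates the JAX-WS standard (the WDSc wizards of the day generated JAX-RPC-style code), but a minimal sketch using the later javax.xml.ws API shows the same general shape; the interface, service name, and URL below are hypothetical, not from my actual project:

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hypothetical service-endpoint interface. In real life, a tool generates
    // this from the WSDL, along with all of the plumbing underneath it.
    @WebService
    interface XmlProcessor {
        String process(String requestXml);
    }

    public class WsClientSketch {
        public static void main(String[] args) throws Exception {
            // The consumer sees only the WSDL and a typed proxy built from it.
            URL wsdl = new URL("http://example.com/services/XmlProcessor?wsdl");  // hypothetical
            QName serviceName = new QName("http://example.com/xmlprocessor",
                                          "XmlProcessorService");                // hypothetical
            Service service = Service.create(wsdl, serviceName);
            XmlProcessor port = service.getPort(XmlProcessor.class);

            // One method call. The SOAP envelope, the HTTP transport, and the
            // serialization all happen inside generated and runtime code the
            // programmer never sees -- which is exactly the black box problem.
            String responseXml = port.process("<request/>");
            System.out.println(responseXml);
        }
    }

That single port.process() call is wonderfully convenient right up until a firewall, a proxy, or an encoding mismatch gets in the way, because there is no layer underneath it that you can open up and instrument.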

I did begin the research required to identify the problem, and I'll tell you about that in next month's "Weaving WebSphere" column. As a little preview, I'll let you in on an absolutely fantastic tool I found for analyzing TCP/IP traffic. The tool is Ethereal, and it's a sterling example of an open-source project: lots of people working together to create a great, free product. It's almost too good to be true, actually; I'd love to know how the project is funded and how the original author, Gerald Combs, supports himself. Regardless, it's the best TCP/IP analysis tool I've seen out there, including commercial products, and I highly recommend it.

Anyway, I ended up having to go back to the tried-and-true method of using a servlet. My client now sends me the XML via a servlet, and I send back all the generated objects in a zip file. That particular logic was quite cool, actually; it involved making use of the java.util.zip classes, as well as one or two other open-source packages. All in all, the project was successful, but the Web Service portion of it was a dismal failure. Why the difference? Well, primarily because I could play with the open-source packages and see how they worked, while the Web Service stuff was completely hidden beneath the code generated by the wizard. It is this level of black boxing that I believe to be generally detrimental to our industry.
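For the curious, here is a minimal sketch of that servlet-plus-zip shape, using the standard servlet API and java.util.zip. The class name and the generateObjects() placeholder are hypothetical; the real logic parsed the incoming XML and produced JSPs and other artifacts:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Collections;
    import java.util.Map;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: the client POSTs an XML document, and the response
    // is a zip file containing whatever objects were generated from it.
    public class XmlToZipServlet extends HttpServlet {

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {

            // Read the incoming XML from the request body.
            byte[] xml = readAll(request.getInputStream());

            // Placeholder for the real work, which turned the XML into JSPs
            // and other generated artifacts.
            Map<String, byte[]> generated = generateObjects(xml);

            // Stream the results back as a zip file built with java.util.zip.
            response.setContentType("application/zip");
            response.setHeader("Content-Disposition", "attachment; filename=\"generated.zip\"");

            ZipOutputStream zip = new ZipOutputStream(response.getOutputStream());
            try {
                for (Map.Entry<String, byte[]> entry : generated.entrySet()) {
                    zip.putNextEntry(new ZipEntry(entry.getKey()));
                    zip.write(entry.getValue());
                    zip.closeEntry();
                }
            } finally {
                zip.close();
            }
        }

        private byte[] readAll(InputStream in) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        }

        private Map<String, byte[]> generateObjects(byte[] xml) {
            // Stand-in: echo the XML back as a single zip entry.
            return Collections.singletonMap("request-echo.xml", xml);
        }
    }

The point isn't that this is clever code; it's that every layer of it (the servlet, the zip stream, the XML handling) is plain, visible Java that can be stepped through in a debugger when something goes wrong.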

Same Story, Different Day

Think about it: If all you know as a programmer is how to attach a bunch of black boxes together, and you're fundamentally incapable of either diagnosing what's wrong with those boxes or adapting them to changing business conditions, then you really aren't providing any value add to your department. If software is written by wizards, and those wizards are as readily available in Beijing as in Boston, then there's no reason not to ship that work offshore.

I don't mind wizards as ways to learn how a given task is done. But the code generated by a wizard is almost always far inferior to anything that can be handcrafted by a good programmer. It's that craftsmanship that differentiates an IT professional from a code jockey. And yet, these days I often hear people (sometimes people I respect in the industry!) say that they need more WYSIWYG tools and code generators to do the work for them. But as soon as companies start accepting the output of generators, there's no need for programmers, no way for ground-breaking software ideas to be business differentiators, and eventually no impetus for innovation.
And once that happens, the industry flies offshore faster than you can blink.

Case Study: RPG Source Code in the IFS

This one has me scratching my head. While I know Java source and HTML source and all the other "cross-platform" types of source all live best in a stream file environment such as the IFS, I still don't see that as a good reason to move my RPG, CL, and DDS source to the IFS along with it (and don't get me started on using SQL rather than DDS to define files, please, or we'll be here all day).

This is simply another case where the technology change doesn't add any value. Look at what you lose: line-by-line source change dates, PDM-based access, the source change date on objects... I can't see what the benefits are here, but maybe you can. If you disagree, please share your views in the forum; I'd really like to know.

In the meantime, though, I do know that there are some tools out there that let you do some cool things with source files stored in the IFS. The most powerful tools for text file manipulation come from UNIX, and many of them are available under the QShell environment. I believe you can also access the IFS from a Linux PC, although it may require the Samba file mapping utility. Personally, though, my favorite way is to map a drive and then use Cygwin, a UNIX emulation package, to access the IFS. Cygwin is a hefty but very powerful set of UNIX utilities ported to run under Windows; be warned that a full installation can run to hundreds of megabytes.

The Final Word

Technology is good, change is good, productivity aids are good. But as in all things, moderation is the key, and forethought is required before adopting new technologies. Take the time to assess whether a given technology is actually going to help your company in the long run. Remember, if you let your tools dictate your design, your design is unlikely to provide you an advantage.

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been working in the field since the late 1970s and has made a career of extending the IBM midrange, starting back in the days of the IBM System/3. Joe has used WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. Joe is also the author of E-Deployment: The Fastest Path to the Web and Eclipse: Step by Step.
