You can't be sure your software applications work properly without testing them. Solutions available for the IBM i help automate the testing process. But there are issues.
Before any software application is ready for sale or deployment, it has to be tested to make sure it works as intended. Although you've probably seen software that seemed as if it skipped that step, the majority of applications, whether homegrown or offered for commercial sale, have at least some testing behind them.
If this sounds straightforward, it's anything but. For one thing, software testing doesn't prove that software works; it can only prove conclusively that something doesn't work. That's why software may contain errors despite the most rigorous testing, and why IT staffs and software vendors under pressure to ship let buggy software out the door: it's impossible to test everything.
There is a thicket of additional issues and terms surrounding software testing, and if you're facing a need to test software, for whatever reason, you first need a basic primer on what some of those issues and terms are. You'll find the primer useful when you look at the features software tool vendors offer with their products, because this is the nomenclature you'll encounter.
Multiple Choice or Essay?
What kind of testing do you need to do? It turns out that there are many kinds. For example, if you're part of an enterprise that is considering a software purchase, you have to figure out if the software will perform as advertised and actually do what you need it to do. This is called acceptance testing, and it basically requires you to have a checklist of acceptance criteria—that is, a collection of actions you need the software to complete successfully—against which to test. If you have concerns about how the software will function if it's accessed by a large number of users or how it will operate if you run it on a smaller system, you'll also have to do some performance testing. If you want to check out the effects of a large number of users accessing the same application code or database records at the same time, that's called concurrency testing.
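To make the acceptance-criteria idea concrete, here is a minimal sketch (in Python, purely for illustration) of a checklist expressed as automated tests; the order_entry module, its functions, and the tax rule are hypothetical stand-ins for whatever your own criteria name.

    # Hypothetical acceptance checklist expressed as automated tests.
    # The order_entry module and its behavior are assumptions for illustration.
    import pytest
    import order_entry

    def test_order_total_includes_tax():
        # Criterion: an order total must include the (assumed) 7% sales tax.
        order = order_entry.create_order(customer="C1001", items=[("WIDGET", 2, 4.50)])
        assert order.total == pytest.approx(2 * 4.50 * 1.07)

    def test_unknown_customer_is_rejected():
        # Criterion: the application must refuse orders for customers it doesn't know.
        with pytest.raises(order_entry.UnknownCustomerError):
            order_entry.create_order(customer="NOSUCH", items=[])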
If the software is sufficiently different from what your end users have been employing so far to do their jobs, you have to be concerned with usability testing (how easily users can learn the new product) or even user acceptance testing, which is necessary when users have sign-off authority on adopting the new software. If you're installing new software along with new hardware, you'll be interested in benchmark testing to make sure everything works together.
If the software performs functions critical to the enterprise (which would be just about any software according to at least someone in the organization, wouldn't it?), you'll have to address concerns about guarding against access to the software by unauthorized users, or unauthorized actions by the software itself. That's called security testing. Does the application include passing information to a sequence of users for review or approval? Then you're into workflow testing.
Maybe you're simply thinking of installing a new version of an application you already have. If you want to make sure that software functions that used to work just fine still do, you're into the realm of regression testing. If you want to be sure the basic features of the software function without getting into too much detail, then you'll be doing breadth testing. And if you need to meet some kind of national or international standard for the performance of the software—perhaps for use in an engineering or medical environment—you'll need to be concerned with conformance testing.
Finally, there's the software's environment to consider. Does the application use a GUI? Is it accessible by browser? Is it going to run, or run on, a Web site, the performance of which might also be affected? Will the software interact with databases or other applications? Those aspects must be tested as well, and most of them are considered to be part of integration testing.
The trouble is, you'll probably need to do at least several of these in an adequate test regimen.
Can You Tell Your SATs from Your GREs?
Once you've decided the "what" part of testing, there's the "how" to deal with. If you aren't concerned with the actual code in an application and plan to test primarily by entering data and seeing what comes out the other side, that's called black box testing. If you do care about testing the code itself, that's white box testing, and taking a combined approach is called grey box testing.
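The difference is easier to see in code. The sketch below (Python, with an invented calc_discount routine) probes the same function both ways: the black box checks look only at inputs and outputs, while the white box checks are written with the source in view and deliberately aim at the branch boundary.

    # Invented example: calc_discount() stands in for any routine under test.
    def calc_discount(order_total):
        """Assumed business rule: 10% off orders of 500.00 or more."""
        return round(order_total * 0.10, 2) if order_total >= 500 else 0.0

    # Black box: feed data in, check what comes out; no knowledge of the code.
    assert calc_discount(600.00) == 60.00
    assert calc_discount(100.00) == 0.0

    # White box: written with the source in view, deliberately hitting the
    # boundary between the two branches.
    assert calc_discount(499.99) == 0.0
    assert calc_discount(500.00) == 50.00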
Do you think you should start by testing major components of the software and work your way down to verifying the detailed parts, or start at the bottom by testing small components before working your way up to the application as a whole? That's the nub of the philosophical argument between top-down testing and bottom-up testing (doing both and meeting in the middle is sometimes called sandwich testing). Or maybe you don't care; you just want to test all the piece-parts at some point, which is called unit testing.
The two major types of code testing are functional testing and regression testing. Functional testing means verifying that each function of the application behaves the way its requirements say it should. Regression testing refers to repetitive runs of data and functions to be sure that the parts of an application that worked before a change still do.
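As a rough illustration (again in Python, with an invented price_quote routine), a regression suite can be as simple as a set of known-good input/output pairs, recorded while the application worked, that get replayed after every change.

    # Known-good results recorded from a previous, working release; in practice
    # these would live in a file, not be hard-coded. price_quote() is invented.
    KNOWN_GOOD = [
        ({"part": "A100", "qty": 10}, 125.00),
        ({"part": "A100", "qty": 100}, 1100.00),   # volume break should still apply
        ({"part": "B200", "qty": 1}, 19.95),
    ]

    def run_regression(price_quote):
        """Return the cases where the current code no longer matches the recorded result."""
        return [(args, expected, price_quote(**args))
                for args, expected in KNOWN_GOOD
                if price_quote(**args) != expected]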
Then there's the test bed. That's a special environment you're going to need to set up for testing. It'll need to duplicate, at least in miniature, the operating environment in which the application being tested will eventually run. You won't want to risk corrupting real data, so you'll need one or more temporary databases that approximate the actual data the application will use. You'll need copies of anything else the application will interact with. You might want it all to run on a non-production system too, so that testing doesn't interfere with actual operations or affect the performance of the machines you rely on to stay in business. After all, if you do find a bug, you don't want it to affect anything else.
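A sketch of the smallest possible test bed, assuming Python and using an in-memory SQLite database as a stand-in for the production files, looks something like this:

    import sqlite3

    def build_test_bed(sample_rows):
        """Create a throwaway database loaded with a small, representative data sample."""
        conn = sqlite3.connect(":memory:")   # nothing here can touch live data
        conn.execute(
            "CREATE TABLE customer (id TEXT PRIMARY KEY, name TEXT, credit_limit REAL)")
        conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", sample_rows)
        conn.commit()
        return conn

    # The code under test is pointed at this connection instead of the real database.
    testdb = build_test_bed([("C1001", "Acme Ltd.", 5000.0)])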
It's been standard practice for many years to build an application, then test it to identify problems, and then fix the problems. More recently, testing specialists have pointed out that that's the most expensive way to do it. It's better to find bugs early, the argument goes, when fixing them doesn't potentially touch lots of other pieces of finished code. Are you testing an application you developed in-house? Are you using a cutting-edge development methodology such as agile programming? Then you may need to do even more research. Agile programming, for example, has advocates who preach a test methodology called test-driven development. Simply put, it calls for unit testing each software component as it's developed rather than waiting until you have a finished, or nearly finished, application before you start testing.
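In compressed form, the test-first rhythm looks like the sketch below (Python, with an invented sales_tax routine): the test is written before the code it exercises, fails, and then just enough code is written to make it pass.

    # Step 1: write the test first; it fails because sales_tax() doesn't exist yet.
    def test_sales_tax_rounds_to_cents():
        assert sales_tax(19.99, rate=0.0825) == 1.65

    # Step 2: write just enough code to make the test pass, then move on.
    def sales_tax(amount, rate):
        return round(amount * rate, 2)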
Who is going to do the testing? If your users have to learn and accept the new software, at least some of them should be involved. If you're developing software in-house, it might be better to not have the same developers who built the application test it. What methodology will you use? How will you know if you've covered all the bases?
The final big "how" question is the issue of manual versus automated testing. Manual application testing processes have been with us as long as there have been programmers, but such methods can be as time-consuming as the development process itself. Key to shortening the testing curve, as well as making sure the test process will reliably find any operational problems, is to automate the process as much as possible. On the other hand, automated testing is really only useful when testing a large number of repetitious actions. Complex interactions, special conditions or situations, and software aspects you only have to test once are better handled manually.
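The dividing line is repetition. A check like the one sketched below (Python, with an invented validate_part_number routine) runs the same validation across thousands of generated values, which is exactly what automation is good at; a one-off exploratory check of an unusual screen flow is usually quicker to do by hand.

    def exercise_part_numbers(validate_part_number):
        """Run the same validation across thousands of generated part numbers."""
        failures = []
        for prefix in ("A", "B", "C", "Z"):
            for n in range(1000):
                candidate = f"{prefix}{n:04d}"
                if not validate_part_number(candidate):
                    failures.append(candidate)
        return failures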
The point here is that software testing takes some serious planning. You can't just pick a testing product and expect it to come up with all the answers. Someone has to analyze the issues and dangers a new piece of software represents and figure out a strategy for testing—a strategy in which a software tool is only one piece.
An important part of formulating a testing strategy is impact analysis, a process of determining which parts of software affect other software, databases, and other parts of the computing environment. While some products that provide impact analysis services aren't specifically software testing tools, they can be important in developing a testing strategy.
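Stripped to its core, impact analysis is a graph walk: given a map of which objects are used by which others, find everything that could be touched by a change. The sketch below (Python, with an invented dependency map) shows the idea; real tools build the map from the code itself.

    from collections import deque

    # Invented dependency map: each object lists the objects that use it.
    USED_BY = {
        "CUSTFILE": ["ORDENTRY", "CRDCHECK"],
        "ORDENTRY": ["INVOICE"],
        "CRDCHECK": [],
        "INVOICE":  [],
    }

    def impacted(changed):
        """Return every object reachable from the changed one through the used-by map."""
        seen, queue = set(), deque([changed])
        while queue:
            for dependent in USED_BY.get(queue.popleft(), []):
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        return seen

    # impacted("CUSTFILE") -> {"ORDENTRY", "CRDCHECK", "INVOICE"}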
Filling in the Blanks
When it comes to testing software for the System i, the basic problem is that there are only a limited number of automated tools out there to help you test server-based RPG or COBOL programs. But there are a few. Please note that the following are merely summaries of a few important features of each product; consult the vendor Web sites for more complete information on each offering.
ARCAD Software
ARCAD-Qualifier
www.arcadsoftware.com/index.php?option=com_content&task=view&id=16&Itemid=136
ARCAD-Qualifier is a regression testing suite with three modules for applications running under i5/OS and Windows 98/2000/ME/NT/XP. The Verifier module focuses on repetitive tests that cover not only application functions that have been directly modified, but also peripheral functions that may be affected. The Extract module helps testers extract sample business data for use in testing and places it in a separate storage area so that if the sample is inadvertently corrupted, the original data is still safe. The Test Coverage Analyzer module analyzes application code lines to make sure all lines are tested.
Databorough
X-Analysis
www.databorough.com/products/x-analysis.html
X-Analysis is a comprehensive tool suite for documenting and modernizing IBM i applications. Although not specifically designed for testing, it includes functions for application-code impact analysis, automated business-rule extraction, test-data extraction and cleansing, and automated data-model extraction. These functions can all help with application development and testing.
MKS
MKS Integrity for Test Management
MKS Integrity for Test Management is part of MKS' Integrity line of software lifecycle management tools and focuses on administering and tracking a testing program. It provides traceability between all aspects of application development and offers an array of querying, charting, and reporting options.
Original Software
Original Software's products are available as a suite or standalone.
Extractor
www.origsoft.com/products/data_extractor.htm
Extractor is a database analysis and data extraction tool that helps test teams establish data sets for application testing.
Qualify
www.origsoft.com/products/qualify.htm
Qualify is an application quality-management system that not only includes test execution features, but also provides tools for determining software requirements, managing defect correction, and delivering project-management reporting.
TestBench
www.origsoft.com/products/Testbench.htm
TestBench is an application-testing administration product that helps testing teams set up valid tests, monitor and compare results, and test peripheral system impacts such as the effects of an application on job and message queues. It also includes tools for scrambling confidential data so that it can be used in testing without the testers being able to read it directly. An option called TestLoad focuses on testing the system performance requirements of software and making sure an application doesn't degrade other operations.
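This isn't how TestBench itself implements scrambling, but the general idea of masking confidential data while keeping its shape can be sketched like this (Python; the field choices are invented):

    import hashlib

    def scramble(value, keep=0):
        """Replace a confidential value with a repeatable, unreadable token,
        optionally preserving the first `keep` characters so the data still looks real."""
        token = hashlib.sha256(value.encode()).hexdigest().upper()
        return value[:keep] + token[: max(len(value) - keep, 4)]

    # Example: keep the card prefix for realism, hide everything else.
    masked = scramble("4111111111111111", keep=4)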
TestDrive
www.origsoft.com/products/Testdrive.htm
TestDrive offers facilities for functional and regression application testing, includes tools for testing GUI and browser interfaces, and works with IBM i applications using WebSphere, Oracle, Microsoft SQL Server, .NET, and AJAX. It's also specifically designed so that testers don't need to be programmers.
TestDrive-Assist
www.origsoft.com/solutions/manual_testing.htm
TestDrive-Assist is an administration and testing product that helps personnel carry out manual application-testing plans and activities.
Thenon Holdings, Ltd.
SmartTest400
www.thenon.net/smarttest400.htm
SmartTest400 is a testing solution that includes test script help, tools for setting up a data environment, record-and-play utilities for repetitive tests, and reporting tools. It embodies a test strategy that includes testing interactive processes in batch, masking of sensitive data, and volume testing.
Bonus Answers
Before closing, a few products and resources are worth mentioning for special situations.
If you are concerned primarily with testing client-side applications running on PCs, there is a wide assortment of Windows-based testing tools too numerous to mention here that may be of use. A good summary of some possibilities is available through ApTest, but be sure you check out the emulator requirements for any product you might consider using.
Aldon's Configuration Management Database (CMDB), although primarily focused on analyzing IT infrastructure and service issues, can provide impact analysis between IT assets and software programs. Aly's Docolution offers an analysis solution exclusively for document-management applications, including those on the i. Information Consultants' DataLens offers impact-analysis tools for System i database-intensive applications.
Since its purchase of Borland Software, MicroFocus offers Borland's line of testing products. SilkTest will handle testing of Windows and Java programs on platforms including the i, and the company's SilkCentral Test Manager works for i servers running Linux. IBM's Rational Functional Tester also provides a test suite for System i machines running Linux, as does LogiGear's Test Automation product. ManageEngine's QEngine offers application load-testing capabilities for System i apps written in AJAX, J2EE, Microsoft .NET, and Ruby on Rails as well as apps using SOAP-based Web services.