Not all software is flawed, but because the software business moves at an accelerated pace, defects in new releases are sometimes overlooked. Entrenched in the Darwinism of commerce, vendors survive by outpacing their competitors in a contentious battle for market share. They aggressively market their wares to dominate the space, and while they try to serve the profit objectives of their clients and deliver a competitive lever, development man-hours are finite and corners get cut in the interest of profitability. Too often, the corners cut are the ones at the end of the development process: the ones allocated to quality control and testing.
System users are conditioned to suffer. We wince, then yield, and aren't surprised when something fails or when patches arrive at our desktop. Certainly, data systems are complicated, but many believe that complexity and reliability are mutually exclusive propositions.
The world revolves around information. Orders cannot be placed, patients treated, or airplanes cleared for takeoff without computers and software. Walk into any office and you'll see keyboards and monitors, but computers are prevalent in other forms now, too. Computers without keyboards control lots of things—automobiles, MRI machines, and elevators among them—and these systems are just as prone to bug-related failures as traditional information management systems.
Software testing is, by and large, an arduous, manual process. Plans and scripts are written, and then teams of testers plod through applications with the intention of breaking something. It can take months to manually test software. When testing starts to encroach on the strategic objectives of other departments—marketing, for example—testing stops.
The help desk inherits the quality control initiative when software is released to users. The mindset here is that it will eventually work as intended; it's just going to take a few fixes, patches, downloads, upgrades, and tweaks. If you have computers or sell software, you need a help desk staffed by competent people, because things are going to fail.
The landscape of computing is changing. Companies once had a big computer with a bunch of terminals. They had control over the environment in which the computers were housed, the hardware, the software, and the way in which they were accessed.
Current computing models offer information as a service. Now, if you use an e-commerce application that needs to clear credit-card transactions, check insurance availability, or ship a parcel, your software application talks to some other computer application operated by another company that might be located 10 time zones away. The advantages of delivering functionality that exploits service-oriented architecture (SOA) are cost savings and better customer service. The trouble with doing business in cyberspace, though, is that it's hard to get things fixed when they're broken. Quality control is more important than ever.
But complexity and reliability can coexist, even in IT. Once the will exists, IT organizations can take steps to improve their output. Solid procedures, metrics, and tools can ensure that software delivered to users works predictably even under extreme circumstances. A formalized QA regimen used at several intervals in the development cycle improves any development shop's output and helps IT better align itself with the quality-based objectives of the organization it serves.
One testing methodology that is useful for software development shops is the Software Testing Maturity Model (SW-TMM) conceived by professors from the Illinois Institute of Technology. The SW-TMM serves the objectives of IT departments better than many of the other QC models because it's easy for IT organizations to embrace. It also lays out a strategy for self-assessment, guiding IT departments through a series of stages that gradually evolves the development process.
The SW-TMM identifies five levels of maturity. All IT organizations hover at one of these levels, and those who register at the lower levels of testing maturity can benefit by moving to a higher plane. Experience with the SW-TMM shows that high-maturity organizations achieve great success and constantly improve their software development process, while low-maturity organizations tend to wade in mayhem.
Automated software testing tools can shorten the time it takes to QA new software. Test automation has been around for a while but is only now beginning to hold the interest of those who can use such tools to their advantage. By mimicking the keystrokes of small or large groups of users, these tools can perform tests in a fraction of the time it would take people to do so. Test packs are created for each application, and once assembled, they can be reused.
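The idea of a reusable test pack can be illustrated with a minimal sketch. The names here (OrderEntry, TEST_PACK, run_pack) are invented for the example and do not belong to any vendor's tool; the point is simply that scripted checks, once assembled, replace a human tester's repeated keystrokes and can be rerun unchanged against every new release.

```python
class OrderEntry:
    """Toy application under test: records line items and totals quantities."""

    def __init__(self):
        self.orders = []

    def place_order(self, item, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.orders.append((item, qty))

    def total_items(self):
        return sum(qty for _, qty in self.orders)


def check_order_recorded():
    # Scripted stand-in for a tester keying in an order and verifying the total.
    app = OrderEntry()
    app.place_order("widget", 3)
    assert app.total_items() == 3


def check_rejects_bad_quantity():
    # Scripted stand-in for a tester trying to break the application with bad input.
    app = OrderEntry()
    try:
        app.place_order("widget", 0)
    except ValueError:
        return
    raise AssertionError("invalid quantity was accepted")


# The "test pack": assembled once, then rerun against each new release.
TEST_PACK = [check_order_recorded, check_rejects_bad_quantity]


def run_pack(pack):
    """Run every scripted check in the pack; return (passed, failed) counts."""
    passed = failed = 0
    for check in pack:
        try:
            check()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed
```

A pack like this runs in seconds rather than the hours a manual walkthrough takes, which is where the time savings described above come from.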
Cargill Global Financial Solutions is one example of an organization that takes software quality control very seriously. Global Financial Solutions is an administrative arm of Cargill, the international food, agriculture, and risk management company. Global Financial Solutions focused on its core processes and examined how they were tested manually. It then developed a procedure for using an automated testing solution to streamline these processes. In one case, a process that used to take 260 hours to test manually can now be done in just a few hours. That's a saving of around 95%. Over a 12-month period, Global Financial Solutions saved more than $270,000.
The ethos of quality needs to sweep into the IT space, as it did in manufacturing a few decades ago. By delivering top-quality software early in the product lifecycle, software developers reduce the expense associated with maintaining it over time. Sound software testing practices don't need to stand in the way of profitability.
Robert Gast writes for The Original Software Group, Ltd. OSG develops automated software testing solutions for IBM System i, Microsoft, and Web-based systems. Robert Gast can be reached at
MC Press Online