Computer applications that people depend on heavily are called critical systems. A system is thought to be critical if the cost that would be incurred if the system fails is high—sometimes very high. Three degrees of critical systems are generally recognized:
- Safety-critical systems—These are the most important computer applications, and they must always be available. An example is an air traffic control system that analyzes radar data, tracks flights, and warns operators when something is wrong. Failure of a safety-critical system can be life-threatening.
- Mission-critical systems—The failure of a mission-critical system, though not directly related to loss of life, can cause the failure of an entire enterprise. For example, consider the guidance system for a space exploration satellite. Failure of the software that controls the positioning of the satellite will render it non-responsive and will result in failure of the entire project.
- Business-critical systems—This is where the majority of critical systems lies. A failure in this area can be extremely costly to the enterprise that runs the application, but the failure is not life-threatening, and it does not directly cause the failure of the entire operation. An example of a business-critical application is the billing system for a large gas and electric utility.
Measuring Application Dependability
The dependability of a computer application is a metric that expresses how trustworthy it is—that is, how much confidence the user should have in the system. This metric is usually put into the form of a probability. Dependability of a system is measured in four ways:
- Reliability—How likely it is that, over a given period of time, the application will deliver expected and accurate results. Also called "correctness."
- Safety—The probability that an application will not cause damage to its users or to others.
- Security—How likely it is that an application can resist accidental or intentional intrusion.
- Availability—The probability that the application will be up and running and able to deliver the services as expected.
These four metrics are interrelated, and the aggregate evaluation of an application will usually be limited to a somewhat subjective overall designation of "not dependable," "dependable," "very dependable," or "extremely dependable."
Architecture Considerations
Of course, the underlying system architecture that an application runs on is important to application availability. HA technologies like check-summing, mirroring, and cluster computing are accepted methods of maximizing the availability of computer services at the hardware level. These mechanisms operate independently of the applications they service in an effort to protect data and eliminate the much-feared "single point of failure." Most of the time, however, it is the applications themselves that cause the loss of availability.
Application Design for High Availability
As it turns out, there is much you can do within your custom applications to improve the availability of services—that is, to create dependable software.
A problem that is capable of causing a failure in your application is called a "fault." There are three ways to deal with faults and thereby increase the probability that your software is dependable:
- Fault avoidance—This approach to application development eliminates most faults through adoption of safe programming practices and standards.
- Fault detection—Many emergent faults are removed as the application is tested and problems are corrected.
- Fault tolerance—All remaining faults are handled by the application itself in a manner that will not cause a system failure.
Considerations for Fault Avoidance
Decisions made at application development time can significantly impact the dependability of an application. One of the hardest faults to eliminate is unpredictable program behavior resulting from unexpected program state. That is, the program zigged when it should have zagged. It's long been known that the infamous "goto" statement has contributed to this type of failure because program state is not localized. According to software engineering authority Ian Sommerville, the following programming constructs and practices should be avoided:
- The aforementioned "goto" statement
- Dynamic memory allocation—Languages like C and C++ allow you to allocate memory when the program runs rather than at compile time. This can cause a halting failure when the memory is not released and it eventually becomes exhausted (called a "memory leak"). This condition can occur sporadically and so can be very hard to find. Better to use a memory-managed language like C# or Java.
- Aliasing—Aliasing allows different names to be used to refer to the same program object. It invites programmer errors and reduces the likelihood that the application will remain highly available.
- Pointers—Some languages allow you to reference an actual memory address. While this removes a layer of translation and improves program performance, it allows aliasing and can introduce more programmer errors. Pointers can be mishandled in ways the programmer may not anticipate.
- Floating-point numbers—Floating-point numbers cannot represent all values exactly, so results may be off by a tiny amount at high degrees of precision. For example, the value 3.00000000 may be stored internally as 2.99999999 or as 3.00000001. This can lead to a problem when two floats are compared for equality (see the example following this list). Use fixed-point or decimal types when possible. If they are not available, make sure you understand how values are converted to and from binary floating-point representations and how your language of choice allocates them.
- Recursion—Recursion is a handy technique in which a function repeatedly calls itself until a terminating condition is reached. The calls then return to the next higher level and perform some sort of processing as they "unwind." This practice, while used frequently in scientific applications, represents a potential fault in an HA application because the program can run out of stack space and crash.
- Inheritance—Yes, I know, inheritance is one of the pillars of OOP and just can't be a risk. It is, though, because inheritance spreads the source code that defines an object's behavior across separate classes and files, making faults harder to spot and raising their likelihood.
- Interrupts—As the term implies, interrupts can break into a critical process, regardless of the program's state.
- Weak typing—Languages usually support either strong typing, in which all variables used in an expression must be of compatible, explicitly declared types, or weak typing, in which values of different types can be freely mixed and implicitly converted. When you use a weakly typed language, the opportunities for errors increase significantly because the compiler does not complain about the mismatch.
- Unbounded arrays—A well-known vulnerability of languages like C is the array overrun, in which adjacent memory—even instructions—may be stepped on. In these languages, an array is merely a series of equally spaced memory addresses with no bounds checking.
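To illustrate the floating-point comparison issue mentioned above, here is a minimal sketch in Java (the values and the tolerance are illustrative, not taken from the original article) showing why direct equality comparison is risky and two safer alternatives:

import java.math.BigDecimal;

public class FloatCompare {
    public static void main(String[] args) {
        double a = 0.1 + 0.2;                               // actually 0.30000000000000004
        System.out.println(a == 0.3);                        // prints false

        // Safer: compare within a tolerance appropriate to the application
        final double EPSILON = 1e-9;
        System.out.println(Math.abs(a - 0.3) < EPSILON);     // prints true

        // Or use a decimal type for exact decimal arithmetic
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(b.compareTo(new BigDecimal("0.3")) == 0);   // prints true
    }
}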
In-House Standards
Further, the benefit of establishing standards within your development team can be significant. For example, you can help eliminate errors by developing naming standards for variables and constants. Consider a naming scheme that includes not only the intended purpose of a variable but also the variable's type and its level of scope:
private int private_int_numberOfExemptions = 0;
Treat constants similarly. For example, a constant declaration might follow the same scheme (the name and value below are illustrative):
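private static final int private_int_maxNumberOfExemptions = 10;   // illustrative name and value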
Also note that all variables should be given an initial value when they are defined.
In-house standards controlling other aspects of source code development, like a standard function return value scheme and standard meanings for those values, can also have a positive benefit in avoiding faults.
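As a rough sketch of what such a return-value standard might look like (the names and meanings here are hypothetical, not drawn from the article), a team could define a single shared set of return codes and require every function to use it:

// Hypothetical team-wide return-code standard
public enum ReturnCode {
    SUCCESS,          // operation completed normally
    INVALID_INPUT,    // caller supplied bad data; no work was performed
    RETRYABLE_ERROR,  // transient problem (resource busy, timeout); safe to retry
    FATAL_ERROR       // unrecoverable problem; log it and abort the operation
}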
Fault Detection
The process of finding and eliminating faults that make it into an application is accomplished through program verification and validation (V and V). A full treatment of these practices is beyond the bounds of this article, but they encompass ordinary testing and debugging procedures, including these:
- Establishment of a software function testing plan
- Software inspections and peer reviews
- Defect identification, correction, and retesting
- Program flow analysis, data use analysis, and interface analysis
The essential V and V processes of iterative and incremental software development, error detection, and correction apply to high availability applications just as they do to non-critical systems, except they're even more important.
Coding for Fault Tolerance
The last area where a program's dependability can be improved is in its ability to deal with unexpected events. This is done on the fly in an approach called "defensive programming." To make this happen, your program should internally handle any exceptions that might be raised as the result of an error. Once caught, an error exception should be dealt with appropriately and program execution should resume. High availability systems should report errors encountered to an error log so that the cause of the error can eventually be corrected. Note that system faults resulting from hardware problems cannot be handled in this manner.
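A minimal sketch of this idea in Java follows (the class, method, and logger names are illustrative assumptions, not from the article): the operation catches its own exception, records it in an error log for later correction, and lets processing continue.

import java.util.logging.Level;
import java.util.logging.Logger;

public class RecordProcessor {
    private static final Logger errorLog = Logger.getLogger("RecordProcessor");

    // Process one input record defensively: a bad record is logged and skipped
    // rather than allowed to bring down the whole application.
    public boolean processRecord(String record) {
        try {
            int amount = Integer.parseInt(record.trim());   // may throw NumberFormatException
            // ... perform the real work with the parsed amount ...
            return true;
        } catch (RuntimeException e) {
            // Report the error so its cause can eventually be corrected,
            // then resume execution with the next record.
            errorLog.log(Level.SEVERE, "Failed to process record: " + record, e);
            return false;
        }
    }
}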
Finally, database-centric applications should employ the tried-and-true techniques of commitment control and rollback. Commitment control/rollback is used when you have a transaction that involves multiple database operations. To help ensure the integrity of the data, a fault-tolerant program will set a checkpoint at the beginning of the transaction processing code and will then execute the database updates. Then, when all portions of the transaction have been performed, the program will issue a commit of the updates and the database changes will become permanent. If an error occurs during the transaction updates, the program will issue a rollback instruction and the whole group of transaction updates will be undone.
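In JDBC, for example, the pattern might look something like this sketch (the table, column, and method names are illustrative assumptions, not taken from the article):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferService {
    // Apply two related updates as a single transaction: either both become
    // permanent, or the whole group is rolled back.
    public void transfer(Connection conn, int fromAcct, int toAcct, double amount) throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);                          // checkpoint: begin the transaction
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {

            debit.setDouble(1, amount);
            debit.setInt(2, fromAcct);
            debit.executeUpdate();

            credit.setDouble(1, amount);
            credit.setInt(2, toAcct);
            credit.executeUpdate();

            conn.commit();                                  // all updates succeeded: make them permanent
        } catch (SQLException e) {
            conn.rollback();                                // something failed: undo the whole group
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}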
High availability program design is a valuable technique for producing solid applications. Ironically, though, it won't solve the most common cause of program failure: that of improper program specification in the first place.
Chris Peters has 27 years of experience in the IBM midrange and PC platforms. Chris is president of Evergreen Interactive Systems, a software development firm and creators of the iSeries Report Downloader. Chris is the author of The i5/OS and Microsoft Office Integration Handbook, The AS/400 TCP/IP Handbook, AS/400 Client/Server Programming with Visual Basic, and Peer Networking on the AS/400 (MC Press). He is also a nationally recognized seminar instructor and lecturer at Eastern Washington University. Chris can be reached at