Extreme Programming at Lakeview Technology

A number of factors contribute to the long-term success of software vendors, but three, in no particular order, stand above the rest: the abilities to deliver high-quality code, fulfill customer requirements, and minimize time-to-market.

Why are quality, needs fulfillment, and speed so important? The need for quality code is obvious. The reputation of a vendor that repeatedly delivers bug-ridden software will plummet rapidly, turning customers and prospects against it. Conversely, vendors that consistently deliver software that fully meets specifications, fulfills all of their promises, and never fails will find that their reputations rise over time.

The importance of fulfilling customer requirements is equally clear. People buy software that solves problems or allows them to take advantage of available opportunities. If the software does not meet their needs, they won't buy it.

Finally, time-to-market is important because the first vendor that introduces new features and functions enjoys a significant competitive advantage. Since customers must absorb large switching costs when replacing one product with another, making a sale to a customer who has already bought a competitive product is, to say the least, difficult.

As this article asserts, vendors and in-house developers can make significant gains in all three of these areas by employing a relatively new software development methodology called extreme programming. Although this article discusses this new methodology from a vendor's perspective, the technique applies to in-house developers as well.

Traditional Methodology Shortcomings

Traditional development methodologies place impediments in the way of achieving all three of these objectives.

Traditionally, vendors adhere to a release cycle schedule when developing software. Seen from the customer's perspective, every one to two years, the vendor issues a new release that incorporates a slew of new features and enhancements. Between releases, customers can download fixes to any defects detected in the current release, but they usually will not receive any new functionality or performance enhancements.

From the vendor's perspective, this approach is obviously much more complex and onerous. The process starts with an assessment of requirements. The vendor tries, in some way, to appraise all customer needs and wants that its product does not yet fulfill. It prioritizes these items based on perceived importance to the customer and the cost to implement them. The vendor then draws up a list of new features and enhancements that it will deliver with the next release.

After scoping out the requirements, the vendor parcels them out to analysts and programmers who work in isolation for a lengthy period to design, write, and unit test their chunks of the new code.

Under this scenario, although developers complete and test their self-contained sections of the code independently, the fully integrated work of all programmers may be built only once a week or, at best, once every few days. Testers then perform integration testing on that build. After fixing any errors, the developers and testers repeat the build/test cycle until they decide that they have a product that is ready to go out to the customer community for beta testing, after which the vendor releases the product for either limited or general availability. Because communications among customers and developers between the requirements definition and beta testing stages are often weak, the resulting product frequently does not fully reflect customer needs and wants.

This methodology presents customers with an all-or-nothing proposition. They cannot get any new functionality until all of it is ready. Even if some particularly valuable new features are ready early, the code base also contains functionality that is not yet complete and fully tested and, therefore, cannot be released to customers. Furthermore, if customers identify any new needs after the vendor freezes the design, they must wait until the next release--probably one to two years after the general release of the version under development--to get them.

The traditional methodology's weakness in reducing time-to-market is obvious. The time between identifying a new software requirement and seeing it in a finished product can easily extend to two to three years, an eternity in today's IT environment.

Likewise, although less obviously, traditional models also tend to yield poor results in determining and fulfilling customer requirements. When following a "big bang" approach to releases, vendors must, out of necessity, take some democratic approach to deciding which requirements to include in the next release and how to deliver them. That sounds as though it would produce a desirable result, but it often does not. In trying to meet the needs of all customers simultaneously, usually none gets exactly what it wants, since working with individual customers to design, build, and test features is difficult under this model. The usual result is marketers claiming that the new release has "something for everyone," which should raise a red flag for customers.

Traditional methodologies also have inherent issues that stand in the way of developing quality software. Even with design reviews, developers write code in isolation. Reviewers typically get their first look at possibly thousands of lines of code only after completion. Programming languages do not make for an easy read as they are optimized for writing, not reading. It requires intense diligence for reviewers to successfully find bugs buried deep in thousands of lines of code that they had no hand in developing and with which they are, therefore, not familiar.

Furthermore, since each release contains many new features and enhancements, beta testers face a daunting task. The probability is low that the beta will thoroughly test all of the less consequential new sections of code.

Extreme Programming to the Rescue

We grappled with these challenges of traditional methodologies at Lakeview Technology. Then, about two years ago, we adopted extreme programming. Extreme programming helped break through the development bottlenecks while simultaneously allowing us to improve software quality and get closer to our customers to fulfill their requirements more quickly.

Our development processes are now much more granular. Rather than work toward a full-blown release with many new and revised features and functions, we look at and develop customer requirements one at a time. Instead of waiting to coordinate code freeze points for several components before producing a product build, we can now create as many as eight to ten new builds a day.
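
For illustration only, here is a minimal sketch of the kind of automated build loop that makes several builds a day practical. The repository tooling, build commands, and polling interval are assumptions made for this example; they do not describe Lakeview's actual build system.

```python
import subprocess
import time

BUILD_CMD = ["make", "build"]   # hypothetical build command
TEST_CMD = ["make", "test"]     # hypothetical automated test suite

def latest_revision() -> str:
    """Fetch upstream changes and return the newest revision identifier."""
    subprocess.run(["git", "pull", "--ff-only"], check=True)
    result = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def build_and_test() -> bool:
    """Run the build and then the automated tests; report overall success."""
    for cmd in (BUILD_CMD, TEST_CMD):
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

if __name__ == "__main__":
    last_built = None
    while True:
        revision = latest_revision()
        if revision != last_built:  # a new change has been committed
            status = "ok" if build_and_test() else "FAILED"
            print(f"build of {revision}: {status}")
            last_built = revision
        time.sleep(300)             # poll every five minutes
```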

Instead of waiting months for everything in a new release to be ready before customer testing begins, customers can very quickly test new features that they have requested. Depending on the complexity of the feature, the customer may see the first cut of it a month after requesting it. The customer can then put the code through its paces before we release it to the broader customer base.

This is all done within the existing code base. All customers get the new feature or function every time they download the latest set of fixes. However, only approved accounts can actually access it. License keys lock those sections of the code so that they are executable only by customers who are testing it. If other customers request the feature before we are ready to make it generally available, we can give them a key that will allow them to gain early access and join the testing process. In effect, we now beta test individual features rather than complete product releases.
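
As a simplified sketch of this kind of gating, the code below ships a pre-release feature to everyone but executes it only for customers holding a matching key. The key scheme, function names, and the "iasp-support" feature identifier are hypothetical; this is not Lakeview's actual implementation.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical signing secret; a real product would manage keys far more
# carefully (for example, issuing per-customer keys from a protected system).
SIGNING_SECRET = b"vendor-private-secret"

def issue_feature_key(customer_id: str, feature: str) -> str:
    """Generate a key that unlocks one pre-release feature for one customer."""
    message = f"{customer_id}:{feature}".encode()
    return hmac.new(SIGNING_SECRET, message, hashlib.sha256).hexdigest()

def feature_enabled(customer_id: str, feature: str, presented_key: Optional[str]) -> bool:
    """The gated code ships to everyone but runs only for holders of a valid key."""
    if presented_key is None:
        return False
    expected = issue_feature_key(customer_id, feature)
    return hmac.compare_digest(expected, presented_key)

# A gated code path inside the generally distributed product.
def replicate(customer_id: str, presented_key: Optional[str] = None) -> str:
    if feature_enabled(customer_id, "iasp-support", presented_key):
        return "replicating with iASP support (pre-release)"
    return "replicating with standard storage pools"

if __name__ == "__main__":
    key = issue_feature_key("ACME", "iasp-support")  # given to the testing customer
    print(replicate("ACME", key))                    # pre-release path
    print(replicate("OTHER"))                        # standard path for everyone else
```

Removing the gate, or issuing keys to everyone, then makes the feature generally available, which mirrors the key removal described below.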

Once we finish testing a new feature, we remove the key from the code and announce its availability. Then, whenever anyone downloads a product update, the new feature is available without any restrictions. Consequently, we are now debating our attitude toward product releases. In the future, we may decide to launch a formal new release only if major design changes mandate it. This is not to say that we will never again put out formal releases. We will. However, the traditional development staircase that existed before no longer has the same effect now that we have adopted this new, extreme way of bringing updates to the marketplace.

The benefits of extreme programming in terms of time-to-market have been remarkable. We have found that, on average, we can get new features out into the market three to five times faster than in the past.

Customer Focus

The granularity of extreme programming allows us to focus much more on customer requirements. Because we work on a feature-by-feature basis, we can work closely with just those customers who need it most as we design, build, and test each feature. They are intimately involved in both requirements definition and the testing needed to ensure that the new software fulfills their needs.

For example, a prospective customer recently came to us wanting to use a new iSeries function, Independent Auxiliary Storage Pools (iASPs), to implement a server consolidation. At the time, our MIMIX product did not support iASPs. We understood how they worked and what it would take to include them in the product, but no one had asked us to support iASPs until then.

Rather than turning the customer away, we explained how we would use extreme programming to give them what they needed. By working closely with the new customer, the MIMIX team had iASP support built, tested, and in production within three months. We then made it available to all customers through the normal software update process.

Paired Programming

Another extreme programming technique, paired programming, has introduced a fundamental change to the way we develop software. As the term suggests, under a paired programming regimen, developers work in teams of two or more. One person sits at a terminal writing code, while others on the team perform monitoring and reviewing roles as the code is written. The team constantly discusses the approach being used and possible alternatives to it.

Having two or more people do the job of one would appear to be an unproductive way to develop software, and it would be if that were what was happening. It is not. Code reviews were always a part of our approach to quality control, but we previously did them only after the code was finished. Now, we review code while the programmers are writing it.

This provides at least two benefits. As suggested above, programming languages were designed to implement algorithms efficiently and effectively. They were not optimized to be read. Trying to make sense of hundreds of screens worth of someone else's code is exceptionally difficult. Finding a small error hidden within it is almost impossible. However, watching the programmer write it, line by line, makes comprehension and error identification much easier. Furthermore, if the reviewer doesn't understand something, the programmer is right there to explain it--or to recognize, acknowledge, and correct the flaw in the logic. Therefore, quality is increased, and review time is decreased.

The second benefit of paired programming is that, upon completion, at least two people are intimately familiar with all sections of the code and expert at maintaining it. If one of the two moves to another group or leaves the company, the knowledge remains behind with the other person.

Paired programming was not responsible for all of the quality improvements that we experienced. Extreme programming also incorporates a "test-first" concept, whereby all of the test cases are produced before any code is written. This ensures that the customer's views and "stories" are tested without being influenced by code designs and algorithms.

("Stories" is an extreme programming term that may be unfamiliar to others. It refers to a customer's or end-user's wish list or requirements definition. A story is written at a very high level and does not provide a detailed description of exactly how a feature will be implemented.)

The theory behind the test-first philosophy is "don't test what you wrote; test what is needed." This was always the goal of testing, and we now have a way to more strongly reinforce the implementation.

Coincident with our adoption of extreme programming and the test-first approach, we also significantly increased the level of automation of our testing efforts. Now, as we scope out each new feature, we also define and create automated tests to ensure that the final product conforms to specifications.
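
As a simplified example of the test-first idea, the test below is written directly from a story before the feature exists. The module name, function, and story wording are hypothetical; the point is that the test captures what is needed, not what was written.

```python
import unittest

# Story (high level): "When the source library moves to an independent
# auxiliary storage pool, replication continues without operator action."
#
# The test is written against the story before the feature exists, so it
# fails until a developer supplies an implementation that satisfies it.
try:
    from replication import plan_replication  # hypothetical module under test
except ImportError:
    plan_replication = None

class TestIaspStory(unittest.TestCase):
    def test_replication_continues_on_iasp(self):
        self.assertIsNotNone(plan_replication, "feature not implemented yet")
        plan = plan_replication(source_pool="IASP01", target_pool="IASP01")
        self.assertTrue(plan.automatic)              # no operator action required
        self.assertEqual(plan.target_pool, "IASP01")

if __name__ == "__main__":
    unittest.main()
```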

Rapid Prototyping

Extreme programming also significantly alters the development life cycle. In the past, we would scope out and design a whole release down to the nth level of detail before the developers would write a line of code. Extreme programming, on the other hand, follows more of a rapid prototyping strategy. Rather than provide detailed designs for all aspects of a new release, we identify individual features and begin working on them with just a high-level design in place. There is just enough design to set estimates, allocate resources, and make sure we can really deliver what was discussed.

The benefit of this technique is that it gives the customer something to test in very short order. It is very difficult for customers to verify that a product will meet their needs based solely on a lengthy design document. When we quickly give them a working prototype to put through its paces, they can immediately identify any inadequacies, which we can then correct without having to write off an enormous investment in detailed design.

In addition, when we produced only full releases, we had to focus on both "must have" and "nice to have" requirements. When what we delivered had to do everything for everyone, we put as much into it as we could. Now, we continue to iterate our development and testing of individual features until the customer team tells us that the new code provides them with useful functionality that they are ready to accept. This also provides an important side benefit in that the customer very actively helps us to distinguish between the must-haves and nice-to-haves.

Using extreme programming, each iteration given to the customer must be self-sufficient. The iteration obviously cannot do everything, but the stories it was written to satisfy must work. Once again, the test-first concept helps us here.

A side effect of this approach is that these standalone iterations force us to do things in a different order than we previously might have. We are forced to build some of the more difficult sections of code sooner than we may have before. This helps prevent the rush of deliverables that typically gushes forth before a major deadline. It also gives us an opportunity to spend more time testing each element in customer environments, resulting in another quality improvement.

Going Extreme

Implementing extreme programming is not easy. Most importantly, it requires a radical cultural shift. Software development has traditionally been a largely solitary task. In contrast, extreme programming is, by definition, a very collaborative effort. Under it, developers never program alone, something that some of them find uncomfortable.

Paired programming is not the only thing about extreme programming that may generate a level of discomfort. People in the development community tend, not surprisingly, to be very analytical people. Starting programming without the traditional in-depth analysis and lengthy design documents can be troubling for them.

Change, particularly such fundamental change, is always difficult. Therefore, successfully adopting extreme programming requires considerable advance evangelizing and buy-in at the highest levels.

Persistence is also critical to overcome the initial objections. Because extreme programming appears, at first blush, to be the antithesis of the way that they are accustomed to working, many developers resist the change. However, our experience has been that these objections quickly melt away once the developers gain experience with the extreme programming techniques and see, first-hand, their benefits.

Going extreme also necessitates some physical changes to the work environment. Your workspace is probably set up to enable each developer to work effectively alone in front of a computer in a cubicle or office. When you adopt extreme programming, you need spaces that facilitate greater collaboration. As a minimum requirement, developers must be able to work comfortably in pairs. We did it by creating bullpens throughout our offices with two chairs for every keyboard.

When you adopt extreme programming, you will likely find that its concepts begin to permeate the whole organization. For example, in the past, salespeople would typically not get involved with a new product release until it was well into beta testing. In contrast, one of our recent products was out on the Web, with inside sales already involved, before it was even in beta. Customers, whom the salespeople kept fully aware of the product's pre-release nature, could get involved early and influence its final design. Simultaneously, internal release teams and operations teams were fully involved throughout the process so that they would be ready to launch and support the product when it successfully passed through the iterative prototyping and testing phases.

The Extreme Bottom Line

Whether you are a software vendor or an in-house development team, moving to extreme programming will not be easy. The cultural shift alone is difficult for most organizations to undertake. Beyond that, education and environmental support for the new techniques are essential. Nonetheless, the benefits are well worth the costs.

Counter-intuitive though it may seem, paired programming, combined with the other extreme programming tactics, greatly increases development productivity and speed. What's more, while lowering development labor time and costs, the techniques also improve the quality of the resulting software and its responsiveness to end-user requirements. The bottom line is better software faster.

As vice president of research and development at Lakeview Technology, Ken Zaiken is responsible for the company's MIMIX product line. Prior to joining Lakeview in 1994, he was with IBM for 15 years, eight of which were spent at the Rochester Development Laboratory. Ken speaks nationally and internationally in the areas of High Availability, Disaster Recovery, AS/400 Growth Strategies, Advanced Database Technologies, and other related topics at various industry conferences. He received his Bachelor of Science Degree in Liberal Arts from the University of Iowa.
