Toward Disciplined Rapid Application Development

Stephen E. Cross - Director, Software Engineering Institute

As the director of the Software Engineering Institute, I am often asked to explain why progress in software engineering has not kept pace with progress in other engineering fields. The advances in computer hardware, characterized by phenomenal increases in processor speed and memory capacity coupled with decreases in size and cost, are the most frequently cited example of such progress. I contend that similar progress has been made in the engineering of software-intensive systems. I suggest an approach to disciplined Rapid Application Development (RAD) that builds on the progress made in software engineering during the past 30 years.

Consider briefly the progress of the past 30 years. During the 1970s, the age of "programming productivity," the creation of new high-order languages, tools, and development methodologies enabled programmers to improve their productivity by one to two orders of magnitude. During the 1980s, the age of "software quality," the focus was on software processes and continuous process improvement. Quality results published in the literature over the past few years (for example, see [1], [2], or [3]) indicate improvements of roughly an order of magnitude along several dimensions: decreased defects, increased productivity, decreased cycle time, fewer personnel required to achieve results, and a lower percentage of rework after release. The decade of the 1990s is the age of "Internet time." The advent of the Internet and associated new software technologies (for example, Java and the widespread use of object technology) enables software developers to field products in cycle times of 6 months or less. The best practices that have evolved over the past 30 years in productivity, quality improvement, and technology are impressive and match progress in other fields of engineering. Taken collectively, they form an arsenal of tools (rather than the proverbial "silver bullet") with which to attack software development.

While the progress is real and arguably impressive, the reasons for failures in software development are largely the same today as they were 30 years ago. A 1988 U.S. Air Force Science Advisory Board study [4] cited three common reasons for failure, where failure ranged from excessive cost or schedule delays to never fielding a system at all.

  1. Risks associated with teams. By this was meant that if a team of developers, acquirers, end users, and systems maintainers (and their management) had not worked together before and had not learned to communicate effectively, it was unlikely to develop a successful system without schedule delays or cost overruns. Another risk cited was the lack of well-defined or well-understood processes.

  2. Risks associated with technology. Teams that pursued a new technical approach (for example, the first foray into client-server computing) found that the lack of experience with a new technology, architecture, or development approach contributed to failure.

  3. Risks associated with requirements. By far the most often cited reason for failure was poor management of requirements, characterized by frequently changing requirements, requirements that were not well understood, and requirements proliferation.

The bottom line is that experience counts. The study coined the term "unprecedented systems" to describe systems in which these risks were present. An experienced team, developing a system similar to one it has previously developed, with a customer and end user with whom it can communicate well, is much more likely to produce high-quality software-intensive systems on time and within cost.

With this as backdrop, I contrast my own experience as a computer scientist and software engineer. My formal training (some would say "formal" is too strong, given that my graduate work was in machine intelligence) focused on Rapid Prototyping. This was during the late 1970s and early 1980s, the early and exciting days of the first commercial expert systems. In the laboratory, our research prototypes were useful tools for experimental research. To our commercial counterparts, rapidly developed prototypes were often "throwaways," too fragile to scale into hardened, deliverable systems. But they served a critical purpose: they enabled one to quickly capture an explicit, inspectable representation of requirements and depict it in a meaningful way to end users. The tools of the day allowed one to work interactively with end users to evolve a more complete understanding of those requirements. In effect, they provided a means of communication through which a development team (including users, maintainers, and management) could discuss and reach a common understanding of the requirements.

Many have criticized Rapid Prototyping (or, as it is now more frequently called, Rapid Application Development, or RAD) as lacking rigor, leading to fragile systems that do not scale, and raising end-user and management expectations to unrealistic levels. These criticisms are valid unless a more disciplined approach is followed, one that couples RAD with the lessons learned in productivity and quality. The approach I propose is based on Boehm's spiral model [5]. In the spiral model, a complete representation of the system is produced and tested during each development cycle (or spiral). Each spiral addresses a particular risk, with the most serious risks addressed in the earliest cycles. I have used this approach in several successful systems [6, 7], and variations have also been discussed in the literature [8].

A proposed approach to disciplined RAD would entail these steps:
  1. Scenario-based design and analysis
  2. Architecture design and analysis
  3. Component specification with maximum reuse
  4. Rapid development of remaining modules
  5. Frequent testing with end users and systems personnel
  6. Field with support tools to allow for evolution

The progress in software technology now makes this approach much more feasible. Step 1 addresses the major source of risk described above: requirements. Scenario-building tools allow rapid development of cases that illustrate system operation, which in turn are useful for defining, refining, and communicating an understanding of requirements. Because end users and management often see ways to improve their work processes as a result, this approach has also proven useful in business reengineering. A by-product of this approach is the capture of test cases that can be used for user-centered testing at later stages of system development. Thus, scenario-based approaches provide a useful way to do requirements analysis.
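
To make this concrete, the sketch below (in Java, one of the technologies mentioned above) captures a single scenario as an executable test case. It is a minimal illustration, not the output of any particular scenario-building tool; the OrderSystem interface, the Scenario helper, and all names in it are hypothetical.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BooleanSupplier;

    /** Hypothetical interface to the system under discussion. */
    interface OrderSystem {
        void placeOrder(String item, int quantity);
        int pendingOrders();
    }

    /** One end-user scenario, recorded as steps plus an expected outcome. */
    class Scenario {
        private final String name;
        private final List<Runnable> steps = new ArrayList<>();

        Scenario(String name) { this.name = name; }

        Scenario step(Runnable action) { steps.add(action); return this; }

        /** Replays the steps, then reports whether the expected condition holds. */
        boolean run(BooleanSupplier expected) {
            steps.forEach(Runnable::run);
            boolean ok = expected.getAsBoolean();
            System.out.println(name + ": " + (ok ? "PASS" : "FAIL"));
            return ok;
        }
    }

    public class ScenarioDemo {
        public static void main(String[] args) {
            // Trivial in-memory stand-in so the scenario can run end to end.
            OrderSystem system = new OrderSystem() {
                private int pending = 0;
                public void placeOrder(String item, int quantity) { pending++; }
                public int pendingOrders() { return pending; }
            };

            // The same object doubles as a requirements record and a test case.
            new Scenario("Clerk enters two orders")
                .step(() -> system.placeOrder("widget", 5))
                .step(() -> system.placeOrder("gadget", 2))
                .run(() -> system.pendingOrders() == 2);
        }
    }

Because the scenario is an ordinary object, it can be replayed against later, more complete builds of the system, which is what makes it useful for user-centered testing in the later steps.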

Steps 2 and 3 address technology risks. As in other engineering fields, it is useful to define the architecture early in system development and to conduct trade-off analyses of attributes such as data throughput, usability, and security. Too many past failures can be attributed to not understanding a technical constraint until the software system was realized in executable code. Recent advances in software architecture development and analysis (for example, see [9]) provide an engineering basis for early architecture specification. In addition, a lesson learned from reusable software development is the criticality of a software architecture in which to embed reusable software components. Components that do not exist or that cannot easily be retrofitted into the architecture can be developed using a rapid prototyping approach (step 4). Requirements and the architecture provide design constraints to bound and guide the development of these modules.
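
The sketch below illustrates one way steps 2 through 4 can fit together: an interface fixed by the architecture bounds both a reused component and a rapidly prototyped stand-in. AuthService and both implementations are invented for this example.

    /** Component interface dictated by the architecture, not by any one implementation. */
    interface AuthService {
        boolean authenticate(String user, String password);
    }

    /** Adapter around an existing, already-tested component (the reuse path). */
    class ReusedDirectoryAuth implements AuthService {
        public boolean authenticate(String user, String password) {
            // A real adapter would delegate to the legacy directory service.
            return !password.isEmpty();
        }
    }

    /** Throwaway prototype filling the same slot while the real module matures. */
    class PrototypeAuth implements AuthService {
        public boolean authenticate(String user, String password) {
            return true; // Accepts everything: adequate for early user scenarios.
        }
    }

    public class ArchitectureDemo {
        public static void main(String[] args) {
            // The rest of the system depends only on the interface, so moving
            // from the prototype to the reused (or final) component is a local change.
            AuthService auth = new PrototypeAuth();
            // AuthService auth = new ReusedDirectoryAuth();  // later spirals: same call sites
            System.out.println("login ok? " + auth.authenticate("ann", "secret"));
        }
    }

The design choice is that the architecture, not the prototype, owns the interface: the prototype is disposable, but the constraint it was built against is not.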

Steps 5 and 6 are also very important. It is critical that end users and system maintainers participate regularly in testing. Though I list testing as a separate step (a final test before delivery is still needed), it is also useful to apply scenario-based test data to assess the output of each step. Lastly, because requirements will change over the life cycle of the system, it is important to consider how the system will be used and will likely evolve, and then plan for that evolution.
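
One minimal way to organize such step-by-step checking is to replay the scenarios captured in step 1 as a regression gate after each spiral, as sketched below; the scenario names and checks here are placeholders.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BooleanSupplier;

    /** Replays captured scenario checks after a spiral and reports the verdict. */
    public class RegressionGate {
        public static void main(String[] args) {
            // Stand-ins for the scenario checks captured during requirements work.
            Map<String, BooleanSupplier> scenarios = new LinkedHashMap<>();
            scenarios.put("clerk enters two orders", () -> true);
            scenarios.put("manager reviews backlog", () -> true);

            boolean allPass = true;
            for (Map.Entry<String, BooleanSupplier> e : scenarios.entrySet()) {
                boolean ok = e.getValue().getAsBoolean();
                System.out.println(e.getKey() + ": " + (ok ? "PASS" : "FAIL"));
                allPass &= ok;
            }
            System.out.println(allPass ? "spiral accepted" : "rework before fielding");
        }
    }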

Not explicit in the above approach is the mitigation of risks associated with people and process. It is my belief that process improvement, under such approaches as the Capability Maturity Model (sm) for Software [10], is not inconsistent with RAD. The Capability Maturity Model (CMM) suggests what should be done, not how to do it. The discipline in the above approach comes from having well-defined and well-understood processes. In addition, training for new employees and continuing education for all employees are important to ensure that the development team can cope with technical change.

So how will we characterize the first decade of the new millennium? Trends suggest we will have more powerful computing coupled with a low-cost, high-bandwidth communication infrastructure. There will be continued downsizing of organizations and more outsourcing. There likely will be marketplaces for reusable objects and software components. My bet is that a disciplined RAD approach will become the de facto approach for the development of software-intensive systems.

About the Author

Stephen E. Cross is the director of the Software Engineering Institute at Carnegie Mellon University.

Stephen E. Cross - Director,
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890
[email protected]

www.sei.cmu.edu/

