The word risk means many things to many people, just as the word systems conveys many associations. In general, one wishes for high-quality software delivered on time and within budget. The risk associated with a computer project is a qualitative or quantitative measure of the probability of not meeting the project goals.
The risk is always twofold: a risk to the developer and a risk to the purchaser of the software. The risk to the purchaser is that of obtaining poor-quality software delivered late. In the most extreme case, the developer may fail to deliver the software within any reasonable time period, or may deliver a product so far below the required quality and reliability that it is almost unusable. The risk to the developer of the software is a cost overrun. This overrun may be due to the last-minute addition of resources to meet the schedule, to perfecting the software after the schedule has been overrun, or to correcting an excessive number of errors after delivery. In the extreme case, the developer loses future sales because of the poor reputation of the product, or loses future bids that become non-competitive after being adjusted upward to reflect the costs of the last project.
For convenience we will separate these issues into software acquisition risk (cost and schedule) and software reliability.
One measure of software acquisition risk is the cost of development, that is, the cost to the producer of developing the software. If the scheduled development effort and procedures are inadequate for the project and its requirements, then either a poor-quality product ensues or extra cost must be expended for additional testing and/or rewriting of the software. Thus, accurate estimates of the required cost are a necessity if the developer is to gauge the risk of not meeting objectives should the estimate be in error or should unexpected problems arise. The user of the software, often represented by a government or commercial contracting officer in large projects, must also make such calculations. A delayed delivery, or delays while the producer tries to eliminate enough bugs to make the product usable, also carries a cost penalty and must be treated as a risk. Sometimes such delays have a moderate impact, while in other cases they may be very costly.
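To make this concrete, a rough effort and schedule estimate can be produced with a simple parametric model. The sketch below is only illustrative: it uses the constants of Boehm's basic COCOMO model (see the Boehm reference), and the project size and mode chosen are hypothetical.

    # Illustrative sketch: basic COCOMO effort and schedule estimate (Boehm, 1981).
    # The size estimate (KLOC) and project mode below are hypothetical.

    COCOMO_MODES = {
        # mode: (a, b, c, d) constants of the basic COCOMO model
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        """Return (effort in person-months, schedule in calendar months)."""
        a, b, c, d = COCOMO_MODES[mode]
        effort = a * kloc ** b       # person-months
        schedule = c * effort ** d   # elapsed calendar months
        return effort, schedule

    effort, schedule = basic_cocomo(32.0, "semi-detached")  # hypothetical 32 KLOC project
    print(f"Estimated effort:   {effort:.0f} person-months")
    print(f"Estimated schedule: {schedule:.0f} months")

Such an estimate is only a baseline figure, but it gives both parties a number against which actual expenditures and schedule can later be compared.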
Development Costs
Development costs generally determine the success or failure of a computer project. An estimate of development costs must be made at the beginning of a project and updated as needed during the development life cycle. In a "classical textbook" project, the requirements are agreed to at the beginning of the project, an accurate cost estimate is made, and the project progresses on schedule and within budget. The costs are tracked during project execution and follow the initial estimates fairly closely. Cost estimates include both the total cost and the monthly expenditures during the development cycle. These cost projections are checked against actual expenditures monthly (perhaps weekly). This procedure tends to minimize the risk of cost overruns by giving early warning of major deviations from projected costs. Significant deviations between projections and expenditures, either positive or negative, require careful investigation to determine whether they represent slippage or acceleration of the project.
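The monthly comparison of projections against expenditures reduces to a simple calculation. The fragment below is a minimal sketch of such tracking; the monthly figures and the 10% alarm threshold are hypothetical assumptions, not values from any particular project.

    # Minimal sketch of monthly cost tracking: flag months whose actual
    # expenditure deviates from the projection by more than a threshold.
    # All figures (in $K) and the 10% threshold are hypothetical.

    projected = [120, 150, 180, 200, 200, 180]   # planned monthly cost profile
    actual    = [118, 160, 205, 230, 215, 190]   # recorded expenditures to date

    THRESHOLD = 0.10   # flag deviations larger than +/- 10%

    for month, (plan, spent) in enumerate(zip(projected, actual), start=1):
        deviation = (spent - plan) / plan
        if abs(deviation) > THRESHOLD:
            direction = "over" if deviation > 0 else "under"
            print(f"Month {month}: {deviation:+.0%} ({direction} plan) -- investigate")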
The primary cost risks are those listed in Table 1. A major problem occurs when there are significant changes in the team members or the management. A few key members may leave the project, or there may be many major defections across the board due to aggressive hiring tactics of competitors. The contractor is still required to meet the contract objectives despite such changes, unless the customer is willing to renegotiate, perhaps for a no-cost time extension.
Changes in requirements, cost, specifications, and schedule coming from the customer are not uncommon. In each case, these must be the subject of renegotiation between the customer and the contractor. As a practical matter, one must guard against creeping escalation, in which the developer agrees to a succession of small changes in requirements or specifications that add up to a significant change without ever triggering renegotiation.
Introduction
Software reliability is the probability that a software product will not fail during a given time period, where a software failure results in failure of the larger system in which the software is embedded. In general, software failures are the result of residual errors not found in testing that are excited by a particular combination of inputs and system state. The reliability level required of the system must be a function of the task it performs; thus, one can tolerate fewer crashes per year in air traffic control software than in the Windows 95 operating system.
Reliable Software vs. Software Reliability
Many use the term reliable software to refer to the use of various development procedures and processes to produce high-quality, reliable software on schedule and within budget. Most of the development procedures incorporated in the Software Engineering Institute's Capability Maturity Model (CMM) or the ISO 9000 procedures are techniques for developing high-quality software. Software reliability modeling is the group of techniques for predicting the actual reliability the software will achieve when development stops. The two common measures that are predicted are the software failure rate per hour of operation and the related mean time between software failures. Either of these parameters leads to a simple probability function that gives the probability of success or failure within a given time interval. A number of the most promising techniques for modeling software reliability are contained in the ANSI Software Reliability Standard.
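As a simple illustration, assume the failure rate is a constant λ per hour of operation (the simplest case). Then the mean time between failures is MTBF = 1/λ, and the probability of failure-free operation over an interval of t hours is R(t) = e^(-λt). For example, a predicted failure rate of λ = 0.001 failures per hour corresponds to an MTBF of 1,000 hours and a probability of about e^(-0.01) ≈ 0.99 of operating for 10 hours without failure.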
Reliability as a Measure of Utility
In general, the reliability of a system strongly affects its usefulness. An unreliable communication system requires duplicate channels, repeated calls, or delays to compensate for the lack of reliability, all of which carry a penalty. Thus, the risk of missing the reliability objective for software is a measure of the utility of the product. Modeling the reliability of the software and its variability as a function of the development parameters allows one to quantify this risk. Such a calculation should be carried out by both the developer and the customer.
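A minimal sketch of such a calculation is given below. It uses the constant-failure-rate model described earlier, and the predicted failure rate, mission time, and reliability objective are all hypothetical numbers chosen for illustration.

    # Minimal sketch: quantify the risk of missing a reliability objective.
    # Assumes a constant failure rate (exponential model); all numbers are hypothetical.
    import math

    failure_rate = 0.0008   # predicted failures per hour of operation (hypothetical)
    mission_time = 24.0     # hours of continuous operation required (hypothetical)
    objective    = 0.99     # required probability of failure-free operation (hypothetical)

    reliability = math.exp(-failure_rate * mission_time)   # R(t) = e^(-lambda * t)
    mtbf = 1.0 / failure_rate                              # mean time between failures

    print(f"Predicted MTBF:   {mtbf:.0f} hours")
    print(f"Predicted R({mission_time:.0f} h): {reliability:.4f}")
    print("Objective met" if reliability >= objective else "Risk: objective missed")

In this example the predicted reliability falls short of the stated objective, which is exactly the kind of quantified warning that both the developer and the customer can act on.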
Contact Information

Martin L. Shooman
Professor of Computer Science & Electrical Engineering
Polytechnic University
Glen Cove, New York 11542
[email protected]

Ernest Lofgren
Science Application International Corporation (SAIC)
7 West 36 St., 10th Floor
NYC, NY 10018
ANSI/AIAA American National Standard Recommended Practice for Software Reliability, R-013-1993, February 23, 1993.
Boehm, Barry W., Software Engineering Economics, Prentice-Hall, NY, 1981.
Musa, J. D., et al., Software Reliability Measurement, Prediction, and Application, McGraw-Hill, NY, 1987.
Shooman, Martin L., Software Engineering: Design, Reliability, and Management, McGraw-Hill, NY, 1983.
Shooman, Martin L., Reliability of Fault-Tolerant Computer Systems and Networks, John Wiley and Sons, NY, 1999.