What Do You Mean You Can’t Tell Me If My Project Is In Trouble?

Joseph Kasser, Ph.D. - University of Maryland and Victoria Williams - Keane Federal Systems, Inc.

Introduction

The system development life cycle (SDLC) for large systems can take several years to complete. During this time, the:

The reports containing this intermediate information are produced to demonstrate a low risk of non-delivery and non-compliance with the requirements. Drucker (1973, 509) wrote that throughout management science, in the literature as well as in the work in progress, the emphasis is on techniques rather than on principles, on mechanics rather than on decisions, on tools rather than on results, and, above all, on the efficiency of the part rather than on the performance of the whole. Nothing seems to have changed in 25 years. While the SDLC has evolved from the waterfall method through various iterative approaches (e.g., Incremental, Spiral, and Rapid Prototyping), the focus of the measurements made in today's paradigm is still on the process and product dimensions of the activities (Kasser, 1997). These measurements provide post facto information; that is, they report on what has already happened. This causes management to be reactive rather than proactive. In addition, in spite of all the measurements being made, the supplier is often unable to tell the customer:

Thus, it is little wonder that software projects tend to fail (exceed their original estimates for cost and schedule, or terminate prematurely). For example, in the United States alone, in 1995:

The growing international dependency on the ISO standards for the SDLC indicates that this phenomenon of software project failure is not limited to the United States. Anecdotal evidence suggests that most projects do not fail for technical reasons; rather, the failure tends to be due to the human element. In addition, while the Standish Group identified ten major causes of project failure along with their solutions, they also stated that it was unclear whether those solutions could be implemented (Voyages, 1996). This paper describes the development of several indicators that can be used to identify metrics for predicting that a project is at risk of failure.


A Methodology for Developing Metrics for Predicting Risks of Project Failures

The methodology is based on Case Studies written by students in the Graduate School of Management and Technology at the University of Maryland University College. These students wrote and presented term papers describing their experiences in projects that were in trouble. The papers adhered to the following instructions:

  1. Document a Case Study: Students had to write a scenario for the paper based on personal experience.
  2. Analyze the scenario
  3. Document the reasons the project succeeded or ran into trouble
  4. List and comment on the lessons learned from the analysis
  5. Identify a better way with 20/20 hindsight
  6. List a number of situational indicators that can be used to identify a project in trouble or a successful project while the project is in progress
The methodology:

  1. Summarized the student papers to identify common elements
  2. Surveyed systems and software development personnel via the Internet to determine if they agreed or disagreed with the indicators
  3. Summarized and analyzed the results

Summary of Student Papers

Nineteen students produced papers that identified 34 different indicators. Each indicator identified was a risk or a symptom of a risk that can lead to project failure. Several indicators showed up in more than one student paper; “poor requirements” showed up in all of the papers.
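The count of papers mentioning each indicator (reported later in the "Students" column of Table 1) is a simple tally across the papers. A minimal sketch of that tally, in Python, is shown below; the paper contents used here are invented placeholders, not the actual student data.

    from collections import Counter

    # Hypothetical data: each student paper reduced to the set of
    # risk-indicators it mentions (placeholder names, not the real papers).
    papers = [
        {"Poor requirements", "Failure to use experienced people"},
        {"Poor requirements", "Lack of process and standards"},
        {"Poor requirements", "Low morale", "High staff turnover"},
    ]

    # Count how many papers mention each indicator; this corresponds to
    # the "Students" column of Table 1.
    mentions = Counter(indicator for paper in papers for indicator in paper)

    for indicator, count in mentions.most_common():
        print(f"{count:2d}  {indicator}")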

The Survey

A survey questionnaire was constructed based on the student-provided risk-indicators and sent to systems and software development personnel via the Internet. The survey asked respondents to state whether they agreed or disagreed that the student-provided indicators were causes of project failure. One hundred and forty-eight responses were received.

The findings are summarized in Table 1. The first column contains a number identifying the risk-indicator described in the second column. The third column lists the number of students who identified the risk. The fourth column contains the percentage of agreement. The fifth column contains the percentage of disagreement. The sixth column gives the ranking of the risk-indicator.
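For readers who want to reproduce the arithmetic behind those columns, the sketch below shows one plausible way to turn raw agree/disagree responses into percentages and an agreement-based ranking. The responses here are invented placeholders, not the actual survey data, and ties in the published rankings would need an additional tie-handling rule.

    # Hypothetical raw responses: one dict per respondent, mapping a
    # risk-indicator number to True (agree) or False (disagree).
    responses = [
        {1: True, 2: True, 3: False},
        {1: True, 2: False, 3: False},
        {1: True, 2: True, 3: True},
    ]

    indicators = sorted({ind for r in responses for ind in r})
    summary = {}
    for ind in indicators:
        votes = [r[ind] for r in responses if ind in r]
        agree = 100.0 * sum(votes) / len(votes)
        summary[ind] = {"agree": round(agree), "disagree": round(100.0 - agree)}

    # Rank the indicators by descending agreement (1 = highest agreement).
    ranked = sorted(indicators, key=lambda i: summary[i]["agree"], reverse=True)
    for rank, ind in enumerate(ranked, start=1):
        summary[ind]["rank"] = rank

    for ind in indicators:
        s = summary[ind]
        print(f"Risk {ind}: {s['agree']}% agree, {s['disagree']}% disagree, rank {s['rank']}")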

Survey Results

The survey results were surprising. Modern Total Quality Management (TQM) theory holds that the Quality Assurance Department is not responsible for the quality of the software; everybody shares that responsibility. Thus, while it was expected that most respondents would disagree with this risk-indicator (number 31 in Table 1), only 60% of the respondents disagreed. It was also anticipated that most respondents would agree with the other risk-indicators, yet the overall degree of agreement was:

0.7% (one respondent) agreed with all 34 risk-indicators
8.1% agreed with at least 30 risk-indicators
51% agreed with at least 20 risk-indicators
93% agreed with at least 10 risk-indicators

As for the degree of disagreement:

0.7% (one respondent) disagreed with 25 risk-indicators
4.7% disagreed with at least 20 risk-indicators
52% disagreed with at least 10 risk-indicators
88% disagreed with at least one risk-indicator
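The "agreed with at least N" and "disagreed with at least N" figures above are cumulative distributions over per-respondent counts. A minimal sketch of that calculation follows, again using invented placeholder counts rather than the actual 148 responses.

    # Hypothetical data: number of the 34 risk-indicators each respondent
    # agreed with (placeholder values, not the real survey).
    agree_counts = [34, 31, 30, 22, 21, 20, 15, 12, 11, 10, 9, 4]

    def percent_at_least(counts, threshold):
        """Percentage of respondents whose count is at least `threshold`."""
        hits = sum(1 for c in counts if c >= threshold)
        return 100.0 * hits / len(counts)

    for threshold in (34, 30, 20, 10):
        pct = percent_at_least(agree_counts, threshold)
        print(f"{pct:.1f}% agreed with at least {threshold} risk-indicators")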


Table 1 Initial Findings

Risk | Risk-Indicators | Students | Survey agree (%) | Survey disagree (%) | Rank
1 | Poor requirements | 19 | 97 | 3 | 1
2 | Failure to use experienced people | 7 | 79 | 21 | 13
3 | Failure to use Independent Verification and Validation (IV&V) [Note 1] | 6 | 38 | 62 | 31
4 | Lack of process and standards | 5 | 84 | 16 | 11
5 | Lack of, or poor, plans | 4 | 95 | 5 | 2
6 | Failure to validate original specification and requirements | 3 | 91 | 9 | 3
7 | Lack of Configuration Management | 3 | 66 | 34 | 19
8 | Low morale | 2 | 51 | 49 | 24
9 | Management does not understand SDLC | 2 | 59 | 41 | 22
10 | Management that does not understand technical issues | 2 | 56 | 44 | 23
11 | No single person accountable/responsible for project | 2 | 69 | 31 | 18
12 | Client and development staff fail to attend scheduled meetings | 1 | 42 | 58 | 28
13 | Coding from high level requirements without design | 1 | 75 | 25 | 14
14 | Documentation is not produced | 1 | 63 | 38 | 21
15 | Failure to collect performance & process metrics and report them to management | 1 | 48 | 52 | 25
16 | Failure to communicate with the customer | 1 | 88 | 12 | 5
17 | Failure to consider existing relationships when replacing systems | 1 | 85 | 15 | 10
18 | Failure to reuse code | 1 | 27 | 73 | 34
19 | Failure to stress test the software | 1 | 75 | 25 | 15
20 | Failure to use problem language | 1 | 34 | 66 | 30
21 | High staff turnover | 1 | 71 | 29 | 16
22 | Key activities are discontinued | 1 | 74 | 26 | 17
23 | Lack of Requirements Traceability Matrix | 1 | 67 | 33 | 19
24 | Lack of clearly defined organizational (responsibility and accountability) structure | 1 | 82 | 18 | 11
25 | Lack of management support | 1 | 87 | 13 | 6
26 | Lack of priorities | 1 | 85 | 15 | 8
27 | Lack of understanding that demo software is only good for demos | 1 | 47 | 53 | 26
28 | Management expects a CASE Tool to be a silver bullet | 1 | 45 | 55 | 27
29 | Political considerations outweigh technical factors | 1 | 86 | 14 | 9
30 | Resources are not allocated well | 1 | 92 | 8 | 4
31 | The Quality Assurance Team is not responsible for the quality of the software | 1 | 40 | 60 | 29
32 | There are too many people working on the project | 1 | 36 | 64 | 32
33 | Unrealistic deadlines - hence schedule slips | 1 | 86 | 14 | 7
34 | Hostility between developer and IV&V | 1 | 33 | 67 | 33

Note 1: The papers were written for a class on IV&V, hence the emphasis on IV&V. However, if the descriptions of tasks that IV&V should have performed (in the papers) are examined, the word “IV&V” could easily be replaced with the word “systems engineering,” and the papers would be equally valid.



Further analysis, including risk-indicator priorities, sensitivity analysis, and the risk-indicators people disagreed with, appears in the next section of this article.


