Testing Software Based Systems: The Final Frontier

Thomas Drake, Coastal Research & Technology Inc. (CRTI)


Where Are We? Setting the Stage

The increasing cost and complexity of software development are leading software organizations to search for new ways, through process, methodology, and tools, to improve the quality of the software they develop and deliver. However, the overall process is only as strong as its weakest link. That critical link is software quality engineering, as an activity and as a process. Testing is the key instrument for making this process happen.

Software testing has traditionally been viewed by many as a necessary evil, dreaded by software developers and management alike, rather than as an integrated, parallel activity staged across the entire software development life cycle. One thing is clear: many still consider testing only a negative step, usually occurring at the end of the software development process, while others now view it as a “competitive edge” practice and strategy.

The best that can happen under the former perception is that no problems are detected in the software before delivery, and that none surface afterward. We know this is not the case in the real world of application development. In reality, it is testing that finds the problems that trigger a feedback loop to development, for resolution and then retesting to make sure each fix works and has not created other problems. All of this activity invariably happens under extreme time constraints and with significant management visibility. But it is the kind of visibility that no one usually wants, because everyone above in the development food chain could slip.


The Real World of Software Development - A Sobering Perspective

The following scenario is not unusual and represents a composite perspective gleaned from this writer’s 10 years of experience in the information technology industry and DoD environments.

A software test specialist was assigned to work on a multimillion-dollar effort to develop a new system. The test specialist knew that the completion date for the program was unrealistic, given the scope and complexity of the development effort.

As a result of testing, the test specialist knew there were serious technical difficulties affecting the software system’s interface performance with a very large relational database system, as well as numerous bugs in the query routines for the graphical user interface.

After months of keeping growing concerns private, the test specialist decided to share them with a colleague. The concerns had not been raised earlier because previous attempts by others to do so had ended with management telling them not to rock the boat. They had learned that viewpoints perceived as negative were unwelcome and not wanted. Colleagues had stopped giving feedback to management because they felt their views would be ignored, and they feared that more feedback of this kind would hurt their careers. So the test specialist told management what they seemed to want to hear: that there were some minor problems with the software, but nothing that could not be resolved in time for the projected delivery date.

However, as the release date loomed ever closer, it was becoming obvious that the software was overly complex, had a lot of functional problems, and most importantly, would not operate as promised at the point of delivery. The test specialist knew it would be a disaster if the system was delivered as scheduled.

The system went through a development test and evaluation (DT&E) period that was aborted, and the program was subsequently canceled by the acquisition organization after multiple tens of millions of dollars had been spent on development. The test organization was later disbanded because it was perceived as part of the problem.

Test professionals who find themselves in similar circumstances are faced with a difficult choice. What should this test specialist have done?

The Association for Computing Machinery (ACM) code of ethics states the following:

“The honest computing professional will not make deliberately false or deceptive claims about a system or system design, but will instead provide full disclosure of all pertinent system limitations and problems.”

The biggest single obstacle is cultural. Testing is generally not viewed in our software development environments as where the real action is. In many development organizations the perception persists that testers are software developers who could not make it, and that the “real” developers become programmers. Testers are often regarded as second-class citizens and rewarded accordingly. This frequently leads to high turnover, junior-level experience, and no management commitment to a comprehensive test program.

However, becoming a good test engineer requires a skill set at least as complex as that of a good software developer. And testing grows ever more important as our dependence on software creates “consequential damages” and legal quicksand when it does not work as advertised. What does it take?


Creating the Right Environment - The People Side of the Equation

Senior managers within information technology must create an environment, and foster a professional climate, in which their test and development engineers are encouraged to recognize problems within a software development effort and respond to them constructively, and in which all project tasking is rigorously and regularly reviewed. It is the job of the tester to “tell it like it is.”

We usually think of testing in software development as something we do after we have developed the code, or when we run out of time. The approach presented here instead treats testing as a fully integrated yet independent activity, running in parallel with development and having a life cycle all its own; the people, the process, and the appropriate automated technology are all crucial to the successful delivery of a software-based system. Planning, managing, executing, and documenting testing as a key process activity during all stages of development is an incredibly difficult undertaking, and by definition it has to be comprehensive. Finally, who does the testing, and the commitment made to it, are perhaps as important as the testing itself.


Software Quality Engineering - As a Discipline and as a Practice (Process and Product)

Software quality engineering is composed of two primary activities: process-level quality, normally called quality assurance, and product-oriented quality, normally called testing. Process-level quality establishes the techniques, procedures, and tools that help promote, encourage, facilitate, and create a software development environment in which code is produced that is efficient, optimized, acceptable, and as fault-free as possible. Product-level quality focuses on ensuring that the delivered software is as error-free as possible, functionally sound, and meets or exceeds the real user’s needs. Testing is normally the vehicle for finding errors in order to get rid of them. This raises an important question: just what is testing?

Common definitions for testing - A Set of Testing Myths:

“Testing is the process of demonstrating that defects are not present in the application that was developed.”

“Testing is the activity or process which shows or demonstrates that a program or system performs all intended functions correctly.”

“Testing is the activity of establishing the necessary “confidence” that a program or system does what it is supposed to do, based on the set of requirements that the user has specified.”

All of the above are very common and still prevalent definitions of testing. However, there is something fundamentally wrong with each of them: each takes a positive approach toward testing. In other words, each of these myths describes an activity that sets out to prove that something works.

However, it is very easy to show that something works and much harder to show where it does not. In fact, in terms of formal logic, it is nearly impossible to prove that defects are not present. Just because a particular test does not find a defect does not prove that a defect is absent; it means only that the test did not find it.
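To make the point concrete, here is a minimal hypothetical sketch in Java (the routine and its test cases are invented for illustration; run with assertions enabled via "java -ea LeapYearDemo"). Every "positive" test chosen to demonstrate that the code works passes, yet a defect is still present, waiting for a test written with the intent of finding it:

    public class LeapYearDemo {

        // Deliberately buggy: ignores the Gregorian calendar's century rules.
        static boolean isLeapYear(int year) {
            return year % 4 == 0;
        }

        public static void main(String[] args) {
            // "Positive" tests chosen to show that the code works - all pass.
            assert isLeapYear(1996);
            assert isLeapYear(2004);
            assert !isLeapYear(2003);

            // A test written with the deliberate intent of finding an error:
            // years divisible by 100 but not by 400 are not leap years.
            assert !isLeapYear(1900) : "defect found: 1900 is not a leap year";
        }
    }

The first three assertions pass and prove nothing about the absence of defects; only the deliberately destructive fourth one exposes the bug.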

These myths remain entrenched in much of how we collectively view testing, and this mind-set sets us up for failure even before we really start testing! So what is the real definition of testing?

“Testing is the process of executing a program/system with the intent of finding errors.”

The emphasis is on the deliberate intent of finding errors. This is very different from simply proving that a program or system works. This definition of testing comes from The Art of Software Testing by Glenford Myers. It was his opinion that computer software is one of the most complex products to come out of the human mind.

So why test in the first place? You know you cannot find all of the bugs. You know you cannot prove the code is correct. And you know you will not win any popularity contests by finding bugs. So why bother testing at all, given these constraints? The fundamental purpose of software testing is to find problems in the software. Finding problems and having them fixed is the core of what a test engineer does. A test engineer should WANT to find as many problems as possible, and the more serious the problems, the better. It therefore becomes critical that the testing process be as efficient and as cost-effective as possible in finding those problems. The primary axiom for the testing equation within software development is this:

“A test when executed that reveals a problem in the software is a success.”

The purpose of finding problems is to get them fixed. The benefit is code that is more reliable, more robust, more stable, and closer to what the real end user wanted, or thought they asked for, in the first place! A tester must take a destructive attitude toward the code, knowing that this activity is, in the end, constructive. Testing is a negative activity conducted with the explicit intent and purpose of creating a stronger software product, and it focuses on the “weak links” in the software. If a larger software quality engineering process is established to prevent and find errors, we can then change our collective mind-set about how to ensure the quality of the software we develop.
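The "weak links" are very often boundaries. As another small hypothetical sketch (invented spec and names, Java again), consider a discount routine with a classic off-by-one error. The comfortable mid-range tests pass and reveal nothing; only the test aimed squarely at the boundary is a success in the sense the axiom above intends, because it reveals the problem:

    public class BoundaryDemo {

        // Hypothetical spec: orders of 100 units or more earn a 10% discount.
        // The weak link is at the boundary: '>' should have been '>='.
        static int totalCents(int units, int unitCents) {
            int total = units * unitCents;
            return (units > 100) ? total * 90 / 100 : total;
        }

        public static void main(String[] args) {
            // Mid-range cases: both pass, and neither tells us anything new.
            assert totalCents(50, 100) == 5_000;
            assert totalCents(200, 100) == 18_000;

            // The destructive test sits exactly on the boundary and fires,
            // exposing the off-by-one defect.
            assert totalCents(100, 100) == 9_000 : "defect: no discount at exactly 100 units";
        }
    }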

The other problem is that you will never really have enough time to test. We need to change our understanding and apply the testing time we do have to the earlier phases of the software development life cycle. You need to think about testing the first day you think about the system. Rather than viewing testing as something that takes place after development, focus instead on testing everything as you go along: the concept of operations, the requirements and specifications, the design, the code, and of course, the tests themselves!


The Further Along You Are In The Software Development Life Cycle The More It Costs To Test!

Lesson learned: test early and often. Test the design of the system before you write any pseudocode. Test the specs before you actually code. Review the code during coding, before you test it, and then finally execute actual test cases. By doing reviews and code-level analyses during all phases of the development life cycle, you will find many, if not most, of the problems in the system before the traditional testing period even begins. These activities alone will greatly improve the quality of the delivered system.
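Testing the specs before you code can itself be made executable. The sketch below is a hypothetical variation on the classic triangle exercise from Myers' The Art of Software Testing: the test cases are derived directly from the stated specification, before the implementation is trusted, and several of them exist purely to attack degenerate and invalid inputs:

    public class TriangleSpecTest {

        // Hypothetical spec: classify three integer side lengths as an
        // equilateral, isosceles, or scalene triangle, or reject the input.
        static String classify(int a, int b, int c) {
            if (a <= 0 || b <= 0 || c <= 0) return "invalid";
            if (a + b <= c || a + c <= b || b + c <= a) return "not a triangle";
            if (a == b && b == c) return "equilateral";
            if (a == b || b == c || a == c) return "isosceles";
            return "scalene";
        }

        public static void main(String[] args) {
            // Cases written from the specification, before the code is trusted:
            assert classify(3, 4, 5).equals("scalene");
            assert classify(2, 2, 2).equals("equilateral");
            assert classify(2, 2, 3).equals("isosceles");
            assert classify(1, 2, 3).equals("not a triangle"); // degenerate: a + b == c
            assert classify(0, 4, 5).equals("invalid");        // zero-length side
            assert classify(-3, 4, 5).equals("invalid");       // negative side
        }
    }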

“Find out the cause of this effect, Or rather say, the cause of this defect, For this effect defective comes by cause.” - Hamlet (with thanks to DeMarco)

About the Author

Mr. Drake is a software systems quality specialist and management and information technology consultant for Coastal Research & Technology Inc. (CRTI). He currently leads and manages a U.S. government agency-level Software Engineering Knowledge Based Center’s software quality engineering initiative. As part of an industry and government outreach/partnership program, he holds frequent seminars and tutorials covering code analysis, software metrics, Object-Oriented (OO) analysis for C++ and Java, coding practice, testing, best current practices in software development, the business case for software engineering, software quality engineering practices and principles, quality and test architecture development and deployment, project management, organizational dynamics and change management, and the people side of information technology. He is the principal author of a chapter on “Metrics Used for Object-Oriented Software Quality” for a CRC Press Object Technology Handbook published in December of 1998. In addition, Mr. Drake is the author of a theme article entitled “Measuring Software Quality: A Case Study,” published in the November 1996 issue of IEEE Computer. Mr. Drake is listed with the International Who’s Who for Information Technology for 1999, is a member of IEEE and an affiliate member of the IEEE Computer Society. He is also a Certified Software Test Engineer (CSTE) from the Quality Assurance Institute (QAI).

Author Contact Information

Thomas A. Drake
Coastal Research & Technology Inc.
5063 Beatrice Way
Columbia, MD 21044
Phone: (301) 688-9440
Fax: (301) 688-9436
E-mail: [email protected]

“Software implementation is a cozy bonfire, warm, bright, a bustle of comforting concrete activity. But beyond the flames is an immense zone of darkness. Testing is the exploration of this darkness.” - extracted from the 1992 Software Maintenance Technology Reference Guide

