When I hear the word testing, I become stressed. What a headache, I think. Pictures flash through my head of teams burning the midnight oil trying to get a product out. Ed Yourdon's book Death March 1 comes to mind, along with visions of sleepless nights and troublesome days. Thoughts about budget and schedule problems clutter my head, along with the seemingly ever-present performance problems that arise near the end of the project. I ponder, "How did we get into so much trouble?" and contemplate, "Have things gotten any better?"
When you give these questions some thought, you realize that we have made great strides in the realm of testing over the past decade. We have shifted to incremental development and delivery, which addresses persistent integration problems. Test management processes have matured, and people seem to be paying attention to test issues earlier in their programs. Test methods and tools that we only talked about a decade ago are now in use and working. Most important, we no longer seem to be beating our heads against the wall as we cope with the issues and challenges that pop up the moment we start integrating and testing our products.
Let's look at the improvements by summarizing how we've dealt with the problems that existed just a decade ago. Then, let's look to the future to identify the challenges that testing must face in the near term.
To check out where we've made progress, I've consulted some friends, old and new, in my bookcase and periodicals rack 2.
For example, I reviewed Fred Brooks's Mythical Man-Month 3, Tom DeMarco's Controlling Software Projects 4, Bob Glass's Recollections of Software Pioneers 5, Walker Royce's excellent new book Software Project Management 6, and others to see what they said on the subject of testing. They, and the test books that I have reviewed, seem to agree with the observations relative to test wisdom that are summarized in Table 1.
| No. | Problem | Current Wisdom |
|-----|---------|----------------|
| 1 | Testing considered late in the project | Start test planning and preparation the day you start the project |
| 2 | Requirements not testable | Validate the testability of requirements as you write the specification 7 |
| 3 | Integrate after all components have been thoroughly tested | Build a little, then test a little. Don't wait until the last moment to test. Try before you buy. |
| 4 | One step forward, two steps backward | Use repeatable processes to order the manner in which you integrate and test the system |
| 5 | Regression testing done ad hoc | Automate the test process and use tools to specify, perform and administer its conduct 8 |
| 6 | Test progress hard to measure (test until you and/or your budget are exhausted) | Use a variety of standard software metrics to determine whether you have tested enough 9 |
Not only have we made progress in the world of testing, we've started tackling a host of new challenges. The world of software development is undergoing a lot of change these days with the influx of new paradigms (spiral, incremental development, drop and ship, etc.), new technology (Java, active agents, etc.) and component-based software development. This change has fostered new approaches to evaluating and qualifying software components. For example, agents are now being used as part of several modern development environments to capture metrics data automatically. These metrics trigger actions, especially when error rates and other indicators show that quality goals are not being realized (a minimal sketch of such an agent follows Table 2). Table 2 identifies some of the new challenges the test community faces and summarizes how it is trying to address them.
| No. | Challenge | Current Solution/Approach |
|-----|-----------|---------------------------|
| 1 | Incremental and spiral paradigms 10 | Incremental testing; regression test baseline; early user testing (hopefully with a prototype); use cases to define threads through software per usage views |
| 2 | COTS-based development paradigm 11 | Try before you buy; performance benchmarking; open Application Program Interface (API); preferred package and vendor lists; simplified glue code development |
| 3 | Component-based development paradigm 11 | Open API; agent-based testing; use cases to group components into test sets; test harness (with standard instrumentation to test fine-grained passive/active parts) |
| 4 | Java (active applets)* | Agent-based testing; fuzzy set theory (localization); neural networks (dynamic instantiation); Java virtual machine restriction and instrumentation |
| 5 | Active agents (including web-based robots, spiders, etc.)* | Brute force testing using distributed test technology; Java testing concepts (see 4); knowledge-based extensions (smart agents, ORB-based guardians, etc.) |
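To make the agent idea concrete, here is a minimal sketch, in Python, of the kind of metrics agent described above. It is not any particular environment's implementation; the class names, the five-percent failure-rate goal, and the alert action are illustrative assumptions only.

```python
# Illustrative sketch of a metrics-capturing agent: it records test-run results,
# computes the cumulative failure rate, and triggers an action when a quality
# goal is not being met. Names and thresholds are assumptions for this example.

from dataclasses import dataclass


@dataclass
class TestRunSummary:
    """Results reported by one automated test run."""
    tests_executed: int
    tests_failed: int


class MetricsAgent:
    """Watches incoming test-run summaries and triggers an action when the
    observed failure rate exceeds the project's quality goal."""

    def __init__(self, max_failure_rate: float = 0.05):
        # Quality goal: no more than 5% of tests may fail (illustrative value).
        self.max_failure_rate = max_failure_rate
        self.history: list[TestRunSummary] = []

    def record(self, summary: TestRunSummary) -> None:
        """Capture metrics data automatically as each test run completes."""
        self.history.append(summary)
        rate = self.failure_rate()
        if rate > self.max_failure_rate:
            self.trigger_action(rate)

    def failure_rate(self) -> float:
        executed = sum(s.tests_executed for s in self.history)
        failed = sum(s.tests_failed for s in self.history)
        return failed / executed if executed else 0.0

    def trigger_action(self, rate: float) -> None:
        # In practice this might open a defect report or block a build
        # promotion; here it simply reports the violation.
        print(f"Quality goal not met: failure rate {rate:.1%} exceeds "
              f"{self.max_failure_rate:.0%} threshold")


if __name__ == "__main__":
    agent = MetricsAgent()
    agent.record(TestRunSummary(tests_executed=200, tests_failed=4))   # within goal
    agent.record(TestRunSummary(tests_executed=150, tests_failed=20))  # triggers alert
```

In a real environment the triggered action would feed a defect-tracking or build-promotion system rather than simply printing a warning.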
That's great news, you're probably thinking. We've got the test demon under control. Well, that's not exactly the case. Only leading firms within the industry have put these concepts to work systematically, repeatedly and consistently. The major reason behind this gap between theory and practice is simple: people buckle under deadline pressures. Anyone who's been there understands the problem. Software projects tend to get into trouble a little at a time, not all at once. As things go awry, process improvements, disciplined methods and other good ideas are discarded in the effort to stay on schedule. So, it seems we still face the same issues we did a decade ago, even in light of the progress we have made. Simply stated, when we become entangled in crunch mode, testing discipline seems to go out the door.
Test research suffers similar maladies. There is still a push to improve specification technology, the motivation being to get rid of errors early and eliminate the need to test. While philosophically appealing, we still haven't figured out how to tame the specification beast. This emphasizes the need for research to address the test issues identified in Table 2. Unfortunately, the university and research community is not addressing this need. When you review the premier research programs in the United States and abroad, you see that most of their money is being spent on development rather than test topics.
Are my conclusions relative to progress within the testing field still valid? Let me answer this question by posing some questions. What would your management do when faced with a potential schedule slip? Would they have the guts to delay shipment because they are worried about poor product quality? Would they accept the risks inherent in deferring documentation of the test results and getting their regression tests in order until after the delivery is made? How would they handle the situation when the user is screaming for results, members of the team are transitioning to new projects, and everyone involved seems overstressed, overworked and tired? What would you do if you were placed in their shoes?
In spite of the advances we seem to have made in the technology, we can conclude that the same pressures to release prematurely persist when it comes to testing. Perhaps this is an important message. It says to me that we may need to alter the path we take as we embark on our quest for new and better ways to handle the test challenges I've outlined. When you perform a root cause analysis to determine the real problem, you can make the following three observations:
1. We never seem to allocate enough time and effort to testing activities. Even those who do seem to get into trouble because their management tends to reallocate these resources to other activities as problems arise. In response, maybe we should calibrate our estimation models more precisely to our actual test experience so that we have adequate resources when we start. Then, we could put processes in place to draw on retained reserves to deal with risk instead of taking funds away from the testing effort.

2. Management at all levels of the organization doesn't seem to fully understand what it takes to be successful in a test effort. They don't realize that a large investment in processes, tools, techniques, facilities, and infrastructure is needed to put test technology to work for them. Perhaps the test community needs to do a better job of educating its management about its needs. But it needs to do so armed with the data described above about what it really takes to get the job done.

3. Test management tends to pay more attention to the technical issues than the management issues. For example, they focus on test methods and tools instead of processes and infrastructure. I believe that the order of concentration should be reversed. Weave your test expectations into your standard software development process. If you use the Capability Maturity Model as your framework 12, do this in a way that makes test discipline a natural part of the way you conduct your business. If you are pursuing ISO certification, define quantitative test expectations for your gate checklists (a small illustration follows this list).
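To illustrate what quantitative test expectations on a gate checklist might look like, here is a small sketch. The specific metrics and thresholds are assumptions chosen for the example, not figures prescribed by the Capability Maturity Model or ISO.

```python
# Illustrative sketch of a quantitative test gate: each criterion pairs a metric
# with a threshold, and the gate passes only when every measured value meets it.
# The metrics and limits below are example assumptions, not mandated figures.

GATE_CRITERIA = {
    "statement_coverage_pct": ("min", 85.0),    # at least 85% statement coverage
    "open_priority1_defects": ("max", 0),       # no open priority-1 defects
    "requirements_traced_pct": ("min", 100.0),  # every requirement traced to a test
}


def gate_passes(measured: dict) -> bool:
    """Return True only if every quantitative expectation is satisfied."""
    ok = True
    for metric, (kind, limit) in GATE_CRITERIA.items():
        value = measured.get(metric)
        if value is None:
            print(f"MISSING  {metric}: no measurement reported")
            ok = False
        elif (kind == "min" and value < limit) or (kind == "max" and value > limit):
            print(f"FAIL     {metric}: {value} (limit {kind} {limit})")
            ok = False
        else:
            print(f"PASS     {metric}: {value}")
    return ok


if __name__ == "__main__":
    results = {
        "statement_coverage_pct": 88.5,
        "open_priority1_defects": 1,
        "requirements_traced_pct": 100.0,
    }
    print("Gate passed" if gate_passes(results) else "Gate failed")
```

The point of expressing the checklist this way is that the gate passes or fails on measured numbers, not on judgment calls made under schedule pressure.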
I'd like to issue a call to action. Let's do something about this state of affairs. Let's view these three observations as opportunities, not challenges. Instead of complaining that we don't have the resources to do the job right, let's gather data about our test experience and publish it to set reasonable expectations. Let's ask professional groups like the International Society of Parametric Analysts (ISPA) and the International Function Point Users Group (IFPUG) to prepare benchmarks about testing for the community. Let's stimulate more work on test issues within the universities and research institutions. Most importantly, let's use the data we publish to prepare business cases for improving our processes and putting a viable test management infrastructure in place as we initiate our education activities.
About the Author
Donald J. Reifer is one of the leading figures in the field of software engineering and management with over 30 years of progressive experience in both industry and government. Recently, Mr. Reifer managed the DoD Software Initiatives Office under an Intergovernmental Personnel Act assignment with the Defense Information Systems Agency. As part of this assignment, he also served as the Director of the DoD Software Reuse Initiative and Chief of the Ada Joint Program Office. Previously, while with TRW, Mr. Reifer served as Deputy Program Manager for their Global Positioning Satellite (GPS) efforts. While with the Aerospace Corporation, Mr. Reifer managed all of the software efforts related to the Space Transportation System (Space Shuttle). Currently, as President of RCI, Mr. Reifer supports executives in many Fortune 500 firms who are looking to develop investment strategies and improve their software capabilities and capacity. Mr. Reifer is the Principal Investigator on our best software acquisition practices and information warfare SBIR efforts. He is also helping develop a variety of estimating models as a senior research associate on the USC COCOMO II team led by Dr. Barry Boehm. Mr. Reifer was awarded the Secretary of Defense's Medal for Outstanding Public Service in 1995 for the innovations he brought to the DoD during his assignment. Some of his many other honors include the Hughes Aircraft Company Fellowship, the Frieman Award for advancing the field of parametrics, the NASA Exceptional Service Medal and membership in Who's Who in the West.