Software Testing as an Art, a Craft and a Discipline

James A. Whittaker, Professor of Computer Science, Florida Tech

The first book on software testing set the tone for software testers and software testing careers. The title of that book, The Art of Software Testing [1], identified our discipline as a collection of artists applying their creativity to software quality. (Although several collections of testing papers were published as books before Myers’ 1979 work, his was the first book written from scratch as a software testing text.) Practitioners of software testing and quality assurance have been sold short by such a label.

Artists indeed! Software testing is a far cry from those endeavors that most people accept as art: painting, sculpture, music, literature, drama and dance. To my mind, this is an unsatisfying comparison, given that my training as a tester has been more engineering than art. My success as a tester has everything to do with my engineering abilities and little to do with any artistic penchant.

Certainly, I’ll agree that, like artists, software testers need to be creative, but art implies skill without training. Most virtuoso artists were born to the task and those of us unlucky enough to have no artistic talent are unlikely to develop such skill despite a lifetime of practice.

I also understand that two authors attempted to copyright the title The Craft of Software Testing, acknowledging Myers’ title and implying that the discipline had grown from art to craft. This too sells testers far short of the difficulty of their calling. Indeed, the idea of software testing as a craft is every bit as unsettling as calling it an art. Craftsmen are carpenters, plumbers, masons and landscape designers. Crafts are characterized by the lack of a real knowledge base. Most craftsmen learn on the job, and mastery of their craft is a given as long as they have the drive to practice. Crafts are two parts dexterity and only one part skill. Indeed, carpenters have no need to understand the biology of trees, only to skillfully mold wood into beautiful and useful things.

Testing as an art or a craft doesn’t begin to describe what we do, and I’ll start a fight with anyone who attempts to call it arts and crafts!

I suggest the most fitting title for a book on software testing would be The Discipline of Software Testing. I would argue that discipline better defines what we do as testers and provides us with a useful model on which to pattern our training and our careers. Indeed, this is the best reason to call it a discipline: by studying other disciplines, we gain more insight into testing than the analogies of art and craft can offer.

A discipline is a branch of knowledge or learning. Mastery of a discipline is achieved through training, not practice. Training is different from practice. Practice means doing the same thing over and over again; the key is repetition. One can practice throwing a ball, for example, but even though “practice makes perfect,” simply throwing a ball will not make you a major league pitcher. Becoming that good requires training.

Training is much more than just practice. Training means understanding every nuance of your discipline. A pitcher trains by building his muscles so that maximum force can be released when throwing a ball. A pitcher trains by studying the dynamics of the mound, where to land his foot for maximum effect on any given pitch and how to make use of his much stronger leg muscles to propel the ball faster. A pitcher trains by learning how to effectively use body language to intimidate batters and runners. A pitcher trains by learning to juggle, to dance and to do yoga. A pitcher who trains to be at the top of his game does many things that have nothing to do with throwing a ball and everything to do with making himself a better ball thrower. This is why Hollywood’s “karate kid” waxed cars and balanced on fence posts; he wasn’t practicing to fight, he was training to be a better fighter.

Treating software testing as a discipline is a more useful analogy than treating it as an art or a craft. We are not artists whose brains are wired at birth to excel at quality assurance. We are not craftsmen who perfect their skill through on-the-job practice. If we see ourselves that way, full mastery of the discipline of software testing will likely elude us. We may become good, indeed quite good, but still fall short of achieving black belt—dare I say Jedi?—status. Mastery of software testing requires discipline and training.

A software testing training regimen should promote understanding of fundamentals. I suggest three specific areas of pursuit to guide anyone’s training:

First and foremost, master software testers should understand software. What can software do? What external resources does it use to do it? What are its major behaviors? How does it interact with its environment? The answers to these questions have nothing to do with practice and everything to do with training. One could practice for years and not gain such understanding.

Software works in an environment best exemplified by the diagram of Figure 1 [2, 3]. This diagram shows four major categories of software users, i.e., entities within an application’s environment that are capable of sending the application input or consuming its output. It is interesting to note that of the four major categories of users, only one is visible to the human tester’s eye: the user interface. Interactions with the kernel, the file system and other software components happen without scrutiny. Without understanding these interfaces, testers are taking into account only a very small percentage of the total inputs to their software. By paying attention only to the visible user interface, we limit which bugs we can find and which behaviors we can force.

Take as an example the scenario of a full hard drive. How do we test this situation? Inputs through the user interface will never force the code to handle the case of a full hard drive. This scenario can only be tested by controlling the file system interface; specifically, we need to force the file system to indicate to the application that the disk is full. Driving the application through its UI is only one part of the solution.
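As a minimal sketch of this kind of file-system fault injection (my illustration, not the author’s specific tooling; save_report is a hypothetical stand-in for the code under test), a Python test can patch the built-in open call so that it reports a full disk:

    import errno
    import unittest
    from unittest import mock

    def save_report(path, text):
        # Hypothetical code under test: returns True on success,
        # False if the disk is full.
        try:
            with open(path, "w") as f:
                f.write(text)
            return True
        except OSError as e:
            if e.errno == errno.ENOSPC:
                return False
            raise

    class DiskFullTest(unittest.TestCase):
        def test_save_report_handles_full_disk(self):
            # Simulate the file system reporting a full disk, without
            # actually filling one: patch open() to raise ENOSPC.
            disk_full = OSError(errno.ENOSPC, "No space left on device")
            with mock.patch("builtins.open", side_effect=disk_full):
                self.assertFalse(save_report("report.txt", "hello"))

    if __name__ == "__main__":
        unittest.main()

No hard drive ever actually fills up; the “disk full” condition is manufactured in software at the file system interface while the application is exercised normally.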

Understanding the environment in which your application works is a nontrivial endeavor that all the practice in the world will not help you accomplish. Understanding the interfaces that your application possesses and establishing the ability to test them requires discipline and training. This is not a task for artists and craftspeople.

Second, master software testers should understand software faults. How do developers create faults? Are some coding practices or programming languages especially prone to certain types of faults? Are certain faults more likely for certain types of software behavior? How do specific faults manifest themselves as failures?

There are many different types of faults that testers must study, and this forum is too limited to describe them all; for a good start see [2]. However, consider default values for data variables as an example. Every variable used in a program must first be declared and then given an initial value. If either of these steps is skipped, a fault exists for testers to look for. Failure to declare a variable (as is the case with languages that allow implicit variable declaration) can cause what the programmer intends to be a single variable to exist as several: a misspelled name silently creates a new variable. Failure to initialize a variable means that when the variable is used its value is unpredictable. In either case, the software will eventually fail. The trick for the tester is to be able to force the application to fail and then be able to recognize that it has failed.
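A toy sketch in Python (my illustration; the names are hypothetical) shows the implicit-declaration fault and the tester’s twin jobs of forcing the failure and recognizing it:

    # A misspelled assignment silently declares a second variable,
    # so the value the programmer intends for "total" lands in "totl".
    def sum_positive(values):
        total = 0
        for v in values:
            if v > 0:
                totl = total + v   # typo: creates a new variable "totl"
        return total               # "total" is never updated

    # The fault hides until a test forces the faulty path and checks
    # the result; the second assertion raises AssertionError, making
    # the failure visible.
    assert sum_positive([]) == 0       # passes: faulty line never runs
    assert sum_positive([1, 2]) == 3   # fails: returns 0, exposing the bug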

Testers must understand what software faults are commonly made and be able to identify the potential for a fault to exist in any given feature. This is not an art; this is not a craft. The ability to do this is based on understanding the very nature of that which we test: software and the faults it may contain. Just as construction engineers understand the potential faults in a large building project, so must we understand the potential faults that may be introduced in a large software project.

Figure 1. The four classes of users in an application’s environment: the human user at the user interface, the operating system kernel, the file system and other software components [2, 3].

Third, master software testers should understand software failure. How and why does software fail? Are there symptoms of software failure that give us clues to the health of an application? Are some features systemically problematic? How does one drive certain features to failure?

Recognizing a failure is arguably the most important skill that a tester can possess. After all, if a fault manifests but we fail to notice the failure symptoms, it is unlikely that the bug will get fixed. The problem here relates back to understanding Figure 1. Only some of the symptoms of failure manifest at the user interface, where a human tester can easily see them. The others are buried in the file system, kernel calls and API calls to other components. These interfaces are not visible to the human eye and require specialized tools to analyze properly.
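As a hedged sketch of what such tooling does (real runtime monitors hook the operating system itself; this toy merely wraps Python’s built-in open), a tester can log every call an application makes across the file system interface so that failures below the UI become visible:

    import builtins

    _real_open = builtins.open

    def logged_open(path, *args, **kwargs):
        # Report every call the application makes across the file
        # system interface, whether or not anything appears on screen.
        try:
            f = _real_open(path, *args, **kwargs)
            print(f"open({path!r}) -> ok")
            return f
        except OSError as e:
            print(f"open({path!r}) -> FAILED: {e}")
            raise

    builtins.open = logged_open  # install the monitor

    # The code under test now reveals its file-system traffic:
    try:
        open("/no/such/dir/config.ini")
    except OSError:
        pass  # the failure was already logged at the interface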

Understanding software, faults and failures is the first step to treating software testing as a discipline. Treating software testing as a discipline is the first step toward mastering software quality. And there is more, always more, to learn. Discipline is a lifelong pursuit. If you trick yourself into thinking you have all the answers, then mastery will elude you. But training builds knowledge, so the pursuit itself is worthwhile whether or not you ever reach the summit.

References

[1] G. J. Myers, The Art of Software Testing (Wiley, New York, 1979).

[2] J. A. Whittaker, How to Break Software (Addison Wesley, Reading MA, 2002).

[3] J. A. Whittaker, “Software’s invisible users,” IEEE Software, 18, 3, pp. 84-88 (2001).


About the Author

James A. Whittaker is a professor of computer science at the Florida Institute of Technology. He earned his Ph.D. in computer science from the University of Tennessee in 1992. His research interests are software testing, software security, software vulnerability testing and anti-cyberwarfare technology. He is the author of How to Break Software, How to Break Software Security (with Hugh Thompson) and over 50 peer-reviewed papers on software development and computer security. He holds patents on various inventions in software testing and defensive security applications and has attracted millions in funding, sponsorship and license agreements while a professor at Florida Tech. He has also served as a testing and security consultant for Microsoft, IBM, Rational and many other US companies. In 2001 he was appointed to Microsoft’s Trustworthy Computing Academic Advisory Board and was named a “Top Scholar” by the editors of the Journal of Systems and Software based on his research publications in software engineering. His research team at Florida Tech is known for its testing technologies and tools, which include the highly acclaimed runtime fault injection tool Holodeck. His research group is also well known for its development of exploits against software security, including cracking encryption, passwords and infiltrating protected networks via novel attacks against software defenses.

