WebSite Testing

Dr. Edward Miller, Software Research, Inc.


Introduction

The nearly instant worldwide audience makes a WebSite's quality and reliability crucial to its success. The nature of the WWW and of WebSite software poses unique software testing challenges. Webmasters, WWW applications developers, and WebSite quality assurance managers need tools and methods that meet very specific needs. Our technical approach, based on extending existing WWW browsers, offers many attractive benefits in meeting these needs.


Background

Within minutes of going live, a WWW application can have many thousands more users than a conventional, non-WWW application. The immediacy of a WebSite creates immediate expectations of quality, but the technical complexities of a WebSite and the variations among available browsers make testing and quality control that much more difficult than for "conventional" client/server or application testing. Automated testing of WebSites is thus both an opportunity and a significant challenge.


Defining WebSite Quality and Reliability

Like any complex piece of software, a WebSite has no single quality measure that fully characterizes it.

There are many dimensions of quality, and each measure pertains to a particular WebSite in varying degrees.

Clearly, "Quality" is in the mind of the WebSite user. A poor-quality WebSite, one with many broken pages, faulty images, and CGI-bin error messages, may cost a company dearly in poor customer relations, lost corporate image, and even lost sales revenue. Very complex WebSites can also overload the user.


WebSite Architectural Factors

A WebSite can be quite complex, and that complexity can be a real impediment to assuring WebSite quality.

What makes a WebSite complex? Its architectural factors are the issues that test systems have to contend with.


WebSite Test Automation Requirements

Assuring WebSite quality automatically requires conducting sets of tests, automatically and repeatably, that demonstrate required properties and behaviors. Here are some required elements of tools that aim to do this.

Tests need to operate from the browser level for two reasons: (1) this is where users see a WebSite, so tests based on browser operation are the most realistic; and (2) tests based in browsers can be run locally or across the Web equally well. Local execution is fine for quality control, but performance measurement requires running across the Web, so that measured response times include the variable Web delays that real-world users experience.
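
To make the browser-level idea concrete, here is a minimal sketch of such a check in Python. It is an illustration only, not CAPBAK/Web: it assumes the Playwright browser-automation package is available, and the URL is a placeholder.

    # Minimal sketch of a browser-level check: load a page the way a user's
    # browser would, confirm it rendered, and record the elapsed time.
    # Illustration only; assumes the Playwright package, URL is a placeholder.
    import time
    from playwright.sync_api import sync_playwright

    def check_page(url):
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            start = time.time()
            response = page.goto(url)          # full page load, as a user would see it
            elapsed = time.time() - start
            ok = response is not None and response.ok
            title = page.title()
            browser.close()
        return ok, title, elapsed

    ok, title, elapsed = check_page("https://www.example.com/")
    print(f"ok={ok} title={title!r} load_time={elapsed:.2f}s")

Run locally, the same script serves quality control; run from machines out on the Internet, the measured times include the real Web delays discussed above.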


WebSite Dynamic Validation

Confirming the validity of what is tested is the key to assuring WebSite quality, and it is the most difficult challenge of all. Here are four key areas where test automation will have a significant impact.

  1. Operational Testing. Individual test steps may involve a variety of checks on individual pages in the WebSite, such as confirming that the page loads, that its links resolve, and that expected text and images appear (a sketch of such checks appears after this list).

  2. Test Suites. Typically you may have dozens or hundreds (or thousands) of such tests, and you may wish to run them in a variety of modes: unattended, distributed across many machines, in the background, etc. The sketch after this list shows one such unattended run over a list of pages.

  3. Content Validation. Apart from how a WebSite responds dynamically, the content itself should be checkable, either exactly or approximately; both styles of check are sketched after this list.

  4. Load Simulation. Load analysis needs to proceed by having a special-purpose browser act like a human user. This assures that the performance-checking experiment measures true performance, not performance under simulated but unrealistic conditions. There are many "http torture machines" that generate large numbers of http requests, but that is not necessarily the way real-world users generate requests; a sketch of a user-oriented load simulation also follows this list.
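
As a concrete illustration of items 1 and 2, the following sketch performs simple per-page operational checks (page status, broken links, missing images) over a list of pages and writes its findings to a log file so that it can run unattended. It uses only the Python standard library; the page list and log file name are placeholders, and it is not CAPBAK/Web.

    # Per-page operational checks run unattended over a small test suite.
    # Illustration only: page list and log path are placeholders.
    import urllib.request, urllib.error, urllib.parse
    from html.parser import HTMLParser

    class RefCollector(HTMLParser):
        """Collect href/src references (links, images) from a page."""
        def __init__(self):
            super().__init__()
            self.refs = []
        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.refs.append(value)

    def fetch(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status, resp.read().decode("utf-8", "replace")
        except urllib.error.HTTPError as err:
            return err.code, ""
        except (urllib.error.URLError, ValueError):
            return None, ""

    def check_page(url):
        status, html = fetch(url)
        if status != 200:
            return [f"page returned {status}"]
        problems = []
        collector = RefCollector()
        collector.feed(html)
        for ref in collector.refs:
            target = urllib.parse.urljoin(url, ref)
            if target.startswith("http"):
                ref_status, _ = fetch(target)
                if ref_status != 200:
                    problems.append(f"broken reference: {target} ({ref_status})")
        return problems

    pages = ["https://www.example.com/"]        # placeholder test suite
    with open("site_check.log", "w") as log:    # log file, so the run can be unattended
        for page in pages:
            for finding in check_page(page) or ["OK"]:
                log.write(f"{page}: {finding}\n")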
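
Item 3 can be illustrated with two simple validation styles: an exact check against a stored baseline checksum, and an approximate check that only requires certain key phrases to remain present. The baseline digest and phrases below are placeholders.

    # Two content-validation styles: exact (checksum of a stored baseline)
    # and approximate (required phrases must appear). Values are placeholders.
    import hashlib

    def exact_match(page_text, baseline_sha256):
        """Exact validation: the page must be identical to the stored baseline."""
        return hashlib.sha256(page_text.encode("utf-8")).hexdigest() == baseline_sha256

    def approximate_match(page_text, required_phrases):
        """Approximate validation: the page may change, but key content must remain."""
        return all(phrase in page_text for phrase in required_phrases)

    page_text = "<html><body>Welcome to Example Corp. Orders ship in 2 days.</body></html>"
    print(exact_match(page_text, "0" * 64))                          # placeholder baseline
    print(approximate_match(page_text, ["Example Corp", "Orders ship"]))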
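
Finally, the sketch below illustrates the user-oriented load simulation of item 4: each simulated user walks through a short scripted session with human-like "think time" between steps, instead of firing raw back-to-back http requests. The session script, user count, and pause lengths are placeholders.

    # User-oriented load simulation: concurrent simulated users with think time.
    # Illustration only; session URLs, user count, and pauses are placeholders.
    import random, threading, time, urllib.request

    SESSION = ["https://www.example.com/",
               "https://www.example.com/catalog",
               "https://www.example.com/order"]

    def simulated_user(user_id, results):
        for url in SESSION:
            start = time.time()
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    resp.read()
                results.append((user_id, url, time.time() - start))
            except OSError:
                results.append((user_id, url, None))
            time.sleep(random.uniform(2.0, 8.0))    # human-like pause before the next click

    results, threads = [], []
    for uid in range(10):                           # ten concurrent simulated users
        t = threading.Thread(target=simulated_user, args=(uid, results))
        t.start()
        threads.append(t)
        time.sleep(1.0)                             # stagger user arrivals
    for t in threads:
        t.join()
    timed = [r[2] for r in results if r[2] is not None]
    if timed:
        print(f"{len(timed)} requests, mean response {sum(timed)/len(timed):.2f} s")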


Testing System Characteristics

Considering all of these disparate requirements, it seems evident that a single product that supports all of these goals will not be possible. However, there is one common theme: the majority of the work seems to be based on "...what does it [the WebSite] look like from the point of view of the user?" That is, from the point of view of someone using a browser to look at the WebSite.

This observation led our group to conclude that it would be worthwhile to build certain test features into a "test enabled web browser", which we called CAPBAK/Web, in the expectation that this approach would let us perform the majority of WebSite quality control functions using that engine as a base.
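
The flavor of a "test enabled web browser" can be suggested with a small sketch: a thin wrapper around an ordinary browser engine that records every network response the browser receives and flags failures as the tester navigates. The sketch below uses the Playwright engine purely as a stand-in; it is an illustration of the idea, not the CAPBAK/Web implementation.

    # Conceptual sketch of a test-enabled browser: wrap a browser engine and
    # log every response it receives so each visit can be checked for failures.
    # Illustration only; assumes the Playwright package, URL is a placeholder.
    from playwright.sync_api import sync_playwright

    class TestEnabledBrowser:
        def __init__(self, playwright):
            self.browser = playwright.chromium.launch()
            self.page = self.browser.new_page()
            self.responses = []
            # Test hook: record the status and URL of every response received.
            self.page.on("response", lambda r: self.responses.append((r.status, r.url)))

        def visit(self, url):
            self.page.goto(url)
            return [entry for entry in self.responses if entry[0] >= 400]   # failures so far

        def close(self):
            self.browser.close()

    with sync_playwright() as p:
        tb = TestEnabledBrowser(p)
        print("failed responses:", tb.visit("https://www.example.com/"))
        tb.close()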

Browser Based Solution - With this as a starting point, we determined that the browser-based solution had to meet several additional requirements.

Taking these requirements into account, and after investigating the W3C's Amaya Browser and the open-architecture Mozilla/Netscape Browser, we chose the IE Browser as the initial base for our implementation of CAPBAK/Web.

User Interface - How the user interacts with the product is very important, in part because in some cases the user will be someone very familiar with WebSite browsing but not necessarily a testing expert. The design we implemented takes this reality into account.


Example Uses

Early applications of the CAPBAK/Web system have been very effective in running experiments and collecting data useful for WebSite checking. While we expect CAPBAK/Web to be the main engine for a range of WebSite quality control and testing activities, we have chosen two of the most typical and most important applications to illustrate how CAPBAK/Web can be used.


Summary

All of these needs and requirements impose constraints on the test automation tools used to confirm the quality and reliability of a WebSite. The CAPBAK/Web approach offers some significant benefits and technical advantages when dealing with complicated WebSites. Better, more reliable WebSites should be the result.

Resources

This article is based on many sources and relies in part on a prior White Paper, The WebSite Quality Challenge.

A more complete version of this paper can be found at WebSite Testing.

You can learn more about the CAPBAK/Web system by taking a Tour of CAPBAK.

There is a detailed description of the P4 Family of CAPBAK/Web Examples.

About the Author

Dr. Edward Miller is Chairman of Software Research, Inc., San Francisco, California, where he has been involved with software test tool development and software engineering quality questions. Dr. Miller has worked in the software quality management field for 25 years in a variety of capacities, and has been involved in the development of families of automated software testing and analysis support tools. He was chairman of the 1985 1st International Conference on Computer Workstations, and has participated in IEEE conference organizing activities for many years. He is the author of Software Testing and Validation Techniques, an IEEE Computer Society Press tutorial text. Dr. Miller received a Ph.D. (Electrical Engineering) degree from the University of Maryland, an M.S. (Applied Mathematics) degree from the University of Colorado, and a BSEE from Iowa State University.


Author Contact Information

Dr. Edward Miller
Software Research, Inc.
901 Minnesota Street
San Francisco, CA 94107 USA
Phone: (415) 550-3020, Tollfree (USA): 800-942-SOFT
Fax: (415) 550-303
E-mail: [email protected]
www.soft.com
