Volume 6, Number 1 - Topics in Software Engineering
It is no surprise to anyone that software systems frequently change during their lifetime. Users' demands for more features, together with changing environments, lead to constant change of the software. Software change is unavoidable and necessary, but it is important to remember that change can actually do more harm than good and may eventually ruin the system. As a matter of fact, repeated changes to a software system often lead to its degeneration: the system becomes more complex than necessary and less maintainable. This is especially true in environments where developers unfamiliar with the system are making the changes. Developers new to the system need to understand the software in order to implement the changes in the right places and in the right way, but they are often under intense time pressure. Frequently, developers have to make changes without having the time to first develop a good understanding of the system, making the risk of structural damage even greater. Considering that a great deal of effort was spent designing the architecture and that this large investment is lost if the architecture degenerates, it is valid to ask the question:
How do we allow for software changes while preventing architectural degeneration?
We asked ourselves this question as we experienced the problem of architectural degeneration in our own organization. We realized that one of our software products had indeed degenerated and had become unmaintainable. We invested in a considerable redesign of the software system, but were still concerned that the new version might degenerate again as soon as changes were made to it. The reason for our concern is that most of our developers are students who spend a couple of months at our center and then leave. As students, they do not know the original intentions behind the architecture, and they never have the time to become familiar with the software during their stay. Despite our concerns, we did not want to conduct time-consuming code reviews to check the software after each change. Instead, we created a process that allows us to quickly check whether or not the code conforms to our architectural guidelines. In this article, we describe the process we created and have been using to prevent architectural degeneration while allowing developers unfamiliar with the system to change it.
Before we introduce the process, we need to discuss our view of software architecture. The most basic building blocks of a software architecture are its components and their interrelationships. Architectural components exist on several abstraction levels depending on system size. At the highest level, the components are subsystems, which in a very large system can themselves be complex systems with subsystems of their own. Subsystems are often formed from lower-level components, which in an object-oriented system are collections of classes. In order for the architectural components to form a system, they must communicate with each other, creating interrelationships. The division of a system into smaller building blocks is typically based on the philosophy of "divide and conquer", which lets the implementer break the problem into smaller and less complex pieces. An architecture has both macro and micro levels: architectural styles such as Client-Server and Layered architectures represent the macro level, while design patterns such as the Mediator pattern represent the micro level.
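To make the micro level concrete, here is a minimal sketch of the Mediator pattern in Python. The class names and wiring are purely illustrative and do not come from the system discussed later in this article.

```python
# Minimal mediator sketch: components never reference each other directly;
# all cross-component communication goes through the mediator.
class Mediator:
    def __init__(self):
        self._components = {}

    def register(self, name, component):
        self._components[name] = component
        component.mediator = self

    def send(self, sender, target, message):
        # The mediator is the only object that knows how components are wired.
        self._components[target].receive(sender, message)


class Component:
    def __init__(self, name):
        self.name = name
        self.mediator = None

    def send(self, target, message):
        self.mediator.send(self.name, target, message)

    def receive(self, sender, message):
        print(f"{self.name} received {message!r} from {sender}")


mediator = Mediator()
client_ui = Component("ClientUI")
server_api = Component("ServerAPI")
mediator.register("ClientUI", client_ui)
mediator.register("ServerAPI", server_api)
client_ui.send("ServerAPI", "request")   # ClientUI never references ServerAPI directly
```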
The authors of [1] identify three reasons why it is important to consider the software architecture of a system. First, an architectural representation allows the various stakeholders to understand and communicate about a system that is not yet built. Second, the architecture captures the early design decisions of the architect. In the design phase, these decisions can be analyzed to determine whether they are appropriate for the requirements; in the development and maintenance phases, developers need to understand them when adding and modifying source code. Third, an architectural representation can be used as a basis for other, similar systems [1].
Our goal for the process was to evaluate the software architecture in order to avoid architectural degeneration in a cost-effective and efficient way. To reach this goal, we designed an evaluation process based on the following steps, illustrated in Figure 1:

1. Select an evaluation perspective.
2. Define design guidelines and metrics.
3. Define the planned architecture.
4. Recover the actual architecture.
5. Identify architectural deviations.
6. Formulate change recommendations.
7. Verify that the changes comply with the planned architecture.
We will now discuss the process by describing each of the process steps and the information sources needed to perform the architectural evaluation.
A system can be evaluated from many different perspectives. An evaluation can verify that the system implements the specified functional requirements or, more suitable for an architectural evaluation, that it fulfills the non-functional requirements, i.e., the system qualities. In our case we selected the perspective of maintainability. Other example perspectives are security, reliability, and dependability. Selecting a perspective is important for identifying appropriate goals and measurements.
Design guidelines (DG) and metrics are defined next. Design guidelines can be used to validate that the architecture possesses the desired properties and to define metrics for evaluating the architecture. In our case, for example, guidelines for evaluating maintainability include that coupling between components should be low and that inter-module coupling should be reduced without increasing internal coupling. A coupling metric, for example, was derived from this guideline. Guidelines and metrics can also be defined based on the architectural styles and design patterns used in the system. The selected set must capture the properties that are most important while, at the same time, being cost-efficient to collect and analyze.
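As an illustration of the kind of metric this step produces, the following Python sketch counts dependencies that cross component boundaries. The metric definition, the function name, and the example data are assumptions for illustration, not the exact metric we used.

```python
# Sketch of a simple coupling metric, assuming class-to-class dependencies have
# already been extracted as (from_class, to_class) pairs and each class has been
# mapped to an architectural component.
def coupling_between_components(dependencies, component_of):
    """Count dependencies that cross component boundaries, per component pair."""
    counts = {}
    for src, dst in dependencies:
        c_src, c_dst = component_of[src], component_of[dst]
        if c_src != c_dst:
            pair = (c_src, c_dst)
            counts[pair] = counts.get(pair, 0) + 1
    return counts

# Hypothetical example data:
component_of = {"OrderForm": "Client", "OrderService": "Server", "OrderDB": "Server"}
deps = [("OrderForm", "OrderService"), ("OrderService", "OrderDB")]
print(coupling_between_components(deps, component_of))
# {('Client', 'Server'): 1} -- only the cross-component dependency is counted
```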
The planned architecture is defined by architectural requirements, by implicit and explicit architectural guidelines and design rules, and by implications stemming from the use of architectural styles and design patterns. In reality, the planned architecture is more of a goal for what the architecture should look like than a description of how the system is actually implemented. One reason for this inconsistency between the planned and actual architectures is the constant change of software systems. The different aspects of the planned architecture need to be recovered, and a model of it should be created to guide the evaluation. The guidelines and metrics are often reiterated and updated in parallel with this step as more is discovered about the planned architecture.
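One lightweight way to make the recovered planned architecture machine-checkable is to record it as a set of allowed component-to-component dependencies. The sketch below assumes a client-server system with a mediator; the component names and allowed relations are illustrative, not our actual model.

```python
# Hypothetical model of a planned architecture as data.
PLANNED_COMPONENTS = {"Client", "Mediator", "Server"}

# Each entry means "the first component is allowed to depend on the second".
ALLOWED_DEPENDENCIES = {
    ("Client", "Mediator"),
    ("Mediator", "Client"),
    ("Mediator", "Server"),
    ("Server", "Mediator"),
    # Note: no ("Server", "Client") entry -- the server never initiates
    # communication with the client (cf. guideline DG2 later in the article).
}
```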
The actual architecture is the high-level structure of the implemented system: its architectural components and their interrelationships, and its architectural styles and design patterns. The actual architecture is identified by studying the implementation of the system. It should be noted that this architectural evaluation is not the same as source code analysis; rather, it identifies the architectural components of the system as built. Analysis tools have to be selected based on the programming language and other factors in the development environment. For example, for a Java system, a tool that identifies packages and classes and their inter- and intra-package dependencies would be used. One important task is to identify architectural styles and design patterns, because they play an important role in the evaluation of the architecture.
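As a sketch of what such a tool might do for a Java system, the following Python fragment scans a source tree for package and import declarations and records inter-package dependencies. It is a simplification: a real tool would also resolve fully qualified names used directly in the code and map packages onto architectural components.

```python
# Recover an approximation of the actual architecture from Java source,
# assuming the package structure corresponds to architectural components.
import re
from pathlib import Path

PACKAGE_RE = re.compile(r"^\s*package\s+([\w.]+)\s*;", re.MULTILINE)
IMPORT_RE = re.compile(r"^\s*import\s+(?:static\s+)?([\w.]+)\s*;", re.MULTILINE)

def extract_dependencies(src_root):
    """Return a set of (from_package, to_package) pairs found in the source tree."""
    deps = set()
    for java_file in Path(src_root).rglob("*.java"):
        text = java_file.read_text(errors="ignore")
        pkg_match = PACKAGE_RE.search(text)
        if not pkg_match:
            continue
        from_pkg = pkg_match.group(1)
        for imp in IMPORT_RE.findall(text):
            to_pkg = imp.rsplit(".", 1)[0]  # drop the class (or '*') segment
            if to_pkg != from_pkg and not to_pkg.startswith("java."):
                deps.add((from_pkg, to_pkg))
    return deps
```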
Architectural deviations are differences between the planned architecture and the actual implementation. These can be violations of design rules and guidelines or values of metrics that exceed a certain threshold. Each identified deviation and the circumstances under which it was detected are noted. If necessary, a more detailed analysis of the deviation is conducted in order to determine its possible cause and degree of severity.
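Continuing the earlier sketches, deviation detection can be as simple as flagging every actual dependency whose component pair is not in the planned set. The mapping from packages to components (component_of) and the allowed set are the hypothetical structures introduced above, not our production tooling.

```python
# Compare recovered dependencies against the planned architecture.
def find_deviations(actual_deps, component_of, allowed):
    """Return (src_pkg, dst_pkg, src_component, dst_component) for each violation."""
    deviations = []
    for src_pkg, dst_pkg in actual_deps:
        c_src = component_of.get(src_pkg, "UNMAPPED")
        c_dst = component_of.get(dst_pkg, "UNMAPPED")
        if c_src != c_dst and (c_src, c_dst) not in allowed:
            deviations.append((src_pkg, dst_pkg, c_src, c_dst))
    return deviations

# Example (hypothetical packages): a server package importing a client package
# would be reported because ("Server", "Client") is not an allowed pair.
```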
Based on the results from the previous step, high-level change recommendations are formulated. Deviations can result in requests for source code changes, changes to the planned architecture, or changes to the guidelines.
The identified changes require an extra step to verify that the actual architecture complies with the planned one. This step repeats two earlier process steps: recovering the actual architecture and identifying architectural deviations. This verification is done to make sure that the changes have been made correctly and that no new violations have been introduced.
We have been applying this methodology to different versions of one of our software systems. We chose to evaluate the architecture from a maintainability perspective. The process has been very effective in identifying deviations from our architectural guidelines without time-consuming code reviews. Each new application of the methodology becomes more efficient, making it easier to avoid architectural degeneration. One of our observations is that an architecture can degenerate quickly and easily, and this process helps avoid that degeneration. In one case, a student implemented a new requirement. Before the implementation, we explained the architecture and the guidelines to be followed. Design guidelines were derived from the properties of inter-component class coupling, the client-server architectural style, and the mediator design pattern [2]. The design guidelines included the following examples (an automated check for DG2 is sketched after the list):
- [DG2] The Server should contain no references to the Client (since the server never initiates communication with the client).
- [DG6] The Mediator should be coupled with exactly one class per component and vice versa.
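As an example of how such a guideline can be checked automatically, the sketch below flags every dependency from a Server package to a Client package (DG2). The package names and the component mapping are hypothetical.

```python
# Automated check for DG2, assuming dependencies have been recovered as
# (from_package, to_package) pairs and packages are mapped to components.
def check_dg2(actual_deps, component_of):
    """Report every dependency from a Server package to a Client package."""
    return [
        (src, dst)
        for src, dst in actual_deps
        if component_of.get(src) == "Server" and component_of.get(dst) == "Client"
    ]

violations = check_dg2(
    actual_deps={("com.example.server.db", "com.example.client.ui")},
    component_of={"com.example.server.db": "Server",
                  "com.example.client.ui": "Client"},
)
print(violations)  # [('com.example.server.db', 'com.example.client.ui')] -> DG2 violated
```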
After the new requirement was implemented, we compared the actual architecture with the planned architecture and uncovered 15 violations of the design guidelines. Ten of the 15 violations were metric guideline violations, nine were design pattern violations, and one was a violation of a general architectural guideline. The architectural degeneration was thus quick and substantial and, thanks to the process, easily detectable.
During this iteration, we learned several lessons about the process. For example, once the high-level design and guidelines are defined, evaluating new versions of the system becomes more efficient. We also learned that discussing the architecture with the developer was useful in uncovering some architectural problems, but these discussions do not find all architectural problems. See [3] for more information.
We are experimenting with applying the process to other systems and for other purposes. Software system security is, for example, an area where the process could be used. The process could assist security engineers in performing security audits, as "inconsistency is a large source of software security risk" [4]. The architectural evaluation process would allow security engineers to quickly identify inconsistencies between the actual and planned design. We are also developing a tool that will automate more of the process, making it less time-consuming to identify deviations. The tool graphically displays the architecture and points out potential problems. This tool will greatly impact steps 4 and 5 of the methodology.
Patricia Costa is a Scientist at the Fraunhofer Center for Experimental Software Engineering, Maryland. She has a B.Sc. (1996) and an M.Sc. (1999) in Computer Science from the Federal University of Minas Gerais, Brazil, and an M.Sc. (2001) in Telecommunications Management from University of Maryland University College. She has experience in software development and in the areas of Agile Methods, Knowledge Management, and evaluation of Software Architectures. She is currently interested in using evaluation of Software Architectures as a tool to assess and assure quality attributes such as security and maintainability of software systems.
Dr. Mikael Lindvall is a Scientist at the Fraunhofer Center for Experimental Software Engineering, Maryland. Dr. Lindvall specializes in Software Architecture evaluation and evolution, experience and Knowledge Management in Software Engineering, and Agile Software Development. He is currently working on tools and methods to quickly understand an architecture and identify architectural deviations, as well as on ways of building experience bases that attract users to both contribute to and use them. Dr. Lindvall received his PhD in computer science from Linköping University, Sweden, in 1997. His PhD work focused on the evolution of object-oriented systems and was based on a commercial development project at Ericsson Radio in Sweden.
Dr. Roseanne Tesoriero Tvedt is an Assistant Professor in the Department of Mathematics and Computer Science at Washington College in Chestertown, Maryland and a Scientist at the Fraunhofer Center for Experimental Software Engineering in College Park, Maryland. She received a Ph.D. in Computer Science from the University of Maryland. Her research interests include Software Architecture Evaluation, Agile Methods, and Computer Science Education.
Patricia Costa
Fraunhofer Center for Experimental Software Engineering Maryland
4321 Hartwick Rd, Suite 500
College Park, MD 20742
[email protected]
Dr. Mikael Lindvall
Fraunhofer Center for Experimental Software Engineering Maryland
4321 Hartwick Rd, Suite 500
College Park, MD 20742
[email protected]
Dr. Roseanne Tesoriero Tvedt
Washington College
Mathematics/Computer Science
215 Goldstein Hall
300 Washington Ave.
Chestertown, MD 21620
Phone: (410) 810-7173
Email: [email protected]
http://faculty.washcoll.edu/bios/tesoriero_roseanne.html
The authors are currently looking for organizations that are interested in the problem of architectural degeneration and would be willing to serve as test-beds for our technologies.
Contact any of them with questions or if you are interested in participating.