Developing Secure Software

Noopur Davis
Software Engineering Institute

Abstract

Most security vulnerabilities result from defects that are unintentionally introduced in the software during design and development. Therefore, to significantly reduce software vulnerabilities, the overall defect content of software must be reduced. Defect reduction is a pre-requisite for secure software development, but it is not enough. Security must also be deeply integrated into the full software development life cycle (SDLC).

 

Introduction

Most security vulnerabilities result from defects that are unintentionally introduced in the software during design and development. Therefore, to significantly reduce software vulnerabilities, the overall defect content of software must be reduced. Today's common software engineering practices lead to a large number of defects in released software. However, data from dozens of real-world software projects that have systematically applied improved software development practices show a one- to two-order-of-magnitude reduction in the number of defects in released software. Applying these improved practices should lead to a similar reduction in the defects that lead to vulnerabilities, and focusing on the specific defect types that lead to vulnerabilities should yield even greater reductions. Organizations that have applied these practices have also realized the additional benefits of reduced cycle times and reduced software development costs.

Along with defect reduction, security must be deeply integrated into the full software development life cycle (SDLC). Security must be "built in" while the product is being developed, not just "bolted on" after the fact.

This article begins with a discussion of why defective software is seldom secure, why defective software is not inevitable, and why reducing defects is less costly than responding to released vulnerabilities. Next, security throughout the software development life cycle is discussed. The article closes with a brief description of the Software Engineering Institute's (SEI's) Team Software ProcessSM for Secure Software Development (TSP-Secure).

 

Defective Software Is Seldom Secure

SEI analysis of thousands of programs produced by thousands of developers shows that even experienced developers inject numerous defects as they understand requirements, develop designs, write code, and test software. On average, one defect is injected for every 7 to 10 lines of new and changed code produced. Even if 99% of these defects are removed before the software is released, 1 to 1.5 defects remain in every thousand lines of new and changed code produced. Software benchmark studies conducted on hundreds of software projects show that the average defect content of released software varies from about 1 to 7 defects per thousand lines of new and changed code [Jones].

According to preliminary analysis done by the SEI's CERT® group, over 90% of software security vulnerabilities are caused by known software defect types. The analysis also showed that most software vulnerabilities arise from common causes: the top ten causes account for about 75% of all vulnerabilities. Another analysis of forty-five e-business applications showed that 70% of the security defects were software design defects [Jacquith]. Some problems are caused by sophisticated architectural and design issues such as inadequate authentication, invalid authorization, incorrect use of cryptography, failure to protect data, and failure to carefully partition applications. But most are caused by simple oversights that lead to defect types such as declaration errors, logic errors, loop control errors, conditional expression errors, failure to validate input, interface specification errors, configuration errors, and failure to understand basic security issues. In a recent interview, Alan Paller, director of research at the SANS Institute, "expressed frustration with the fact that everything on the [SANS Institute Top 20 Internet Security] vulnerability list is a result of poor coding, testing and sloppy software engineering. These are not 'bleeding edge' problems, as an innocent bystander might easily assume. Technical solutions exist to them all, but they are simply not implemented" [Kirwan].

It is clear that software development practices in common use today lead to defective software and that software defects are a principal cause of software security vulnerabilities. Therefore, to reduce vulnerabilities, the overall defect content of software must be reduced.

 

Defective Software Is Not Inevitable

When presented with the security problems caused by defective software, a common response is that software development is inherently prone to defects, and that defective software is somehow inevitable. Many people believe that trying to figure out how to build better software is "a no-win situation and just beating a dead horse" [Computer World]. However, data from dozens of real-world projects have shown that when developers follow defined, measured, and quality-controlled practices, they produce products with very few defects overall. A recent study found that the defect content of such products can be reduced to an average of 0.06 defects per thousand lines of new and changed code [Davis]. This represents 10 to 100 times fewer defects than the industry averages of 1 to 7 defects per thousand lines of new and changed code.

 

The Cost of Reducing Defects

The next question usually asked is, "Doesn't it cost too much to reduce defects in software?" The simple answer is that software projects that produce near defect-free software also consistently meet their schedules (thus avoiding costs associated with delayed releases) and spend less time on software repair (thus improving overall productivity). For example, the average schedule error for projects using best practices was just 6%, the average time spent on software repair was just 4%, and the average increase in productivity was 78% [Davis]. Another large-scale study showed a near-perfect correlation between schedule and quality: the fewer the defects in the software, the smaller the schedule error [Jones].

When discussing costs, it is also fair to discuss the costs of releasing software with vulnerabilities. Producers of vulnerable software face the tangible costs of fixing and releasing patches for vulnerabilities, as well as the intangible costs of bad press, customer dissatisfaction, and threat of legal action. For consumers, the costs are even higher. A recent analysis conducted at a major corporation determined that the cost to deploy a single patch was close to half a million dollars. This cost was incurred just by the corporate infrastructure team: it did not include costs incurred by other teams such as the development teams. When these costs are multiplied by hundreds of patches that need to be applied by thousands of corporations, the overall costs to the consumers are enormous.

 

Secure Software Development

What can be done to reduce defects in software, and thus reduce vulnerabilities in software? Two things must be done: defects must be managed throughout the software development life cycle, and security must be addressed throughout the software development life cycle.

Managing Defects throughout the Software Development Life Cycle

Defects delivered in released software are a percentage of the total defects introduced during the software development life cycle. To reduce defects in released software, defects must be managed throughout the software development life cycle. Defect management includes both defect removal and defect measurement.

There should be multiple defect removal points in the software development life cycle. The more defect removal points there are, the closer one is to finding problems right after they are introduced, so the problems can be fixed more easily and their root causes more easily determined and addressed.

Each time defects are removed, they should be measured. Every defect removal point becomes a measurement point. Defect measurement leads to something even more important than defect removal and prevention: it tells you where you stand against your goals now, helps you decide whether to move to the next step or to stop and take corrective action, and indicates where to fix your process to meet your goals.

The following questions must be considered when managing defects: At what points in the software development life cycle should defects be measured? What work products should be examined for defects? What tools and methods should be used to measure the defects? How many defects can be removed at each step? How many defects are estimated to remain after each removal step?

Suppose an organization has determined that it wants to produce software with fewer than one vulnerability per million lines of code, and suppose that 25% of all software defects can lead to vulnerabilities. The quality goal for the organization is therefore to release software with fewer than four defects per million lines of code. How will the organization know it can deliver a small enough number of defects to meet its quality goal? Like most organizations, suppose the first time this organization measures defects in the software development life cycle is during test, and that testing in this organization, as in most, is 50% effective. If testing exposes 100 defects per million lines of code, then 200 defects per million lines of code existed before the software entered test: half were found and fixed during test, while the other 100 defects per million lines of code remain unfound and will be released with the software. Not only will the organization miss its quality goal, but few options for corrective action will be available at this late stage of the development life cycle.

On the other hand, if this organization had several defect removal points in the software development life cycle, each 50% effective, far fewer defects would remain in the released software. Each defect removal activity can be thought of as a filter that removes some percentage of the defects that can lead to vulnerabilities, while other such defects escape the filter and remain in the software (see Figure 1). The more defect removal filters there are in the software development life cycle, the fewer defects that can lead to vulnerabilities will remain in the software product when it is released. More importantly, because defects are measured earlier, the organization has time to take corrective action early in the software development life cycle.

Figure 1: Vulnerability Removal Filters
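To make the arithmetic above concrete, the short sketch below (in Python, chosen here only for illustration) models each defect removal activity as a filter with a given yield, the fraction of remaining defects it removes. The injection rate of 200 defects per million lines of code and the 50% filter yields are the illustrative numbers used above, not measured data.

```python
def remaining_defects(defects_per_mloc, filter_yields):
    """Apply a series of defect removal 'filters' to an initial defect density.

    defects_per_mloc -- defects per million lines of code entering the first filter
    filter_yields    -- fraction of remaining defects each removal activity finds
    """
    remaining = defects_per_mloc
    for removal_yield in filter_yields:
        remaining *= (1.0 - removal_yield)
    return remaining

# Test only, 50% effective: 200 -> 100 defects per MLOC escape into the release.
print(remaining_defects(200, [0.50]))

# Five 50%-effective filters (e.g., design review, code review, static analysis,
# unit test, system test): 200 -> 6.25 defects per MLOC escape into the release.
print(remaining_defects(200, [0.50] * 5))
```

With five such filters, the example organization would come close to its goal of fewer than four defects per million lines of code, and it would know where it stood after every filter rather than only after test.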

Some examples of defect removal and measurement points in the software development life cycle are architectural analysis, threat modeling, design verification, design review, code review, static code analysis, unit test, penetration test, and system test.

Addressing Security throughout the Software Development Life Cycle

Although defect reduction is the key to vulnerability reduction, more is needed to produce secure software.

First, common causes of security vulnerabilities must be understood. Some common causes include buffer overflows, SQL injection, race conditions, and cross-site scripting. Understanding involves much more than reading a laundry list of causes and examples: some organizations have 700-page documents to teach developers about common causes of vulnerabilities and how to avoid them. No one should expect developers to use such a volume of information as they perform their day-to-day software development activities. Although an overall knowledge of security issues is important, eliminating common causes of vulnerabilities requires defining a set of operational best practices that development teams can use in their day-to-day work: scripts, tools, checklists, and methods that focus on the particular job the developer is doing at a particular time.
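As one illustration of turning awareness of a common cause into an operational practice, the hedged sketch below contrasts a SQL query built by string concatenation, which is open to SQL injection, with a parameterized query that treats input strictly as data. It uses Python's standard sqlite3 module and a made-up users table purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input becomes part of the SQL text, so it can rewrite the
# query; this returns every row even though no user is named "nobody".
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()

# Safer: a parameterized query keeps the input out of the SQL text entirely,
# so the same malicious value simply matches nothing.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
```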

For example, consider buffer overflows, the most common and arguably the best understood cause of software vulnerabilities. Teaching developers about buffer overflows, showing them examples of code that leads to overflows, and cataloging library calls that are prone to buffer overflows are all good ways to sensitize developers to this problem. But what are some best practices that would address not only buffer overflows, but other potential defects as well? A specific design practice may be input validation via custom typed classes. A specific verification practice may be state machine verification for session management. A specific coding practice may be language-specific, checklist-based security code reviews. A specific tool may be a static code analyzer that scans the code for potentially unsafe library calls. A specific testing method may be fuzz testing. Just as important as defining best practices is deciding when in the secure software development process these practices should be used (process scripts), how they should be measured (in-process as well as predictive measures), and how their use can be ensured.
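The minimal sketch below suggests what input validation via a custom typed class might look like in practice: raw input can enter the program only through a small class that enforces its constraints at construction time, so downstream code never sees an unchecked string. The Username class and its rules are hypothetical, chosen only to illustrate the idea.

```python
import re

class Username:
    """A string value that cannot exist unless it passed validation."""
    _PATTERN = re.compile(r"^[A-Za-z0-9_]{1,32}$")  # illustrative rule

    def __init__(self, raw: str):
        if not isinstance(raw, str) or not self._PATTERN.match(raw):
            raise ValueError("invalid username")
        self.value = raw

def lookup_user(name: Username):
    """Downstream code accepts only the validated type, not bare strings."""
    ...

lookup_user(Username("alice_01"))          # accepted
# Username("alice'; DROP TABLE users--")   # raises ValueError before use
```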

Once the best practices have been defined, they must be applied throughout the software development life cycle. Figure 2 shows some best practices that address security through different phases of a software development life cycle. No life cycle model is implied. For spiral, incremental, or iterative development, best practices will be cycled through more than once as the software product evolves.

Figure 2: Addressing Security Throughout the Software Development Lifecycle

Examples of SDLC best practices include security risk analysis, secure design principles (such as defense in depth, application partitioning, and least privilege), threat modeling, static code analysis, checklist-based inspections and reviews, and testing methods such as fuzz testing, Ballista, or penetration testing.
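Of the testing methods just listed, fuzz testing is the simplest to sketch. The hedged example below feeds random byte strings to a parser and flags any input that escapes with an unexpected exception; parse_record is a hypothetical stand-in for whatever code is under test.

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical code under test; substitute the real parser here."""
    name, _, value = data.partition(b"=")
    return {name.decode("utf-8"): value.decode("utf-8")}

def fuzz(iterations: int = 10_000) -> None:
    rng = random.Random(1)  # fixed seed so any failure can be reproduced
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse_record(data)
        except (ValueError, UnicodeDecodeError):
            pass  # cleanly rejecting malformed input is acceptable behavior
        except Exception as exc:
            print(f"unexpected failure on input {data!r}: {exc!r}")
            raise

if __name__ == "__main__":
    fuzz()
```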

Since schedule pressures and lack of senior management sponsorship can get in the way of implementing best practices, organizational support is needed for setting security policies, providing management oversight for security activities, and for providing security training and resources. Project management is needed to ensure that security activities are planned and tracked. Risk management is needed to ensure security risks are identified, assessed, and managed.

Finally, the secure software development process should be measured to determine its effectiveness, and to determine which measures are predictive measures for latent vulnerabilities in released software.

 

The Team Software Process for Secure Software Development

The Software Engineering Institute developed the Team Software Process (TSP)SM as a set of defined and measured best practices for use by individual software developers and software development teams [Humphrey]. Teams using the TSP:

1) use common sense software engineering practices

2) manage defects throughout the development life cycle

3) control the process through measurement

4) monitor the process

5) address defect prevention as well as removal

6) use predictive measures for remaining defects

Since schedule pressures and people issues get in the way of implementing best practices, the TSP helps build self-directed development teams, and then puts these teams in charge of their own work. TSP teams:

1) develop their own plans

2) make their own commitments

3) track and manage their own work

4) take corrective action when needed

The TSP includes a systematic way to train software developers and managers, to introduce the methods into an organization, and to involve management at all levels.

The Team Software Process for Secure Software Development (TSP-Secure) augments the TSP with security practices throughout the software development life cycle. The research objectives of TSP-Secure are to reduce or eliminate software vulnerabilities that result from software design and implementation defects, and to provide the capability to predict the likelihood of latent vulnerabilities in delivered software. Areas of exploration include vulnerability analysis by defect type, operational process for secure software production, predictive process metrics and checkpoints, quality management practices for secure programming, design patterns for common vulnerabilities, verification techniques, and removing vulnerabilities in legacy software.

Teams using TSP-Secure are first trained in fundamental software engineering practices. They then attend a workshop where they are introduced to common causes of vulnerabilities and practices they should use to address the common causes of vulnerabilities. Next, the teams plan their product development work. Along with business and feature goals, teams define the security goals for their product, and then measure and track the security goals throughout the product development life cycle. At least one team member assumes the role of Security Manager. This role is responsible for ensuring that the team is addressing security through all their product development activities.

To date, the TSP has been used by many organizations. A recent study showed that teams using the TSP produced software with an average delivered defect level of 0.06 defects per thousand lines of new and changed code, with an average schedule error of just 6%. The average productivity improvement was 78%. TSP-Secure is still under development, but an initial proof-of-concept pilot produced near defect free software with no security defects found during security audits and in several months of use.

 

Conclusion

Since common software defects are a leading cause of vulnerabilities, the overall defect content of software must be reduced. In addition, security must be systematically addressed throughout the software development life cycle. There must be a shift in attitude from "bolting security on" after the fact to "building security in" as the product is being developed. This requires that good software engineering practices, including multiple defect removal activities, be followed while the software is being developed.

 

Biography of Noopur Davis

Noopur Davis is a Visiting Scientist at the Software Engineering Institute of Carnegie Mellon University, where she works with Watts Humphrey on his Team Software Process initiative. She is also Principal of Davis Systems, a company that has been providing Software Engineering Process Improvement consulting and training services for over twelve years. Noopur has been involved in the software field for over twenty years as a developer, a manager, and a consultant. Her experience ranges from real-time embedded systems software to commercial desktop products. She has launched and coached dozens of teams at major industry and government organizations.

She has authored several reports and articles on software process and software security.

Noopur has a Master's degree in Computer Science and a Bachelor's degree in Electrical Engineering. She is an SEI-Authorized Team Software Process coach, an SEI-Authorized Personal Software Process instructor, a program committee member for the 2003 XP/Agile Universe conference, a program committee member for the International Symposium on Secure Software Engineering, a member of the IEEE working group for the draft recommended practices for Establishing and Managing Software Development Efforts Using Agile Methods, and a member of the IEEE and the ACM.

 

References

[Computer World] "Congress' role in IT security debated", November 6, 2003.

http://www.computerworld.com/securitytopics/security/story/0,10801,86902,00.html?nas=PM-86902

[Davis] Davis, Noopur, and Mullaney, Julia. "The Team Software Process in Practice: A Summary of Recent Results." Technical Report CMU/SEI-2003-TR-014, Software Engineering Institute, September 2003.

[Kirwan] Kirwan, Mary, "The Quest For Secure Code". Global Technology, October 12, 2004. http://www.globetechnology.com/servlet/story/RTGAM.20041001.gtkirwanoct1/BNStory/Technology/

[Humphrey] Humphrey, Watts S. Winning with Software: An Executive Strategy. Reading, MA: Addison-Wesley, 2002.

[Jacquith] Jacquith, Andrew. "The Security of Applications: Not All Are Created Equal." @stake Research. http://www.atstake.com/research/reports/acrobat/atstake_app_unequal.pdf

[Jones] Jones, Capers. Software Assessments, Benchmarks, and Best Practices. Reading, MA: Addison-Wesley, 2000.

[Viega] Viega, John, and McGraw, Gary. Building Secure Software: How to Avoid Security Problems the Right Way. Reading, MA: Addison-Wesley, 2001.

 

SM Team Software Process and TSP are service marks of Carnegie Mellon University.

® CERT is a registered trademark of Carnegie Mellon University.

 

 

July 2005
Vol. 8, Number 2

Secure Software Engineering