Tech Views

By Ellen Walker, DACS Analyst

The theme for this issue of Software Tech News (STN) is measurement, especially as it relates to integration and software process maturity. Integration is capturing more and more of our focus and effort, regardless of what we call it. At the project level, within the boundaries of a single software system, we often focus more on integrating existing systems and commercial off-the-shelf (COTS) products than on developing a system from scratch. At the organizational level, and across organizations, there is the need to integrate existing software systems, as components of an enterprise, in order to enable business intelligence (sometimes called "knowledge management"). While the commercial community uses the terminology of the "enterprise", in the military domain integration is often addressed from the perspective of the System-of-Systems concept (or a federation), enabled by "Net-Centricity". There is also a trend toward integrating management functions, as evidenced by the growth of Project Management Offices (PMOs) that oversee projects and integrate the management (and measurement) of multiple projects. This growing focus on integration may evolve into a discipline distinct from Software Engineering as we know it today. It also creates new demands for measurement: while the measurement methodologies of the past decade are still relevant, we have to find better ways to plan for, and specifically address, integration-related issues.

In their article titled “The Measurement Challenge of High Maturity”, Domzalski and Card describe BAE’s journey to the Capability Maturity Model Integration (CMMI) Level 5 and how they evolved their measurement program to support that journey. Noteworthy is the fact that they actually discuss what failed as well as what worked.

Their article focuses on actionable measurement: using measurement data with confidence to predict outcomes and to communicate variances from those predictions in a way that alerts decision makers to potential problems early enough for them to act and affect the outcome. This capability is possible because of several years spent building a measurement data repository.
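As a rough illustration of what such an alert might look like in practice (this is a sketch with hypothetical defect-density numbers, not an example taken from the article), one can compare an actual value against a prediction and flag any deviation that exceeds control limits derived from a historical baseline:

```python
# Sketch of a variance alert: flag an actual value that deviates from the
# prediction by more than a multiple of the historical standard deviation.
from statistics import stdev

def variance_alert(history, predicted, actual, sigmas=2.0):
    """Return an alert message when 'actual' deviates from 'predicted' by
    more than 'sigmas' standard deviations of the historical baseline."""
    limit = sigmas * stdev(history)      # spread of past observations
    deviation = actual - predicted
    if abs(deviation) > limit:
        return f"ALERT: deviation {deviation:+.2f} exceeds limit {limit:.2f}"
    return "within expected variation"

# Hypothetical defect-density history (defects/KSLOC) from a data repository.
baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
print(variance_alert(baseline, predicted=4.0, actual=5.6))
```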

Note that CMMI itself evolved from several separate maturity models of the past decade, including the Capability Maturity Model for Software (SW-CMM), the Systems Engineering Capability Model (SECM), and the Integrated Product Development Capability Maturity Model (IPD-CMM). CMMI integrates a set of processes that combine Systems Engineering, Software Engineering, Project Management, and Organizational Process Improvement, along with many support functions, and was intended for use by organizations pursuing enterprise-wide process improvement.

Domzalski comments that much of the improvement activity (predictive capability) of higher maturity organizations is based on historical measurements previously recorded for "possible future use". Yet he also states that lower maturity organizations tend to collect too much data and use too little of it, a point reiterated in Hawald's article as well. Most measurement paradigms tell us to collect only what we need, because collecting and storing data is time consuming and costly. How, then, does a fledgling organization determine what constitutes too much data? How does it gauge the potential future value of its data? How far ahead should an organization look when planning its measurement program? Perhaps this is a well-kept secret of mature organizations: knowing what to collect and when to start collecting it.

Domzalski also talks about "noise" in the measurement data collected. Recognizing that data noise exists is perhaps more important than trying to eliminate it. For example, to collect effort data, does one require staff to separately report the exact hours actually worked on a particular task, or is it better to automate the collection by using time card data? The noise is the staff time not actually spent working (coffee breaks, discussions that drift off topic, and so on). The noise level may differ among groups, or even individuals, but the overriding goal, for measurement purposes, is consistency over time and across projects. Accepting the noise may be a worthwhile tradeoff for the benefits of automated and consistent data collection.
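A small illustration (with hypothetical numbers, not drawn from the article) of why consistency can matter more than absolute accuracy: if the overhead in time card data is roughly constant, relative comparisons across projects survive the noise even though the absolute hours do not.

```python
# Hypothetical numbers: time card data inflates true effort by a roughly
# constant overhead (breaks, off-topic discussions), yet comparisons across
# projects are preserved because the overhead is consistent.
true_effort = {"Project A": 1000, "Project B": 1500}   # hours actually worked
overhead = 1.15                                        # 15% "noise"

reported = {name: hours * overhead for name, hours in true_effort.items()}

print(true_effort["Project B"] / true_effort["Project A"])   # 1.5
print(reported["Project B"] / reported["Project A"])         # still 1.5
```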

In the second article, titled "Estimating System-of-Systems (SoS) Development Effort", Jo Ann Lane asserts that existing cost models do not handle SoS integration well. Her article focuses on recent work in developing a cost model that specifically addresses how to budget for SoS integration activities, including the up-front effort associated with SoS abstraction, architecting, source selection, and systems acquisition, as well as the effort associated with integration, test, and change management. This new model, called the Constructive SoS Integration Model (COSOSIMO), is an addition to the COCOMO suite of estimation tools. It is also distinct from the Center for Software Engineering (CSE) Constructive Systems Engineering Cost Model (COSYSMO), which is used to estimate systems engineering effort at the single-system level.
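COSOSIMO's actual equations and calibration are not reproduced here; purely as a rough illustration, the sketch below shows the general parametric form the COCOMO family of models shares: a calibrated constant times a size measure raised to a scale exponent, adjusted by multiplicative cost drivers. Every number below is a placeholder, not a published calibration.

```python
# Illustrative parametric effort estimate in the general COCOMO-family form:
#   effort = A * size**E * product(effort multipliers)
# Constants, the size measure, and driver values are placeholders only.
from math import prod

def parametric_effort(size, A=2.94, E=1.10, multipliers=()):
    """Effort (person-months) = A * size**E * product of effort multipliers."""
    return A * size ** E * prod(multipliers)

# Hypothetical SoS integration "size" (e.g., a weighted interface count) and
# two hypothetical cost-driver ratings.
print(round(parametric_effort(120, multipliers=(1.10, 0.90)), 1))
```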

In the third article, titled "Measurements: Managing a Large Complex Modernization Effort --- While Protecting Your Project From the 'Katrina Factor'", Steven Hawald asserts that Project Management Offices (PMOs) need to adopt a strategic view of their measurement programs, one that addresses how measurement will be needed in the future. His analogy to the Katrina tragedy hints at the serious impact that a lack of appropriate measurement planning can have on an organization. He indicates that few organizations invest enough time or money in developing a comprehensive project performance measurement program, and he discusses three key points for reinforcing your data and measurement "levees".

The last article, by Robin Ying, titled "Building Systems Using Software Components", discusses the metrics that are critical for building trustworthy systems, namely reliability, security, and safety. He cautions that we cannot simplistically assume (based on mathematics) that if all the components we select are reliable, then the resulting system will be reliable. The act of integration introduces errors, and as the number of components in a system increases, the number of integration errors increases as well. He cites careless integration and careless reuse as responsible for several disasters, such as the loss of the Mars Climate Orbiter, since a single integration error can make the whole system unreliable even if all of its components are fully reliable. He asserts that the reliability of a software system should be judged holistically, emphasizing the functional relationship between the components and the system as a whole. He concludes that, among other things, developing trustworthy software systems requires continuous and undivided attention to quality.
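A back-of-the-envelope calculation (our own, with hypothetical numbers, not taken from the article) shows why component reliability does not compose simply: when all components must work, reliabilities multiply, and every integration point adds one more place to fail.

```python
# Hypothetical series composition: reliabilities multiply, so a chain of
# individually reliable components is noticeably less reliable as a whole,
# and each integration point can only lower the result further.
component_reliability = 0.99      # each component works 99% of the time
components = 50

system = component_reliability ** components
print(round(system, 3))           # ~0.605: far below any single component

interface_reliability = 0.999     # even near-perfect "glue" adds failure modes
interfaces = 49                   # one interface between successive components
print(round(system * interface_reliability ** interfaces, 3))   # ~0.576
```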

About the Author
Ellen Walker, a DACS Analyst, is currently developing a series of publications on software "best practices" as part of the DACS Gold Practice Initiative. She has spent the past 20 years as a software developer in various roles spanning the entire software life cycle, including project management of multiple business process re-engineering efforts within the DoD community. She is also experienced with assessment initiatives such as the Capability Maturity Model for Software (SW-CMM) and the quality management practices of the New York State Empire State Advantage program. Ellen has an MS in Management Science (State University of New York (SUNY) at Binghamton) and bachelor's degrees in both Computer Science (SUNY Utica/Rome) and Mathematics (LeMoyne College).

Ph: 315-334-4936
Email: [email protected]

March 2006
Vol. 9, Number 1

Measurement

Articles in this issue:

Tech Views

The Measurement Challenge of High Maturity

Estimating System-of-Systems Development Effort

Measurements: Managing a Large Complex Modernization Effort

Building Systems Using Software Components