Model-Based Improvement - Improving the Test Process: Implementing Improvement and Change - A Study Guide for the ISTQB Expert Level Module (2014)


Chapter 3. Model-Based Improvement

Using a model to support test process improvement is a tried and trusted approach that has acquired considerable acceptance in the testing industry. In fact, the success of this approach has unfortunately led some people to believe that this is the only way to approach test process improvement. Not so. As you will see in other chapters of this book, using models is just one way to approach test process improvement, and models are certainly not a substitute for experience when it comes to recommending test process improvements. However, models do offer a structured and systematic way of showing where a test process currently stands, where it might be improved, and what steps can be taken to achieve a better, more mature test process. The success and acceptance of using models calls for a thorough consideration of the options available to test process improvers when they are choosing a model-based approach.

The objective of this chapter is to help test process improvers make the right choices when it comes to test process improvement models, give them a thorough overview of the issues concerning model use, and provide an overview of the most commonly used models: TMMi, TPI NEXT, CTP, and STEP. We will also be looking at what the software process improvement models CMMI and ISO/IEC 15504 can offer the test process improver. Of course, it would not be practical for this chapter to consider these models right down to the last detail; that would add around 500 pages to the book! The reader is kindly requested to consult the referenced works for more detailed coverage.

This chapter first looks at model-based approaches in general and asks the fundamental questions, What would we ideally expect to see in a model used for test process improvement? and, What assumptions are we taking when using a model?

Test process improvement models are not all the same; they adopt different approaches, they are structured differently, and they each have their own strengths and weaknesses. The remainder of the chapter focuses on these aspects with regard to the models previously mentioned.

3.1 Introduction to Test Process Improvement Models

Syllabus Learning Objectives

LO 3.1.1

(K2) Understand the essential generic attributes of a test process improvement model.

LO 3.1.2

(K2) Compare the continuous and staged approaches including their strengths and weaknesses.

LO 3.1.3

(K2) Summarize the assumptions made in using models in general.

LO 3.1.4

(K2) Compare the specific advantages of using a model-based approach with its disadvantages.

3.1.1 Desirable Characteristics of Test Process Improvement Models

Fundamentally, a test process improvement model should provide three essential benefits:

• Clearly show the current status of the test process in a thorough, structured, and understandable manner

• Help to identify where improvements to that test process should take place

• Provide guidance on how to progress from the current to the desired state

Let’s be honest. A first look at the available models for test process improvement can be confusing; there are so many styles, approaches, and formal issues to consider. With all this, it can be difficult to appreciate the fundamental attributes we should be looking for in such a model. This section describes a number of specific areas that can help us understand the aspects of a test process improvement model that can generally be considered as “desirable.” The areas are covered under the following headings:

• Model Content

• Model Design

• Formal Considerations

Model Content

• Practicality and ease of use are high on the list of desirable model characteristics. A model that is overcomplicated, difficult to use, and impractical will struggle to achieve general acceptance from both its users and the stakeholders who should benefit from its use. Convincing others to adopt an impractical model will be difficult and may even place a question mark over the use of a model-based approach.

• A well-researched, empirical basis is essential. Models must be representative of a justified “best practice” approach to testing. Without a solid basis for this justification, models may be seen as simply a collection of notions and ideas put forward by a limited number of individuals. Because such models have unproven validity, the test process improver must look for examples where value has been demonstrated in actual practice.

• Details are important. A strong driver behind the development of test process improvement models was the low level of testing detail provided by software development models (e.g., CMMI), which was often considered inadequate for thorough and practical test process improvement. A model for test process improvement must therefore provide sufficient detail to allow in-depth information about test processes to be obtained and permit a wide range of specific testing issues to be adequately covered. Models that deal only in generalities will be difficult to apply in practice and may need considerable subjective interpretation when used for test process assessment purposes.

• The model must support the user in identifying, proposing, and quantifying improvements that are specific to identified test process weaknesses. A variety of improvement suggestions for specific testing problems is desirable. A model that proposes only a single solution to a problem or focuses only on the assessment part of test process improvement will be of limited value.

Model Design

• Models should help us achieve test process improvements in small, manageable steps rather than great leaps. By providing a mechanism for small, evolutionary improvements, models for test process improvement should support a wide range of improvement scopes (e.g., minor adjustments or major programs) and ensure that those improvements are clearly defined in the model.

• The prioritization of improvements is an essential aspect in determining an acceptable test improvement plan. Models should support the definition of priorities and enable businesses to understand the reasons for those priorities.

• Models must support the different activities found in a test process improvement program (the IDEAL approach discussed in chapter 6 identifies, for example, five principal activities). The model should be considered as a tool with which to support the implementation of a test process improvement program. The model itself should not demand major changes to the chosen approach to test process improvement.

• Flexibility is a highly desirable characteristic of test process improvement models. Within a given organization, a variety of different project types may be found, such as large projects, projects using standard software, and projects using a particular software development life cycle. The model must be flexible enough to deal with all of these project types. In addition, the model must cater to the different business objectives being followed by an organization. This calls for a high level of model flexibility, including the possibility to apply tailoring.

• Suggestions for test process improvement may be given (prescribed) by the model or depend on the judgment of the user. Both approaches have their advantages and disadvantages, so a decision on which is considered desirable has to be made by the model user within their own project and organizational context.

• As discussed in section 3.1.3, some models represent improvements to test process maturity in predefined steps, or stages, and others consider improvement of particular test process aspects in a nonstaged, or continuous, manner. Together with the “degree of prescription” aspect mentioned in the preceding point, this is perhaps the most significant characteristic that defines test process improvement models. The user must determine which is most desirable in their specific project and organizational context.

Formal Considerations

• To provide value for projects and organizations, a test improvement model must be publicly known, supported (e.g., by external consultants and/or websites), and available to all potential users (e.g., via the Internet and/or as a book). “Home-made,” unpublished models that have little or no support may be useful within a very limited scope, but generally speaking they are limited in value compared to publicly available models that are supported (e.g., by consultants or user groups).

• The merits of a model may be judged by the level of acceptance, recognition, and “take up” shown by both professional bodies and the software testing industry in general. Using a model that is not tried and tested presents an avoidable risk.

• Models that are promoted simply as a marketing vehicle for a commercial organization may not exhibit the degree of independence required when suggesting test process improvements. Models must show strong independence from undesired commercial influences and be clearly unbiased.

• The use of a model may require the formal accreditation of assessors. This level of formality may be considered beneficial to organizations seeking a formal certification of their test process maturity. The level of formal accreditation required for assessors and the ability to certify an organization distinguishes some models from others.

3.1.2 Using Models: Benefits and Risks

Using a model for test process improvement can be highly beneficial, but the user must be aware of particular risks in order to achieve those benefits. As you will see, many of the risks are associated with the assumptions that are frequently made when using a model. Just as with any risk, lack of awareness and failure to take the necessary mitigation actions may result in failure of the overall test process improvement program.

In this section, the benefits associated with using models are covered, followed by a description of individual risk factors and how they might be mitigated.

Benefit: Structured Approach

As mentioned in the introduction to this chapter, models permit the adoption of a structured approach to test process improvement. This not only benefits the users of the model, it also helps to communicate the test process improvement approach to stakeholders. Managers benefit from the ease with which test process improvements can be planned and controlled. Resources (people, time, money) can be clearly allocated and prioritized, progress can be transparently communicated, and return on investment can be more easily demonstrated. Business owners benefit from a structured, model-based approach by having clearly established objectives that can be prioritized, monitored, and, where necessary, adjusted. This is not to say that test improvements that are not model-based are unplanned, difficult to manage, and hard to communicate; models do, however, provide a valuable instrument with which these issues can be effectively addressed.

Benefit: Leveraging of Testing Best Practices

At the heart of all models for test process improvement is the concept of best practices in testing, as proposed by the model’s developers. Aligning a project’s test process to a particular model will leverage these best practices and, it is assumed, benefit the project. This is one of the fundamental benefits of adopting a model-based approach to test process improvement, but it needs to be balanced with the risks associated with ignoring issues of project or organizational context, which are discussed later.

Benefit: Thorough Coverage

Testing processes have many facets, and a wide range of individual aspects (e.g., test techniques, test organization, test life cycle, test environment, test tools) need to be considered if broad-based test process improvement is established as the overall objective. Such objectives can be achieved only if all relevant aspects of the test process are covered. Test process improvement models should provide this thorough coverage; they identify and describe individual testing aspects, and they give stakeholders confidence that important aspects have not been missed. Note that some test improvement models (e.g., TOM) do not provide full coverage, which makes thorough coverage a distinguishing and desirable characteristic of a test improvement model.

Of course, if the objective of test process improvement is highly focused on a particular problem area (e.g., test team skills), this benefit will not fully apply, and a decision to use a model-based approach would need careful consideration.

Benefit: Objectivity

Models provide the test process improver with an objective body of knowledge from which to identify weaknesses in the test process and propose appropriate recommendations. This objectivity can be a decisive factor in deciding on an approach to test process improvement. An approach that is not model-based and relies on particular people will be perceived as less objective by stakeholders (especially management), even if this perception is not justified.

Objectivity can be an important benefit when conducting assessment interviews; it reduces the risk that particular questions are challenged by the interviewee, especially where the interviewee is more experienced than the interviewer in particular areas of testing. Similarly, the backing of an objective model can be of use when “selling” improvement proposals to stakeholders. Objectivity cannot, of course, be total. Models are developed by people and organizations. If all of those people come from the same organization, the model will be less objective than one developed by many people from different organizations and countries.

Benefit: Comparability

Consistently using a test process improvement model allows organizations to compare the maturity levels achieved by different projects in their organization. Within an organization, this enables particular projects to be defined as reference points for other projects and provides a convenient, company-specific baseline.

Industry-wide baselines are a potential benefit of using models, but only a few such baselines have been established [van Veenendaal and Cannegieter 2013], [van Veenendaal 2011].

Risk: Failure to Consider Project Context

Models are, by definition, a representation of reality. They help us to understand complexities more easily by creating a generic view that can then be applied to a specific situation. The risk with any model is that the generic representation may not always fit the specific situations in your project or organization. All projects have their own particular context, and there is no way to define a “standard” project, except at a (very) high level. Because of this, the model you are using for test process improvement must match the specific project context as closely as possible. The less precise this match is, the higher the risk that the best practices are no longer appropriate for the specific project.

Mitigating these risks is a question of judgment by the model user. What is best for the specific project? Where should I make adjustments to the model? Where do I need to make interpretations when assessing test process maturity? These are the questions the test process improver needs to answer when applying a particular test improvement model. Some of the typical areas to be considered are listed here:

• Project criticality. Projects with a high level of criticality often need to show compliance with standards. Any aspects of the model that would prevent this compliance need to be filtered out.

• Technology and architecture used. Certain technologies and architectures may favor particular best practices over others. Applications implemented using, for example, multisystem architectures may place relatively more emphasis on non-functional testing than simple web-based applications. Similarly, the importance of a representative test environment may differ between the two architectures. The model used must reflect these different project contexts. If the model suggests that a fully representative test environment should be available, this may be “over the top” in the context of a simple application needing just functional testing.

• Software development life cycle. Development using a sequential software development life cycle (SDLC) places a different emphasis on testing compared to software developed according to agile practices. Test improvement models may be used in both contexts, but considerably more interpretation and tailoring will be required for the agile context (more on this in chapter 10).

TPI NEXT and TMMi permit aspects of the test process assessment to be filtered out as “not applicable” without affecting the achieved maturity level. This goes some way toward eliminating aspects that do not conform to the project or organizational context.

Risk: “Model Blindness”

This is a very common risk that we have observed on many occasions. An inexperienced user might believe that a test process improvement model does the thinking for them and relieves them of the need to apply their own judgment. This false assumption presents a substantial risk to gaining approval and recognition for improvement recommendations. If test process improvers are questioned by stakeholders about improvement proposals or assessment results, weak responses such as “because the model says so” show a degree of “model blindness” that reveals their inexperience in improving test processes. Stakeholders generally want to hear what the test process improver thinks and not simply what the model prescribes as a result of a mechanical checklist-based assessment.

A model cannot replace common sense, experience, and judgment, and users of models should remember the famous quote from the statistician George E. P. Box: “All models are wrong, but some are useful.” The models described in this book are considered “useful,” but if the user cannot explain the reasoning behind results and recommendations, the level of confidence in the results and proposals they present may erode to the point that they are not adopted.

Mitigating the risks of model blindness means validating the results and recommendations suggested by the model and making the appropriate adjustments if these do not make sense within the context of the project or organization. Does it make sense, for example, to propose the introduction of weekly test status reporting when the project has a web-based dashboard showing all the relevant information “on demand”? Users of models should consider them as valuable instruments to support their judgments and recommendations.

Risk: Wrong Approach

This book examines a number of approaches to test process improvement, one of which involves using a test process improvement model. Risks arise when a model-based approach becomes “automatic” and is applied without due consideration for the particular objectives to be achieved. This could fundamentally result in an inappropriate approach being adopted and resources being ineffectively allocated. As discussed in chapter 5, a combination of a model-based and an analytical approach is often preferred.

Mitigating this risk requires awareness. Test process improvers must be aware that models sometimes oversimplify complex issues of cause and effect and, indeed, that improvements may not be required in testing but in other processes, such as software development. The issues to be considered when choosing the most appropriate approach to test process improvement are described in chapter 5.

Risk: Lack of Skills

A general lack of skills in performing test process improvements and in applying a specific model may lead to one or more of the risks mentioned previously.

Mitigation of this risk is not simply an issue of training (although that is surely one measure to be considered). Roles and responsibilities for those involved in test process improvements must include a description of required experience and qualifications (see section 7.2). Any deviations from these requirements must be weighed against the potential impact on the project or organization of the risks mentioned previously (e.g., setting inappropriate priorities, failure to take project context into account).

3.1.3 Categories of Models

Models for test process improvement can be broadly categorized as shown in figure 3–1. The diagram also indicates the four models considered in this book.


Figure 3–1 Test process improvement models considered in this book

Process Reference Models and Content Reference Models

Reference models in general provide a body of information and testing best practices that form the core of the model. The primary difference between process reference models and content reference models lies in the way in which this core of test process information is leveraged by the model, although content reference models tend to provide more details on the various testing practices.

In the case of process reference models, a predefined scale of test process maturity is mapped onto the core body of test process information. Different maturity levels are defined, which range from an initial level up to an optimizing maturity level, depending both on the actual testing tasks performed and on how well they are performed. The progression from one maturity level to another is an integral feature of the model, which gives process reference models their predefined “prescribed” character. The process reference models discussed in this book are the Test Maturity Model integration (TMMi) model and the Test Process Improvement model TPI NEXT.

Content reference models also have a core body of best testing practices, but they do not implement the concept of different process maturity levels and do not prescribe the path to be taken for improving test processes. The principal emphasis is placed on the judgment of the user to decide where the test process stands and where it should be improved. The content reference models discussed in this book are the Critical Testing Processes (CTP) model [Black 2003] and the Systematic Test and Evaluation Process (STEP) model [Craig and Jaskiel 2002].

Continuous and Staged Representations

As noted, process reference models define a scale of test process maturity. This is of practical use in showing the achieved test process maturity, in comparing the maturity levels of different projects or organizations, and in showing improvements required or achieved.

The method chosen to represent test process maturity can be classified as either staged or continuous. With a staged representation, the model describes successive maturity levels. Test Maturity Model integration (TMMi), which is described in section 3.3.2, defines five such maturity levels. Achieving a given maturity level (stage) requires that specific testing activities (TMMi refers to these as process areas) are performed as prescribed by the model. For example, test planning is one of the testing activities that must be performed to achieve TMMi maturity level 2. Test process improvement is represented by progressing from one maturity level to the next highest level (assuming there is one).

Staged representations are easy to understand and show a clear step-by-step path toward achieving a given level of test process maturity. This simplicity can also be beneficial when discussing test process maturity with senior management or where a simple demonstration of achieved test process maturity is required, such as when tendering for testing projects. It is probably this simplicity that makes staged models popular. A recent survey showed that 90 percent of CMMI implementations used the staged representation and only 10 percent the continuous representation.

When a model is used that implements a staged representation of test process maturity, the user should be aware of certain aspects that may be seen as limiting. One of these aspects is the relatively coarse-grained definition of the maturity levels. As noted, each maturity level requires that capability be demonstrated for several testing activities, and all of those activities must be present for the overall maturity level to be achieved. If a test process can demonstrate, for example, that only one of the required testing activities assigned to maturity level 2 is performed and compliant, the overall test process maturity is still rated as level 1. This is reasonable. However, when all but one of the maturity level 2 activities are performed, the model still places the test process at maturity level 1. This “all or nothing” approach can result in a negative impression of the test process maturity.
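The “all or nothing” rule can be sketched in a few lines of Python. The process area names and their level assignments below are a hypothetical, illustrative subset only, not the full list defined by any real staged model:

```python
# Hypothetical process areas per maturity level (illustrative subset only).
REQUIRED = {
    2: ["test policy and strategy", "test planning", "test monitoring and control"],
    3: ["test organization", "test training program"],
}

def staged_maturity(satisfied):
    """Return the highest maturity level for which all required process
    areas up to and including that level are satisfied."""
    level = 1  # every test process starts at the initial level
    for target in sorted(REQUIRED):
        if all(pa in satisfied for pa in REQUIRED[target]):
            level = target
        else:
            break  # "all or nothing": one missing area blocks the level
    return level

# Even with all but one level-2 area in place, the process is still level 1:
print(staged_maturity({"test planning", "test monitoring and control"}))  # 1
print(staged_maturity({"test policy and strategy", "test planning",
                       "test monitoring and control"}))  # 2
```

The `break` is what produces the coarse-grained rating: progress within a level is invisible until every assigned process area is satisfied.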

Staged representation A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [Chrissis, Konrad, and Shrum 2004]

Continuous representations of process maturity (such as used in the TPI NEXT model) are generally more flexible and finer grained than staged representations. Unlike with the staged approach, there are no prescribed maturity levels through which the entire test process is required to proceed, which makes it easier to target the particular areas for improvement needed to achieve particular business goals.

The word continuous is applied to this representation because the models that use this approach define not only the various key areas of a testing process (e.g., defect management) but also the (continuous) scale of maturity that can be applied to each key area (e.g., basic managed defect management, efficient defect management using metrics, and optimizing defect management featuring root-cause analysis for defect prevention).

Assessing the maturity of a particular key area is achieved by answering specific questions that are assigned to a given maturity level. There might be, for example, four questions assigned to the managed maturity level for defect management. If all questions can be answered positively, the defect management aspect of the test process is assessed as managed. Other key areas may be assessed at different levels of maturity; for example, reporting may be at an efficient level and test case design at an optimizing maturity level.
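As a rough sketch of this checkpoint-based rating, the following assumes hypothetical level names and checkpoint answers (simplified from the TPI NEXT style; the real model defines its own key areas, levels, and checkpoints). `None` marks a checkpoint filtered out as “not applicable,” which does not affect the rating:

```python
LEVELS = ["initial", "managed", "efficient", "optimizing"]

def key_area_maturity(answers_by_level):
    """Rate one key area: the highest level whose applicable
    checkpoint questions are all answered positively."""
    rating = "initial"
    for level in LEVELS[1:]:
        # Drop "not applicable" (None) answers before checking the level.
        applicable = [a for a in answers_by_level.get(level, []) if a is not None]
        if applicable and all(applicable):
            rating = level
        else:
            break  # a failed level caps the key area's rating
    return rating

defect_management = {
    "managed":   [True, True, True, True],
    "efficient": [True, None, False],  # one applicable checkpoint not met
}
print(key_area_maturity(defect_management))  # managed
```

Because each key area is rated independently this way, one area can sit at “efficient” while another is still “managed,” which is exactly the finer-grained picture the continuous representation provides.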

Clearly, the continuous representation provides a more detailed and differentiated view of test process maturity compared to the staged representation, but the simplicity offered by a staged approach cannot be easily matched. Using a continuous representation model can easily lead to a complex assessment of a test process, where stakeholders may find it hard to see the forest for the trees. Models that use a continuous representation, such as TPI NEXT (and particularly its earlier version, TPI), suffer from the perception of being too technical and difficult to communicate to non-testers.

Continuous representation A capability maturity model structure wherein capability levels provide a recommended order for approaching process improvement within specified process areas. [Chrissis, Konrad, and Shrum 2004]

3.2 Software Process Improvement (SPI) Models

Syllabus Learning Objectives

LO 3.2.1

(K2) Understand the aspects of the CMMI model with testing-specific relevance.

LO 3.2.2

(K2) Compare the suitability of CMMI and ISO/IEC 15504-5 for test process improvement to models developed specifically for test process improvement.

3.2.1 Capability Maturity Model Integration (CMMI)


In this section, we’ll consider the CMMI model. The official book on CMMI provides a full description of the framework and its use (see CMMI, Guidelines for Process Integration and Product Improvement [Chrissis, Konrad, and Shrum 2004]). Users will gain additional insights into the framework by consulting this publication and the other sources of information available at [URL: SEI].

Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. The model provides a clear definition of what an organization should do to promote behaviors that lead to improved performance.

With five maturity levels (for the staged representation) and four capability levels (for the continuous representation), CMMI defines the most important elements required to improve product quality or deliver better services and wraps them all up in a comprehensive model. The goal of the CMMI project is to improve the usability of maturity models for software engineering and other disciplines by integrating many different models into one overall framework of best practices. It describes best practices in managing, measuring, and monitoring software development processes. The CMMI model does not describe the processes themselves; it describes the characteristics of good processes, thus providing guidelines for companies developing or honing their own sets of processes.

Capability Maturity Model Integration (CMMI) A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. [Chrissis, Konrad, and Shrum 2004]

The CMMI helps us understand the answer to the question, How do we know?

• How do we know what we are good at?

• How do we know if we’re improving?

• How do we know if the process we use is working well?

• How do we know if our requirements change process is useful?

• How do we know if our products are as good as they can be?

CMMI helps you to focus on your processes as well as on the products and services you produce. This is important because many people who are working in a position such as program manager, project manager, or a similar product creation role are paid bonuses and given promotions based on whether they achieve deadlines, not whether they follow or improve processes. Essentially, using CMMI reminds us to focus on process improvement.

Commercial and government organizations use the CMMI models to assist in defining process improvements for systems engineering, software engineering, and integrated product and process development.


CMMI comes with two different representations: staged and continuous (see section 3.1.3). The staged version of the CMMI model identifies five levels of process maturity for an organization (figure 3–2):

1. Initial (chaotic, ad hoc, heroic): The starting point for use of a new process.

2. Managed (project management, process discipline): The process is used repeatedly.

3. Defined (institutionalized): The process is defined and confirmed as a standard business process.

4. Quantitatively Managed (quantified): Process management and measurement take place.

5. Optimizing (process improvement): Process management includes deliberate process optimization and improvement.


Figure 3–2 CMMI staged model: five maturity levels with their characteristics
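The five staged levels form a simple ordered scale, which can be represented as an ordered enumeration; the type and helper below are illustrative only, not part of CMMI itself:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five CMMI staged maturity levels as an ordered scale."""
    INITIAL = 1                 # chaotic, ad hoc, heroic
    MANAGED = 2                 # project management, process discipline
    DEFINED = 3                 # institutionalized standard process
    QUANTITATIVELY_MANAGED = 4  # measured and controlled
    OPTIMIZING = 5              # deliberate process improvement

def next_target(level: MaturityLevel):
    """Return the next maturity level to aim for, or None at the top."""
    return MaturityLevel(level + 1) if level < MaturityLevel.OPTIMIZING else None

print(next_target(MaturityLevel.MANAGED).name)  # DEFINED
print(next_target(MaturityLevel.OPTIMIZING))    # None
```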

There are process areas (PAs) within each of these maturity levels that characterize that level (more about process areas later). CMMI supports organizations in improving the maturity of their software processes along an evolutionary path through the five maturity levels, from “ad hoc and chaotic” to “mature and disciplined” management. As organizations become more mature, risks are expected to decrease and productivity and quality are expected to increase. The staged representation provides more focus for the organization and has by far the largest uptake in the industry.

Using the continuous representation, an organization can also pick and choose the process areas that make the most sense for them to work on. The continuous representation defines capability levels within each process area (see figure 3–3). In the continuous representation, the organization is thus allowed to concentrate its improvement efforts on its own primary areas of need without regard to other areas and is therefore generally considered to be more flexible but more difficult to use.


Figure 3–3 CMMI continuous model: four capability levels per process area

The differences in the CMMI representations are solely organizational; the content is equivalent. When choosing the staged representation, an organization follows a predefined pattern of process areas that are organized by maturity level. When choosing the continuous representation, organizations pick process areas based on their interest in improving only specific areas.

There are multiple "flavors" of the CMMI, called constellations, that include CMMI for Development (CMMI-DEV), CMMI for Services (CMMI-SVC), and CMMI for Acquisition (CMMI-ACQ). The three constellations share a core set of 16 process areas. CMMI-DEV commands the largest market share, followed by CMMI-SVC and then CMMI-ACQ. CMMI for Development has 22 process areas, or PAs (see table 3–1). Each process area is intended to be adapted to the culture and behaviors of your own company.

Table 3–1 Process areas in CMMI-DEV for each maturity level


CMMI uses a common structure (set of components) to describe each of the process areas. The process area components are grouped into three types: required, expected, and informative.

- Required components describe what an organization must achieve to satisfy a process area. This achievement must be visibly implemented in an organization’s processes. The required components in CMMI are the specific and generic goals. Goal satisfaction is used in assessments as the basis for deciding if a process area has been achieved and satisfied.

- Expected components describe what an organization will typically implement to achieve a required component. Expected components guide those who implement improvements or perform assessments. Expected components include both specific and generic practices. Either the practices as described or acceptable alternatives to the practices must be present in the planned and implemented processes of the organization before goals can be considered satisfied.

- Informative components provide details that help organizations get started in thinking about how to approach the required and expected components. Sub-practices, example work products, notes, examples, and references are all informative model components.

A process area has several different components (see figure 3–4 and table 3–2).


Figure 3–4 Structure of a CMMI process area

Each PA has one to four goals, and each goal is made up of practices. Within each of the PAs, these are called specific goals and practices because they describe activities that are specific to a single PA. There is one additional set of goals and practices that apply in common across all of the PAs; these are called generic goals and practices. There are 12 generic practices (GPs) that provide guidance for organizational excellence and institutionalization, including behaviors such as setting expectations, training, measuring quality, monitoring process performance, and evaluating compliance. Organizations can be rated at a capability level (continuous representation) or maturity level (staged representation) based on over 300 discrete specific and generic practices.
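The goal/practice structure just described can be sketched as a small data model. The following Python sketch is purely illustrative — the class names and the miniature Validation example are our own, not part of CMMI — but it captures the rule that a process area is satisfied only when all of its required components (specific and generic goals) are achieved:

```python
from dataclasses import dataclass

@dataclass
class Practice:
    pid: str          # e.g., "SP 1.1" or "GP 2.2"
    description: str

@dataclass
class Goal:
    gid: str              # e.g., "SG 1" or "GG 2"
    practices: list       # expected components that achieve this goal

@dataclass
class ProcessArea:
    name: str
    specific_goals: list
    generic_goals: list

    def satisfied(self, achieved_goal_ids):
        """A PA is satisfied only when every specific goal and every
        applicable generic goal (the required components) is achieved."""
        required = [g.gid for g in self.specific_goals + self.generic_goals]
        return all(gid in achieved_goal_ids for gid in required)

# Hypothetical, heavily abbreviated illustration using Validation
val = ProcessArea(
    name="Validation",
    specific_goals=[
        Goal("SG 1", [Practice("SP 1.1", "Select products for validation")]),
        Goal("SG 2", [Practice("SP 2.1", "Perform validation")]),
    ],
    generic_goals=[Goal("GG 2", [Practice("GP 2.2", "Plan the process")])],
)
print(val.satisfied({"SG 1", "SG 2", "GG 2"}))  # True
print(val.satisfied({"SG 1", "GG 2"}))          # False
```

Note how the expected components (practices) hang off the goals: in an appraisal, either those practices or acceptable alternatives must be present before a goal counts as achieved.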

Table 3–2 Components of CMMI Process Areas



With CMMI-DEV, process areas are organized by so-called categories: Process Management, Project Management, Engineering, and Support. The grouping into categories is a way to discuss their interactions and is especially used with a continuous approach (see figure 3–5). For example, a common business objective is to reduce the time it takes to get a product to market. The process improvement objective derived from that could be to improve the project management processes to ensure on-time delivery.

The following two diagrams summarize the structural components of CMMI described so far. The first diagram (figure 3–4) shows how the structural elements are organized with the staged representation.


Figure 3–5 CMMI staged model: structural elements

The diagram in figure 3–6 shows how the structural elements of CMMI are organized in the continuous representation.


Figure 3–6 CMMI continuous model: structural elements

To conclude the description of the CMMI structure, the allocation of CMMI-DEV process areas to each category is shown in table 3–3. Note that the process areas shown in italic are particularly relevant for testing. Some of these will be discussed in more detail in the following pages.

Table 3–3 Process areas of CMMI-DEV grouped by category


Testing-Related Process Areas

The two principal process areas with testing relevance shown in table 3–3 are Validation and Verification. These process areas specifically reference both static and dynamic test processes.

Although many testers still consider this coverage "not much" and "too high-level," having these two dedicated process areas does make a difference: it means that process improvement initiatives using the CMMI must also address testing. Even so, CMMI dedicates only about 20 of its roughly 400 pages to testing!

The Validation process area addresses testing to demonstrate that a product or product component fulfills its intended use when placed in its intended environments. Table 3–4 shows the specific goals and specific practices that make up this process area.

Table 3–4 CMMI Process Area: Validation


The purpose of the Verification process area is to ensure that selected work products meet their specific requirements. It also includes static testing, that is, peer reviews. Table 3–5 shows the specific goals and specific practices that make up this process area.

Table 3–5 CMMI Process Area: Verification


The process areas Technical Solution and Product Integration also deal with some testing issues, although again this is very lightweight.

The Technical Solution process area mentions peer reviews of code and some details on performing unit testing (in total, four sentences). The specific practices identified by the CMMI are as follows:

- Conduct peer reviews on the selected components

- Perform unit testing of the product components as appropriate

The Product Integration process area includes several practices in which peer reviews are mentioned (e.g., on interface documentation), and there are several implicit references to integration testing. However, none of this is specific and explicit.

In addition to the specific testing-related process areas, some process areas also provide support toward a more structured testing process, although the support provided is generic and does not address any of the specific testing issues.

Test planning can be addressed as part of the CMMI Project Planning process area. The goals and practices for test planning can often be reused for the implementation of the test planning process. Project management practices can be reused for test management.

Test monitoring and control can be addressed as part of the CMMI process area Project Monitoring and Control. The goals and practices for project planning and control can often be reused for the test monitoring and control process. Project management practices can be reused for test management.

Performing product risk assessments within testing to define a test approach and test strategy can partly be implemented based on goals and practices provided by the CMMI process area Risk Management.

The goals and practices of the CMMI process area Configuration Management can support the implementation of configuration management for test deliverables. Testing will also benefit if configuration management is well implemented during development (e.g., for the test object).

Having the CMMI process area Measurement and Analysis in place will support the task of getting accurate and reliable data for test reporting. It will also support testing if you’re setting up a measurement process on testing-related data, such as defects and the testing process itself.

The goals and practices of the Causal Analysis and Resolution CMMI process area provide support for the implementation of defect prevention, a typical test improvement objective at higher maturity levels.


An assessment can give an organization an idea of the maturity of its processes and help it create a road map toward improvement. After all, you can’t plan a route to a destination if you don’t know where you currently are. The SEI does not offer certification of any form. It simply licenses and authorizes lead appraisers to conduct appraisals (commonly called assessments).

There are three different types of appraisals: Class A, B, and C (see table 3–6). The requirements for CMMI appraisal methods are described in Appraisal Requirements for CMMI (ARC). The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is the only appraisal method that meets all of the ARC requirements for a Class A appraisal method. Only the Class A appraisal can result in a formal CMMI rating. A SCAMPI Class C appraisal is typically used as a gap analysis and data collection tool, and the SCAMPI Class B appraisal is often employed as a user acceptance or “test” appraisal.

The SEI certifies so-called lead appraisers. Only a certified SCAMPI lead appraiser can conduct a SCAMPI A appraisal. The staged representation in particular is used to achieve a CMMI level rating from a SCAMPI appraisal. The results of the appraisal can then be published on the SEI website [URL: SEI].

Table 3–6 Characteristics of CMMI appraisals



To understand what the benefit of CMMI might be to your organization, you need to think about what improved processes might mean for you. What would be the impact on your organization if project predictability was improved by 10 percent? What would be the impact if the cost of finding and fixing defects was reduced by 10 percent? By benchmarking before beginning process improvement, you can compare any process improvements to the benchmark to ensure a positive impact on the bottom line.

Turning to a real-world example of the benefits of CMMI, Lockheed Martin, between 1996 and 2002, was able to increase software productivity by 30 percent while decreasing the unit software cost by 20 percent [Weska 2004]. Another organization reported that achieving CMMI maturity level 3 allowed it to reduce its costs from rework by 42 percent over several years, and yet another described a 5:1 return on investment for quality activities in a CMMI maturity level 3 organization.

The SEI states that it measured performance increases in the categories of cost, schedule, productivity, quality, and customer satisfaction across 25 organizations (see table 3–7). The median improvement varied between 14 percent (customer satisfaction) and 62 percent (productivity).

However, the CMMI model mostly deals with what processes should be implemented and not so much with how they can be implemented. SEI thus also mentions that these results do not guarantee that applying CMMI will increase performance in every organization. A small company with few resources may be less likely to benefit from CMMI.

Table 3–7 Results reported by 25 organizations in terms of performance change over time



CMMI is not a process; it is a book of “whats,” not a book of “hows,” and it does not define how your company should behave. More accurately, it defines what behaviors need to be defined. In this way, CMMI is a behavioral model as well as a process model.

Like any framework, CMMI is not a quick fix for all that is wrong with a development organization. SEI cautions that improvement projects will likely be measured in months and years, not days and weeks. Because they usually have more knowledge and resources, larger organizations may find they get better results, but CMMI process changes can also help smaller companies.

3.2.2 ISO/IEC 15504

Just like CMMI, the ISO/IEC 15504 standard [ISO/IEC 15504] (which was known in its pre-standard days as SPICE) is a model for software process improvement that includes specific testing processes. The standard comprises several parts, as shown in figure 3–7.


Figure 3–7 Parts of ISO/IEC 15504

Several parts of the standard have relevance for test process improvers. Part 5 is of particular relevance because it describes a process assessment model that is made up of two specific dimensions:

- The process dimension describes the various software processes to be assessed.

- The capability dimension enables us to measure how well processes are being performed.


Figure 3–8 ISO/IEC 15504-5: Process and Capability dimensions

To understand how ISO/IEC 15504 can be used for test process improvement, the two dimensions are summarized in the following sections with particular focus on the testing-specific aspects. Further details can be obtained from Process Assessment and ISO/IEC 15504: A Reference Book [van Loon 2007] or, of course, from the standard itself.

Process Dimension

The process dimension identifies, describes, and organizes the individual processes within the overall software life cycle. The activities (base practices) and the work products used by and created by the processes are the indicators of process performance that an assessor evaluates.

ISO/IEC 15504 allows any compliant process model to be used for the process dimension, which makes it easy for specific schemes to adopt. For example, the assessment processes and schemes of CMMI (SCAMPI) and TMMi (TAMAR) are ISO 15504 Part 4 compliant. To provide the assessor with a usable “out of the box” standard, part 5 of ISO/IEC 15504 also describes an “exemplar” process model, which is principally based on the ISO/IEC 12207 standard “Software Life Cycle Processes” [ISO/IEC 12207]. The process categories and groups defined by the exemplar process model are shown in figure 3–9; this is the process model that assessors use most often.


Figure 3–9 ISO/IEC 15504-5: Process categories and groups

The ISO/IEC 15504-5 Process Assessment Model shown in figure 3–9 contains a process dimension with three process categories and nine process groups, three of which (marked “test” in the figure) are of particular relevance for the test process improver. Let’s now take a closer look at those testing-specific process groups, starting with Engineering (see figure 3–10).


Figure 3–10 ISO/IEC 15504-5: Engineering processes

The software testing and system testing processes relate to testing the integrated software and integrated system (hardware and software). Testing activities are focused on showing compliance to software and system requirements prior to installation and productive use.

The process category Supporting Life-Cycle Processes contains only a single group of processes (see figure 3–11). These may be employed by any other processes from any other process category.


Figure 3–11 ISO/IEC 15504-5: Supporting processes

The processes Verification and Validation are very similar to the CMMI process areas with the same names covered in section 3.2.1. Verification, according to the ISTQB definition, is “confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.” This process checks that work products such as requirements, design, code, and documentation reflect the specified requirements.

A closer look at the verification of designs shows the level of detail provided in ISO/IEC 12207. A list of criteria is defined for the verification of designs:

- The design is correct and consistent with and traceable to requirements.

- The design implements proper sequence of events, inputs, outputs, interfaces, logic flow, allocation of timing and sizing budgets, and error definition, isolation, and recovery.

- The selected design can be derived from requirements.

- The design implements safety, security, and other critical requirements correctly as shown by suitably rigorous methods.

The standard also defines the following list of outcomes:

- A verification strategy is developed and implemented.

- Criteria for verification of all required software work products are identified.

- Required verification activities are performed.

- Defects are identified and recorded.

- Results of the verification activities are made available to the customer and other involved parties.

Validation, according to the ISTQB definition, is “confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.” The validation process focuses on the tasks of test planning, creation of test cases, and execution of tests.

The process category Organizational Life-Cycle Processes is aimed at developing processes (including the test process) across the organization (see figure 3–12).


Figure 3–12 ISO/IEC 15504-5: Organizational processes

These processes emphasize the organizational policies and procedures used for establishing and improving life cycle models and processes. Test process improvers should take note of these processes in ISO/IEC 15504, Part 5, although the overall process of test process improvement is covered more thoroughly in the IDEAL model described in chapter 6.

Capability Dimension

ISO/IEC 15504 defines a system for evaluating the capability level achieved by the processes being assessed. This is the “how well” dimension of the model. It applies a continuous representation approach (see section 3.1.3).

The overall structure of the capability dimension is shown in figure 3–8 earlier in this chapter. The diagram in figure 3–13 shows the specific capability levels and the process attributes assigned to those levels.


Figure 3–13 ISO/IEC 15504-5: Capability levels and process attributes

The capability of any process defined in the process dimension can be evaluated according to the capability scheme shown in figure 3–13. This is similar but (regrettably) not identical to the scheme defined by CMMI. Process attributes guide the assessor by describing generic indicators of process capability; these include generic descriptions of practices, resources, and work products. Please refer to Process Assessment and ISO/IEC 15504: A Reference Book [van Loon 2007] or the ISO/IEC 15504 standard for further details.

Using ISO/IEC 15504-5 for Assessing Test Processes

Test process improvers will tend to focus on the processes just described. The assessment provides results regarding the achieved capability for each assessed process and features a graded scale with four levels: fully achieved, largely achieved, partially achieved, and not achieved (per capability level). The diagram shown in figure 3–14 is an example of the assessment results for two testing-related processes that also describes the rules to be applied when grading the capability of a process.
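The grading rules can be sketched in a few lines of code. The percentage bands below (N ≤ 15%, P up to 50%, L up to 85%, F above 85%) follow the commonly cited ISO/IEC 15504-2 rating scheme, and the level-achievement rule reflects the usual requirement that a level’s own attributes be at least largely achieved while all lower-level attributes are fully achieved; the function names and data layout are our own illustration, not part of the standard:

```python
def rate_attribute(pct_achieved):
    """Map an achievement percentage for one process attribute onto the
    four-point ordinal scale (band limits per ISO/IEC 15504-2)."""
    if pct_achieved > 85:
        return "F"   # fully achieved
    if pct_achieved > 50:
        return "L"   # largely achieved
    if pct_achieved > 15:
        return "P"   # partially achieved
    return "N"       # not achieved

def capability_level(attribute_ratings):
    """attribute_ratings: {level: [ratings of the attributes at that level]}.
    A level counts as achieved when its own attributes are at least largely
    achieved and all lower-level attributes are fully achieved."""
    achieved = 0
    for level in sorted(attribute_ratings):
        own_ok = all(r in ("L", "F") for r in attribute_ratings[level])
        lower_ok = all(r == "F"
                       for lv in attribute_ratings if lv < level
                       for r in attribute_ratings[lv])
        if own_ok and lower_ok:
            achieved = level
        else:
            break
    return achieved

# Example: level 1 attribute fully achieved; level 2 attributes largely
# and fully achieved -> the process is rated at capability level 2
print(capability_level({1: ["F"], 2: ["L", "F"]}))  # 2
```

This also explains the shape of assessment results such as figure 3–14: each assessed process carries one rating per process attribute, and the capability level falls out of those ratings mechanically.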


Figure 3–14 ISO/IEC 15504-5: Assessment of capability (example)

3.2.3 Comparing CMMI and ISO/IEC 15504

A full evaluation of the differences between these two software process improvement models is beyond the scope of our book on test process improvement. Some of the comparisons shown in table 3–8 may be useful background information, especially if you are involved in an overall decision on software process improvement models.

Table 3–8 Comparison between CMMI and ISO/IEC 15504-5



The suitability of using software process improvement models compared to those dedicated to test process improvement is discussed in section 3.5.

3.3 Test Process Improvement Models

3.3.1 The Test Process Improvement Model (TPI NEXT)

Syllabus Learning Objectives

LO 3.3.1

(K2) Summarize the background and structure of the TPI NEXT test process improvement model.

LO 3.3.2

(K2) Summarize the key areas of the TPI NEXT test process improvement model.

LO 3.3.8

(K3) Carry out an informal assessment using the TPI NEXT test process improvement model.

LO 3.3.10

(K5) Assess a test organization using either the TPI NEXT or TMMi model.


We will now consider two principal areas of the TPI NEXT model. First we’ll describe the structure of the model and its various components. Then we’ll consider some of the issues involved in using the model to complete typical test process improvement tasks.

The official book on TPI NEXT provides a full description of the model and its use (see TPI NEXT – Business Driven Test Process Improvement [van Ewijk, et al. 2009]). Users will gain additional insights into the model by consulting this publication and the other sources of information available at the website for the TPI NEXT model [URL: TPI NEXT].

TPI NEXT A continuous business-driven framework for test process improvement that describes the key elements of an effective and efficient test process.

Overview of Model Structure

The TPI NEXT model is based upon the elements shown in figure 3–15. Users of the previous version of the model (called simply TPI) will recognize a basic structural similarity between the models, although “clusters” (described in the next paragraph) are a new addition to the structure. Different aspects of the test process are represented in TPI NEXT by 16 key areas (e.g., reporting), each of which can be evaluated at a given maturity level (e.g., managed).

Achieving a particular maturity level for a given key area (e.g., managed test reporting) requires that the specific checkpoints (questions) are all answered positively (including those of any previous maturity levels). The checkpoints are also grouped into clusters, which are made up of a number of checkpoints from different key areas. Clusters are like the “stepping stones” along the path of test process improvement. We will be looking at all these model elements in more detail in the sections that follow.

TPI NEXT uses a continuous representation to show test process maturity. This can be shown on the test maturity matrix, which visualizes the achieved test process maturity for each key area. Again, some examples of the test maturity matrix are shown in the following sections.


Figure 3–15 Structure of the TPI NEXT model

Maturity Levels

The TPI NEXT model defines the following maturity levels per key area:

- Initial: No process; ad hoc activities

- Controlled: Performing the right test process activities

- Efficient: Performing the test process efficiently

- Optimizing: Continuously adapting to ever-changing circumstances

The continuous representation used by the TPI NEXT model means that each key area (see the following section) can achieve a particular maturity level. Note that key areas are what other improvement models refer to as process areas.

If all key areas have achieved a specific maturity level, then the test process as a whole is said to have achieved that maturity level. Note that this permits staged objectives to be set for test process improvement if this is desired (e.g., the test process shall achieve the efficient maturity level). In practice, many organizations use TPI NEXT in this staged way as well.
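These two rules — a key area reaches a level only when all checkpoints at that and every previous level are answered positively, and the overall process holds the lowest level reached by any key area — can be sketched as follows. The data layout (a dictionary of yes/no answers per level) is our own illustration, not prescribed by TPI NEXT:

```python
LEVELS = ["initial", "controlled", "efficient", "optimizing"]

def key_area_maturity(checkpoints):
    """checkpoints: {"controlled": [bool, ...], "efficient": [...], ...}.
    A level is reached only if all of its checkpoints AND those of every
    previous level are answered 'yes'."""
    reached = "initial"
    for level in LEVELS[1:]:
        cps = checkpoints.get(level)
        if cps and all(cps):
            reached = level
        else:
            break
    return reached

def overall_maturity(key_areas):
    """The test process as a whole holds the lowest level reached
    by any of its key areas."""
    return min((key_area_maturity(c) for c in key_areas.values()),
               key=LEVELS.index)

# Hypothetical answers for two of the 16 key areas
areas = {
    "stakeholder commitment": {"controlled": [True] * 4,
                               "efficient": [True, False]},
    "reporting":              {"controlled": [True] * 3},
}
print(overall_maturity(areas))  # "controlled"
```

Used in the staged way described above, an improvement objective such as "the test process shall achieve the efficient maturity level" simply means driving `overall_maturity` up to "efficient".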

Key Areas

The body of testing best practices provided by the TPI NEXT model is based on the TMap NEXT methodology [Koomen, et al. 2006] and the previous version of the methodology simply called TMap [Pol, Teunissen, and van Veenendaal 2002]. These best practices are organized into 16 key areas, each of which covers a particular aspect of the test process (e.g., test environment). The key areas are further organized into three groups: stakeholder relations, test management, and test profession. This organization is particularly useful when reporting at a high level about assessment results and improvement suggestions (e.g., “the testing profession is only weakly practiced within small projects” or “stakeholder relations must be improved by organizing the test team more efficiently”).

Figure 3–16 shows the key areas and their groups.


Figure 3–16 Key areas and groups in the TPI NEXT model

Table 3–9, table 3–10, and table 3–11 provide an overview of the principal areas considered by the key areas in the controlled, efficient, and optimizing maturity levels for the three groups shown in figure 3–16. More detailed information is available in the book TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009].

Table 3–9 Summary of key areas in the stakeholder relations group


Table 3–10 Summary of key areas in the test management group


Table 3–11 Summary of key areas in the test profession group



The maturity level of a particular key area is assessed by answering checkpoints. These are closed yes/no-type questions that should enable a clear judgment to be made by the interviewer.

Table 3–12, from TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009], shows the four checkpoints that must all be answered yes for the key area stakeholder commitment to achieve the controlled maturity level. Note that checkpoints are identified using a format that contains the number of the key area (01 to 16), a single letter representing the maturity level (c, e, or o), and the number of the checkpoint within the key area (1 to 4).
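The identifier format just described is regular enough to validate mechanically. The following sketch is our own illustration (TPI NEXT defines the format, not this code): key-area number 01 to 16, a level letter, and a checkpoint number:

```python
import re

# Matches identifiers such as "01.c.4": key area 01-16,
# maturity level letter (c/e/o), checkpoint number
CHECKPOINT_ID = re.compile(r"^(0[1-9]|1[0-6])\.([ceo])\.(\d+)$")
LEVEL_NAMES = {"c": "controlled", "e": "efficient", "o": "optimizing"}

def parse_checkpoint_id(cp_id):
    """Split a TPI NEXT checkpoint identifier into key-area number,
    maturity level name, and checkpoint number."""
    m = CHECKPOINT_ID.match(cp_id)
    if not m:
        raise ValueError(f"not a valid checkpoint id: {cp_id!r}")
    area, level, number = m.groups()
    return int(area), LEVEL_NAMES[level], int(number)

print(parse_checkpoint_id("01.c.4"))  # (1, 'controlled', 4)
```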

Table 3–12 Examples of checkpoints


When interviewing a test manager, for example, we may pose the question, “What kind of information do you receive from your principal stakeholder?” (Note that it is not recommended to use checkpoints directly for interviewing purposes; see section 7.3.1 regarding interviewing skills). If the words product risk analysis do not occur in the answer, we might mention this explicitly in a follow-up question, and if the answer is negative, the result is a clear “no” to the fourth checkpoint (01.c.4) shown in table 3–12. The stakeholder commitment key area has not been achieved at the controlled maturity level (or indeed the other higher-level maturity levels, efficient and optimizing) since all checkpoints need to be addressed positively.

Evaluating checkpoints is not always as straightforward as this. Take, for example, the third checkpoint (01.c.3). To evaluate this checkpoint, we need the following information:

- A list of stakeholders

- A list of the resources they have committed to testing (e.g., staff, infrastructure, budget, documents)

- Some form of proof that these resources were delivered

Let’s assume we have eight stakeholders and, in total, 20 individual resources they have committed to. It has been established that all resources except for one were delivered, but one (e.g., a document) was delivered late. How might an assessor evaluate this checkpoint? A strict assessor would simply evaluate the checkpoint as no. A pragmatic assessor would check on the significance of the late item, whether this was a one-off or recurring incident, and what the impact was on the test process. A pragmatic assessment may well be yes, if a minor item was slightly late and the stakeholder has otherwise been reliable. This calls for judgment and a particular attitude on the part of the assessor. As these typical examples show, some checkpoints are easy and clear to assess, but some need more work, experience, and judgment to enable an appropriate conclusion to be reached.

Test Maturity Matrix

The test maturity matrix is a visual representation of the overall test process that combines key areas, test maturity, and checkpoints. Figure 3–17 shows the test maturity matrix.


Figure 3–17 Test maturity matrix: checkpoint view

The individual numbered cells in the matrix represent the checkpoints for specific key areas and maturity levels. As shown earlier in table 3–12, for example, the key area stakeholder commitment has four checkpoints at the controlled level of test process maturity.

The test maturity matrix is a useful instrument for showing the “big picture” of current and desired test process maturity across all key areas. This is discussed later in “Using TPI NEXT: Overview.”

There are two ways to show the test maturity matrix. The checkpoint view is shown in figure 3–17. The cluster view is shown in figure 3–18, after the concept of clusters is introduced.


The purpose of clusters is to ensure that a sensible, step-by-step approach is adopted to test process improvement. It makes no sense, for example, to focus all our efforts on achieving an optimizing maturity level in the key area of metrics when defect management is way back at the initial stage.

Clusters are collections of individual checkpoints taken from different key areas. The TPI NEXT model defines 13 clusters, which are identified with a single letter; A is the very first cluster and M the final cluster. Figure 3–18 shows clusters on the test maturity matrix (the cluster view).


Figure 3–18 Test maturity matrix: cluster view

In the cluster view, the checkpoints associated with a particular base cluster are represented by the letter of the cluster to which they belong (table 3–13 shows the checkpoints in base cluster A).

The path from cluster A to M represents the recommended progression of the test process and assists the test process improver in setting improvement priorities. The highest-priority recommendation would typically be assigned to fulfilling a checkpoint in the earliest cluster not yet fully completed.
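That prioritization rule — work on the earliest cluster that still has open checkpoints — is simple enough to sketch. The data layout (a dictionary of achieved/not-achieved flags per cluster) is our own illustration, not part of the model:

```python
import string

def next_cluster(cluster_status):
    """cluster_status: {cluster_letter: [bool per checkpoint in that cluster]}.
    Returns the earliest base cluster (A..M) with at least one open
    checkpoint — the recommended focus for improvement effort — or
    None when all 13 base clusters are completed."""
    for letter in string.ascii_uppercase[:13]:   # clusters A through M
        done = cluster_status.get(letter)
        if not done or not all(done):            # missing data counts as open
            return letter
    return None

# Hypothetical status: cluster A fully done, one checkpoint open in B
status = {"A": [True] * 12, "B": [True, False, True]}
print(next_cluster(status))  # "B"
```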

Note that base clusters are allocated to a specific overall test process maturity level. Completing base clusters A through E achieves an overall “controlled” test process maturity; “efficient” requires base clusters F through J; and “optimizing” requires base clusters K, L, and M.

The TPI NEXT model permits specific business objectives (e.g., cost reduction) to be followed in test process improvement. The mechanism for applying this is to adjust the checkpoints contained in a given cluster using a procedure described later (see “Using TPI NEXT: Overview”). The predefined, unchanged clusters in TPI NEXT are referred to as base clusters to distinguish them from clusters that may have been modified to meet specific business objectives.

Base cluster A is shown in table 3–13 as an example (from TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009]).

Table 3–13 Example of a cluster


As can be seen, base cluster A consists of 12 checkpoints drawn from eight different key areas. The locations of the 12 checkpoints that belong to base cluster A can be clearly identified in the test maturity matrix shown below in figure 3–19. The checkpoints in this cluster will be the first items to be implemented if our test process needs to be built up from the initial level of maturity and if no adjustment of the cluster contents has been performed to address specific business objectives.


In section 3.1.2, “Using Models: Benefits and Risks,” a particular risk was discussed that highlighted the possibility of choosing the “wrong approach” when adopting a model-based solution to test process improvement. The risk originates from a failure to recognize the strong interrelationships between the testing process and other processes involved in software development. By focusing only on the test process, there is a risk that problems that are not within the test process itself get scoped out of the analysis. We end up treating symptoms (i.e., effects) instead of the root causes.

The idea behind enablers is to provide a link between the test process and other relevant processes within the software development life cycle to clarify how they can both benefit from exchanging best practices and working closely together. In some instances, enablers also provide links to software process improvement models such as CMMI and ISO/IEC 15504.

The following example applies to the key area “test strategy” at the controlled maturity level. This includes the following two checkpoints (from TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009]) with relevance to product risk analysis:

- Checkpoint 03.c.2: The test strategy is based on product risk analysis.

- Checkpoint 03.c.3: There is a differentiation in test levels, test types, test coverage, and test depth, depending on the analyzed risks.

Performing product risk analysis within TPI NEXT is not a part of the test process, so the TPI NEXT model simply suggests “use risk management for product risks” as an enabler to achieving the two checkpoints. The test process improver can check up on risk management practices by considering any source of information they choose. This may be the implementation of a software process improvement model such as CMMI (risk management is a specific process area within CMMI), or it could be a knowledgeable person in their department.

Note that the TPI NEXT model does not consider enablers formally in the same way as checkpoints. Enablers are simply high-level hints and links that increase awareness and help to mitigate the risks mentioned previously.

Improvement Suggestions

There are two principal sources of test process improvements supported by the TPI NEXT model. The first source is to simply consider any checkpoints that have not yet been achieved in order to reach the desired test process maturity. This source of information covers the direct “what needs to be achieved” aspect of required improvements.

Test improvement plans (see section 6.4) should not focus entirely on listing unachieved checkpoints; there must be an indication of how particular business objectives can be achieved through test process improvements. The TPI NEXT model supports this by providing a second source of test process improvements. These are improvement suggestions, which are described for the three maturity levels of each key area. They give advice on how particular maturity levels might be achieved by implementing the suggestions.

The following example taken from TPI NEXT shows the four improvement suggestions for achieving controlled test process maturity for the stakeholder commitment key area:

■ Locate the person who orders test activities or is heavily dependent on test results.

■ Research the effect of poor testing on production and make this visible to stakeholders. Show which defects could have been found earlier by testing. Indicate what the stakeholders could have done to avoid the problems.

■ Keep to simple, practical examples.

■ Focus on “quick wins.”

Using TPI NEXT: Overview

We will now describe some of the frequently performed activities in test process improvement with regard to using the TPI NEXT model. The following activities are described:

■ Representing business objectives

■ Evaluating assessment results

■ Setting improvement priorities

Representing Business Objectives

If the objective of test process improvement is a general increase in maturity across the entire range of key areas, then the base clusters defined in the TPI NEXT model can be used as a guide for planning step-by-step improvements. The same applies to an organization with an obviously low test process maturity, where it is probably better to start improving right away using the base clusters. Generally speaking, however, an organization gains more benefit by defining business objectives than by relying entirely on the default base clusters.

If more specific business objectives have been established (e.g., as part of a test policy), then a more focused approach to test process improvement is required. This is achieved with the TPI NEXT model by prioritizing key areas according to their impact on the business objective(s) and then reallocating checkpoints to clusters according to a defined procedure. The actual procedure is described in detail in TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009] and can be applied either manually or (recommended) by using the “scoring tool” provided as a free download from the TPI NEXT website [URL: TPI NEXT]. This Microsoft Excel–based tool provides a mechanism for setting key area priorities to high, neutral, or low and adjusting clusters as shown in figure 3–19.

Assigning key area priorities according to business objectives requires careful consideration and much expertise, especially where more than one objective is to be followed and potential for conflicting priorities exists. Support for this task is provided in the previously mentioned book [van Ewijk, et al. 2009] by describing some typical business objectives (e.g., reduce the cost of testing) and indicating which key area priorities should be set. Figure 3–19 shows the test maturity matrix (cluster view) with all priorities set to their default neutral priority. Figure 3–20 shows the test maturity matrix with key area priorities set according to the recommendations given in the book [van Ewijk, et al. 2009] for reducing the cost of testing.


Figure 3–19 Unprioritized test maturity matrix


Figure 3–20 Prioritized test maturity matrix: Reduce costs of testing

Comparing the two diagrams reveals the mechanism used by TPI NEXT when setting priorities:

■ The priority for each key area is assigned by placing an X in one of the priority columns marked H (high), N (neutral), and L (low).

■ The clusters assigned to checkpoints in high-priority key areas are shifted forward by one cluster: cluster C becomes B, cluster D becomes C, and so on. Cluster A cannot be shifted forward and therefore remains unchanged. Since TPI NEXT assumes a sequential implementation of clusters (i.e., from A to M), a higher priority is achieved for those checkpoints that have been reassigned to an earlier cluster.

■ The clusters assigned to checkpoints in low-priority key areas are shifted backward by one cluster: cluster B becomes C, cluster C becomes D, and so on. Cluster M cannot be shifted backward and therefore remains unchanged.

Note that the reallocation of clusters does not influence the allocation of individual checkpoints to key areas and maturity levels. They remain unchanged. Prioritization is accomplished by reassigning checkpoints to particular clusters.
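The shifting rules described above amount to a simple bounded move through the cluster sequence. The following minimal Python sketch illustrates the mechanism; the official Excel scoring tool implements this internally, and the function and variable names here are our own assumptions, not part of TPI NEXT.

```python
# Illustrative sketch of the TPI NEXT cluster-reallocation mechanism.
# Names and data layout are assumptions for demonstration only.

CLUSTERS = "ABCDEFGHIJKLM"  # TPI NEXT clusters A (earliest) to M (latest)

def reallocate(cluster: str, priority: str) -> str:
    """Shift a checkpoint's cluster according to its key area's priority.

    'H' (high) moves it one cluster earlier, 'L' (low) one cluster later,
    'N' (neutral) leaves it unchanged. A and M cannot shift further.
    """
    i = CLUSTERS.index(cluster)
    if priority == "H":
        i = max(i - 1, 0)                   # cluster A remains unchanged
    elif priority == "L":
        i = min(i + 1, len(CLUSTERS) - 1)   # cluster M remains unchanged
    return CLUSTERS[i]

# Example: a checkpoint in cluster C of a high-priority key area
print(reallocate("C", "H"))  # -> B
print(reallocate("A", "H"))  # -> A (cannot shift forward)
print(reallocate("B", "L"))  # -> C
```

Note how prioritization never changes which key area or maturity level a checkpoint belongs to; only its cluster letter, and hence its position in the implementation sequence, changes.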

A manual adjustment may be performed to take into account any logical inconsistencies introduced by the reallocation procedure and to balance the number of checkpoints in clusters if a particular cluster should have a disproportionately large number of checkpoints assigned (detailed information on this subject is in section B.3.3 in TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009]).

A set of clusters is available for organizations that want to comply with a certain CMMI level. These four clusters (see TPI NEXT – Business-Driven Test Process Improvement [van Ewijk, et al. 2009] and “TPI NEXT Clusters for CMMI” [Marselis and van der Ven 2009]) indicate which checkpoints must be satisfied to make sure that the testing activities, as part of all SDLC activities, comply with the requirements to meet CMMI levels 2, 3, 4, or 5.

Evaluating Assessment Results

Individual checkpoints are evaluated after the test process assessment has been completed and the results are presented on the test maturity matrix. If the TPI NEXT scoring tool is used, the assessor simply enters the yes/no result for each checkpoint and this is automatically shown in the overall test maturity matrix. It is good practice to also enter a comment that supports the rationale for choosing yes or no. This is especially useful if there is any discussion about whether the checkpoint is met or not.

The diagram in figure 3–21 shows the achieved test process maturity (i.e., the checkpoints answered with yes) in black and the required but not achieved maturity (i.e., the checkpoints answered with no) in gray. Checkpoints that are not required (in this example, those assigned to the optimizing maturity level) are shown in white.
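The way yes/no checkpoint answers roll up into an achieved maturity level for a key area can be sketched as follows. This is an illustrative model only, not part of the TPI NEXT scoring tool; it assumes a level counts as achieved only if all of its checkpoints, and those of every earlier level, are answered yes.

```python
# Hypothetical sketch: deriving a key area's achieved maturity level
# from yes/no checkpoint answers (data layout is an assumption).

MATURITY_LEVELS = ["controlled", "efficient", "optimizing"]

def achieved_level(answers: dict[str, list[bool]]) -> str:
    """Return the highest maturity level whose checkpoints are all 'yes'.

    `answers` maps each maturity level to its checkpoint results; levels
    must be achieved in order, so the first incomplete level stops the scan.
    """
    achieved = "initial"  # below 'controlled', a key area is at the initial level
    for level in MATURITY_LEVELS:
        results = answers.get(level, [])
        if results and all(results):
            achieved = level
        else:
            break
    return achieved

answers = {
    "controlled": [True, True, True],
    "efficient": [True, False],   # one checkpoint not met
}
print(achieved_level(answers))    # -> controlled
```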


Figure 3–21 Assessment results: Reduce costs of testing

Presenting results on the test maturity matrix provides a visual overview of achieved and required test maturity levels and enables high-level statements such as the following examples to be made:

■ Only four key areas have achieved a controlled process maturity level.

■ Fundamental aspects of the test strategy and test process management are not implemented (a reference to the unachieved checkpoints in clusters A and B).

■ Achieving the required “efficient” test process maturity is likely to need significant investment and several improvement stages.

Setting Improvement Priorities

Once the results have been entered on the test maturity matrix, improvement priorities can be quickly identified. As a rule, the earliest cluster that has not yet been completed represents the highest priority. In the case of figure 3–21, two checkpoints in cluster A have not been achieved (these are shown in table 3–14).

Table 3–14 Required improvements with highest priority


Any further improvements proposed would be aligned to the cluster sequence and assigned successively lower priorities.
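The prioritization rule above can be expressed as a short sketch: group the unachieved checkpoints by cluster and take the earliest cluster first. This is illustrative only; the checkpoint IDs and the data layout are assumptions.

```python
# Hedged sketch of the TPI NEXT prioritization rule: the earliest cluster
# with unachieved checkpoints yields the highest-priority improvements.

def prioritized_improvements(results: dict[str, dict[str, bool]]) -> list:
    """Group unachieved checkpoints by cluster, earliest cluster first.

    `results` maps cluster letters to {checkpoint_id: achieved} mappings;
    clusters A..M sort alphabetically, matching their implementation order.
    """
    ordered = []
    for cluster in sorted(results):
        missing = [cp for cp, met in results[cluster].items() if not met]
        if missing:
            ordered.append((cluster, missing))
    return ordered

results = {
    "A": {"01.c.1": True, "03.c.2": False, "04.c.1": False},
    "B": {"02.c.3": False},
}
print(prioritized_improvements(results))
# -> [('A', ['03.c.2', '04.c.1']), ('B', ['02.c.3'])]
```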

3.3.2 Test Maturity Model integration (TMMi)

Syllabus Learning Objectives

LO 3.3.3

(K2) Summarize the background and structure of the TMMi test process improvement model.

LO 3.3.4

(K2) Summarize the TMMi level 2 process areas and goals.

LO 3.3.5

(K2) Summarize the TMMi level 3 process areas and goals.

LO 3.3.6

(K2) Summarize the relationship between TMMi and CMMI.

LO 3.3.9

(K3) Carry out an informal assessment using the TMMi test process improvement model.

LO 3.3.10

(K5) Assess a test organization using either the TPI NEXT or TMMi model.


We will now consider the TMMi model. We’ll first describe the background to the model and the TMMi maturity levels. Thereafter, the structure of the model, the relationship with CMMI, and TMMi assessments are discussed. The official book on TMMi provides a full description of the model and its use (see Test Maturity Model integration (TMMi) – Guidelines for Test Process Improvement [van Veenendaal and Wells 2012]). Users will gain additional insights into the model by consulting this publication and the other sources of information available at the TMMi Foundation’s website [URL: TMMi].

Test Maturity Model integration (TMMi) A five-level staged framework for test process improvement (related to the Capability Maturity Model Integration [CMMI] model) that describes the key elements of an effective test process.

TMMi is a noncommercial, organization-independent test maturity model. With TMMi, organizations can have their test processes objectively evaluated by certified assessors, improve their test processes, and even have their test processes and test organizations formally accredited if they comply with the requirements. Many organizations worldwide are already using TMMi for their internal test improvement process. Other organizations have already formally achieved TMMi level 2, level 3, and even level 4. TMMi has been developed by the TMMi Foundation, a nonprofit organization based in Dublin, Ireland, whose main objectives are to develop and maintain the TMMi model, create a benchmark database, and facilitate formal assessments by accredited lead assessors. Testers can become members of the TMMi Foundation free of charge, and the board is elected from this membership.

TMMi is aligned with international testing standards such as IEEE and with the syllabi and terminology of the International Software Testing Qualifications Board (ISTQB). The TMMi Foundation has consciously chosen not to introduce its own terminology but reuses ISTQB terminology instead. This is an advantage for all test professionals who are ISTQB certified (approximately 300,000 worldwide at the time of this writing). TMMi is also an objective- and business-driven model. Testing is never an activity on its own. By introducing the process area Test Policy and Goals as early as TMMi level 2, testing becomes aligned with organizational and quality objectives early in the improvement model. It should be clear to all stakeholders why there is a need to improve, and the business case should be understood.

A difference between TMMi and other test improvement models is the strict conformity of TMMi to the CMMI framework (see section 3.2.1). The structure and the generic components of CMMI have been reused within TMMi. This has two main advantages: first, the structure has already been shown in practice to be successful, and second, organizations that use CMMI are already familiar with the structure and terminology, which makes it easier to accept TMMi and simplifies its application in these organizations. TMMi is positioned as a complementary model to CMMI version 1.3 [Chrissis, Konrad, and Shrum 2004], addressing those issues important to test managers, test engineers, and software quality professionals. Testing as defined in the TMMi is applied in its broadest sense to encompass all software product quality-related activities. Just like the CMMI staged representation, the TMMi uses the concept of maturity levels for process evaluation and improvement. The staged model uses predefined sets of process areas to define an improvement path for an organization. Within TMMi, this improvement path is described by a model component called a maturity level. A maturity level is a well-defined evolutionary plateau toward achieving improved organizational processes. Furthermore, process areas, goals, and practices are identified. Applying the TMMi maturity criteria will improve the test process and have a positive impact on product quality, test engineering productivity, and cycle-time effort [van Veenendaal and Cannegieter 2011]. The TMMi has been developed to support organizations in evaluating and improving their test process. Within the TMMi, testing evolves from a chaotic, ill-defined process with a lack of resources, tools, and well-educated testers to a mature and controlled process that has defect prevention as its main objective.

Practical experiences are positive and show that TMMi supports the process of establishing a more effective and efficient test process. Testing becomes a profession and a fully integrated part of the development process. The focus of testing will ultimately change from defect detection to defect prevention.

TMMi Maturity Levels

TMMi has a staged architecture for process improvement. It contains stages, or levels, through which an organization passes as its testing process evolves from one that is ad hoc and unmanaged to one that is managed, defined, measured, and optimized. Achieving each stage ensures that an adequate foundation has been laid for the next stage. The internal structure of the TMMi is rich in testing practices that can be learned and applied in a systematic way to support a quality testing process that improves in incremental steps.

There are five levels in the TMMi that prescribe a maturity hierarchy and an evolutionary path to test process improvement (see figure 3–22). Each level has a set of process areas that an organization needs to implement to achieve maturity at that level.

Experience has shown that organizations do best when they focus their test process improvement efforts on a manageable number of process areas at a time and that those areas require increasing sophistication as the organization improves. Because each maturity level forms a necessary foundation for the next level, trying to skip a maturity level is usually counterproductive. At the same time, you must recognize that test process improvement efforts should focus on the needs of the organization in the context of its business environment, and that process areas at higher maturity levels may address the current needs of an organization or project. For example, organizations seeking to move from maturity level 1 to maturity level 2 are frequently encouraged to establish a test department, which is addressed by the Test Organization process area that resides at maturity level 3. Although a test department is not a necessary characteristic of a TMMi level 2 organization, it can be a useful part of the organization’s approach to achieving TMMi maturity level 2.

Maturity level Degree of process improvement across a predefined set of process areas in which all goals in the set are attained. [van Veenendaal and Wells 2012]

Figure 3–22 shows the maturity levels and the process areas for each level.


Figure 3–22 TMMi maturity levels and process areas

The process areas for each maturity level of the TMMi as shown in figure 3–22 are also listed in the sections that follow, along with a brief description of the characteristics of an organization at each TMMi level. The description will introduce the reader to the evolutionary path prescribed in the TMMi for test process improvement.

Note that the TMMi does not have a specific process area dedicated to test tools and/or test automation. Within TMMi, test tools are treated as a supporting resource (practices) and are therefore part of the process area where they provide support; for example, applying a test design tool is a supporting test practice within the process area Test Design and Execution at TMMi level 2 and applying a performance testing tool is a supporting test practice within the process area Non-functional Testing at TMMi level 3.

For each process area, improvement goals have been defined that in turn are supported by testing practices as shown in figure 3–23. Two types of improvement goals are distinguished within the TMMi: specific goals and generic goals. Specific goals describe the unique characteristics that must be present to satisfy the process area; for example, “establish test estimates” is a specific goal for the Test Planning process area. The specific practices describe the activities expected to result in achievement of the specific goals for a process area. The generic goals describe the characteristics that must be present to institutionalize the processes that implement a process area; for example, “institutionalize a managed process” is an example of a generic goal applicable to all TMMi process areas. The generic practices in turn describe the activities expected to result in achievement of the generic goals.


Figure 3–23 TMMi process areas, improvement goals, and practices

A more detailed description of the TMMi structure is provided later.
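The goal/practice hierarchy described above can be pictured as a small data structure. This is purely illustrative: the field names and the practice strings are assumptions, and only the two goal names (“establish test estimates” and “institutionalize a managed process”) are taken from the text.

```python
# Minimal data sketch of the TMMi goal/practice hierarchy.
# Field names and practice strings are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    kind: str                              # "specific" or "generic"
    practices: list = field(default_factory=list)

@dataclass
class ProcessArea:
    name: str
    goals: list = field(default_factory=list)

test_planning = ProcessArea(
    name="Test Planning",
    goals=[
        # Specific goals are unique to this process area.
        Goal("Establish test estimates", "specific",
             ["(illustrative practice)", "(illustrative practice)"]),
        # Generic goals recur in every TMMi process area.
        Goal("Institutionalize a managed process", "generic",
             ["(illustrative practice)"]),
    ],
)

print([g.name for g in test_planning.goals if g.kind == "specific"])
# -> ['Establish test estimates']
```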

Level 1 – Initial

At TMMi level 1, testing is a chaotic, undefined process and is often considered a part of debugging. The organization usually does not provide a stable environment to support the processes. Success in these organizations depends on the competence and heroism of the people in the organization and not the use of proven processes. Tests are developed in an ad hoc way after coding is completed. Testing and debugging are interleaved to get the bugs out of the system. The objective of testing at this level is to show that the software runs without major failures. Products are released without adequate visibility regarding quality and risks. In the field, the product often does not fulfill its needs, is not stable, and/or is too slow. Within testing there is a lack of resources, tools, and well-educated staff. At TMMi level 1 there are no defined process areas. Maturity level 1 organizations are characterized by a tendency to overcommit, abandonment of processes in a time of crisis, and an inability to repeat their successes. In addition, products tend not to be released on time, budgets are overrun, and delivered quality is not according to expectations.

Level 2 – Managed

At TMMi level 2, testing becomes a managed process and is clearly separated from debugging. The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. However, testing is still perceived by many stakeholders as being a project phase that follows coding.

In the context of improving the test process, a company-wide or program-wide test strategy is established. Test plans are also developed. Within the test plan a test approach is defined, whereby the approach is based on the result of a product risk assessment. Risk management techniques are used to identify the product risks based on documented requirements. The test plan defines what testing is required and when, how, and by whom. Commitments are established with stakeholders and revised as needed. Testing is monitored and controlled to ensure that it is going according to plan and actions can be taken if deviations occur. The status of the work products and the delivery of testing services are visible to management. Test design techniques are applied for deriving and selecting test cases from specifications. However, testing may still start relatively late in the development life cycle, for example, during the design or even during the coding phase.

In TMMi level 2, multilevel testing exists: there are component, integration, system, and acceptance test levels. For each identified test level there are specific testing objectives defined in the organization-wide or program-wide test strategy. The processes of testing and debugging are differentiated. The main objective of testing in a TMMi level 2 organization is to verify that the product satisfies the specified requirements. Many quality problems at this TMMi level occur because testing occurs late in the development life cycle. Defects are propagated from the requirements and design into the code. There are no formal review programs as yet to address this important issue. Post-code, execution-based testing is still considered by many stakeholders to be the primary testing activity.

The process areas and their specific goals at TMMi level 2 are listed in table 3–15.

Table 3–15 TMMi Level 2


Level 3 – Defined

At TMMi level 3, testing is no longer confined to a phase that follows coding. It is fully integrated into the development life cycle and the associated milestones. Test planning is done at an early project stage (e.g., during the requirements phase) and is documented in a master test plan. The development of a master test plan builds on the test planning skills and commitments acquired at TMMi level 2. The organization’s set of standard test processes, which is the basis for maturity level 3, is established and improved over time. A test organization and a specific test training program exist, and testing is perceived as being a profession. Test process improvement is fully institutionalized as part of the test organization’s accepted practices.

Organizations at level 3 understand the importance of reviews in quality control; a formal review program is implemented although not yet fully linked to the dynamic testing process. Reviews take place across the life cycle. Test professionals are involved in reviews of requirements specifications.

Whereas the test designs at TMMi level 2 focus mainly on functional testing, test designs and test techniques are expanded at level 3 to include nonfunctional testing (e.g., usability and/or reliability), depending on the business objectives.

A critical distinction between TMMi maturity levels 2 and 3 is the scope of the standards, process descriptions, and procedures. At maturity level 2, these may be quite different in each specific instance—for example, on a particular project. At maturity level 3, these are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and therefore are more consistent except for the differences allowed by the tailoring guidelines. Another critical distinction is that at maturity level 3, processes are typically described more rigorously than at maturity level 2. As a consequence, at maturity level 3, the organization must revisit the maturity level 2 process areas.

The process areas and their specific goals at TMMi level 3 are listed in table 3–16.

Table 3–16 TMMi level 3


Level 4 – Measured

Achieving the goals of TMMi levels 2 and 3 has the benefits of putting into place a technical, managerial, and staffing infrastructure capable of thorough testing and providing support for test process improvement. With this infrastructure in place, testing can become a measured process to encourage further growth and accomplishment. In TMMi level 4 organizations, testing is a thoroughly defined, well-founded, and measurable process. Testing is perceived as evaluation; it consists of all life cycle activities concerned with checking products and related work products.

An organization-wide test measurement program will be put into place that can be used to evaluate the quality of the testing process, to assess productivity, and to monitor improvements. Measures are incorporated into the organization’s measurement repository to support fact-based decision-making. A test measurement program also supports predictions relating to test performance and cost. With respect to product quality, the presence of a measurement program allows an organization to implement a product quality evaluation process by defining quality needs, quality attributes, and quality metrics. (Work) products are evaluated using quantitative criteria for quality attributes such as reliability, usability, and maintainability. Product quality is understood in quantitative terms and is managed to the defined objectives throughout the life cycle.

Reviews and inspections are considered to be part of the test process and are used to measure product quality early in the life cycle and to formally control quality gates. Peer reviews as a defect detection technique are transformed into a product quality measurement technique in line with the process area Product Quality Evaluation. TMMi level 4 also covers establishing a coordinated test approach between peer reviews (static testing) and dynamic testing and the use of peer review results and data to optimize the test approach, aiming to make the testing both more effective and more efficient. Peer reviews are now fully integrated with the dynamic testing process—for example, as part of the test strategy, test plan, and test approach.

The process areas and their specific goals at TMMi level 4 are listed in table 3–17.

Table 3–17 TMMi level 4


Level 5 – Optimization

The achievement of all previous test improvement goals at levels 1 through 4 of TMMi has created an organizational infrastructure for testing that supports a completely defined and measured process. At TMMi maturity level 5, an organization is capable of continually improving its processes based on a quantitative understanding of statistically controlled processes. Improving test process performance is carried out through incremental and innovative process and technological improvements. The testing methods and techniques are optimized and there is a continuous focus on fine-tuning and process improvement. An optimized test process, as defined by the TMMi, is one that has the following characteristics:

■ Managed, defined, measured, efficient, and effective

■ Statistically controlled and predictable

■ Focused on defect prevention

■ Supported by automation as much as is deemed an effective use of resources

■ Able to support technology transfer from the industry to the organization

■ Able to support reuse of test assets

■ Focused on process change to achieve continuous improvement

To support the continuous improvement of the test process infrastructure, and to identify, plan, and implement test improvements, a permanent test process improvement group is formally established. It is staffed by members who have received specialized training to increase the level of skills and knowledge required for the success of the group. In many organizations this group is called a Test Process Group. Support for a Test Process Group formally begins at TMMi level 3, when the test organization is introduced. At TMMi levels 4 and 5, the responsibilities grow as more high-level practices are introduced, such as identifying reusable test (process) assets and developing and maintaining the test (process) asset library.

The Defect Prevention process area is established to identify and analyze common causes of defects across the development life cycle and define actions to prevent similar defects from occurring in the future. Outliers to test process performance, as identified as part of process quality control, are analyzed to address their causes as part of Defect Prevention. The test process is now statistically managed by means of the Quality Control process area. Statistical sampling, measurements of confidence levels, trustworthiness, and reliability drive the test process. The test process is characterized by sampling-based quality measurements.

At TMMi level 5, the Test Process Optimization process area introduces mechanisms to fine-tune and continuously improve testing. There is an established procedure to identify process enhancements as well as to select and evaluate new testing technologies. Tools support the test process as much as is effective during test design, test execution, regression testing, test case management, defect collection and analysis, and so on. Process and testware reuse across the organization is also common practice and is supported by a test (process) asset library.

The three TMMi level 5 process areas—Defect Prevention, Quality Control, and Test Process Optimization—all provide support for continuous process improvement. In fact, the three process areas are highly interrelated. For example, Defect Prevention supports Quality Control by analyzing outliers to process performance and by implementing practices for defect causal analysis and prevention of defect reoccurrence. Quality Control contributes to Test Process Optimization, and Test Process Optimization supports both Defect Prevention and Quality Control, for example, by implementing the test improvement proposals. All of these process areas are, in turn, supported by the practices that were acquired when the lower-level process areas were implemented. At TMMi level 5, testing is a process with the objective of preventing defects.

The process areas and their specific goals at TMMi level 5 are listed in table 3–18.

Table 3–18 TMMi level 5


Structure of the TMMi

The structure of the TMMi is largely based on the structure of the CMMI. This is a major benefit because many people and organizations are already familiar with the CMMI structure. Like CMMI, TMMi makes a clear distinction between practices that are required (goals) and those that are recommended (specific practices, example work products, etc.). The required and expected components of the TMMi model are summarized in figure 3–24 to illustrate their relationship. For a full description of this structure and its components, see section 3.2.1, which describes the structure of CMMI; the structure and components for the staged version of CMMI also apply to TMMi.

To provide the reader with some understanding of how all of these components are put together within the TMMi, a small part of the actual TMMi model is provided here (text box next page). The example provided comes from the TMMi level 3 process area Peer Reviews and shows the components related to the specific goals.


Figure 3–24 Structure of a TMMi process area



Although TMMi can also be used in isolation, it was initially positioned and developed as a complementary model to the CMMI. As a result, in many cases a given TMMi level needs specific support from process areas at its corresponding CMMI level or at higher CMMI levels. Process areas and practices that are elaborated within CMMI generally are not repeated within TMMi; they are only referenced. For example, the process area Configuration Management, which is also applicable to test (work) products/testware, is not elaborated upon in detail within the TMMi; the practices from CMMI are referenced and implicitly reused.

Table 3–19 and table 3–20 summarize the CMMI process areas that complement and/or overlap with the TMMi process areas required for TMMi level 2 and TMMi level 3. Note that the tables denote supporting process areas (S) and parallel process areas (P). Supporting process areas (S) encompass those process areas and related practices that should ideally be in place to support achievement of the TMMi goals. Parallel process areas (P) are those that are similar in nature in TMMi and CMMI and can be simultaneously pursued. Further details and background regarding these relationships can be found in Test Maturity Model integration (TMMi) – Guidelines for Test Process Improvement [van Veenendaal and Wells 2012].

Table 3–19 Support for TMMi maturity level 2 from CMMI process areas


Table 3–20 Support for TMMi maturity level 3 from CMMI process areas


Note that the test-specific Verification and Validation process areas of the CMMI are not listed as supporting or parallel process areas for the dynamic testing processes within TMMi. For these CMMI process areas, the relationship is complementary. The TMMi process areas provide support and a more detailed specification of what is required to establish a defined verification and validation process. Practical experiences have shown that an organization that complies with the TMMi level 2 requirements will, at the same time, largely or fully fulfill the requirements for the CMMI process areas Verification and Validation (with the exception of the peer review specific goal and related practices within the Verification process area). This is a good example of how satisfying the goals and implementation of the practices in one model (TMMi) may lead to satisfactory implementation in the other (CMMI). It is also a good example of how the TMMi test-specific improvement model complements the more generic software and system development improvement models (e.g., CMMI).

TMMi Assessments

Many organizations find value in benchmarking their progress in test process improvement both for internal purposes and for external customers and suppliers. Test process assessments focus on identifying improvement opportunities and understanding the organization’s position relative to the selected model or standard. The TMMi provides a reference model to be used during such assessments. Assessment teams use TMMi to guide their identification and prioritization of findings. These findings, along with the guidance of TMMi practices, are used to plan improvements for the organization. The assessment framework itself is not part of the TMMi. Requirements for TMMi assessments are described by the TMMi Foundation in the document “TMMi Assessment Method Application Requirements” (TAMAR) [TMMi Foundation 2009]. These requirements are based upon the ISO/IEC 15504 standard and SCAMPI (see section 3.2.2). (Note that formal TMMi assessments follow the requirements for SCAMPI class B appraisals.) The achievement of a specific maturity level must mean the same thing for different assessed organizations. Rules for ensuring this consistency are contained in the TMMi assessment method requirements. The TMMi assessment method requirements contain guidelines for various classes of assessments (e.g., formal assessments, quick scans, and self-assessments).

Because the TMMi can be used in conjunction with the CMMI (staged version), TMMi and CMMI assessments are often combined, evaluating both the development process and the testing process. Since the models are of similar structure, and the model vocabularies and goals overlap, parallel training and parallel assessments can be accomplished by an assessment team. The TMMi can also be used to address testing issues in conjunction with continuous models. Overlapping process areas that relate to testing can be assessed and improved using the TMMi, while other process areas fall under the umbrella of the broader-scope model.

During the assessment, the evidence is gathered to determine the process maturity of the organization or project being assessed. To be able to draw conclusions, a sufficient amount of evidence is needed, and that evidence must be of sufficient depth. Whether the amount and depth of evidence is sufficient depends on the goal of the assessment.

The objective evidence is used to determine to what extent a certain goal has been reached. To classify a specific or generic goal, the classification of the underlying specific and generic practices must first be determined. A process area as a whole is classified according to its lowest-rated goal, and a maturity level is determined by the lowest-rated process area within that maturity level.
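The bottom-up classification just described lends itself to a short sketch. The following Python fragment is purely illustrative: the rating values N/P/L/F anticipate the achievement scale defined below, and the function names and data structures are assumptions, not part of the TMMi assessment method.

```python
# Rating order from lowest to highest degree of achievement.
RATING_ORDER = ["N", "P", "L", "F"]

def lowest_rating(ratings):
    """Return the lowest rating in a collection (N < P < L < F)."""
    return min(ratings, key=RATING_ORDER.index)

def rate_process_area(goal_ratings):
    """A process area is classified according to its lowest-rated goal."""
    return lowest_rating(goal_ratings)

def rate_maturity_level(process_area_ratings):
    """A maturity level is determined by its lowest-rated process area."""
    return lowest_rating(process_area_ratings)

# Example: one weak goal pulls the whole process area down.
pa = rate_process_area(["F", "L", "F"])       # -> "L"
level = rate_maturity_level([pa, "F", "P"])   # -> "P"
```

The "lowest rating wins" rule means that a single weakly implemented goal caps the rating of its entire process area, and a single weak process area caps the maturity level.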

Within TMMi, the level to which an organization achieves a particular process area is measured using a scale that consists of the following levels:

Image N – Not Achieved

Image P – Partially Achieved

Image L – Largely Achieved

Image F – Fully Achieved

To score N (Not Achieved), there should be little or no evidence found of compliance with the goals of the process area. The percentage of process achievement that would score N ranges from 0 percent to 15 percent. To score P (Partially Achieved), there should be some evidence found of compliance, but the process may be incomplete, not widespread, or inconsistently applied. The percentage of process achievement for processes that would score P would be over 15 percent and up to 50 percent. To score L (Largely Achieved), there should be significant evidence found of compliance, but there may still be some minor weaknesses in the implementation, application, or results of this process. The percentage of process achievement for processes that would score L would be over 50 percent and up to 85 percent. To score F (Fully Achieved), there should be consistent evidence found of compliance. The process has been implemented both systematically and completely, and there should be no obvious weaknesses in the implementation, application, or results of this process. The percentage of process achievement for processes that would score F would be over 85 percent and up to 100 percent. Note that “only” 85 percent is needed to be rated Fully Achieved; this implies that practices with little or no added value for the project or organization can be discarded and an F rating can nevertheless be achieved.
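The percentage thresholds above can be captured in a small lookup function. This is an illustrative sketch of the stated thresholds only, not part of the formal TAMAR requirements:

```python
def achievement_rating(percent):
    """Map a process-achievement percentage to the TMMi rating scale.

    Thresholds follow the text: N up to 15%, P over 15% and up to 50%,
    L over 50% and up to 85%, F over 85% and up to 100%.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if percent <= 15:
        return "N"  # Not Achieved
    if percent <= 50:
        return "P"  # Partially Achieved
    if percent <= 85:
        return "L"  # Largely Achieved
    return "F"      # Fully Achieved
```

Note that the boundary values (15, 50, 85) fall into the lower band, matching the "over X percent" wording of the scale.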

There are two additional ratings that can be utilized:

Image Not Applicable: This classification is used if a process area is not applicable to the organization and is therefore excluded from the results. As long as at least two-thirds of the process areas at a maturity level are applicable, an organization can nevertheless formally achieve a TMMi level despite the fact that one or more process areas are not applicable.

Image Not Rated: This classification is used if the process area is not ratable due to insufficient evidence.
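The two-thirds rule for Not Applicable process areas can be expressed as a simple check. The function below is an illustrative sketch; its name and signature are assumptions, not part of the TMMi assessment method:

```python
from fractions import Fraction

def level_formally_achievable(total_areas, not_applicable):
    """A TMMi maturity level can still be formally achieved as long as
    at least two-thirds of its process areas are applicable."""
    applicable = total_areas - not_applicable
    # Fraction avoids floating-point trouble at the exact 2/3 boundary.
    return Fraction(applicable, total_areas) >= Fraction(2, 3)

# Example: 5 process areas at a level, one of them not applicable.
level_formally_achievable(5, 1)  # -> True (4/5 applicable)
level_formally_achievable(6, 3)  # -> False (only half applicable)
```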

More information on assessments in general can be found in section 6.3.

Improvement Approach

The TMMi provides a full framework to be used as a reference model during test process improvement. It does not provide an approach for test process improvement such as IDEAL (see chapter 6). Practical experiences with TMMi have shown that the most powerful initial step to test process improvement is to build strong organizational sponsorship before investing in test process assessments. Given sufficient senior management sponsorship, establishing a specific, technically competent Test Process Group that represents relevant stakeholders to guide test process improvement efforts has proven to be an effective approach. Some other ideas and guidelines regarding an approach for test process improvement can be found in The Little TMMi [van Veenendaal and Cannegieter 2011].

3.3.3 Comparing TPI NEXT to TMMi

Syllabus Learning Objectives

LO 3.3.7

(K5) Recommend which is appropriate in a given scenario, either the TPI NEXT or the TMMi model

Making choices about the approach to take for assessing test processes and implementing improvements is one of the fundamental tasks for a test process improver. In section 2.5, the following overall approaches were introduced:

Image Model-based

Image Analytical-based (see chapter 4)

Image Hybrid approaches

Image Other approaches (e.g., skills, tools; see chapter 2)

In chapter 5 we will consider the overall issue of choosing the most appropriate of these test improvement approaches for a particular project or organization.

In this section the two principal test process reference models described in the previous sections are compared according to the following attributes:

Image Representation

Image Main focus

Image Overall approach

Image Test methods

Image Terminology

Image Relationship to SPI models


Representation

Section 3.1.3 describes the two representations used by process reference models: continuous and staged.

TMMi uses a staged representation:

Image Five successive maturity levels are defined, each of which requires that specific testing activities (process areas) are performed.

Image The sequence to follow for improving test process maturity is simple to understand.

Image An “all or nothing” approach is followed, meaning that all the items within a particular maturity level (stage) must be achieved before the next one can be reached.

TPI NEXT uses a continuous representation:

Image Sixteen key areas of a testing process are defined, each of which can be assessed at a controlled, efficient, or optimizing level of maturity.

Image A differentiated view of test process maturity can be achieved, but this may be more difficult to communicate to certain non-testers (e.g., management).

Image Steps to follow for implementing process improvement are represented by clusters. These are logical groups of checkpoints from more than one key area.

Main Focus


TMMi:

Image The improvement focus is on detailed coverage of a limited number of process areas per maturity level. This supports the organization in having a clear focus during the improvement program.

Image Each process area can be assessed at a detailed level based on scoring the specific and generic practices.

Image The interactions from TMMi to CMMI are covered in detail. This enables testing to be considered within the context of the software development process.

Image At higher maturity levels, testing issues such as testability reviews, quality control, defect prevention, and test measurement programs are covered in detail.


TPI NEXT:

Image An overview across the entire test process is achieved with the 16 key areas.

Image Interactions with other processes are covered. This enables testing to be considered within the context of the entire software development process.

Image Each key area can be assessed at a detailed level with specific checkpoints.

Overall Approach


TMMi:

Image There is a strong focus on obtaining management commitment and defining business objectives upfront to drive the test improvement program.

Image The formal assessment approach is described in a separate guideline: the TMMi Assessment Method Application Requirements (TAMAR).

Image Assessments may be conducted formally with certified TMMi assessors or informally.

Image Conducting an assessment requires that the performance of all applicable process areas and practices at a given maturity level be evaluated. Depending on the formality of the TMMi assessment, this can result in substantial effort being required. A formal assessment can result in an organization being certified at a certain TMMi maturity level.


TPI NEXT:

Image A thorough, business-driven and test engineering approach is adopted.

Image The model is occasionally perceived as emphasizing the technical test engineering aspects too much and therefore appealing principally to experienced testers. To a large degree, this is a legacy from the previous version of the model (TPI). The TPI NEXT model takes a more business-driven approach than its predecessor and achieves this by enabling key areas to be prioritized according to particular business goals.

Image Conducting an assessment requires that the answers to specific questions (checkpoints) are evaluated. The checkpoints may be for particular key areas and maturity levels or for a defined group (cluster) of checkpoints. The number of checkpoints per key area and maturity level is between two and four, making the evaluation task relatively quick.

Test Methods


TMMi:

Image The model is based on independent research and is not aligned to a specific testing methodology. The model is aligned with international testing standards such as those from IEEE.


TPI NEXT:

Image Generic testing practices described in the TMap NEXT methodology form the terms of reference for the model. This does not mean that TMap NEXT must be used for test process improvements.



Terminology


TMMi:

Image Standard testing terminology is used that is strongly aligned to the ISTQB glossary of testing terms.


TPI NEXT:

Image The terminology used is based on the TMap methodology [Pol, Teunissen, and van Veenendaal 2002] and its successor, TMap NEXT.

Relationship to SPI Models


TMMi:

Image The model structure and maturity levels are highly correlated to the CMMI model.


TPI NEXT:

Image No formal relationship to a specific SPI model exists.

Image Mappings are described to the SPI models CMMI and ISO/IEC 15504.

Other Issues Not Covered in the Syllabus

Tool support:

Image TMMi: A full assessment tool and workbench is available through the TMMi assessment training.

Image TPI NEXT: A free scoring tool is available.

Published benchmark:

Image TMMi: Yes [van Veenendaal and Cannegieter 2013]

Image TPI NEXT: No


Geographic spread:

Image TMMi: Rapidly increasing, strong in Europe, India, Korea, China, and Brazil

Image TPI NEXT: Strong European base but also international (especially in United States, India, and China)

3.3.4 Systematic Test and Evaluation Process (STEP)

Syllabus Learning Objectives

LO 3.4.1

(K2) Summarize the background and structure of the STEP content-based model.

LO 3.4.2

(K2) Summarize the activities, work products and roles of the STEP model.

Overview of Structure

Systematic Test and Evaluation Process (STEP) is a content-based test improvement model introduced in 2002 in a book written by Rick Craig and Stefan Jaskiel, Systematic Software Testing [Craig and Jaskiel 2002]. In this section, we’ll refer to that book as the principal source of information.

Systematic Test and Evaluation Process A structured testing methodology, also used as a content-based model for improving the testing process. Systematic Test and Evaluation Process (STEP) does not require that improvements occur in a specific order.

The structure of the STEP model is shown in figure 3–25.


Figure 3–25 Systematic Test and Evaluation Process (STEP) structure

Figure 3–25 shows the hierarchical structure of the STEP model with the components listed in table 3–21.

Table 3–21 STEP structural elements


Summary of STEP Content

In general, STEP takes a requirements-based life cycle approach to testing, in which testing activities add value to the overall software development effort. The early design of test cases in the development life cycle enables early verification of requirements and benefits software design by providing models of intended software usage. STEP places strong emphasis on early defect detection and on the use of systematic defect analysis to prevent defects from entering life cycle products. Developed back in 2002, STEP is a rather traditional model, mainly oriented toward the V-model SDLC.

Table 3–22 summarizes the structure of a test level. Note that STEP expects the user to adapt this structure to meet project needs.

Table 3–22 Structure of a test level in STEP



The STEP model provides detailed support for performing the tasks and defines the work products created. It basically provides a full test process model description. For example, the following actions support the “perform test analysis to create a list of prioritized test objectives” task shown in table 3–22 (part of the major activity “list test objectives” within the “acquire testware” phase):

1. Gather reference materials

2. Form a brainstorming team

3. Determine test objectives

4. Prioritize objectives

5. Parse objectives into lists

6. Create an inventory tracking matrix

7. Identify tests for unaddressed conditions

8. Evaluate each inventory item

9. Maintain the testing matrix


STEP describes four roles, which are summarized in figure 3–26.


Figure 3–26 Roles described in STEP

STEP includes detailed descriptions of the test manager and the software tester roles (in fact, Systematic Software Testing [Craig and Jaskiel 2002] devotes an entire chapter to each role).

Using STEP for Test Process Improvement

As shown earlier in table 3–22, the STEP model explicitly identifies test process improvement as one of the activities within the “measure behavior” phase. The generic improvement steps described in STEP are similar to the IDEAL approach described in chapter 6.

STEP does not require that improvements occur in a specific order, but high-level descriptions of the CMM and TPI process assessment models are given to enable STEP to be blended with these models (note that these two models have since been superseded by CMMI and TPI NEXT, but no corresponding update has been made to STEP).

3.3.5 Critical Testing Processes (CTP)

Syllabus Learning Objectives

LO 3.4.3

(K2) Summarize the CTP content-based model.

LO 3.4.4

(K2) Summarize the critical test processes within the CTP content-based model.

LO 3.4.5

(K2) Summarize the role of metrics within the CTP content-based model.

Overview of Structure

Critical Testing Processes (CTP) is a content-based test improvement model introduced in 2003 in a book written by Rex Black [Black 2003]. This book is the principal source of information regarding CTP and serves as the basis for the description in this section.

At the highest level, CTP defines a test process with the following four principal steps (see figure 3–27):

Image Plan: Understand the testing effort

Image Prepare: Understand the people and tests

Image Perform: Do the testing and gather the results

Image Perfect: Guide adaptation and improvement


Figure 3–27 Critical Testing Processes (CTP) test process steps

Even though the CTP steps are presented in a logical sequence, their execution may overlap. In the “classic” style of the Deming improvement cycle, they are linked together to form an iterative process.

Critical Testing Processes (CTP) A content-based model for test process improvement built around 12 critical processes. These include highly visible processes, by which peers and management judge competence, and mission-critical processes, in which performance affects the company’s profits and reputation.

The basic concept of CTP is that certain activities within the test process can be classified as “critical” to performing effective and efficient testing. A test process becomes critical when it has the following characteristics:

Image Repeated frequently

→ affects efficiency

Image Highly cooperative

→ affects team cohesion

Image Visible to peers and superiors

→ affects credibility

Image Linked to project success

→ affects effectiveness

In other words, it becomes critical when it directly and significantly affects the test team’s ability to find bugs and reduce risks. These critical testing processes are each associated with a particular step, as shown in figure 3–28.


Figure 3–28 Critical Testing Processes (CTP)

Figure 3–28 shows 11 critical testing processes. In addition, the test process itself is also considered to be an overarching critical testing process. Each of the critical testing processes is described in the CTP model together with examples and a description of the steps and sub-steps to be performed, from which checklists can be constructed.

The checklists may be tailored to suit a particular project context and then used to assess the status of a test process. CTP does not implement the highly structured approach to assessing test process maturity used in process models such as TMMi and TPI NEXT. The user of CTP must combine their own experience with guidance from the model to make statements about test process maturity and where improvements should be made.

Summary of Critical Testing Processes

The 11 initial critical testing processes are summarized in table 3–23 through table 3–26 according to the test process steps defined in CTP. Please note that the text used in the tables is based on Critical Testing Processes [Black 2003]. Readers requiring more in-depth information should consult the referenced book.

Table 3–23 CTP activities in planning the testing


Table 3–24 CTP activities in preparing the tests


Table 3–25 CTP activities in performing the tests


Table 3–26 CTP activities in perfecting the test process


The Role of Metrics within CTP

CTP describes a number of metrics relevant to the critical testing processes. These may be applied in test management or used to improve the test process. Some examples are provided in table 3–27.

Table 3–27 Examples of metrics used in CTP


Using CTP for Test Process Improvement

In general, each critical testing process has a dedicated section, “Implement Improvements,” that can be used as a guide.

The critical testing process “adjust to context changes and improve the test process” suggests the following high-level approach to test process improvement:

Image Identify the three or four critical testing processes that account for approximately two-thirds of resource usage.

Image Gather detailed metrics on these critical testing processes.

Image Propose improvements based on these metrics and develop an improvement plan.
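The first step above, finding the few processes that dominate resource usage, amounts to a Pareto-style selection. The sketch below is illustrative only; the process names, effort figures, and function name are invented for the example:

```python
def dominant_processes(effort_by_process, target_share=2/3):
    """Select the smallest set of processes that together account for
    at least the target share (here two-thirds) of total effort."""
    total = sum(effort_by_process.values())
    selected, cumulative = [], 0
    for name, effort in sorted(effort_by_process.items(),
                               key=lambda item: item[1], reverse=True):
        selected.append(name)
        cumulative += effort
        if cumulative / total >= target_share:
            break
    return selected

# Hypothetical effort figures (person-days) per critical testing process.
effort = {"test execution": 150, "bug reporting": 60,
          "test system development": 90, "risk analysis": 30,
          "estimation": 20}
dominant_processes(effort)  # -> ['test execution', 'test system development']
```

In this invented example, two of the five processes already account for over two-thirds of the effort, so detailed metrics gathering would concentrate there.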

3.4 Comparing Process Models and Content Models

Adopting a model-based approach presents the user with options that need careful consideration. Table 3–28 identifies some of the principal aspects that distinguish process models from content models.

Process-based model A framework wherein processes of the same nature are classified into an overall model; e.g., a test improvement model.

Content-based model A process model providing a detailed description of good engineering practices; e.g., test practices.

Table 3–28 Aspects of process models and content models



As mentioned in chapter 2, model-based and analytical approaches may be combined to achieve the objectives placed on the test process. By combining particular aspects of process- and content-based models, this form of hybrid approach can also be considered when performing model-based test process improvement. For example, the TPI NEXT process model may be used for obtaining an assessment of the test process maturity and the CTP content model can then be consulted to gain further insights in formulating improvement suggestions. Similarly, TMMi may be used to perform an informal test process assessment and the templates suggested in STEP used to support particular improvement suggestions regarding, for example, test documentation.

3.5 Suitability of SPI Models and Test Process Improvement Models

The previous sections of this chapter have considered the use of software process improvement (SPI) models and those dedicated entirely to test process improvement. Comparing the suitability of the two for conducting test process improvement depends on a careful consideration of the factors described in the following sections.

Level of Detail

Using SPI models can be advantageous if only a relatively high-level consideration of the testing process is desired. If details are important, a test process improvement model is more appropriate. By way of example, let’s consider the way that CMMI (an SPI model) and TMMi (a test process improvement model) treat test planning.

CMMI describes two process areas with direct relevance for testing (section 3.2.1): Validation (VAL) and Verification (VER). The Validation process area contains two specific goals, one of which, “prepare for validation (SG1),” is where we might expect to find test planning. This is not the case. The generic practice “plan the process (GP 2.2)” is where planning in the context of validation is described, and here we find only a general statement that a plan has to be established for validation. This high-level statement may be sufficient for some assessments, but for a detailed view of test planning we need to consult a dedicated model for test process improvement, such as TMMi.

TMMi describes test planning as a specific process area (PA 2.2) within the Managed maturity level (level 2). The Test Planning process area describes five specific goals (e.g., SG4, “Develop a test plan”), each with between three and five specific practices (e.g., SP 4.5, “Establish the test plan”). These specific practices define work products and provide detailed descriptions of sub-practices (refer to Test Maturity Model integration (TMMi) – Guidelines for Test Process Improvement [van Veenendaal and Wells 2012] for further details). Assessing test planning using the TMMi model will require more effort than with CMMI. However, the level of detail provided by TMMi enables a much more precise evaluation of test planning and, perhaps even more important, provides a more solid basis for making specific improvement recommendations.

The example just described illustrates why the need for test process improvement models arose. SPI models are simply not detailed enough for a thorough consideration of the test process. If a detailed consideration is required, using a test process improvement model is essential. If you are happy with a high-level view and don’t want to dive into the details, an SPI model may be more appropriate.

Scope of Improvement

The scope of an improvement initiative strongly influences the type of model to be used (assuming a model-based approach is to be followed). If the purpose of the initiative is to consider the software process as a whole, then clearly an SPI model will be more suitable. Under these circumstances, the test process will most likely be considered at a relatively high level.

Improvement initiatives with a testing scope will gain better insights into the test process by using a specific model for test process improvement. Care has to be taken here not to treat the test process as an “island” and ignore other processes such as development and release management. This is why test process improvement models also consider non-testing subjects (e.g., TPI NEXT includes enablers; see section 3.3.1).

Consistency of Approach

Organizations that have adopted an SPI model as part of an overall process improvement policy may also favor using this model for test process improvement. In this sense, the “suitability” of the SPI model is driven by considerations such as the availability of skills in using the SPI model and the benefit of having a common, consistent approach to process improvements.

Organizations may be encouraged to use dedicated test process improvement models that address this consistency issue. The TMMi model, for example, allows organizations that apply CMMI to perform test process improvement with a dedicated test process improvement model, without losing consistency.

Marketing Considerations

Like it or not, marketing plays a role in deciding whether to use an SPI or a test process improvement model. In particular, in countries where test process improvement models are not yet widely used, or in countries with a large user base for a particular SPI (e.g., CMMI in India and the United States), the desire or requirement to market an organization at a particular level of maturity cannot be overlooked. In some cases, an organization may be required to demonstrate a given capability based on a certain SPI’s process maturity scale. These are powerful marketing arguments that cannot be ignored and may even take priority over more technical considerations.

Organizations that are offering test outsourcing may therefore choose TMMi since they want to become formally certified and use this certification (also) as a marketing vehicle.

3.6 Exercises

The following multiple-choice questions will give you some feedback on your understanding of this chapter. Appendix F lists the answers.

3-1: Which of the following is not a desirable characteristic of models?

A: Identify where improvements should take place

B: Show the current status

C: Gather test process metrics

D: Provide guidance on how to progress

3-2: Which of the following is a benefit of using models?

A: Ability to capture project best practices

B: Provide an industry benchmark

C: Show solutions to testing problems

D: Compare the maturity status between projects

3-3: Which of the following is a risk of using models?

A: Belief that the model is always correct

B: Time taken for the assessment

C: Prevents real root causes from being identified

D: Results difficult to discuss with business owners

3-4: Which of the following models cannot be used for test process improvement?

A: Process reference model

B: Continuous reference model

C: Content reference model

D: Software process improvement model

3-5: Which CMMI process maturity level is not in the correct place in the following sequence?

A: Initial

B: Defined

C: Managed

D: Optimizing

E: Quantitatively Managed

3-6: Which process area is found at the CMMI process maturity level Managed?

A: Measurement and Analysis

B: Verification

C: Risk Management

D: Validation

3-7: Which process category in ISO/IEC 15504 contains the process “process improvement”?

A: Primary Life-Cycle Processes

B: Supporting Life-Cycle Processes

C: Organizational Life-Cycle Processes

D: Secondary Life-Cycle Processes

3-8: Which statement is true regarding the process dimension in ISO/IEC 15504?

A: The process dimension enables us to measure how well processes are being performed.

B: The process dimension defines generic practices for each process.

C: The process dimension identifies, describes, and organizes the individual processes within the overall software life cycle.

D: The process dimension provides indicators of process capability.

3-9: Which statement is true regarding clusters in the TPI NEXT model?

A: Clusters are used only in the staged representation.

B: A cluster combines improvement suggestions with enablers.

C: A cluster contains several checkpoints from a specific key area.

D: Clusters help show the required sequence of improvements.

3-10: Which of the following specific goals would you be targeting during the implementation of the Test Policy and Strategy process area of the TMMi model?

A: Perform a product risk assessment

B: Establish test performance indicators

C: Establish a test approach

D: Establish a test organization

3-11: Which of the following is not an element of the STEP model structure?

A: Phase

B: Step

C: Activity

D: Level

3-12: Which of the following activities is not defined in the CTP planning stage?

A: Establish a test team

B: Understand context

C: Estimate resources

D: Establish a risk-based test strategy