Software Testing Foundations: A Study Guide for the Certified Tester Exam (2014)

Glossary

This glossary contains terms from the area of software testing as well as additional software engineering terms related to software testing. The terms are marked with an arrow at their first place of occurrence in the book. The terms not listed in the ISTQB glossary are underlined in this glossary.

The definitions of most of the following terms are taken from the “Standard Glossary of Terms used in Software Testing” Version 2.2 (2013), produced by the Glossary Working Party of the International Software Testing Qualifications Board (ISTQB). You can find the current version of the glossary here: [URL: ISTQB]. The ISTQB glossary refers to further sources of definitions.

acceptance test(ing)

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

This test may be

1. A test from the user viewpoint

2. A subset of an existing test suite that must be passed, serving as an entry criterion (start criterion) for accepting the test object into a test level

actual result

The behavior produced/observed when a component or system is tested.

alpha testing

Simulated or actual operational testing by potential customers/users or an independent test team at the developer’s site but outside of the development organization. Alpha testing is used for off-the-shelf software as a form of internal acceptance testing.

analytical quality assurance

Diagnostic measures (for example, testing) used to measure or evaluate the quality of a product.

anomaly

Any condition that deviates from expectations based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] Also called bug, defect, deviation, error, fault, failure, incident, problem.

atomic (partial) condition

A Boolean expression containing no Boolean operators (AND, OR, NOT, etc.), containing at most relational operators such as < or >.
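
For illustration, a minimal Python sketch (variable names are hypothetical) showing atomic conditions inside a compound decision:

```python
# Hypothetical example: the decision below contains two atomic (partial) conditions.
x = 7
y = 2

# "x > 5" and "y < 3" are atomic conditions: they contain relational
# operators but no Boolean operators.
# The whole expression "x > 5 and y < 3" is a compound (non-atomic) condition.
if x > 5 and y < 3:
    print("both atomic conditions are TRUE")
```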

audit

An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify the following:

§ The form or content of the products to be produced

§ The process by which the products shall be produced

§ How compliance to standards or guidelines shall be measured

back-to-back testing

Testing in which two or more variants of a component or system are executed with the same inputs, and then the outputs are compared and analyzed in case of discrepancy.

bespoke software

See special software.

beta testing

Operational testing by representative users/customers in the production environment of the user/customer. With a beta test, a kind of external acceptance test is executed in order to get feedback from the market and to create interest among potential customers. It is done before the final release. Beta testing is often used when the number of potential production environments is large.

black box test design techniques

Repeatable procedure to derive and/or select test cases based on an analysis of the specification, either functional or nonfunctional, of a component or system without reference to its internal structure.

blocked test case

A test case that cannot be executed because the preconditions for its execution cannot be fulfilled.

boundary value analysis

Failure-oriented black box test design technique in which test cases are designed based on boundary values (at boundaries or right inside and outside of the boundaries of the equivalence classes).
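
As an illustration, a minimal Python sketch (the range 1 to 100 and the function name are hypothetical) showing the values that boundary value analysis would select:

```python
# Hypothetical example: an input field accepts values from 1 to 100 inclusive.
# Boundary value analysis picks values at, just inside, and just outside
# the boundaries of the equivalence classes.
VALID_MIN, VALID_MAX = 1, 100

boundary_values = [
    VALID_MIN - 1,  # just below the lower boundary (invalid)
    VALID_MIN,      # on the lower boundary (valid)
    VALID_MIN + 1,  # just above the lower boundary (valid)
    VALID_MAX - 1,  # just below the upper boundary (valid)
    VALID_MAX,      # on the upper boundary (valid)
    VALID_MAX + 1,  # just above the upper boundary (invalid)
]

def is_valid(value: int) -> bool:
    return VALID_MIN <= value <= VALID_MAX

for v in boundary_values:
    print(v, is_valid(v))
```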

branch

The expression branch is used as follows:

§ When a component uses a conditional change of the control flow from one statement to another one (for example, in an IF statement).

§ When a component uses an unconditional change of the control flow from one statement to another one that is not the next statement (for example, using GOTO statements).

§ When the change of control flow is through more than one entry point of a component. An entry point is either the first statement of a component or any statement that can be directly reached from the outside of the component.

A branch corresponds to a directed connection (arrow) in the control flow graph.

branch coverage

The percentage of branches or decisions of a test object that have been executed by a test suite.

branch testing

A control-flow-based white box test design technique that requires executing all branches of the control flow graph (or every outcome of every decision) in the test object.
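
A minimal Python sketch (the function and its values are hypothetical) of a test object with one decision and the two test cases needed for complete branch coverage:

```python
# Hypothetical test object: one IF statement creates two branches
# (the TRUE branch and the FALSE branch).
def classify(amount: float) -> str:
    if amount < 0:
        return "invalid"      # branch taken when the decision is TRUE
    return "ok"               # branch taken when the decision is FALSE

# Two test cases are enough for 100% branch coverage here:
assert classify(-1.0) == "invalid"   # exercises the TRUE branch
assert classify(10.0) == "ok"        # exercises the FALSE branch
```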

bug

See defect.

business-process-based testing

An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

capture/playback tool, capture-and-replay tool

A tool for supporting test execution. User inputs are recorded during manual testing in order to generate automated test scripts that are executable and repeatable. These tools are often used to support automated regression testing.

cause-effect graphing

A function-oriented black box test design technique in which test cases are designed from cause-effect graphs, a graphical form of the specification. The graph contains inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

change

Rewrite or new development of a released development product (document, source code).

change order

Order or permission to perform a change of a development product.

change request

1. Written request or proposal to perform a specific change for a development product or to allow it to be implemented.

2. A request to change some software artifact due to a change in requirements.

class test

Test of one or several classes of an object-oriented system.

See also component testing.

code-based testing

See structural test and white box test design technique.

commercial off-the-shelf (COTS) software

A software product that is developed for a larger market or the general market (i.e., for a large number of customers) and that is delivered to many customers in identical form. Also called standard software.

comparator

A tool to perform automated comparison of actual results with expected results.

complete testing

See exhaustive testing.

component

1. A minimal software item that has its own specification or that can be tested in isolation.

2. A software unit that fulfills the implementation standards of a component model (EJB, CORBA, .NET).

component integration test(ing)

See integration test(ing).

component test(ing)

The testing of individual software components.

concrete (physical or low level) test case

A test case with concrete values for the input data and expected results.

See also logical test case.

condition test(ing)

Control-flow-based white box test design technique in which every atomic (partial) condition of a decision must evaluate to both TRUE and FALSE.

condition determination testing

A white box test design technique in which test cases are designed to execute single-condition outcomes that independently affect a decision outcome.
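
A minimal Python sketch of this idea for a two-condition decision (the decision "a and b" is a hypothetical example): three test cases show that each atomic condition independently affects the outcome.

```python
# Hypothetical example for a decision "a and b" with two atomic conditions.
def decision(a: bool, b: bool) -> bool:
    return a and b

# Three test cases suffice for condition determination coverage here:
# cases 1 and 2 differ only in a and change the outcome -> a affects the decision;
# cases 1 and 3 differ only in b and change the outcome -> b affects the decision.
test_cases = [
    (True,  True),   # 1: outcome True
    (False, True),   # 2: outcome False (only a changed)
    (True,  False),  # 3: outcome False (only b changed)
]

for a, b in test_cases:
    print(a, b, decision(a, b))
```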

configuration

1. The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

2. State of the environment of a test object, which must be fulfilled as a precondition for executing the test cases.

configuration item

Software object or test environment that is under configuration management.

configuration management

Activities for managing the configurations.

constructive quality assurance

The use of methods, tools, and guidelines that contribute to making sure the following conditions are met:

§ The product to be produced and/or the production process have certain desired attributes from the outset.

§ Errors and mistakes are minimized or prevented.

control flow

An abstract representation of all possible sequences of events (paths) during execution of a component or system. Often represented in graphical form, see control flow graph.

control flow anomaly

Statically detectable anomaly in the control flow of a test object (for example, statements that cannot be reached).

control-flow-based test

Dynamic test, whose test cases are derived using the control flow of the test object and whose test coverage is determined against the control flow.

control flow graph

§ A graphical representation of all possible sequences of events (paths) in the execution through a component or system.

§ A formal definition: A directed graph G = (N, E, n_start, n_final), where N is the finite set of nodes, E is the set of directed edges (branches), n_start is the start node, and n_final is the end node. Control flow graphs are used to show the control flow of components.

coverage

Criterion for the intensity of a test (expressed as a percentage); it differs according to the test method. Coverage is usually measured using tools.

cyclomatic number

Metric for complexity of a control flow graph. It shows the number of linearly independent paths through the control flow graph or a component represented by the graph.
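
The cyclomatic number is commonly computed as v(G) = E − N + 2 for a control flow graph with E edges, N nodes, and a single entry and exit. A small worked example in Python (the graph of a single IF-THEN-ELSE is used for illustration):

```python
# Cyclomatic number v(G) = E - N + 2.
# Example: a single IF-THEN-ELSE has N = 4 nodes (decision, then, else, join)
# and E = 4 edges.
edges, nodes = 4, 4
v = edges - nodes + 2
print(v)   # 2 linearly independent paths (matches the two decision outcomes)
```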

data quality

The degree to which data in an IT system is complete, up-to-date, consistent, and (syntactically and semantically) correct.

data flow analysis

A form of static analysis that is based on the definition and usage of variables and reveals incorrect access sequences on the variables of the test object.

data flow anomaly

Unintended or unexpected sequence of operations on a variable.
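
For illustration, a minimal Python sketch of typical anomalies on a variable x; the anomaly labels in the comments (ur, dd, du) follow common data flow analysis terminology and the function names are hypothetical:

```python
# Hypothetical examples of data flow anomalies on the variable x
# (a static analyzer would flag these; the functions are never called).

def use_before_definition():
    print(x)          # ur-anomaly: x is used (referenced) before it is defined
    x = 1             # (calling this would raise UnboundLocalError)

def redundant_definition():
    x = 1             # dd-anomaly: this value of x is overwritten
    x = 2             # without ever being used
    return x

def unused_definition():
    x = 42            # du-anomaly: x is defined but never used afterwards
    return 0
```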

data flow coverage

The percentage of definition-use pairs that have been executed by a test suite.

data flow test techniques

White box test design techniques in which test cases are designed using data flow analysis and where test completeness is assessed using the achieved data flow coverage.

data security (security)

Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization [ISO 25010].

From [ISO 9126]: The capability of the software product to protect programs and data from unauthorized access, whether this is done voluntarily or involuntarily.

dead code

See unreachable code.

debugger

A tool used by programmers to reproduce failures, investigate the state of programs, and find the corresponding defect. Debuggers enable programmers to execute programs step-by-step, to stop a program at any program statement, and to set and display program variables.

debugging

The process of finding, analyzing, and removing the causes of failures in software.

decision coverage

The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

See also branch coverage.

decision table

A table showing rules that consist of combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects). These tables can be used to design test cases.
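
As an illustration, a small decision table for a hypothetical login check, written as Python test data so that each rule becomes one test case:

```python
# Hypothetical decision table for a login check.
# Conditions: account exists, password correct; Action: expected result.
decision_table = [
    # (account_exists, password_ok) -> expected_action
    ((True,  True),  "grant access"),
    ((True,  False), "show error, count failed attempt"),
    ((False, True),  "show error"),
    ((False, False), "show error"),
]

# Each rule (column of the decision table) becomes one test case.
for (account_exists, password_ok), expected in decision_table:
    print(account_exists, password_ok, "->", expected)
```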

decision test(ing)

Control-flow-based white box test design technique requiring that each decision outcome (TRUE and FALSE) is exercised at least once for the test object. (An IF statement has two outcomes; a CASE or SWITCH statement has as many outcomes as there are cases defined.)

defect

A flaw in a component or system that can cause it to fail to perform its required function, such as, for example, an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

defect database

1. A list of all known defects in a system or component and their associated documents, as well as their states.

2. A current and complete list with information about known failures.

Defect Detection Percentage (DDP)

The number of defects found by testing (or in a given test level) divided by the number found in that period plus the number found up to a defined future point in time (for example, six months after release).
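
A small worked example in Python (the defect counts are hypothetical):

```python
# Hypothetical numbers: 80 defects found during system testing,
# 20 more found in the six months after release.
found_in_test_level = 80
found_later = 20
ddp = found_in_test_level / (found_in_test_level + found_later)
print(f"DDP = {ddp:.0%}")   # -> DDP = 80%
```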

defect management

The process of recognizing, investigating, taking action, and disposing of detected defects. It involves recording defects, classifying them, and identifying their impact.

defect masking

An occurrence in which one defect prevents the detection of another.

development model

See software development model.

developer test

A test that is under the (sole) responsibility of the developer of the test object (or the development team). Often seen as equal to component test.

driver

A program or tool that makes it possible to execute a test object, to feed it with test input data, and to receive test output data and reactions.
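
A minimal sketch of a test driver in Python (the component "compute_discount" and its inputs are hypothetical stand-ins):

```python
# Minimal test driver sketch for a hypothetical component.
def compute_discount(order_value: float) -> float:
    """Stand-in for the real component under test."""
    return 0.03 * order_value if order_value > 100 else 0.0

def driver():
    test_inputs = [50.0, 100.0, 150.0]
    for value in test_inputs:
        actual = compute_discount(value)            # execute the test object
        print(f"input={value} -> output={actual}")  # record the reaction

if __name__ == "__main__":
    driver()
```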

dummy

A special program, normally restricted in its functionality, to replace the real program during testing.

dynamic analysis

The process of evaluating the behavior (e.g., memory performance, CPU usage) of a system or component during execution.

dynamic tests

Tests that execute code; tests in the narrow sense, i.e., testing as the general public understands it. The opposite is static testing. Static and dynamic tests together form the whole topic of testing.

efficiency

A set of software characteristics (for example, execution speed, response time) relating to performance of the software and use of resources (for example, memory) under stated conditions (normally increasing load).

equivalence class

See equivalence partition.

equivalence partition

A portion of an input or output domain for which the behavior of a component or system is assumed to be the same; judgment being based on the specification.

equivalence class partitioning

Partitioning input or output domains of a program into a limited set of classes, where the elements of a class show equivalent functional behavior.

From the ISTQB glossary: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
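
For illustration, a minimal Python sketch (the age ranges and ticket-price rule are hypothetical) of partitioning an input domain and picking one representative per partition:

```python
# Hypothetical example: partitioning the input "age" for a ticket price rule.
# Specification: 0-12 child price, 13-64 full price, 65 and above senior price.
equivalence_classes = {
    "invalid_negative": [-1],          # invalid partition
    "child":            [0, 7, 12],    # valid partition 0..12
    "adult":            [13, 40, 64],  # valid partition 13..64
    "senior":           [65, 80],      # valid partition >= 65
}

# One representative per partition covers each class at least once.
representatives = {name: values[len(values) // 2]
                   for name, values in equivalence_classes.items()}
print(representatives)
```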

error

A human action that produces an incorrect result. Also a general, informally used term for terms like mistake, fault, defect, bug, failure.

error guessing

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made and to design tests specifically to expose them.

exception handling

Behavior of a component or system in response to wrong input, from either a human user or another component or system, or due to an internal failure.

exhaustive testing

A test approach in which the test suite comprises all combinations of input values and preconditions. Usually this is not practically achievable.

exit criteria

The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task that have not been finished. Achieving a certain degree of test coverage for a white box test is an example of an exit criterion.

expected result

The predicted or expected output or behavior of a system or its components, as defined by the specification or another source (for every test case).

exploratory testing

An informal test design technique where the tester actively controls the design of the tests as those tests are performed. The tester uses the information gained while testing to design new and better tests.

Extreme Programming

A lightweight, agile software engineering methodology in which a core practice is test-first programming.

failure

1. Deviation of the component or system from its expected delivery, service, or result. (For a test result, the observed result and the expected [specified or predicted] result do not match.)

2. Result of a fault that, during test execution, shows an externally observable wrong result.

3. Behavior of a test object that does not conform to a specified functionality that should be suitable for its use.

failure class / failure classification / failure taxonomy

Classification of the found failures by their severity from a user point of view (for example, the degree of impairment of product use).

failure priority

Determination of how urgent it is to correct the cause of a failure by taking into account failure severity, necessary correction work, and the effects on the whole development and test process.

fault

An alternative term for defect.

fault masking

A fault in the test object is compensated by one or more faults in other parts of the test object in such a way that it does not cause a failure. (Note: Such faults may then cause failures after other faults have been corrected.)

fault revealing test case

A test case that, when executed, leads to a different result than the specified or expected one.

fault tolerance

1. The capability of the software product or a component to maintain a specified level of performance in case of wrong inputs (see also robustness).

2. The capability of the software product or a component to maintain a specified level of performance in case of software faults (defects) or violations of its specified interface.

field test(ing)

Test of a preliminary version of a software product by (representatively) chosen customers with the goal of detecting the influence of production environments that are incompletely known or specified. Also a test to check market acceptance.

See also beta testing.

finite state machine

A computation model consisting of a limited number of states and state transitions, usually with corresponding actions. Also called state machine.

functional requirement

A requirement that specifies a function that a component or system must perform.

See also functionality.

functional testing

1. Checking functional requirements.

2. Dynamic test for which the test cases are developed based on an analysis of the functional specification of the test object. The completeness of this test (its coverage) is assessed using the functional specification.

functionality

The capability of the software product to provide functions that meet stated and implied needs when the software is used under specified conditions. Functionality describes WHAT the system must do. Implementation of functionality is the precondition for the system to be usable at all. Functionality includes the following characteristics: suitability, correctness, interoperability, compliance, and security [ISO 9126].

GUI

Graphical user interface.

incident database

A collection of information about incidents, usually implemented as a database. An incident database shall make it possible to follow up incidents and extract data about them.

informal review

A review not based on a formal (documented) procedure.

inspection

A type of review that relies on visual examination of documents to detect defects—for example, violations of development standards and nonconformance to higher-level documentation. Inspection is the most formal review technique and therefore always based on a documented procedure.

instruction

See statement.

instrumentation

The insertion of additional logging or counting code into the source code of a test object (by a tool) in order to collect information about program behavior during execution (e.g., for measuring code coverage).
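
A minimal sketch of what instrumentation produces, shown here as manually inserted counting probes in Python (tools insert equivalent probes automatically; the function and counter names are hypothetical):

```python
# Hypothetical illustration of instrumentation: counting probes record
# which branches of the test object were executed.
coverage_counters = {"discount_branch": 0, "no_discount_branch": 0}

def compute_discount(order_value: float) -> float:
    if order_value > 100:
        coverage_counters["discount_branch"] += 1       # inserted probe
        return 0.03 * order_value
    coverage_counters["no_discount_branch"] += 1        # inserted probe
    return 0.0

compute_discount(150.0)
compute_discount(50.0)
print(coverage_counters)   # both counters > 0 -> 100% branch coverage here
```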

integration

The process of combining components or systems into larger assemblies.

integration test(ing)

Testing performed to expose defects in the interfaces and in the interactions between integrated components.

level test plan

A plan for a specified level of testing. It identifies the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the associated risk(s). In the title of the plan, the word level is replaced by the organization’s name for the particular level being documented by the plan (e.g., Component Test Plan, Component Integration Test Plan, System Test Plan, and Acceptance Test Plan).

load test(ing)

Measuring the behavior of a system as the load increases (e.g., increase in the number of parallel users and/or number of transactions) in order to determine what load can be handled by the component or system. (Load testing is a kind of performance testing.)

logical test case

A test case without concrete values for the input data and expected results. Usually, value ranges (equivalence classes) are given.

maintainability

The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

maintenance / maintenance process

Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.

management review

1. A review evaluating project plans and development processes.

2. A systematic evaluation of software acquisition, supply, development, operation, or maintenance process performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.

master test plan

A detailed description of test objectives to be achieved and the means and schedule for achieving them, organized to coordinate testing activities for some test object or set of test objects. A master test plan may comprise all testing activities on the project; further detail of particular test activities could be defined in one or more test subprocess plans (for example, a system test plan or a performance test plan or level test plans).

metric

1. A value resulting from measuring a certain program or component attribute. Computing metrics is a task of static analysis.

2. A measurement scale and the method used for measurement.

milestone

A point in time in a project or process at which a defined result should be ready.

mistake

See error.

mock-up, mock, mock object

A program in the test environment that takes the place of a stub or dummy but contains more functionality. This makes it possible to trigger desired results or behavior.
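
A minimal sketch using Python's unittest.mock to replace a collaborator and trigger a desired result (the names "checkout" and "gateway" are hypothetical):

```python
from unittest.mock import MagicMock

def checkout(gateway, amount):
    if gateway.charge(amount):      # the mock triggers the desired behavior
        return "paid"
    return "declined"

gateway = MagicMock()
gateway.charge.return_value = True            # configure the desired result
assert checkout(gateway, 25.0) == "paid"
gateway.charge.assert_called_once_with(25.0)  # the mock also records calls
```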

moderator

The leader and main person responsible for an inspection or a review meeting.

module testing

Test of a single module of a modular software system.

See component testing.

monitor

A software tool or hardware device that runs concurrently with the component or system under test and supervises, records, analyzes, and/or verifies its behavior.

multiple-condition testing

Control-flow-based white box test design technique in which test cases are designed to execute all combinations of single-condition outcomes (true and false) within one decision statement.
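
For illustration, a minimal Python sketch for a decision with two atomic conditions (the decision "a and b" is hypothetical); with n conditions, all 2^n combinations are required:

```python
# Multiple-condition testing of "a and b" requires all four combinations.
from itertools import product

def decision(a: bool, b: bool) -> bool:
    return a and b

for a, b in product([True, False], repeat=2):   # TT, TF, FT, FF
    print(a, b, "->", decision(a, b))
```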

negative test(ing)

1. Usually a functional test case with inputs that are not allowed (following the specification). The test object should react in a robust way, such as, for example, rejecting the values and executing appropriate exception handling.

2. Testing aimed at showing that a component or system does not work, such as, for example, a test with wrong input values. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique.

nonfunctional requirement

A requirement that does not directly relate to functionality but to how well or with which quality the system fulfills its function. Its implementation has a great influence on how satisfied the customer or user is with the system. The attributes from [ISO 9126] are reliability, efficiency, usability, maintainability, and portability.

The attributes from [ISO 25010] are performance efficiency, compatibility, usability, reliability, security, maintainability, and portability.

nonfunctional testing

Testing the nonfunctional requirements.

nonfunctional tests

Tests for the nonfunctional requirements.

patch

1. A modification made directly to an object code without modifying the source code or reassembling or recompiling the source program.

2. A modification made to a source program as a last-minute fix or as an afterthought.

3. Any modification to a source or object program.

4. To perform a modification such as described in the preceding three definitions.

5. Unplanned release of a software product with corrected files in order to correct particular (often blocking) faults, possibly only in a preliminary way.

path

1. A path in the program code: A sequence of events (e.g., executable statements) of a component or system from an entry point to an exit point.

2. A path in the control flow graph: An alternating sequence of nodes and branches in a control flow graph. A complete path starts with the node n_start and ends with the node n_final of the control flow graph. Executable and non-executable paths can be distinguished.

path testing, path coverage

Control-flow-based white-box test design technique that requires executing all different complete paths of a test object. In practice, this is usually not feasible due to the large number of paths.

performance

The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. In [ISO 25010] called performance efficiency.

performance testing

The process of testing to determine the performance of a software product for certain use cases, usually dependent on increasing load.

Point of Control (PoC)

Interface used to send test data to the test object.

Point of Observation (PoO)

Interface used to observe and log the reactions and outputs of the test object.

postconditions

Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

precondition

Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

preventive software quality assurance

Use of methods, tools, and procedures that contribute to designing quality into the product. As a result of their application, the product should then have certain desired characteristics, and faults are prevented or their effects minimized.

Note: Preventive (constructive) software quality assurance is often used in early stages of software development. Many defects can be avoided when the software is developed in a thorough and systematic manner.

problem

See defect.

problem database

1. A list of known failures or defects/faults in a system or component and their state of repair.

2. A database that contains current and complete information about all identified defects.

process model

See software development model.

production environment

The hardware and software products, as well as other software with its data content (including operating systems, database management systems and other applications), that are in use at a certain user site.

This environment is the place where the test object will be operated or used.

quality

1. The totality of characteristics and their values relating to a product or service. They relate to the product’s ability to fulfill specified or implied needs.

2. The degree to which a component, system, or process meets user/customer needs and expectations.

3. The degree to which a set of inherent characteristics fulfills requirements.

quality assurance

All activities within quality management focused on providing confidence that quality requirements are fulfilled.

quality attribute

1. A characteristic of a software product used to judge or describe its quality. A software quality attribute can also be refined through several steps into partial attributes.

2. Ability or characteristic which influences the quality of a unit.

quality characteristic

See quality attribute.

random testing

A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile in the production environment. Note: This technique can, among others, be used for testing nonfunctional attributes such as reliability and performance.

regression testing

Testing a previously tested program or a partial functionality following modification to show that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.

release

A particular version of a configuration item that is made available for a specific purpose, such as, for example, a test release or a production release.

reliability

A set of characteristics relating to the ability of the software product to perform its required functions under stated conditions for a specified period of time or for a specified number of operations.

requirement

A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

requirements-based testing

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, such as, for example, tests that exercise specific functions or probe nonfunctional attributes such as reliability or usability.

requirements definition

1. Written documentation of the requirements for a product or partial product to be developed. Typically, the specification contains functional requirements, performance requirements, interface descriptions, design requirements, and development standards.

2. Phase in the general V-model in which the requirements for the system to be developed are collected, specified, and agreed upon.

retesting

Testing that executes test cases that failed the last time they were run in order to verify the success of correcting faults.

review

1. Measuring, analyzing, and checking one or several characteristics of a unit (the review object) and comparing them with defined requirements in order to decide whether conformance has been achieved for every characteristic.

2. Abstract term for all analytical quality assurance measures independent of method and object.

3. An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Reviews include management review, informal review, technical review, inspection, and walkthrough.

reviewable (testable)

A work product or document is reviewable or testable if the work is complete enough to enable a review or test of it.

risk

A factor that could result in future negative consequences; usually expressed as impact and likelihood.

risk-based testing

An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

robustness

The degree to which a component or system can function correctly in the presence of invalid inputs or stressful or extreme environmental conditions.

robustness test(ing)

See negative testing.

role

Description of specific skills, qualifications, and work profiles in software development. Roles should be filled by the persons responsible for them in the project.

safety-critical system

A system whose failure or malfunction may result in death or serious injury to people, loss or severe damage to equipment, or environmental harm.

security test

Testing to determine the access or data security of the software product. Also testing for security deficiencies.

severity

The degree of impact that a defect has on the development or operation of a component or system.

simulator

1. A tool with which the real or production environment is modeled.

2. A system that displays chosen patterns of behavior of a physical or abstract system.

smoke test

1. Usually an automated test (a subset of all defined or planned test cases) that covers the main functionality of a component or system in order to ascertain that the most crucial functions of a program work, without bothering with finer details.

2. A smoke test is often implemented without comparing the actual and the expected output. Instead, a smoke test tries to produce openly visible wrong results or crashes of the test object. It is mainly used to test robustness.

software development model/software development process

Model or process that describes a defined organizational framework of software development. It defines which activities shall be executed by which roles in which order, which results will be produced, and how the results are checked by quality assurance.

software item

Identifiable (partial) result of the software development process (for example, a source code file, document, etc.).

software quality

The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

specification

A document that specifies, ideally in a complete, precise, concrete and verifiable form, the requirements or other characteristics of a component or system. It serves the developers as a basis for programming, and it serves the testers as a basis for developing test cases with black box test design methods. (Often, a specification includes procedures for determining whether these requirements have been satisfied.)

special software

Software developed for one or a group of customers. The opposite is standard or commercial off-the-shelf (COTS) software. The British term is bespoke software.

state diagram

A diagram or model that describes the states that a component or system can assume and that shows the events or circumstances that cause and/or result from a change from one state to another.

state transition testing

A black box test design technique in which test cases are designed to execute valid and invalid state transitions of the test object from the different states. The completeness (coverage) of such a test is judged by looking at the states and state transitions.
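
A minimal Python sketch (the document states, events, and transition table are hypothetical) showing one valid-transition test and one invalid-transition test:

```python
# Hypothetical state machine for a document, as a transition table.
transitions = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"):  "draft",
}

def next_state(state: str, event: str) -> str:
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return transitions[(state, event)]

# Valid transition test:
assert next_state("draft", "submit") == "in_review"

# Invalid transition test (the event is not allowed in this state):
try:
    next_state("published", "submit")
except ValueError:
    pass  # expected: the invalid transition is rejected
```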

statement

A syntactically defined entity in a programming language. It is typically the smallest indivisible unit of execution. Also referred to as an instruction.

statement test(ing)

Control-flow-based test design technique that requires, as a minimum, that every executable statement of the program is executed at least once.

statement coverage

The percentage of executable statements that have been exercised by a test suite.

static analysis

Analysis of a document (e.g., requirements or code) carried out without executing it.

static analyzer

A tool that carries out static analysis.

static testing

Testing of a component or system at the specification or implementation level without execution of any software (e.g., using reviews or static analysis).

stress testing

Test of system behavior under overload, for example, running it with excessive data volumes, too many parallel users, or incorrect usage.

See also robustness.

structural test(ing), structure-based test(ing)

White box test design technique in which the test cases are designed using the internal structure of the test object. Completeness of such a test is judged using coverage of structural elements (for example, branches, paths, data). General term for control- or data-flow-based test.

stub

A skeletal or special-purpose implementation of a software component, needed in component or integration testing and used to replace or simulate not-yet-developed components during test execution.
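
A minimal sketch of a stub in Python (the exchange-rate service and the invoicing component are hypothetical): the stub returns fixed values so the component can be tested before the real service exists.

```python
def get_exchange_rate_stub(currency: str) -> float:
    """Stub: returns fixed values instead of calling the real service."""
    rates = {"EUR": 1.0, "USD": 1.1}
    return rates.get(currency, 1.0)

def invoice_total(amount_eur: float, currency: str) -> float:
    return amount_eur * get_exchange_rate_stub(currency)

print(invoice_total(100.0, "USD"))   # testable without the real service
```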

syntax testing

A test design technique in which test cases are designed based on a formal definition of the input syntax.

system integration testing

Testing the integration of systems (and packages); testing interfaces to external organizations (e.g., Electronic Data Interchange, Internet).

system testing

The process of testing an integrated system to ensure that it meets specified requirements.

technical review

A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.

test

A set of one or more test cases.

test automation

1. The use of software tools to design or program test cases with the goal to be able to execute them repeatedly using the computer.

2. To support any test activities by using software tools.

test basis

All documents from which the requirements of a component or system can be inferred. The documentation on which the design and choice of the test cases is based.

test bed

An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. Also called test environment. Also used as an alternative term for test harness.

test case

A set of input values, execution preconditions, expected results, and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

test case explosion

Expression for the exponentially increasing effort of an exhaustive test as the number of parameters increases.

test case specification

A document specifying a set of test cases.

test condition

An item or event of a component or system that can be verified by one or more test cases, such as, for example, a function, transaction, feature, quality attribute, or structural element.

test coverage

See coverage.

test cycle

1. Execution of the fundamental test process against a single identifiable release of the test object. Its end results are orders for defect corrections or changes.

2. Execution of a series of test cases.

test data

1. Input or state values for a test object and the expected results after execution of the test case.

2. Data that exists (for example, in a database) before a test is executed and that affects or is affected by the component or system under test.

test design technique

A planned procedure (based on a set of rules) used to derive and/or select test cases. There are specification-based, structure-based, and experience-based test design techniques. Also called test technique.

test driver

See driver.

test effort

The resources (to be estimated or analyzed) for the test process.

test environment

See test bed.

test evaluation

Analysis of the test protocol or test log in order to determine if failures have occurred. If necessary, these are assigned a classification.

test execution

The process of running test cases or test scenarios (an activity in the test process) that produce actual result(s).

test harness (test bed)

A test environment comprising all stubs and drivers needed to execute test cases. Even logging and evaluation tasks may be integrated into a test harness.

test infrastructure

The artifacts needed to perform testing, usually consisting of test environments, test tools, the office environment and equipment for the testers, and other tools (like mail, Internet, text editors, etc.).

test interface

See Point of Control (PoC) and Point of Observation (PoO).

test level

A group of test activities that are executed and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test (from the generic V-model).

test log

The written result of a test run or a test sequence (in the case of automated tests often produced by test tools). From the log it must be possible to see which parts were tested when, by whom, how intensively, and with what result.

test logging

The process of recording information about tests executed into a test log.

test management

1. The planning, estimating, monitoring, control, and evaluation of test activities, typically carried out by a test manager.

2. Group of persons responsible for a test.

test method

See test design technique.

test metric

A measurable attribute of a test case, test run, or test cycle, including measurement instructions.

test object

The component, integrated partial system, or system (in a certain version) that is to be tested.

test objective

A reason or purpose for designing and executing a test. Examples are as follows:

1. General objective: Finding defects.

2. Finding special defects using suitable test cases.

3. Showing that certain requirements are fulfilled in or by the test object as a special objective for one or more test cases.

test oracle

An information source to determine expected results of a test case (usually the requirements definition or specifications).

A test oracle may also be an existing system (for a benchmark), other software, a user manual, or an individual's specialized knowledge, but it should not be the code, because then the code would be used as the basis for testing and would thus be tested against itself.

test phase

A set of related activities (in the test process) that are intended for the design of an intermediate work product (for example, design of a test specification). This term differs from test level!

test plan

A document describing the scope, approach, resources, and schedule of intended test activities (from [IEEE 829-1998]).

It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, and the techniques for measuring results, along with a rationale for their choice. Additionally, risks requiring contingency planning are described. Thus, a test plan is a record of the test planning process.

The document can be divided into a master test plan and level test plans.

test planning

The activity of establishing or updating a test plan.

test procedure / test script / test schedule

1. Detailed instructions about how to prepare, execute, and evaluate the results of a certain test case.

2. A document specifying a sequence of actions for the execution of a test.

test process

The fundamental test process comprises all activities necessary for the test in a project, such as test planning and control, test analysis and design, test implementation and execution, evaluation of exit criteria and reporting, and test closure activities.

test report

An alternative term for test summary report.

test result

1. All documents that are written during a test cycle (mainly the test log and its evaluation).

2. The release or rejection of a test object (depending on the number and severity of failures discovered).

test robot

A tool to execute tests that uses open or accessible interfaces of the test objects (for example, the GUI) to feed in input values and read their reactions.

test run

Execution of one or several test cases on a specific version of the test object.

test scenario

A set of test sequences.

test schedule

1. A list of activities, tasks, or events of the test process identifying their intended start and finish dates and/or times and interdependencies (among others, the assignment of test cases to testers).

2. List of all test cases, usually grouped by topic or test objective.

test script

Instructions for the automatic execution of a test case or a test sequence (or higher-level control of further test tools) in a suitable programming language.

test specification

1. A document that consists of a test design specification, test case specification and/or test procedure specification.

2. The activity of specifying a test, typically part of “test analysis and design” in the test life cycle.

test strategy

1. Distribution of the test effort over the parts to be tested or the quality characteristics of the test object that should be fulfilled. Selection and definition of the order (or the interaction) of test methods and the order of their application on the different test objects. Definition of the test coverage to be achieved by each test method.

2. Abstract description of the test levels and their corresponding start and exit criteria. Usually, a test strategy can be used for more than one project.

test suite / test sequence

A set of several test cases in which the postcondition of one test is often used as the precondition for the next one.

test summary report

A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. Also called test report.

test technique

1. See test design technique.

2. A combination of activities to systematically design a test work product. In addition to designing test cases, test techniques are available for activities such as test estimation, defect management, product risk analysis, test execution, and reviews.

testability

1. The amount of effort and time needed to test the functionality and other characteristics of the system (even after each maintenance change).

2. Ability of the tested system to be tested. (Aspects are openness of the interface, documentation quality, ability to partition the system into smaller units, and ability to model the production environment in the test environment.)

tester

1. An expert in test execution and reporting defects with knowledge about the application domain of the respective test object.

2. A general term for all people working in testing.

test-first programming

Software development process where test cases defining small controllable implementation steps are developed before the code is developed. Also called test-first design, test-first development, test-driven design, or test-driven development.
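
A minimal sketch of the idea in Python (the leap-year example is hypothetical): the tests are written first and fail, then just enough code is written to make them pass.

```python
import unittest

def leap_year(year: int) -> bool:
    # Implementation written after (and driven by) the tests below.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```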

testing

The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

testware

All documents and possibly tools that are produced or used during the test process and are required to plan, design, and execute tests. Such documents may include scripts, inputs, expected results, setup and clean-up procedures, files, databases, environment, and any additional software or utilities used in testing. Everything should be usable during maintenance and therefore must be transferable and possible to update.

traceability

The ability to identify related items in documentation and software, especially requirements with associated tests.

tuning

Changing programs or program parameters and/or expanding hardware to optimize the time behavior of a hardware/software system.

unit testing

See component testing.

unit test

See component test.

unnecessary test

A test that is redundant with another already present test and thus does not lead to new results.

unreachable code

Code that cannot be reached and therefore is impossible to execute.

use case

A sequence of transactions in a dialogue between an actor and a component or system with a tangible result. An actor can be a user or anything that can exchange information with the system.

use case testing

A black box test design technique in which test cases are designed to execute scenarios of use cases.

user acceptance test

An acceptance test on behalf of or executed by users.

validation

Testing if a development result fulfills the individual requirements for a specific use.

verification

1. Checking if the outputs from a development phase meet the requirements of the phase inputs.

2. Mathematical proof of correctness of a (partial) program.

version

Development state of a software object at a certain point of time. Usually given by a number.

See also configuration.

V-model (generic)

A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle and how intermediate products can be verified and validated. Many different variants of the V-model exist nowadays.

volume testing

Testing in which large amounts of data are manipulated or the system is subjected to large volumes of data.

See also stress testing and load testing.

walkthrough

A manual, usually informal review method to find faults, defects, unclear information, and problems in written documents. A walkthrough is done using a step-by-step presentation by the author. Additionally, it serves to gather information and to establish a common understanding of its content. Note: For ISTQB purposes, a walkthrough is a formal review as opposed to the informal review.

white box test design technique

Any technique used to derive and/or select test cases based on an analysis of the internal structure of the test object (see also structural test).