<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY utf "<acronym>UTF</acronym>">
]>
<section id="utf.user-guide.glossary">
<titleabbrev>Glossary</titleabbrev>
<glossary>
<title>Glossary &hellip; or what's your name?</title>
<para role="first-line-indented">
Here is the list of terms used throughout this documentation.
</para>
<glossentry id="test-module.def">
<glossterm><firstterm>The test module</firstterm></glossterm>
<glossdef>
<simpara>
This is a single binary that performs the test. Physically a test module consists of one or more test source files,
which can be built into an executable or a dynamic library. A test module that consists of a single test source
file is called a <firstterm id="single-file-test-module.def">single-file test module</firstterm>; otherwise
it is called a <firstterm id="multi-file-test-module.def">multi-file test module</firstterm>. Logically a test
module consists of four parts: <link linkend="test-setup.def">test setup</link> (or test initialization),
<link linkend="test-body.def">test body</link>, <link linkend="test-cleanup.def">test cleanup</link> and
<link linkend="test-runner.def">test runner</link>. The test runner part is optional. If a test module is built as
an executable the test runner is built-in. If a test module is built as a dynamic library, it is run by an
external test runner.
</simpara>
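<simpara>
For illustration only, a minimal single-file test module built as an executable (so the test runner is built in)
might look as follows; the module and test case names below are arbitrary:
</simpara>
<programlisting>#define BOOST_TEST_MODULE example
#include &lt;boost/test/included/unit_test.hpp&gt;

// a single test source file: one test case containing one test assertion
BOOST_AUTO_TEST_CASE( simple_addition )
{
    BOOST_CHECK( 2 + 2 == 4 );
}</programlisting>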
</glossdef>
</glossentry>
<glossentry id="test-body.def">
<glossterm><firstterm>The test body</firstterm></glossterm>
<glossdef>
<simpara>
This is the part of a test module that actually performs the test.
Logically, the test body is a collection of <link linkend="test-assertion.def">test assertions</link> wrapped in
<link linkend="test-case.def">test cases</link>, which are organized in a <link linkend="test-tree.def">test tree
</link>.
</simpara>
</glossdef>
</glossentry>
<glossentry id="test-tree.def">
<glossterm><firstterm>The test tree</firstterm></glossterm>
<glossdef>
<simpara>
This is a hierarchical structure of <link linkend="test-suite.def">test suites</link> (non-leaf nodes) and
<link linkend="test-case.def">test cases</link> (leaf nodes).
</simpara>
</glossdef>
</glossentry>
<glossentry id="test-unit.def">
<glossterm><firstterm>The test unit</firstterm></glossterm>
<glossdef>
<simpara>
This is a collective name used to refer to either a <link linkend="test-suite.def">test suite</link> or a
<link linkend="test-case.def">test case</link>.
</simpara>
</glossdef>
</glossentry>
<glossentry id="test-assertion.def">
<glossterm><firstterm>Test assertion</firstterm></glossterm>
<glossdef>
<simpara>
This is a single binary condition (binary in the sense that it has two outcomes: pass and fail) checked
by a test module.
</simpara>
<simpara>
There are different schools of thought on how many test assertions a test case should consist of. The two polar
positions are one assertion per test case, advocated by TDD followers, and all test assertions within a single
test case, advocated by those only interested in the first error in a test module. The &utf; supports both
approaches.
</simpara>
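<simpara>
As a sketch (assuming a test module such as the one outlined under the test module entry), each of the following
tool invocations constitutes a single test assertion with a pass/fail outcome:
</simpara>
<programlisting>#include &lt;string&gt;

BOOST_AUTO_TEST_CASE( assertion_examples )
{
    std::string s( "abc" );

    BOOST_CHECK( !s.empty() );          // a single pass/fail test assertion
    BOOST_CHECK_EQUAL( s.size(), 3U );  // on failure reports both compared values
    BOOST_REQUIRE( s.c_str() != 0 );    // on failure also aborts this test case
}</programlisting>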
</glossdef>
</glossentry>
<glossentry id="test-case.def">
<glossterm><firstterm>The test case</firstterm></glossterm>
<glossdef>
<simpara>
This is an independently monitored function within a test module that
consists of one or more test assertions. The term &quot;independently monitored&quot; in the definition above is
used to emphasize the fact that all test cases are monitored independently. An uncaught exception or other abnormal
test case termination doesn't cause the testing to cease. Instead, the error is caught by the test
case execution monitor, reported by the &utf;, and testing proceeds to the next test case. Later on you are going
to see that this is one of the primary reasons to prefer multiple small test cases to a single big test function.
</simpara>
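<simpara>
The following sketch (test case names are arbitrary) illustrates what &quot;independently monitored&quot; means in
practice:
</simpara>
<programlisting>#include &lt;stdexcept&gt;

BOOST_AUTO_TEST_CASE( first_case )
{
    // the uncaught exception is intercepted by the test case execution monitor and
    // reported as an error for this test case only
    throw std::runtime_error( "unexpected" );
}

BOOST_AUTO_TEST_CASE( second_case )
{
    BOOST_CHECK_EQUAL( 1 + 1, 2 );      // still executed after the failure above
}</programlisting>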
</glossdef>
</glossentry>
<glossentry id="test-suite.def">
<glossterm><firstterm>The test suite</firstterm></glossterm>
<glossdef>
<simpara>
This is a container for one or more test cases. The test suite gives you the ability to group
test cases into a single referable entity. There are various reasons why you may opt to do so, including:
</simpara>
<itemizedlist>
<listitem>
<simpara>To group test cases per subsystem of the unit being tested.</simpara>
</listitem>
<listitem>
<simpara>To share test case setup/cleanup code.</simpara>
</listitem>
<listitem>
<simpara>To run only a selected group of test cases.</simpara>
</listitem>
<listitem>
<simpara>To see the test report split by groups of test cases.</simpara>
</listitem>
<listitem>
<simpara>To skip groups of test cases based on the result of another test unit in a test tree.</simpara>
</listitem>
</itemizedlist>
<simpara>
A test suite can also contain other test suites, thus allowing a hierarchical test tree structure to be formed.
The &utf; requires the test tree to contain at least one test suite with at least one test case. The top level
test suite - root node of the test tree - is called the master test suite.
</simpara>
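<simpara>
As a sketch, a test suite is most easily formed with automatic registration; the suite and test case names below
are arbitrary:
</simpara>
<programlisting>BOOST_AUTO_TEST_SUITE( network )        // non-leaf node of the test tree
BOOST_AUTO_TEST_SUITE( http )           // nested suite: network/http

BOOST_AUTO_TEST_CASE( parse_request )   // leaf node: network/http/parse_request
{
    BOOST_CHECK( true );
}

BOOST_AUTO_TEST_SUITE_END()             // http
BOOST_AUTO_TEST_SUITE_END()             // network</programlisting>
<simpara>
A group formed this way can, for example, be run on its own by passing the
<literal>--run_test=network/http</literal> runtime parameter to the test module.
</simpara>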
</glossdef>
</glossentry>
<glossentry id="test-setup.def">
<glossterm><firstterm>The test setup</firstterm></glossterm>
<glossdef>
<simpara>
This is the part of a test module that is responsible for the test
preparation. It includes the following operations that take place prior to the start of the test:
</simpara>
<itemizedlist>
<listitem>
<simpara>
The &utf; initialization
</simpara>
</listitem>
<listitem>
<simpara>
Test tree construction
</simpara>
</listitem>
<listitem>
<simpara>
Global test module setup code
</simpara>
</listitem>
</itemizedlist>
<simpara>
&quot;Per test case&quot; setup code, invoked for every test case it is assigned to, is also attributed to the
test initialization, even though it is executed as part of the test case.
</simpara>
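<simpara>
As a sketch, global test module setup code (and the matching cleanup) can be expressed with a global fixture; the
fixture name below is arbitrary:
</simpara>
<programlisting>struct GlobalSetup {
    GlobalSetup()  { /* global test module setup code, runs once before any test case */ }
    ~GlobalSetup() { /* matching global cleanup, runs once after the last test case   */ }
};

BOOST_GLOBAL_FIXTURE( GlobalSetup );</programlisting>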
</glossdef>
</glossentry>
<glossentry id="test-cleanup.def">
<glossterm><firstterm>The test cleanup</firstterm></glossterm>
<glossdef>
<simpara>
This is the part of a test module that is responsible for cleanup operations.
</simpara>
</glossdef>
</glossentry>
<glossentry id="test-fixture.def">
<glossterm><firstterm>The test fixture</firstterm></glossterm>
<glossdef>
<simpara>
Matching setup and cleanup operations are frequently united into a single entity called a test fixture.
</simpara>
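<simpara>
As a sketch, a per test case fixture pairs the setup (constructor) with the cleanup (destructor); the names below
are arbitrary:
</simpara>
<programlisting>struct SampleFixture {
    SampleFixture() : value( 42 ) {}    // setup: runs before the test case body
    ~SampleFixture() {}                 // cleanup: runs after the test case body
    int value;
};

BOOST_FIXTURE_TEST_CASE( uses_fixture, SampleFixture )
{
    BOOST_CHECK_EQUAL( value, 42 );     // fixture members are directly accessible
}</programlisting>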
</glossdef>
</glossentry>
<glossentry id="test-runner.def">
<glossterm><firstterm>The test runner</firstterm></glossterm>
<glossdef>
<simpara>
This is an &quot;executive manager&quot; that runs the show. The test runner's functionality should include
the following interfaces and operations:
</simpara>
<itemizedlist>
<listitem>
<simpara>
Entry point to a test module. This is usually either the function main() itself or a single function that can be
invoked from it to start testing.
</simpara>
</listitem>
<listitem>
<simpara>
Initialize the &utf; based on runtime parameters.
</simpara>
</listitem>
<listitem>
<simpara>
Select an output medium for the test log and the test results report.
</simpara>
</listitem>
<listitem>
<simpara>
Select test cases to execute based on runtime parameters.
</simpara>
</listitem>
<listitem>
<simpara>
Execute all or selected test cases.
</simpara>
</listitem>
<listitem>
<simpara>
Produce the test results report.
</simpara>
</listitem>
<listitem>
<simpara>
Generate a test module result code.
</simpara>
</listitem>
</itemizedlist>
<para role="first-line-indented">
An advanced test runner may provide additional features, including interactive <acronym>GUI</acronym> interfaces,
test coverage and profiling support.
</para>
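<simpara>
As a sketch, a custom entry point that explicitly invokes the runner supplied with the &utf; might look like the
following (assuming the dynamic-library variant of the &utf;, with an arbitrary module name):
</simpara>
<programlisting>#define BOOST_TEST_MODULE example
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_NO_MAIN
#include &lt;boost/test/unit_test.hpp&gt;

int main( int argc, char* argv[] )
{
    // forward the command line (runtime parameters) to the framework and return its result code
    return ::boost::unit_test::unit_test_main( &amp;init_unit_test, argc, argv );
}</programlisting>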
</glossdef>
</glossentry>
<glossentry id="test-log.def">
<glossterm><firstterm>The test log</firstterm></glossterm>
<glossdef>
<simpara>
This is the record of all events that occur during the testing.
</simpara>
</glossdef>
</glossentry>
<glossentry id="test-results-report.def">
<glossterm><firstterm>The test results report</firstterm></glossterm>
<glossdef>
<simpara>
This is the report, produced by the &utf; after the testing is completed, that indicates which test cases and
test suites passed and which failed.
</simpara>
</glossdef>
</glossentry>
</glossary>
</section>