[/
/ Copyright (c) 2003-2015 Gennadiy Rozental
/
/ Distributed under the Boost Software License, Version 1.0. (See accompanying
/ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
/]

[#ref_usage_recommendations][section Practical usage recommendations]

The following pages collect tips and recommendations on how to use and apply the &utf; in your
day-to-day practice. You do not have to follow them, but I have found them handy.

[h3 General]
[h4 Prefer offline compiled libraries to the inline included components]

If you just want to write a quick, simple test in an environment where you have never used Boost.Test
before, then yes, use the included components. But if you plan to use Boost.Test on a permanent
basis, the small investment of time needed to build (if not built yet), install, and update your
makefiles/project settings will soon pay for itself in the form of shorter compilation times. Why
make your compiler do the same work over and over again?

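The difference between the two usage variants comes down to a single include (a sketch; the module
name `my_tests` is arbitrary, and dynamic linking additionally requires defining
`BOOST_TEST_DYN_LINK`):

``
// Inline included components: the whole framework is recompiled
// into this translation unit on every build.
#define BOOST_TEST_MODULE my_tests
#include <boost/test/included/unit_test.hpp>
``

versus

``
// Offline compiled library: only lightweight headers are parsed;
// the test runner comes from the prebuilt Boost.Test library you
// link against.
#define BOOST_TEST_MODULE my_tests
#include <boost/test/unit_test.hpp>
``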
[h4 If you use only free-function-based test cases, switch to the automatic registration facility]

Switching to automatic registration is really easy, and you no longer need to worry about a forgotten
test case registration.

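A minimal sketch of an automatically registered test case (the module name and the check are
illustrative):

``
#define BOOST_TEST_MODULE auto_registration_example
#include <boost/test/included/unit_test.hpp>

// No initialization function and no manual test_suite::add calls:
// the macro registers the test case with the framework by itself.
BOOST_AUTO_TEST_CASE( free_test_function )
{
    BOOST_CHECK( 2 + 2 == 4 );
}
``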
[h4 To find the location of the first error reported by a test tool within a reused template function, use a special hook within the framework headers]

In some cases you reuse the same template-based code from within one test case (actually, I recommend
a better solution for such cases; see below). Now, if an error gets reported by a test tool within
that reused code, you may have difficulty locating where exactly the error occurred. To address this
issue you could either add __BOOST_TEST_MESSAGE__ statements to the templated code that log the
current type id of the template parameters, or you can use a special hook located in
`unit_test_result.hpp` called `first_failed_assertion()`. If you set a breakpoint right on the line
where this function is defined, you will be able to unwind the stack and see where the error actually
occurred.

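The __BOOST_TEST_MESSAGE__ approach might look like this (a sketch; the string produced by
`typeid(...).name()` is compiler-specific):

``
#include <typeinfo>

template<typename TestType>
void specific_type_test()
{
    // Shown only when the log level permits messages; it identifies
    // which instantiation produced any subsequent failure.
    BOOST_TEST_MESSAGE( "testing with type: " << typeid(TestType).name() );
    // ... the actual checks
}
``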
[h4 To test a reusable template-based component with different template parameters, use the test case template facility]

If you are writing a unit test for a generic reusable component, you may need to test it against a
set of different template parameter types. Most probably you will end up with code like this:

``
template<typename TestType>
void specific_type_test( TestType* = 0 )
{
    MyComponent<TestType> c;

    // ... here we perform the actual testing
}

void my_component_test()
{
    specific_type_test( (int*)0 );
    specific_type_test( (float*)0 );
    specific_type_test( (UDT*)0 );
    // ...
}
``

This is exactly the situation where you would use the test case template facility. It not only
simplifies this kind of unit testing by automating some of the work; in addition, every argument type
gets tested separately under the unit test monitor. As a result, if one of the types produces an
exception or a non-fatal error, you can still continue and get the results from testing with the
other types.

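With automatic registration the same tests can be expressed as a test case template (a sketch;
`MyComponent` and `UDT` are the placeholders from the example above):

``
#include <boost/mpl/list.hpp>

typedef boost::mpl::list<int, float, UDT> test_types;

// Instantiated and registered once per type in test_types; each
// instantiation runs under its own unit test monitor.
BOOST_AUTO_TEST_CASE_TEMPLATE( my_component_test, TestType, test_types )
{
    MyComponent<TestType> c;
    // ... here we perform the actual testing
}
``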
[h4 Prefer __BOOST_CHECK_EQUAL__ to __BOOST_CHECK__]

Even though the __BOOST_CHECK__ tool is the most generic and can validate any expression convertible
to `bool`, I would recommend using, where possible, the more specific tools dedicated to the task.
For example, if you need to validate some variable against an expected value, use
__BOOST_CHECK_EQUAL__ instead. The main advantage is that in case of failure you will see the
mismatched values, which is the information most useful for error identification. The same applies to
the other tools supplied by the framework.

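Compare, for example, what the two tools report for the same failed check (a sketch; `compute()` is a
hypothetical function under test):

``
int expected = 5;
int actual   = compute();               // suppose it returns 6

BOOST_CHECK( actual == expected );      // failure shows only the expression text
BOOST_CHECK_EQUAL( actual, expected );  // failure also shows the values [6 != 5]
``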
[h3 IDE usage recommendations]

This recommendation is illustrated using Microsoft Visual Studio as an example, but you can apply
similar steps in other IDEs.

[h4 Use custom build step to automatically start test program after compilation]

I find it most convenient to run the test program as a post-build step of the compilation. To do so,
use the project property page:

[$images/post_build_event.jpg]

The full command you need in the "Command Line" field is:

``
"$(TargetDir)\$(TargetName).exe" --__param_result_code__=no --__param_report_level__=no
``


Note that both the report level and the result code are suppressed. This way the only output you may
see from this command is possible runtime errors. But the best part is that you can jump through
these errors using the same keyboard shortcuts/mouse clicks you use for compilation error analysis:

[$images/post_build_out.jpg]
[h4 If you get a fatal exception somewhere within a test case, make the debugger break at the point of failure by adding an extra command line argument]

If you get a "memory access violation" message (or any other message indicating a fatal or system
error) when you run your test, then to get more information about the error location add

``
--__param_catch_system__=no
``

to the test run command line:
[$images/run_args.jpg]

Now run the test again under a debugger, and it will break at the point of failure.

[h3 Command line usage recommendations]
[h4 If you get a fatal exception somewhere within a test case, make the program generate a coredump by adding an extra command line argument]

If you get a "memory access violation" message (or any other message indicating a fatal or system
error) when you run your test, then to get more information about the error location add

``
--__param_catch_system__=no
``

to the test run command line. Now run the test again, and it will create a coredump that you can
analyse using your preferred debugger. Or run it under a debugger in the first place, and it will
break at the point of failure.

[h4 How to use a test module built with the Boost.Test framework under the management of automated regression test facilities]

My first recommendation is to make sure that the test framework catches all fatal errors by adding
the argument

``
--__param_catch_system__=yes
``

to all test module invocations. Otherwise a test program may produce unwanted dialogs (depending on
the compiler and OS) that will halt your regression test run. The second recommendation is to
suppress the result report output by adding

``
--__param_report_level__=no
``

argument, and the test log output by adding

``
--__param_log_level__=nothing
``

argument, so that the test module won't produce undesirable output that no one is going to look at
anyway. I recommend relying only on the result code, which will be consistent across all test
programs. An alternative to my second recommendation is to direct both the log and the report to
separate files that you can analyse later on. Moreover, you can make Boost.Test produce them in XML
format using

``
--__param_output_format__=XML
``

and use some automated tool to format this information as you like.

[endsect] [/ recommendations]
[/EOF]