Safe Numerics
Our system works by checking arithmetic operations whenever they could produce an erroneous result. The C++ standard describes how binary operations on different integer types are handled. Here is a simplified version of the rules:
promote any operand smaller than int to an int or unsigned int.
if the signed operand is larger than the unsigned one, the result will be signed; otherwise the result will be unsigned.
expand the smaller operand to the size of the larger one.
So the sum of two integer types results in another integer type. If the values are large, the result can exceed the range that the resulting integer type can hold. This is what we call "overflow". Standard C/C++ simply truncates the result to fit into the result type - which makes the result arithmetically incorrect. This behavior is consistent with the default "native" type promotion policy. Up until now, we've focused on detecting when this happens and invoking an interrupt or other kind of error handler.
But now we look at another option. Using the "automatic" type promotion policy, we can change the rules of C++ arithmetic for safe types to something like the following:
for any C++ numeric type, std::numeric_limits tells us the maximum and minimum values that a variable can hold - this defines a closed interval.
For any binary operation on these types, we can calculate the interval of the result at compile time.
From this interval we can determine a new safe type which is guaranteed to hold the result.
Since the result type is guaranteed to hold the result, there is no need to check for errors - they can't happen!
The only error checking we need to do is when safe values are initialized or assigned, but we would have to do that in any case. So we've eliminated arithmetically incorrect results while incurring zero runtime overhead for error checking.
In short, given a binary operation, we promote the constituent types to a larger result type which can't overflow. This is a fundamental departure from the C++ Standard behavior.
If the interval of the result cannot be contained in the largest type that the machine can handle (usually 64 bits these days), the largest available integer type with the correct result sign is used. So even with our "automatic" type promotion scheme, it's still possible to overflow. In this case, and only this case, is runtime error checking code generated. Depending on the application, it should be rare to generate error checking code, and even more rare to actually invoke it.
This small example illustrates how to use type promotion and how it works.
#include <cassert>
#include <stdexcept>
#include <ostream>
#include <iostream>
#include <cxxabi.h>
#include <typeinfo>
#include "../include/safe_range.hpp"
#include "../include/automatic.hpp"
// create an output manipulator which prints variable type and limits
// as well as value
template<typename T>
struct formatted_impl {
const T & m_t;
formatted_impl(const T & t) :
m_t(t)
{}
template <class charT, class Traits>
friend std::basic_ostream<charT,Traits> &
operator<<(
std::basic_ostream<charT,Traits> & os,
const formatted_impl<T> & f
){
int status;
return os
<< "<"
<< abi::__cxa_demangle(
typeid(boost::numeric::base_value(f.m_t)).name(),0,0,&status
)
<< ">["
<< std::numeric_limits<T>::min() << ","
<< std::numeric_limits<T>::max() << "] = "
<< f.m_t;
}
};
template<typename T>
auto formatted(const T & t){
return formatted_impl<T>(t);
}
// create a type for holding small integers which implements automatic
// type promotion to larger types to guarantee correct results with
// zero runtime overhead !
template <
std::intmax_t Min,
std::intmax_t Max
>
using safe_t = boost::numeric::safe_signed_range<
Min,
Max,
boost::numeric::automatic,
boost::numeric::throw_exception
>;
using small_integer_t = safe_t<-24, 82>;
int main(int argc, const char * argv[]){
// problem: checking of externally produced value can be overlooked
std::cout << "example 8: ";
std::cout << "eliminate runtime overhead" << std::endl;
try{
const small_integer_t x(1);
std::cout << "x" << formatted(x) << std::endl;
small_integer_t y = 2;
std::cout << "y" << formatted(y) << std::endl;
auto z = x + y; // zero runtime overhead !
std::cout << "(x + y)" << formatted(z) << std::endl;
std::cout << "(x - y)" << formatted(x - y) << std::endl;
}
catch(const std::exception & e){
// none of the above should trap. Mark failure if they do
std::cout << e.what() << std::endl;
return 1;
}
return 0;
}
The above program produces the following output:
example 8: eliminate runtime overhead
x<signed char>[-24,82] = 1
y<signed char>[-24,82] = 2
(x + y)<short>[-48,164] = 3
(x - y)<signed char>[-106,106] = -1
Variables x and y are stored as 8 bit signed integers with the range specified as -24 to 82. The result of x + y could be any value in the range -48 to 164. Since this result can't be stored in an 8 bit signed integer, a 16 bit signed integer is allocated. The result of x - y could range from -106 to 106, so it will fit in an 8 bit signed integer. Binary operations on safe numeric types using automatic type promotion will produce other safe numeric types with template parameters appropriate to hold the result. The resultant safe types may have smaller or larger ranges than the parameters of the binary operation.
We've used simple expressions in this illustration. But since binary operations on safe types result in other safe types, expressions can be made arbitrarily elaborate - just as they can be with intrinsic integer types. That is, safe integer types are drop-in replacements for intrinsic integer types. We are guaranteed never to produce an incorrect result regardless of how elaborate the expression might be.