To illustrate how to write a concept, we start by showing how to write a simple specification of a monoid. A monoid is a set with a binary operator. The set is closed under the operator. The operator is associative. There is also an identity element in the set.
template <typename T, typename Op, typename Id>
struct monoid: public concept {
  // Compile-time requirements: Op is callable with two Ts, Id with no
  // argument, and both results are convertible to T.
  typedef concept_list<
    is_callable<Op(T, T)>,
    is_callable<Id()>,
    std::is_convertible<typename is_callable<Op(T, T)>::result_type, T>,
    std::is_convertible<typename is_callable<Id()>::result_type, T>
  > requirements;

  // Run-time requirements (axioms).
  static void associativity(const Op& op, const T& a, const T& b, const T& c)
  {
    axiom_assert(op(a, op(b, c)) == op(op(a, b), c));
  }

  static void identity(const Op& op, const T& a, const Id& id)
  {
    axiom_assert((op(id(), a) == a) && (op(a, id()) == a));
  }

  AXIOMS(associativity, identity);
};
Our concept is declared as a class inheriting from concept. This distinguishes it from automatic concepts (which inherit from auto_concept) and from predicates (which inherit from neither of those two special types).
We declare our requirements with a member type named requirements. If we have several requirements, we can pack them as parameters of the template concept_list.
A requirement can be a concept, an automatic concept, or a predicate. In this example all requirements are predicates.
We require Op to be callable with two Ts, and Id to be callable with no parameter. We also require that the results of those calls be implicitly convertible (no explicit cast) to T.
Then we have axioms describing the run-time behavior. Parameters act as universal quantifiers: the axiom should hold for any value of the given type.
Then, to enable automated testing, we list the axioms we want to test using the AXIOMS macro.
Here the carrier set and the two operations are declared as type parameters. The point of concepts is to be generic: we want to allow as many signature morphisms as possible. Of course, we could have fixed the operations, but we can do that afterward and reuse the most generic concept. For instance, we can define monoid_plus as follows:
template <typename T>
struct monoid_plus: concept {
  typedef monoid<T, op_plus, wrapped_constructor<T()> > requirements;
};
For people familiar with C++, op_plus and wrapped_constructor<T()> are functor types. That is, they have a function-call operator (operator()) that implements the operation. It is usually better to represent an operation by a type rather than by a function pointer, as the latter cannot represent overloaded operations.
It is possible to write your own functors; nevertheless, you need to be aware of some pitfalls. op_plus is defined as:
struct op_plus {
  template <typename T, typename U,
            typename Ret = decltype(std::declval<T>() + std::declval<U>())>
  Ret operator()(T&& t, U&& u) const
  {
    return std::forward<T>(t) + std::forward<U>(u);
  }
};
Using std::plus<T> instead would not give the same result for static concept checking. First, std::plus<T> forces the arguments, as well as the return type, to be converted to T; op_plus does not. Second, the predicate is_callable<std::plus<T>(T, T)> is always true, whether or not an operator+ has been defined; actually using the operator would then lead to an error message that concept checking could not catch beforehand. The predicate is_callable<op_plus(T, T)>, on the other hand, is true if and only if an operator+ has been defined. The reason is that when decltype() fails to detect a return type, the call operator simply becomes inaccessible; virtually, the operator is not defined. In that way, op_plus is a true alias of operator+. Finally, op_plus represents the whole overloaded operator+, whereas std::plus<T> only represents one version of the operator.
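To make the difference concrete, here is a minimal sketch restating the point with the library's is_callable predicate (NoPlus is a hypothetical type with no operator+):

struct NoPlus {};  // hypothetical type without any operator+

// op_plus: decltype() finds no operator+, so operator() is inaccessible
// and the predicate is false.
static_assert(!is_callable<op_plus(NoPlus, NoPlus)>::value, "");

// std::plus<NoPlus>: operator() is declared regardless, so the predicate
// is true; the error only appears when the call is instantiated.
static_assert(is_callable<std::plus<NoPlus>(NoPlus, NoPlus)>::value, "");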
The same is done with the constructor. wrapped_constructor is defined as follows:
template <typename T, bool = is_constructible<T>::value>
struct wrapped_constructor;

template <typename T, typename... Args>
struct wrapped_constructor<T(Args...), true> {
  T operator()(Args... args) const
  {
    return T(std::forward<Args>(args)...);
  }
};

template <typename T, typename... Args>
struct wrapped_constructor<T(Args...), false> {
};
As you can see, the template selects between an implementation with or without the function-call operator, depending on whether the type is constructible from the given arguments.
Catsfoot provides wrappers for operators, and also macros for generating wrappers for methods or overloaded functions. The user is invited to use them.
As a side note, nothing guarantees that the default constructor gives us the identity element, for example on native types like int. This is precisely the kind of thing worth testing.
Since our concept monoid is not automatic, we need to state whether it holds for a given set of types. This is a form of contract that the developer signs, certifying that the axioms will hold. Of course, this can be wrong; testing is here to find that out.
Automatic concepts do not need such a contract. However, automatic concepts do not specify any run-time behavior (they have no axioms), and some overloaded functions may need run-time behavior information to select an optimized version.
The only thing these contracts are really useful for is run-time-behavior-based overloading. For example, imagine we have a parallel implementation of the sum of a set of values, exploiting the fact that a monoid is associative in order to reorder the operations.
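The signature of such a function could look like the following sketch (it reuses the ENABLE_IF macro introduced later; the iterator-pair interface is our assumption):

// Hypothetical sketch: the parallel version of sum is enabled only when
// the value type forms a monoid under operator+.
template <typename It,
          typename T = typename std::iterator_traits<It>::value_type,
          ENABLE_IF(monoid<T, op_plus, wrapped_constructor<T()> >)>
T sum(It first, It last);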
Of course, if our T is not the carrier set of a monoid, we do not want to use this version of the function sum, but a more classical one that respects the given order of the values. The only way to select the right function is to sign contracts asserting that the implementations follow the run-time specifications of the concepts.
Even if your point is not to use this overloading feature, you will be required to sign the contracts; otherwise the static concept checking will fail. This is to make sure that other people will be able to use your concepts.
Those contracts are "signed" by specializing the type trait verified. Its parameter should be a concept instantiation (possibly partial). For example, we can state that int, together with operator+ and the default constructor (which might actually be wrong depending on the compiler), forms a monoid:
namespace catsfoot {
  template <>
  struct verified<monoid<int, op_plus, wrapped_constructor<int()> > >
    : public std::true_type {};
}
We can automate signing those contracts using partial specialization. For instance, we want to say that for every type whose monoid with the plus operator and the default constructor is verified, the concept monoid_plus on the same type is also verified:
namespace catsfoot {
  template <typename T>
  struct verified<monoid_plus<T> >
    : public verified<monoid<T, op_plus, wrapped_constructor<T()> > > {};
}
The verified type trait does not trigger any testing; it only allows it. Writing tests is still the responsibility of the user.
Automatic concepts and predicates play the same role; however, they are defined in different ways, and they behave differently when asserting compile-time requirements. Requiring an automatic concept when some of its requirements are missing will give error messages about the specific missing requirements, whereas requiring a false predicate results in a single error message stating that the predicate is false. When composing several requirements, it is therefore better to write an automatic concept, as the user will get a more detailed error output.
A predicate is a type with a constant member value convertible to bool.
For example, we can write a predicate like this:
template <typename T>
struct is_lvalue_reference: public std::false_type {};

template <typename T>
struct is_lvalue_reference<T&>: public std::true_type {};
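It can be used like any standard type trait:

static_assert(is_lvalue_reference<int&>::value, "int& is an lvalue reference");
static_assert(!is_lvalue_reference<int>::value, "int is not a reference");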
An automatic concept is basically like a concept, except that it has no axioms; any axiom it declared would never be called by the test driver.
template <typename T, typename Stream>
struct is_printable: public auto_concept {
  typedef concept_list<
    is_callable<op_lsh(Stream&, T)>,
    std::is_convertible<typename is_callable<op_lsh(Stream&, T)>::result_type,
                        Stream&>
  > requirements;
};
Let's say we want to write a function foo that takes three arguments of any type, provided that the type has an operator+ and a default constructor, and that the type together with operator+ and the default constructor forms a monoid. The following example shows such a function.
template <typename T,
          typename NonRefT = typename std::decay<T>::type,
          ENABLE_IF(monoid<NonRefT, op_plus, wrapped_constructor<NonRefT()> >)>
NonRefT foo(T&& a, T&& b, T&& c)
{
  // (a * b);
  NonRefT ret = std::forward<T>(a) + std::forward<T>(b) + std::forward<T>(c);
  return ret;
}
Note that this selection is done with the last template parameter. The macro ENABLE_IF verifies, like IF, that the compile-time part of the concept holds; however, instead of returning a Boolean, it enables or disables this version of the function.
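For instance, IF can be used wherever a compile-time Boolean is expected (a sketch; this use in static_assert is our assumption):

static_assert(IF(monoid<int, op_plus, wrapped_constructor<int()> >),
              "int with operator+ and int() should satisfy the static part");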
Errors can occur if two such overloads have the same number of template parameters. In that case it is possible to pad one of the parameter lists with extra typename = void parameters, as sketched below.
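A sketch of the padding trick (both overloads of baz, and the concepts concept_a and concept_b, are hypothetical):

// The two function templates would collide if their template parameter
// lists had the same length; the dummy parameter keeps them distinct.
template <typename T, ENABLE_IF(concept_a<T>)>
void baz(T x);

template <typename T, ENABLE_IF(concept_b<T>), typename = void>
void baz(T x);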
If we ever uncommented the line using the multiplication operator, the compiler would not notice it: the function would actually compile whenever it is given a type T which has an operator *. We want a way to verify this does not happen.
A requirement for some_concept<T, T> is more specific than one for some_concept<T, U>, so the archetype for some_concept<T, T> has to be more specific as well. In practice, almost every function ends up with its own unique combination of requirements. Checking that a function has exactly the right requirements therefore remains a hassle, and relies on the classic way of writing archetypes.
namespace foo_check {
  // Archetype: a minimal type providing exactly the operations required
  // by the monoid concept, and nothing more.
  struct T {
    T() = default;
    T(const T&) = default;
    ~T() = default;
  };

  T operator+(const T&, const T&) { return T(); }
  bool operator==(const T&, const T&) { return true; }
}

namespace catsfoot {
  template <>
  struct verified<monoid< ::foo_check::T, op_plus,
                          wrapped_constructor< ::foo_check::T()> > >
    : public std::true_type {};
}
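The check itself then consists of instantiating foo with the archetype; a minimal sketch (the function name check_foo is ours):

void check_foo()
{
  foo_check::T a, b, c;
  // If foo used any operation not required by the monoid concept (such as
  // the commented-out operator*), this call would fail to compile.
  foo(a, b, c);
}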
Sometimes we do not want to overload on different requirements; we just want to require a concept for every call. With the previous method, the compiler would merely claim that it did not find a function matching the given parameters. If there is no overloading, we want instead to know which requirements are missing.
template <typename T, typename NonRefT = typename std::decay<T>::type>
NonRefT foo(T&& a, T&& b, T&& c)
{
  assert_concept(monoid<NonRefT, op_plus, wrapped_constructor<NonRefT()> >());
  NonRefT ret = std::forward<T>(a) + std::forward<T>(b) + std::forward<T>(c);
  return ret;
}
In this code we call assert_concept, which will provide the right error message if the requirement is not satisfied.
Instantiating class_assert_concept has the same effect as calling assert_concept. To make sure that the assertion is instantiated at the same time as the class, it is possible either to inherit from the assertion class or to use it as the type of a dummy member. For example:
template <typename T>
struct Foo
  : public class_assert_concept<monoid<T, op_plus, wrapped_constructor<T()> > >
{};
There is no elegant equivalent of ENABLE_IF for class templates, as their number of template parameters has to be fixed. Fortunately, if we know all the possible specializations when writing the general class template, it is possible to use a list of Boolean parameters.
template <typename T,
          bool specialize = IF(monoid<T, op_plus, wrapped_constructor<T()> >)>
struct Bar {};

template <typename T>
struct Bar<T, true> {};
Testing is about calling axioms, which are just plain functions; there is nothing complex about this. However, tools are provided to call the axioms automatically. Those tools need data set generators to provide input data to the axioms.
Note that the test programs still need to be written: Catsfoot is only a library, which allows it to be used in any testing environment. Most common environments expect you to write programs (each with a function main), and this is what you have to do. However, the only things you need to do in your test functions are to define data generators and to call the test drivers, both of which are described below.
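As a minimal sketch, a test program for the monoid example could look like this (assembled from the pieces described in the following sections; headers and namespace directives are omitted):

int main()
{
  // Data set generator: a fixed list of interesting integers.
  auto gen = list_data_generator<int>({-1, 0, 1, 2, 3});

  // Run every axiom required by the concept.
  bool res = test_all(gen, monoid<int, op_plus, wrapped_constructor<int()> >{});

  // Report conditional axioms that were never triggered.
  res = check_unverified() && res;

  return res ? 0 : 1;
}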
Each axiom is a function whose parameters are universally quantified variables: whatever generated values of the right types are given to the axiom, it should still hold.
The following axiom:
static void associativity(const Op& op, const T& a, const T& b, const T& c)
{
  axiom_assert(op(a, op(b, c)) == op(op(a, b), c));
}
would translate into the universally quantified property: for all values of op, a, b and c, op(a, op(b, c)) == op(op(a, b), c).
It is important to take advantage of the universal quantifiers of the axioms and let the data generator find the values to be tested. It is tempting, in some axioms, to declare local variables and generate random values locally. However, the concept is decoupled from its implementations.
For a stack, for example, s is the same as pop(push(s, some_value)). On the implementation side, there might be a difference between the two objects even though the equality operator claims they are the same; for instance, one might have more memory allocated than the other. Thus it is not enough to generate stacks only from push and the initial (empty) stack. We could even go further and apply the stack concept to a list: a list behaves as a stack, even with values that were built in a list style (insertion in the middle, for example). And it had better behave so; otherwise you would need encapsulation rather than templates.
Since knowing how to generate terms requires knowledge of the concrete type, it is not possible to write good axioms that generate terms locally.
For example, do not write:
static void erasure(AssociativeContainer c, SizeType i)
{
  Iterator it = begin(c) + (i % size(c));
  axiom_assert(size(erase(c, it)) == size(c) - 1);
}
But rather:
static void erasure(AssociativeContainer c, Iterator i)
{
  if ((i == find(c, i)) && (i != end(c)))
    axiom_assert(size(erase(c, i)) == size(c) - 1);
}
First, find(c, i) is a precondition for erase(c, i), and it has to appear in the axiom. Second, we were generating iterators ourselves, which is a bad thing: in a std::set, for example, there are lots of other ways an iterator can be obtained.
It is also important to use universal quantifiers on operations. Some operators may carry state (for example, tables used for optimization), and that state has to be tested.
Another point is that int (*)(int, int) is, for instance, a valid type for the operation of a monoid, but not all function pointers of this type behave as a monoid operation. You want to prevent the user from instantiating the concept with this kind of type; for that reason you need a universal quantifier on the operation.
A data generator has an operation get<T>() which returns a data set for type T. The return type is a container whose elements are accessed as values of type T&. One can define one's own generator in such a way:
struct my_int_generator {
  std::vector<int> v;

  my_int_generator(): v{1, 2, 3} {}

  template <typename T, ENABLE_IF(is_same<T, int>)>
  const std::vector<int>& get()
  {
    return v;
  }
};
It is also possible to give the values directly as initializer lists:
auto mygenerator = list_data_generator<int, float>
  ({-1, 0, 1, 2, 3,
    std::numeric_limits<int>::min(),
    std::numeric_limits<int>::max()},
   {.5, 42.,
    std::numeric_limits<float>::quiet_NaN(),
    std::numeric_limits<float>::denorm_min(),
    std::numeric_limits<float>::infinity()});
It is easy to generate random values for simple types. It gets more complex, however, for a type whose signature (the set of all operations available on that type) is large. To be able to generate all kinds of values, the generator has to call functions randomly. For example, building a random list is not only about inserting random elements; it is also about erasing some.
Also, several types have to be built alongside one another. For example, lists should be generated at the same time as iterators and values, especially if we want to trigger conditional axioms like the ones described in section Writing axioms carefully.
Random term generation is generic: the only thing that changes is the signature (the set of operations we can call) used to generate the values. Since some functions have preconditions, we wrap those functions. In the end, we have to define the signature as a list of functions. Each function must take its parameters from the set of generated types (references are allowed) and must return a type from that set.
If we want to generate lists of integers, we could write such a generator:
// 'engine' is a random engine (e.g. std::mt19937) defined elsewhere.
auto int_list_generator =
  cxx_axioms::term_generator_builder<std::list<int>,
                                     std::list<int>::iterator, int>()
  (engine,
   // Random integers.
   std::function<int()>([&engine] () {
     return std::uniform_int_distribution<int>()(engine);
   }),
   // The empty list.
   constructor<std::list<int>()>(),
   // Insertion at both ends.
   disamb<const int&>()(&std::list<int>::push_back),
   disamb<const int&>()(&std::list<int>::push_front),
   // Access to both ends (guarded against the empty list).
   std::function<int(const std::list<int>&)>
     ([] (const std::list<int>& in) {
       if (!in.empty()) return int(in.front());
       return 0;
     }),
   std::function<int(const std::list<int>&)>
     ([] (const std::list<int>& in) {
       if (!in.empty()) return int(in.back());
       return 0;
     }),
   // Iterators.
   disamb<>()(&std::list<int>::begin),
   disamb<>()(&std::list<int>::end),
   // Erasure at both ends (guarded against the empty list).
   std::function<std::list<int>(std::list<int>)>([] (std::list<int> in) {
     if (!in.empty()) in.pop_back();
     return in;
   }),
   std::function<std::list<int>(std::list<int>)>([] (std::list<int> in) {
     if (!in.empty()) in.pop_front();
     return in;
   }));
Since operators are usually described as types, and since most of the time these operators are wrappers that carry no state and just have a default constructor, it is convenient to use a generator that simply returns default-constructed values:
default_generator mygenerator;
It is possible to use a combination of generators. The function choose builds a generator that picks the left-most generator able to produce values of the requested type. For example, if we want to generate integers from a fixed set, and any other type from its default constructor, we can build such a generator:
auto mygenerator =
  choose(list_data_generator<int>({-1, 0, 1, 2, 3,
                                   std::numeric_limits<int>::min(),
                                   std::numeric_limits<int>::max()}),
         default_generator{});
Now that the data generators are defined, it is time to call the test drivers. There are two test drivers:
We can test axioms individually:
bool res = test(mygenerator,
                monoid<int, op_plus, wrapped_constructor<int()> >::associativity,
                "monoid's associativity");
We can even test any function, as long as it behaves like an axiom.
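For example, a plain function asserting the commutativity of integer addition can be run by the same driver (plus_commutes is a hypothetical helper):

// A plain function can serve as an axiom: its parameters are universally
// quantified by the test driver.
static void plus_commutes(const int& a, const int& b)
{
  axiom_assert(a + b == b + a);
}

bool res2 = test(mygenerator, plus_commutes, "commutativity of int +");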
It is not very practical to test axioms one by one, though. Usually the user will prefer to test all the axioms required by a concept:
bool res = test_all(mygenerator, monoid<int, op_plus, wrapped_constructor<int()> >{});
It is possible that some conditions are never met. For example, in the following axiom:
if ((i == find(c, i)) && (i != end(c))) axiom_assert(size(erase(c, i)) == size(c) - 1);
If iterator i is never found inside container c, the axiom is never triggered. To be able to check for this, we can run, at the end of the program, a function that verifies the coverage of conditions:
res = check_unverified() && res;
It will report any axioms never covered, and return false if any were found.
Note that covering all the axioms might not always be desired. Sometimes conditions are static:
if (std::atomic<T>::is_lock_free()) axiom_assert(...);
The behavior of std::atomic depends on the architecture; this dependence is exposed by the member is_lock_free. In this case we would like different axioms depending on the condition, but the condition is "static": coverage checking will probably report this axiom whenever the condition is false.
There is an ugly work-around: putting block delimiters around the conditional axiom disables coverage checking. This is due to the definition of axiom_assert:
if (std::atomic<T>::is_lock_free()) { axiom_assert(...); }
Error messages output by the library are quite standard and should already be understood by any IDE. However, if you use "parallel-tests" in Automake, where the output is redirected, you need to tell your IDE that the log file is a file of error messages. With Emacs, you can insert a mode selection as the first line of the output of your test program:
std::cout << "-*- mode: compilation -*-" << std::endl;
GNU has a documentation page for Compilation mode.