- Introduction
- Writing concepts
- Automatic concepts and predicates
- Functions
- Requirements on class templates
- Testing

To illustrate how to write a concept, we start by showing how to write a simple specification of a monoid. A monoid is a set with a binary operator: the set is closed under the operator, the operator is associative, and there is an identity element in the set.

```
template <typename T, typename Op, typename Id>
struct monoid: public concept {
  typedef concept_list<
    is_callable<Op(T, T)>,
    is_callable<Id()>,
    std::is_convertible<typename is_callable<Op(T, T)>::result_type, T>,
    std::is_convertible<typename is_callable<Id()>::result_type, T>
  > requirements;

  static void associativity(const Op& op, const T& a, const T& b, const T& c) {
    axiom_assert(op(a, op(b, c)) == op(op(a, b), c));
  }

  static void identity(const Op& op, const T& a, const Id& id) {
    axiom_assert((op(id(), a) == a) && (op(a, id()) == a));
  }

  AXIOMS(associativity, identity);
};
```

Our concept is declared as a class inheriting from concept. This distinguishes it from automatic concepts (which inherit from auto_concept) and from predicates (which inherit from neither of those two special types).

We declare our requirements with a member type `requirements`. If we have several requirements, we can pack them as parameters of the template concept_list.

A requirement can be a concept, an automatic concept, or a predicate. In this example all requirements are predicates.

We require `Op`

to be callable with two `T`

s, and `Id`

with no parameters. We also require that the result of those operations can be implicitly converted (no explicit cast) to `T`

.

Then we have axioms describing the run-time behavior. Parameters are universally quantified, meaning that the axioms should hold for any values of the given types.

Then, to be able to automate testing, we list the axioms we want to test using the macro `AXIOMS`.

Here the carrier set and the two operators were declared as type parameters. The point of concepts is to be generic: we want to allow as much signature morphing as possible. Of course we could have fixed the operators, but we can also do that afterwards and reuse the most generic concept. For instance, we can define `monoid_plus` as such:

```
template <typename T>
struct monoid_plus: concept {
  typedef monoid<T, op_plus, wrapped_constructor<T()> > requirements;
};
```

For people familiar with C++, `op_plus` and `wrapped_constructor<T()>` are functor types: they provide an `operator()` that implements the operation. It is usually better to represent an operation by a type rather than by a function pointer, as the latter cannot represent an overloaded operation.

It is possible to write your own functor, but you need to be aware of some pitfalls. op_plus is defined as:

```
struct op_plus {
  template <typename T, typename U,
            typename Ret = decltype(std::declval<T>() + std::declval<U>())>
  Ret operator()(T&& t, U&& u) const {
    return std::forward<T>(t) + std::forward<U>(u);
  }
};
```

Using `std::plus<T>` instead would not give the same behavior in static concept checking. First, `std::plus<T>` forces the arguments, as well as the return type, to be converted to `T`; op_plus does not. Second, the predicate is_callable<std::plus<T>(T, T)> is always true, whether or not an operator `+` has been defined; calling the operator would then lead to an error message that concept checking could not catch earlier. The predicate is_callable<op_plus(T, T)>, on the other hand, is true if and only if an operator `+` has been defined: a failure to detect the return type through `decltype()` simply makes the `operator()` inaccessible, so for such types the operator is effectively not defined. This makes op_plus a real alias for operator `+`. Finally, op_plus represents the whole overloaded operator `+`, whereas `std::plus<T>` represents only one version of the operator.

The same is done with the constructor. wrapped_constructor is defined as follows:

```
template <typename T, bool = is_constructible<T>::value>
struct wrapped_constructor;

template <typename T, typename... Args>
struct wrapped_constructor<T(Args...), true> {
  T operator()(Args... args) const {
    return T(std::forward<Args>(args)...);
  }
};

template <typename T, typename... Args>
struct wrapped_constructor<T(Args...), false> { };
```

As you can see, the template selects between an implementation with or without `operator()`, depending on the constructibility of `T`.

Catsfoot provides wrappers for operators, and also macros for generating wrappers for methods or overloaded functions. The user is invited to use them.

As a side note, nothing says that the default constructor gives us the neutral element, even for native types like `int`. This is something worth testing.

Since our concept `monoid` is not automatic, we have to specify whether it holds for a given set of types. This is a form of contract the developer signs, certifying that the axioms will hold. Of course, the claim can be wrong, but testing is there to determine that.

Automatic concepts do not need such a contract. However, automatic concepts have no specified run-time behavior (no axioms), and some overloaded functions might need run-time behavior information to select an optimal version.

The only thing these contracts are useful for is run-time-behavior-based overloading. For example, imagine we have a parallel implementation of the sum of a set of values that relies on the associativity of a monoid to reorder operations. The signature of such a function would be:
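A plausible sketch of such a signature, by analogy with the `foo` example shown later in this document (the container parameter is chosen here purely for illustration):

```
template <typename T,
          ENABLE_IF(monoid<T, op_plus, wrapped_constructor<T()> >)>
T sum(const std::vector<T>& values); // parallel, reassociating reduction
```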

Of course, if our `T` is not a carrier set for a monoid, we do not want to use this version of the function `sum` but a more classical one that would use the order represented by the set. The only way to select the right function is to sign contracts asserting that implementations follow the run-time specifications of the concepts.

Even if you do not intend to use this overloading feature, you will be required to sign the contracts; otherwise static concept checking will fail. This is to make sure that other people will be able to use your concepts.

Contracts are "signed" with the type trait verified. Its parameter should be a concept instantiation (possibly partial). For example, we can state that integers, together with operator `+` and the default constructor (which might actually be wrong depending on the compiler), form a `monoid`.

```
namespace catsfoot {
  template <>
  struct verified<monoid<int, op_plus, wrapped_constructor<int()> > >
    : public std::true_type {};
}
```

Now we can automate signing those contracts using partial specialization. For instance, we might say that for every verified `monoid` over the plus operator and the default constructor, the concept `monoid_plus` on the same type is verified.

```
namespace catsfoot {
  template <typename T>
  struct verified<monoid_plus<T> >
    : public verified<monoid<T, op_plus, wrapped_constructor<T()> > > {};
}
```

The type trait verified does not trigger any testing, but only allows it. Writing tests is still the responsibility of the user.

Automatic concepts and predicates have the same role, but they are defined in different ways. They also behave differently when asserting compile-time requirements. Requiring an automatic concept when some of its requirements are missing will give error messages on specific requirements, whereas requiring a false predicate will result in a simple error message stating that the predicate is false. When composing several requirements it is better to write an automatic concept, as the user will get more detailed error output.

A predicate is a type with a constant member `value` convertible to `bool`.

For example, we can write a predicate like this:

```
template <typename T>
struct is_lvalue_reference: public std::false_type { };

template <typename T>
struct is_lvalue_reference<T&>: public std::true_type { };
```

An automatic concept is basically like a concept, except that it has no axioms; any axiom it declared would never be called by the test driver.

```
template <typename T, typename Stream>
struct is_printable: public auto_concept {
  typedef concept_list<
    is_callable<op_lsh(Stream&, T)>,
    std::is_convertible<typename is_callable<op_lsh(Stream&, T)>::result_type,
                        Stream&>
  > requirements;
};
```

Let's say we want to write a function `foo` that takes three arguments of any type, as long as the type has an operator `+` and a default constructor, and forms a monoid with these operations. The following shows an example of such a function.

```
template <typename T,
          typename NonRefT = typename std::decay<T>::type,
          ENABLE_IF(monoid<NonRefT, op_plus, wrapped_constructor<NonRefT()> >)>
NonRefT foo(T&& a, T&& b, T&& c) {
  // (a * b);
  NonRefT ret = std::forward<T>(a) + std::forward<T>(b) + std::forward<T>(c);
  return ret;
}
```

Note that this selection is done with the last template parameter. The macro `ENABLE_IF` verifies, like `IF`, that the compile-time part of the concept holds; however, it enables this version of the function instead of returning a Boolean.

It is possible to get ambiguity errors if two overloads have the same number of template parameters. In this case, extra `typename = void` parameters can be added to the parameter list.

If we ever un-commented the line using the multiplication operator (above), the compiler would not see a problem: the function would actually compile whenever it is given a type `T` that has an operator `*`. We want to verify this does not happen.

A requirement for `some_concept<T, T>` is more specific than one for `some_concept<T, U>`, so the archetype for `some_concept<T, T>` will need to be more specific as well. Any function will probably end up with some unique requirement.

Checking that a function has exactly the right requirements is then still a hassle, and uses the classic way of writing archetypes:

```
namespace foo_check {
  struct T {
    T() = default;
    T(const T&) = default;
    ~T() = default;
  };

  T operator+(const T&, const T&) { return T(); }
  bool operator==(const T&, const T&) { return true; }
}

namespace catsfoot {
  template <>
  struct verified<monoid< ::foo_check::T, op_plus,
                          wrapped_constructor< ::foo_check::T()> > >
    : public std::true_type { };
}
```

Sometimes we do not want to overload for different requirements; we just want to require a concept for any call. Using the previous method would only make the compiler claim it did not find a function for the corresponding parameters. If there is no overloading, we probably just want to report the missing requirements.

```
template <typename T, typename NonRefT = typename std::decay<T>::type>
NonRefT foo(T&& a, T&& b, T&& c) {
  assert_concept(monoid<NonRefT, op_plus, wrapped_constructor<NonRefT()> >());
  NonRefT ret = std::forward<T>(a) + std::forward<T>(b) + std::forward<T>(c);
  return ret;
}
```

In this code we call assert_concept, which provides the right error message if the requirement is not satisfied.

Instantiating class_assert_concept has the same effect as calling assert_concept. To make sure that the assertion is instantiated at the same time as the class, one can either inherit from the assertion class or use it as the type of a dummy member. For example:

```
template <typename T>
struct Foo
  : public class_assert_concept<monoid<T, op_plus,
                                       wrapped_constructor<T()> > > {};
```

There is no elegant way to use a tool similar to `ENABLE_IF` to specialize class templates the way we overloaded function templates, as the number of template parameters has to be fixed. Fortunately, if we know all the possible specializations when writing the general class template, it is possible to use a list of Boolean parameters.

```
template <typename T,
          bool specialize = IF(monoid<T, op_plus, wrapped_constructor<T()> >)>
struct Bar { };

template <typename T>
struct Bar<T, true> { };
```

Testing involves calling axioms, which are just simple functions; there is nothing complex in this. Nevertheless, tools are provided to call the axioms automatically. Those tools need data-set generators to provide input data to the axioms.

Note that the test programs still need to be written: Catsfoot is only a library. This allows Catsfoot to be used in any testing environment. Most common environments will expect you to write programs (with a function `main` in each), and this is what you have to do. However, the only things you need to do in your testing functions are:

- to define data generators;
- to call test drivers with concepts (or just axioms) in combination with generators.

Each axiom is a function whose parameters are universally quantified variables. Any possible generated value of the type can be given to the axiom, which should still hold.

The following axiom:

```
static void associativity(const Op& op, const T& a, const T& b, const T& c) {
  axiom_assert(op(a, op(b, c)) == op(op(a, b), c));
}
```

would translate into:
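One natural reading (the parameters being the universally quantified variables) is the formula:

```latex
\forall\, op, a, b, c:\quad op(a,\, op(b, c)) = op(op(a, b),\, c)
```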

It is important to take advantage of the universal quantifiers in the axioms and let the data generator find the values to be tested. It is tempting, in some axioms, to use local variables and generate random values locally, but the concept must stay decoupled from its implementations.

For example, for any stack `s`, the expression `s` should be semantically equivalent to `pop(push(s, some_value))`. However, on the implementation side there might be a difference between the two objects even though the equality operator claims they are the same; for instance, one might have more memory allocated than the other. Thus it is not enough to generate stacks based only on `push` and the initial (empty) stack. We could even go further and define a concept for a list-based stack: a list can behave the same as a stack even if it has been initialized with values using list operations (insertion in the middle, for example). And it had better, as otherwise you would need encapsulation rather than template-based generics.

Since knowing how to generate suitable terms requires knowledge of the concrete type, it is not possible to write good axioms by generating terms locally.

For example, do not write:

```
static void erasure(AssociativeContainer c, SizeType i) {
  Iterator it = begin(c) + (i % size(c));
  axiom_assert(size(erase(c, it)) == size(c) - 1);
}
```

But rather:

```
static void erasure(AssociativeContainer c, Iterator i) {
  if ((i == find(c, i)) && (i != end(c)))
    axiom_assert(size(erase(c, i)) == size(c) - 1);
}
```

First, `find(c,i)` is a precondition for `erase(c,i)`, and it has to appear in the axiom. Second, in the former definition we instantiated an iterator ourselves, which is a bad thing: there are lots of other ways to create iterators for a `std::set`, for example.

It is important to use universally quantified operations in testing. Some operators could have state (for example, internal tables used for optimization), and that state handling should be tested.

Another point is that `int (*)(int, int)` is a valid type for the operator of a monoid, for instance, but not all such pointers point to a function performing a valid monoid operation. You want to forbid instantiating the concept for this kind of type, and for that reason you need a universal quantifier on the operation.

A data generator has an operation `Set get<T>()` which returns a data set. The return type is a container of values of type `T&`. One could define one's own generator like this:

```
struct my_int_generator {
  std::vector<int> v;

  my_int_generator(): v{1, 2, 3} {}

  template <typename T, ENABLE_IF(is_same<T, int>)>
  const std::vector<int>& get() { return v; }
};
```

It is possible to define a generator by giving a list of values as initializer lists. The template list_data_generator is provided for this purpose.

```
auto mygenerator = list_data_generator<int, float>
  ({-1, 0, 1, 2, 3,
    std::numeric_limits<int>::min(),
    std::numeric_limits<int>::max()},
   {.5, 42.,
    std::numeric_limits<float>::quiet_NaN(),
    std::numeric_limits<float>::denorm_min(),
    std::numeric_limits<float>::infinity()});
```

It is easy to generate random values for simple types, but it gets more complex for a type whose signature (the set of all operations available for the type) is big. To be able to generate all kinds of values, the generator has to call functions randomly. For example, building a random list is not only about inserting random elements; it is also about erasing some.

Also, several types have to be built alongside each other. For example, lists should be generated at the same time as iterators and values, especially if we want to activate conditional axioms as described in the section "Writing axioms carefully".

Random term generation is generic. The only thing that changes is the signature, i.e. the set of operations we can call to generate values.

Since some functions have preconditions, we wrap those functions. In the end we have to define the signature as a list of functions. Each function must take its parameters from the set of generated types (possibly by reference), and must return a type from that set.

If we want to generate lists of integers we could write a generator such as:

```
auto int_list_generator =
  cxx_axioms::term_generator_builder<std::list<int>,
                                     std::list<int>::iterator,
                                     int>()
  (engine,
   std::function<int()>([&engine] () {
     return std::uniform_int_distribution<int>()(engine);
   }),
   constructor<std::list<int>()>(),
   disamb<const int&>()(&std::list<int>::push_back),
   disamb<const int&>()(&std::list<int>::push_front),
   std::function<int(const std::list<int>&)>
     ([] (const std::list<int>& in) {
        if (!in.empty()) return int(in.front());
        return 0;
      }),
   std::function<int(const std::list<int>&)>
     ([] (const std::list<int>& in) {
        if (!in.empty()) return int(in.back());
        return 0;
      }),
   disamb<>()(&std::list<int>::begin),
   disamb<>()(&std::list<int>::end),
   std::function<std::list<int> (std::list<int>)>([] (std::list<int> in) {
     if (!in.empty()) in.pop_back();
     return in;
   }),
   std::function<std::list<int> (std::list<int>)>([] (std::list<int> in) {
     if (!in.empty()) in.pop_front();
     return in;
   }));
```

Since operators are usually described as types, and since most of the time these operators are stateless wrappers with just a default constructor, it is convenient to have a generator that simply returns the default-constructed value. Such a generator can be conveniently declared with default_generator.

```
default_generator mygenerator;
```

It is possible to use a combination of generators. The function choose builds a generator that picks the left-most generator able to generate the requested type.

For example if we wanted to generate integers from a specific set, and any other types based on their default constructor, we could define a generator as:

```
auto mygenerator =
  choose(list_data_generator<int>
           ({-1, 0, 1, 2, 3,
             std::numeric_limits<int>::min(),
             std::numeric_limits<int>::max()}),
         default_generator{});
```

Once the data generators are defined, it is time to call test drivers. There are two test drivers:

- test: tests an axiom (represented by a function).
- test_all: tests all axioms for a concept (including inherited axioms from requirements).

We can test axioms individually:

```
bool res = test(mygenerator,
                monoid<int, op_plus, wrapped_constructor<int()> >
                  ::associativity,
                "monoid's associativity");
```

We can also test with any function that behaves like an axiom.

It is not very practical to test axioms one by one; usually the user should prefer to test all the axioms required by a concept.

```
bool res = test_all(mygenerator,
                    monoid<int, op_plus, wrapped_constructor<int()> >{});
```

It is possible that some conditions are never met. For example, in the following axiom:

```
if ((i == find(c, i)) && (i != end(c)))
  axiom_assert(size(erase(c, i)) == size(c) - 1);
```

If iterator `i` is never found inside container `c`, the axiom is never actually checked. To learn of such cases, we can run at the end of the program a function that verifies the coverage of conditions.

```
res = check_unverified() && res;
```

It will report any axioms never covered, and return false if any were found.

Note, it might not always be desirable to cover all the axioms. Sometimes conditions might be static:

```
if (std::atomic<T>::is_lock_free())
  axiom_assert(...);
```

The behavior of `std::atomic` depends on the architecture. This dependence is represented by the member `is_lock_free`, so in this case we would like to have different axioms. But this condition is "static": coverage checking will report this axiom whenever the condition is false.

There is an ugly workaround: putting block delimiters around a conditional axiom disables coverage checking. This is due to the definition of axiom_assert.

```
if (std::atomic<T>::is_lock_free()) {
  axiom_assert(...);
}
```

Error messages output by the library follow a standard format and should be understood by any IDE. However, if you use "parallel-tests" in Automake (which redirects output), you need to tell your IDE that the log file is a file of error messages. With Emacs you can insert a mode selection as the first line of the output of your test program:

```
std::cout << "-*- mode: compilation -*-" << std::endl;
```

GNU has a documentation page for Compilation mode.