Test runners could be a useful feature. Consider the case where you use a mocking framework (like fakeit) that reports failures in its own specific way. To avoid making cest depend on particular frameworks and tools, some kind of "Runner" capability could be introduced. This capability would improve the experience when testing with mocks and other third-party extensions.
For example, this happens when using the fakeit mocking framework:
describe("Test with mocks", []() {
  it("throws a generic exception when mock verification fails", []() {
    Mock<HttpClient> mock_http_client;
    ....
    Verify(Method(mock_http_client, post).Using("hello"));
  });
});
When the above verification fails, the output does not provide useful information:
FAIL test.cpp:29 it throws a generic exception when mock verification fails
  ❌ Assertion Failed: Unhandled exception in test case: std::exception
     test.cpp:29
One possibility would be to introduce some kind of Runner keyword, so that third-party testing plugins can be used:
describe("Test with mocks", []() {
  runWith(Fakeit);
  it("throws a generic exception when mock verification fails", []() {
    Mock<HttpClient> mock_http_client;
    ....
    Verify(Method(mock_http_client, post).Using("hello"));
  });
});
So in this case, when verification fails, the fakeit exception could be caught and printed properly.