Conversation


@ThomSerg ThomSerg commented Oct 24, 2025

Attempt to parametrise our pytest test-suite. The default behavior stays the same: generic tests run on the default solver (OR-Tools), while the solver-specific tests and across-solver tests check which backends are available on the current system.

But sometimes you want to run the testsuite on just a single solver, without having to uninstall all other solvers or create a fresh environment. Now you can pass an optional --solver argument:

python -m pytest ./tests --solver exact

This will have three consequences:

  • the "generic tests" will now be run on exact instead of the default ortools
  • the solver specific tests not targeting exact will be filtered
  • the across solver tests will be limited to only running on exact

In general, I opted for filtering instead of skipping tests, so the non-exact tests will not count towards the total number of tests. I believe we should reserve "skipping" for tests which don't get run due to reasons we want to inform the user about, e.g. missing dependencies which they need to install. When the user provides the --solver option, they already know that tests targeting other solvers won't be run, so it would just clutter the results if we were to skip instead of filter those tests.

To parameterise a unittest class for the "generic" tests, simply decorate it with:

@pytest.mark.usefixtures("solver")

After this, self.solver will be available inside the class, matching the user-provided solver argument.
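A minimal sketch of what the conftest.py side of this could look like (the option name and fixture name are from this PR; the fixture body and the default value are assumptions, not the PR's actual code):

```python
# conftest.py -- hypothetical sketch, not the PR's actual implementation
import pytest

def pytest_addoption(parser):
    # register the optional --solver flag; "ortools" mirrors the
    # default behavior described above (assumed default value)
    parser.addoption("--solver", action="store", default="ortools",
                     help="run the generic tests on this solver")

@pytest.fixture(scope="class")
def solver(request):
    # expose the chosen solver on the unittest class as self.solver,
    # which is what @pytest.mark.usefixtures("solver") relies on
    request.cls.solver = request.config.getoption("--solver")
```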

All solver specific tests can now be decorated with:

@pytest.mark.requires_solver("<SOLVER_NAME>")

And the test will automatically be skipped if a --solver argument has been provided which doesn't match SOLVER_NAME.
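One way the matching between a requires_solver mark and the user's selection could be implemented (a sketch; the predicate name is made up):

```python
# hypothetical predicate used when filtering marked tests;
# not the PR's actual code
def matches_solver_selection(marker_args, selected):
    # marker_args: the tuple passed to @pytest.mark.requires_solver(...)
    # selected: the --solver value, or None when no flag was given
    if selected is None:
        return True  # no selection: keep every test
    # keep the test only if its mark names the user-selected solver
    return selected in marker_args
```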

For the across-solver tests which use generators (those in test_constraints.py), the pytest_collection_modifyitems hook will filter out parametrised pytest functions which have been instantiated with a solver different from the user-provided one. Both the solver and solver_name arguments are filtered on.

There are still some smaller places (see test_solveAll.py) where cp.SolverLookup.base_solvers() is used more directly; these can't be filtered without changing the test itself (not possible with one of the decorators / callback functions).

As a further improvement, it might be possible to merge the following two (Already did it ;) )

@pytest.mark.requires_solver("minizinc")
@pytest.mark.skipif(not CPM_minizinc.supported(), reason="MiniZinc not installed")

That is, the first mark would also handle skipping when the solver is not available, with the skips applied more centrally in pytest_collection_modifyitems.
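Centralising the availability check could look roughly like this. The SUPPORTED table is a stand-in assumption; the real suite would build it from CPMpy's SolverLookup.base_solvers() and each solver class's .supported():

```python
# hypothetical centralised skip, not the PR's actual code
import pytest

# stand-in availability table; in the real suite something like
# {name: cls.supported() for name, cls in SolverLookup.base_solvers()}
SUPPORTED = {"ortools": True, "exact": False, "minizinc": False}

def pytest_collection_modifyitems(config, items):
    for item in items:
        mark = item.get_closest_marker("requires_solver")
        if mark is not None and not SUPPORTED.get(mark.args[0], False):
            # solver not installed: skip (and inform the user) rather
            # than filter, since this is a missing-dependency situation
            item.add_marker(pytest.mark.skip(
                reason=f"solver '{mark.args[0]}' is not installed"))
```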

Using this parameterisation with a solver different from OR-Tools revealed some issues with our testsuite related to #779

@ThomSerg ThomSerg added the blocked Pull request blocked by another pull request/issue. label Nov 3, 2025
@ThomSerg ThomSerg removed the blocked Pull request blocked by another pull request/issue. label Nov 6, 2025

ThomSerg commented Dec 8, 2025

Added some more functionality. You can now pass more than one solver:

python -m pytest ./tests --solver exact,gurobi

Solver-specific tests will be filtered to these two. Non-solver-specific tests will be parametrised with each of these solvers.

You can also run on all installed solvers:

python -m pytest ./tests --solver all

Or skip all tests which depend on "solving a model":

python -m pytest ./tests --solver None
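The three forms of the --solver value could be translated into a list of solver names with a small helper along these lines (the function name is made up; the semantics follow the description above):

```python
# hypothetical helper, not the PR's actual code
def parse_solver_option(value, installed):
    # value: the raw --solver string; installed: names of available solvers
    if value.lower() == "none":
        return []               # skip everything that needs to solve a model
    if value.lower() == "all":
        return list(installed)  # run on every installed solver
    return value.split(",")     # e.g. "exact,gurobi" -> ["exact", "gurobi"]
```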

Also added a README with instructions on how to use the testsuite, how to use the new decorators, and in general how to write new tests.


ThomSerg commented Dec 8, 2025

Currently a lot of tests have their own solver parametrisation, for example in test_constraints.py:

import pytest
from cpmpy import SolverLookup

# only solvers that are installed on the current system
SOLVERNAMES = [name for name, solver in SolverLookup.base_solvers() if solver.supported()]

def _generate_inputs(generator):
    # pair every expression the generator yields with every installed solver
    exprs = []
    for solver in SOLVERNAMES:
        exprs += [(solver, expr) for expr in generator(solver)]
    return exprs


# bool_exprs is a generator defined elsewhere in test_constraints.py
@pytest.mark.parametrize(("solver","constraint"), list(_generate_inputs(bool_exprs)), ids=str)
def test_bool_constraints(solver, constraint):
    ...

This makes it so that with the default behavior (no --solver provided) these tests are still run on all installed solvers. Would we want to keep this, or make the default behavior only run these on OR-Tools while still running the solver-specific tests for other solvers?

