55 changes: 49 additions & 6 deletions doc/configuration.rst
@@ -3750,6 +3750,51 @@ All the resources and drivers in this chapter have a YAML example snippet which
can simply be added (at the correct indentation level, one level deeper) to the
environment configuration.

See the :ref:`labgrid-device-config` man page for documentation on the
top-level ``options``, ``images``, ``tools``, and ``imports`` keys in the
environment configuration.
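
As a rough orientation, here is a hedged sketch of those top-level keys; all
paths and values below are illustrative assumptions, not defaults from the man
page:

.. code-block:: yaml

   options:
     loglevel: info                       # free-form key/value options
   images:
     root: rootfs.img                     # image names mapped to file paths
   tools:
     qemu: /usr/bin/qemu-system-x86_64    # locations of external tools
   imports:
     - mystrategy.py                      # additional Python modules to import
   targets:
     main:
       resources: {}
       drivers: {}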

.. _environment-configuration-feature-flags:

Feature Flags
~~~~~~~~~~~~~
Similar targets or multi-target environments may differ from each other in
small aspects; for example, one device has a camera or screen connected, while
another does not.
In labgrid's environment configs, such variations are described as feature
flags.

Here's an example environment configuration for a target-scoped feature
``camera``:

.. code-block:: yaml
   :name: feature-flag-env.yaml

   targets:
     main:
       features:
         - camera
       resources: {}
       drivers: {}

Features can not only be set per target, but also globally:

.. code-block:: yaml
   :name: feature-flag-global-env.yaml

   features:
     - camera
   targets:
     main:
       features:
         - console
       resources: {}
       drivers: {}

See :ref:`usage_pytestplugin_mark_lg_feature` for how to make use of feature
flags in tests.

Multiple Drivers of the Same Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to use multiple drivers of the same type, the resources and drivers
need to be lists, e.g.:

@@ -3802,6 +3847,8 @@ To bind the correct driver to the correct resource, explicit ``name`` and
The property name for the binding (e.g. ``port`` in the example above) is
documented for each individual driver in this chapter.
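
As an illustrative sketch of both points above (device paths and the chosen
names are invented for this example), two serial ports bound to two drivers of
the same type could look like:

.. code-block:: yaml

   targets:
     main:
       resources:
         - RawSerialPort:
             name: console_a
             port: /dev/ttyUSB0
         - RawSerialPort:
             name: console_b
             port: /dev/ttyUSB1
       drivers:
         - SerialDriver:
             name: serial_a
             bindings:
               port: console_a      # bind to the resource named console_a
         - SerialDriver:
             name: serial_b
             bindings:
               port: console_b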

Templating the Environment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The YAML configuration file also supports templating for some substitutions;
these are:

@@ -3825,10 +3872,6 @@ would resolve the ``qemu_bin`` path relative to the ``BASE`` dir of the YAML
file and try to use the `RemotePlace`_ with the name set in the ``LG_PLACE``
environment variable.
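
A hedged sketch of the two substitutions described above, assuming the
``!template`` YAML tag triggers the substitution (the tool path is
illustrative):

.. code-block:: yaml

   tools:
     qemu_bin: !template "$BASE/qemu_bin"   # resolved relative to the YAML file
   targets:
     main:
       resources:
         RemotePlace:
           name: !template "$LG_PLACE"      # taken from the environment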

See the :ref:`labgrid-device-config` man page for documentation on the
top-level ``options``, ``images``, ``tools``, and ``examples`` keys in the
environment configuration.

.. _exporter-configuration:

Exporter Configuration
@@ -3910,8 +3953,8 @@ to achieve the same effect:
   match:
     '@ID_PATH': 'pci-0000:05:00.0-usb-3-1.4'

Templating
~~~~~~~~~~
Templating the Exporter Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To reduce the amount of repeated declarations when many similar resources
need to be exported, the `Jinja2 template engine <http://jinja.pocoo.org/>`_
is used as a preprocessor for the configuration file:
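
A hedged sketch of such a template (the group name, host, and port range are
illustrative assumptions):

.. code-block:: yaml

   {% for idx in range(1, 5) %}
   board-{{ idx }}:
     NetworkSerialPort:
       host: exporter-host
       port: {{ 4000 + idx }}
   {% endfor %}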
141 changes: 51 additions & 90 deletions doc/usage.rst
@@ -479,9 +479,12 @@ own proxy, and only fall back to LG_PROXY.

See also :ref:`overview-proxy-mechanism`.

Simple Example
~~~~~~~~~~~~~~

Writing and Running Tests
~~~~~~~~~~~~~~~~~~~~~~~~~

Getting Started: A Minimal Test
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As a minimal example, we have a target connected via a USB serial converter
('/dev/ttyUSB0') and booted to the Linux shell.
The following environment config file (``shell-example.yaml``) describes how to
@@ -557,8 +560,8 @@ environment config:

pytest has automatically found the test case and executed it on the target.
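
A hedged sketch of what such a test case might look like (the command and
assertions are illustrative, not taken from this diff):

.. code-block:: python

   def test_example(target):
       # request the driver providing shell access
       command = target.get_driver('CommandProtocol')
       stdout, stderr, returncode = command.run('cat /proc/version')
       assert returncode == 0
       assert 'Linux' in stdout[0]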

Custom Fixture Example
~~~~~~~~~~~~~~~~~~~~~~
Reusing Setup Code with Custom Fixtures
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When writing many test cases which use the same driver, we can get rid of some
common code by wrapping the `CommandProtocol` in a fixture.
As pytest always executes the ``conftest.py`` file in the test suite directory,
@@ -595,8 +598,8 @@ With this fixture, we can simplify the ``test_example.py`` file to:

... 1 passed...
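
For reference, a hedged sketch of the kind of ``conftest.py`` fixture
described above (the fixture name is an illustrative choice):

.. code-block:: python

   import pytest

   @pytest.fixture()
   def command(target):
       # wrap driver lookup so tests can simply request "command"
       return target.get_driver('CommandProtocol')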

Strategy Fixture Example
~~~~~~~~~~~~~~~~~~~~~~~~
Managing Target States with Strategy Fixtures
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When using a :any:`Strategy` to transition the target between states, it is
useful to define a function-scoped fixture per state in ``conftest.py``:
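
A hedged sketch of such per-state fixtures (the state and fixture names are
illustrative assumptions):

.. code-block:: python

   import pytest

   @pytest.fixture(scope='function')
   def bootloader_command(target, strategy):
       """Transition to the bootloader and return its CommandProtocol."""
       strategy.transition('barebox')
       return target.get_active_driver('CommandProtocol')

   @pytest.fixture(scope='function')
   def shell_command(target, strategy):
       """Transition to the shell and return its CommandProtocol."""
       strategy.transition('shell')
       return target.get_active_driver('CommandProtocol')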

@@ -694,116 +697,74 @@ For this example, you should get a report similar to this:

========================== 3 passed in 29.77 seconds ===========================

Feature Flags
~~~~~~~~~~~~~
labgrid includes support for feature flags on a global and target scope.
Adding a ``@pytest.mark.lg_feature`` decorator to a test ensures it is only
executed if the desired feature is available:
.. _usage_pytestplugin_mark_lg_feature:

@pytest.mark.lg_feature()
~~~~~~~~~~~~~~~~~~~~~~~~~
labgrid supports :ref:`environment-configuration-feature-flags` in the
:ref:`environment-configuration`.
Adding a ``@pytest.mark.lg_feature()`` decorator to a test ensures it is only
executed if the desired feature is set, either under the target or global
``features:`` keys.

.. code-block:: python
   :name: test_feature_flags.py

   import pytest

   @pytest.mark.lg_feature("camera")
   def test_camera(target):
       pass

Here's an example environment configuration:

.. code-block:: yaml
   :name: feature-flag-env.yaml

   targets:
     main:
       features:
         - camera
       resources: {}
       drivers: {}

.. testcode:: pytest-example
   :hide:

   import pytest

   plugins = ['labgrid.pytestplugin']
   pytest.main(['--lg-env', 'feature-flag-env.yaml', 'test_feature_flags.py'], plugins)

In case the feature is unavailable, pytest will record the missing feature
as the skip reason.

.. testoutput:: pytest-example
   :hide:

   ... 1 passed...

This would run the above test; however, the following configuration would skip
the test because of the missing feature:

.. code-block:: yaml
   :name: feature-flag-skip-env.yaml

   targets:
     main:
       features:
         - console
       resources: {}
       drivers: {}

.. testcode:: pytest-example
   :hide:

   import pytest

   plugins = ['labgrid.pytestplugin']
   pytest.main(['--lg-env', 'feature-flag-skip-env.yaml', 'test_feature_flags.py'], plugins)

.. testoutput:: pytest-example
   :hide:

   ... 1 skipped...

pytest will record the missing feature as the skip reason.

For tests with multiple required features, pass them as a list to pytest:
Tests requiring multiple features are also possible:

.. code-block:: python
   :name: test_feature_flags_global.py

   import pytest

   @pytest.mark.lg_feature(["camera", "console"])
   def test_camera(target):
       pass

Features do not have to be set per target; they can also be set via the global
features key:

.. code-block:: yaml
   :name: feature-flag-global-env.yaml

   features:
     - camera
   targets:
     main:
       features:
         - console
       resources: {}
       drivers: {}
@pytest.mark.lg_xfail_feature()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
labgrid supports :ref:`environment-configuration-feature-flags` in the
:ref:`environment-configuration`.
pytest supports the ``xfail`` marker, see
`pytest.mark.xfail() <https://docs.pytest.org/en/stable/reference/reference.html#pytest-mark-xfail>`_.

When having more specific features, tests can be marked as ``xfail`` for a
particular feature.

Imagine two targets have ``camera`` feature flags.
One of them has the additional ``special-camera-2000`` feature flag.
The other has the additional ``special-camera-3000`` feature flag.
Due to a known bug on ``special-camera-3000``, the test is expected to fail.
The test can be marked as ``xfail`` for that feature:

.. code-block:: python

   import pytest

   @pytest.mark.lg_feature("camera")
   @pytest.mark.lg_xfail_feature(
       "special-camera-3000",
       reason="known bug xy on special-camera-3000",
       raises=AssertionError,
       strict=True,
   )
   def test_camera(target):
       pass

.. testcode:: pytest-example
   :hide:

   import pytest

   plugins = ['labgrid.pytestplugin']
   pytest.main(['--lg-env', 'feature-flag-global-env.yaml', 'test_feature_flags_global.py'],
               plugins)

.. testoutput:: pytest-example
   :hide:

   ... 1 passed...

This YAML configuration would combine both the global and the target features.
Features under the target and global ``features:`` keys are considered.

``@pytest.mark.lg_xfail_feature(feature, **kwargs)``:

- ``feature`` (str) - Feature that should mark the test as ``xfail``, passed
  as boolean ``condition=`` to ``pytest.mark.xfail()``.
- ``**kwargs`` - All keyword-only arguments are passed to
  ``pytest.mark.xfail()``.

Test Reports
~~~~~~~~~~~~
51 changes: 36 additions & 15 deletions labgrid/pytestplugin/hooks.py
@@ -1,4 +1,5 @@
import os
import copy
import logging
import pytest

@@ -71,7 +72,10 @@ def pytest_configure(config):
    configure_pytest_logging(config, logging_plugin)

    config.addinivalue_line("markers",
        "lg_feature: marker for labgrid feature flags")
        "lg_feature: skip tests on envs/targets without given labgrid feature flags")
    config.addinivalue_line("markers",
        "lg_xfail_feature: mark tests xfail on envs/targets with given labgrid feature flag")

    lg_log = config.option.lg_log
    if lg_log:
        ConsoleLoggingReporter(lg_log)
@@ -101,27 +105,44 @@ def pytest_collection_modifyitems(config, items):
    have_feature = env.get_features() | env.get_target_features()

    for item in items:
        # pytest.mark.lg_feature
        lg_feature_signature = "pytest.mark.lg_feature(features: str | list[str])"
        want_feature = set()

        for marker in item.iter_markers("lg_feature"):
            arg = marker.args[0]
            if isinstance(arg, str):
                want_feature.add(arg)
            elif isinstance(arg, list):
                want_feature.update(arg)
            if len(marker.args) != 1 or marker.kwargs:
                raise pytest.UsageError(f"Unexpected number of args/kwargs for {lg_feature_signature}")
            elif isinstance(marker.args[0], str):
                want_feature.add(marker.args[0])
            elif isinstance(marker.args[0], list):
                want_feature.update(marker.args[0])
            else:
                raise Exception("Unsupported feature argument type")
                raise pytest.UsageError(f"Unsupported 'features' argument type ({type(marker.args[0])}) for {lg_feature_signature}")

        missing_feature = want_feature - have_feature
        if missing_feature:
            if len(missing_feature) == 1:
                skip = pytest.mark.skip(
                    reason=f'Skipping because feature "{missing_feature}" is not supported'
                )
            else:
                skip = pytest.mark.skip(
                    reason=f'Skipping because features "{missing_feature}" are not supported'
            reason = f'unsupported feature(s): {", ".join(missing_feature)}'
            item.add_marker(pytest.mark.skip(reason=reason))

        # pytest.mark.lg_xfail_feature
        lg_xfail_feature_signature = "pytest.mark.lg_xfail_feature(feature: str, *, **xfail_kwargs), xfail_kwargs as pytest.mark.xfail expects them"
        for marker in item.iter_markers("lg_xfail_feature"):
            if len(marker.args) != 1:
                raise pytest.UsageError(f"Unexpected number of arguments for {lg_xfail_feature_signature}")
            elif not isinstance(marker.args[0], str):
                raise pytest.UsageError(f"Unsupported 'feature' argument type {type(marker.args[0])} for {lg_xfail_feature_signature}")
            if "condition" in marker.kwargs:
                raise pytest.UsageError(f"Unsupported 'condition' argument for {lg_xfail_feature_signature}")

            kwargs = copy.copy(marker.kwargs)
            reason = kwargs.pop("reason", marker.args[0])
            item.add_marker(
                pytest.mark.xfail(
                    condition=marker.args[0] in have_feature,
                    reason=reason,
                    **kwargs,
                )
            item.add_marker(skip)
            )

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):