This project is actively maintained and contributions from developers of all levels are very welcome. We recommend opening an issue before starting work on anything big so that we can discuss any relevant details with you, but an issue is not required. Changes are welcome however they arrive :)
Start by making sure the project runs from source; instructions for setting up a local project are in the README.
rst2pdf is an inclusive and welcoming community. To participate in this project, everyone is bound by our Community Code of Conduct.
Issues are always welcome here! Please give as much detail as you can; small replication cases showing what doesn't work help us a lot - bonus points if you can turn that into a test! (more on testing later)
Documentation fixes are at least as valuable as code ones. The manual is in the file doc/manual.rst and gets converted to both PDF and HTML for https://rst2pdf.org. Please go ahead and open a pull request for each of the changes you think we need.
We love pull requests!
Please open a pull request if you have changes/fixes/improvements to share. This project is for all of us and we welcome changes. Your PR should include:
- A detailed description of what is being changed and, importantly, why.
- Tests to cover the new feature, including reference PDFs.
- An update to the manual if needed.
Please be patient with us, we are all volunteers with busy lives.
The code for the project is in the rst2pdf directory inside the main project directory.
Tests are in the tests directory. Within this folder:
- input holds the instructions for running a test:
  - [name].rst holds the rst source for the test
  - [name].style has any stylesheet that should be applied for this test
  - [name].cli allows adding extra commands to the test (look for examples)
  - [name].depends allows specifying dependencies for the test, so that if they aren't met, the test is skipped. Each dependency should be added on a separate line. To add a new one, first add it to the check_dependency list inside conftest.py.
  - test_[name].retcode allows specifying an expected non-zero return code for expected error cases
  - test_[name].nopdf is a text file whose existence indicates that the test does not create a PDF output file
  - test_[name].expected_log allows specifying expected text within the log output. If this file exists, then the existence of test_[name].nopdf implies that any generated PDF file is to be ignored.
- output holds the PDFs generated by the test runs.
- reference holds the "correct" version of the PDF for each test. When adding a test, put the desired PDF output into this directory as well as supplying the test files in the input folder.
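For example, adding a hypothetical test named my-feature (the name and the $EDITOR invocation below are illustrative, not part of the test suite) might look like this:
$EDITOR tests/input/my-feature.rst  # the rst source for the test
pytest tests/input/my-feature.rst  # generates tests/output/my-feature.pdf
cp tests/output/my-feature.pdf tests/reference/  # once you are happy the output is correct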
When you open a pull request, we run some automated checks to make sure everything is in order. We recommend you install these tools locally to check things over as you go along. We use the pre-commit framework, which you can install with pip:
pip install pre-commit
Once installed, enable it by running this command:
pre-commit install --allow-missing-config
It will let you know if any of the formatting/build tools are reporting problems when you commit your changes.
To run the pre-commit checks manually, run this command:
pre-commit run --all-files --show-diff-on-failure
Please run the tests :) They also run automatically, along with some style checking, when you open or update a pull request. The first run of the tests needs some setup, so check out the steps below.
The rst2pdf test suite generates PDFs, stored in tests/output, which are then compared against reference PDFs, stored in tests/reference, using the PyMuPDF Python bindings for the MuPDF library. rst2pdf depends on a number of different tools and libraries, such as ReportLab, and the output of these can vary slightly between releases. PyMuPDF allows us to compare the structure of the PDFs, with a small amount of fuzzing to allow for minor differences caused by changes in these underlying dependencies.
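As a rough illustration, a simplified PyMuPDF comparison could look like the sketch below. This is not the suite's actual comparison code; the function name is made up, and the real suite inspects more of the PDF structure and tolerates small rendering differences.
# Minimal sketch of a PyMuPDF-based comparison (illustrative only).
import fitz  # PyMuPDF

def pdfs_match(output_path, reference_path):
    out_doc = fitz.open(output_path)
    ref_doc = fitz.open(reference_path)
    # Differing page counts means the documents cannot match.
    if out_doc.page_count != ref_doc.page_count:
        return False
    for out_page, ref_page in zip(out_doc, ref_doc):
        # Compare the extracted text of each page; dependency changes that
        # only shift layout slightly would need fuzzier matching than this.
        if out_page.get_text() != ref_page.get_text():
            return False
    return True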
To run the tests for the first time, you will need to do some setup (after this, you can just activate your virtualenv each time):
python -m venv .venv
. .venv/bin/activate
pip install pytest pytest-xdist
pip install -c requirements.txt -e .[aafiguresupport,mathsupport,plantumlsupport,rawhtmlsupport,sphinx,svgsupport,tests]
To run all tests, run:
pytest
You can also run tests in parallel by passing the -n auto flag:
pytest -n auto
To run one test only, pass the file or directory to pytest. For example:
pytest tests/input/sphinx-repeat-table-rows
This will run one test and show the output.
To skip a test, create a text file called [test].ignore in the tests/input directory containing a note on why the test is skipped. This will mark the test as skipped when the test suite runs. This can be useful for inherited tests that we aren't confident of the correct output for, but where we don't want to delete/lose the test entirely.
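For example (the test name and the wording of the reason are illustrative):
echo "Inherited test; correct output not yet verified" > tests/input/my-feature.ignore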
The specific versions of all dependencies that are used for CI testing are stored in requirements.txt.
To update, change to a venv that has Python 3.8+ installed and run:
pip install pip-tools
pip-compile --extra=aafiguresupport --extra=mathsupport --extra=plantumlsupport \
    --extra=rawhtmlsupport --extra=sphinx --extra=svgsupport --extra=tests \
    --output-file requirements.txt pyproject.toml
After the mass reformatting in PR 877, it is helpful to have git blame ignore the commits that simply reformatted the code.
The .git-blame-ignore-revs file contains the list of commits to ignore, and you can use this git config line to make git blame work more usefully:
git config blame.ignoreRevsFile .git-blame-ignore-revs