Add critical tests to the workflow and block failed PR #59

@choisant

Description

As a first step towards a proper test suite, we should add some basic tests. These should cover the main functionality of the software and the main results of the calculations. They should run automatically when a pull request is opened, and the pull request should be blocked from merging if the tests do not pass. Check each item off when it is done, then close this issue as completed. Edit the suggestions as needed. Non-critical tests should be added to the issue related to those.

- [ ] Does the installation work?
- [ ] Does parallel processing work where the test is run?
- [ ] Can each of the main functions run without error?
- [ ] For a specific data sample, does the buildmetadata() function work as expected?
- [ ] For a specific data sample, metadata file, and random seed, does the MCMC output stay consistent within a margin of error?
- [ ] For a specific learnt output and test data sample, do the mean probability and quantiles stay consistent when calculated over the whole output?
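Running the tests on every pull request could be wired up with a GitHub Actions workflow along these lines. The file path, install command, and test runner below are assumptions and would need to match the project's actual setup:

```yaml
# .github/workflows/tests.yml (hypothetical path)
name: tests
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the package
        run: pip install .   # replace with the project's real install step
      - name: Run the test suite
        run: pytest          # replace with the project's real test runner
```

To actually block merging on failure, the `test` job would also need to be marked as a required status check in the repository's branch protection rules.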
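The seed-consistency checks above could follow a pattern like the sketch below. None of these names come from this codebase: `run_mcmc` is a hypothetical stand-in for the real sampler, and the tolerance is a placeholder to be tuned. The pattern is what matters: fix the seed, run twice, and compare both the raw output and a summary statistic.

```python
import random
import statistics

def run_mcmc(seed: int, n_samples: int = 1000) -> list[float]:
    """Hypothetical stand-in for the real MCMC sampler."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_samples)]

def test_mcmc_output_is_consistent_for_fixed_seed():
    first = run_mcmc(seed=42)
    second = run_mcmc(seed=42)
    # The same seed should reproduce the chain exactly.
    assert first == second
    # Summary statistics should agree within a margin of error;
    # the tolerance here is a placeholder, not a project value.
    assert abs(statistics.mean(first) - statistics.mean(second)) < 1e-9
```

The same shape would work for the buildmetadata() and quantile checks: run on a fixed data sample and compare against stored expected values with an explicit tolerance.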
