WIP: Bayes test #2258

Open
wants to merge 9 commits into master
Conversation

dgoodwin (Contributor)

No description provided.

openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Jan 15, 2025

openshift-ci bot commented Jan 15, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgoodwin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 15, 2025
dgoodwin and others added 7 commits January 17, 2025 09:26
This is because we don't want heavily run tests to dominate the new samples.
The R version came first and essentially uses the dnbinom function. I
had AI translate it into Go, and it had to do some of the math the hard
way. The Go output is nearly identical, just with slightly less precision.
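The Go translation isn't reproduced in this thread, but "the hard way" presumably means building the density from log-gamma terms, since Go's standard library has no dnbinom equivalent. A minimal sketch of that approach (the function name and test values are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// dnbinom mirrors R's dnbinom(x, size, prob): the probability of
// observing x failures before the size-th success, where each trial
// succeeds with probability prob. The binomial coefficient is built
// in log space via math.Lgamma to avoid overflow, then exponentiated,
// which is where small precision differences from R can creep in.
func dnbinom(x, size int, prob float64) float64 {
	lgXS, _ := math.Lgamma(float64(x + size))
	lgX, _ := math.Lgamma(float64(x + 1))
	lgS, _ := math.Lgamma(float64(size))
	logDensity := lgXS - lgX - lgS +
		float64(size)*math.Log(prob) + float64(x)*math.Log(1-prob)
	return math.Exp(logDensity)
}

func main() {
	// Should agree with R's dnbinom(3, size=10, prob=0.5): ~0.02686
	fmt.Println(dnbinom(3, 10, 0.5))
}
```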

An idea would be to run trials with various distributions to see which
turned out to be the most accurate.

What matters in this code is the "Posterior Predictive Probability"
output. A low probability means the prior data doesn't fit the new data
well, so it might be worth investigating. You can see that setting a
threshold of "flag the test if we're 95% certain something weird is
going on" would work in all these examples. In practice, I think we
could set it to 99% (meaning the Posterior Predictive Probability would
be <= 1%).
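To make the threshold concrete, here is a minimal sketch of the flagging rule, assuming a Gamma-Poisson model (which the R version's reliance on dnbinom suggests): the posterior predictive for the new failure count is then negative binomial. It reuses the dnbinom helper sketched above; flagTest, the flat prior, and the 1% cutoff are illustrative, not the PR's actual code:

```go
// flagTest reports whether the new results look surprising given the
// history: fails failures over runs historical runs, versus newFails
// failures over newRuns recent runs.
func flagTest(fails, runs, newFails, newRuns int) bool {
	// Under a flat prior, the posterior over the per-run failure rate
	// is Gamma(fails+1, runs), so the posterior predictive for the
	// failure count in newRuns runs is negative binomial with
	// size = fails+1 and prob = runs/(runs+newRuns).
	size := fails + 1
	prob := float64(runs) / float64(runs+newRuns)

	// Posterior predictive probability of seeing newFails or more
	// failures: one minus the mass strictly below newFails.
	tail := 1.0
	for x := 0; x < newFails; x++ {
		tail -= dnbinom(x, size, prob)
	}

	// "99% certain something weird is going on": flag when the
	// posterior predictive probability is at or below 1%.
	return tail <= 0.01
}
```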

Right now this doesn't feel that different from p-values and Fisher's
exact test, but I think the nice part is that the math can work with
any sample size at all, even a sample of one. Obviously, the more data
that comes in, the more confidence we'd have.
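For example, continuing the hypothetical flagTest sketch above, even a single new run produces a usable probability:

```go
// Historically 5 failures in 1000 runs; one new run that fails. The
// posterior predictive probability of at least one failure in a single
// run is ~0.6%, below the 1% cutoff, so the test gets flagged.
fmt.Println(flagTest(5, 1000, 1, 1)) // prints: true
```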
Negative binomial PR analysis

openshift-ci bot commented Jan 27, 2025

@dgoodwin: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name         Commit   Details  Required  Rerun command
ci/prow/security  25d948c  link     false     /test security
ci/prow/unit      25d948c  link     true      /test unit
ci/prow/lint      25d948c  link     true      /test lint

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
