Changing the way "failing" tests are implemented and run: test for working condition, expect non-pass #243
Closed · cowtowncoder started this conversation in Ideas
Replies: 3 comments 11 replies
-
No good solution via SO; and ChatGPT pointed to JUnit's "assertThrows" while admitting it's not quite the same (a solid answer, assuming it didn't miss anything).
-
Wrote about this work in a Medium post; I was thinking other maintainers from other projects could benefit from this. Anyway, WDYT of closing this now, @cowtowncoder? Improvement can start in a new discussion.
-
Currently we have a set of "failing" tests in many/most repos: tests under src/test/java/com/fasterxml/.../failing/. These tests are configured in pom.xml to be skipped for JUnit runs. The design is this way so that we do have a reproduction of a problem (so that when a fix is applied, tests start passing), but it will not block the test suite (and especially the CI system, GitHub Actions).
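The skip mechanism described above is typically a Surefire exclusion. A minimal sketch of what such a pom.xml fragment might look like (the exact pattern and configuration vary by repo; this is an illustration, not the actual Jackson build config):

```xml
<!-- Hypothetical fragment: exclude tests in any "failing" package
     from the regular Surefire run. Actual configuration may differ. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/failing/**</exclude>
    </excludes>
  </configuration>
</plugin>
```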
But unfortunately this means that the tests are not automatically run at all and need to be manually run every now and then.
And when that isn't done, tests may unexpectedly start to pass without anyone noticing.
In fact this just happened recently with the jackson-databind "master" branch (3.0.0-SNAPSHOT): due to a change in ObjectMapper defaults, some tests started passing. In this particular case it meant the test setup needed to be changed, but it could also happen when an actual problem gets fixed.

Ideally, though, we would have a test setup where these failing cases were run as-is, but with inverted logic: the build would fail if one or more of these expected-to-fail tests were to pass. This would let us know immediately when a formerly failing test started to pass.
But how to do that?
One approach some projects take -- and I think jackson-module-kotlin is an example -- is to change the test itself to test for the expected failure. While this does fulfill the goals, I don't like the idea of "testing the wrong thing" -- especially when failure modes vary a lot. It seems better to have just one test, checking for the expected correct behavior.

I am ok with some minor changes to tests: for example, adding annotation(s). But I don't know if any test framework (well, JUnit / Maven Surefire to run) allows such a definition of "expect non-passing; fail the build if it passes"? Is anyone familiar with such features or extensions?
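Absent framework support, the inverted check could be sketched as a small helper that wraps the original test body unchanged: swallow any failure (the recorded state), but raise if the body completes. The names here (assertStillFailing, UnexpectedPassException) are hypothetical illustrations, not an existing JUnit or Surefire API:

```java
// Sketch of the "inverted logic" idea: run the original test body as-is,
// and fail only if it unexpectedly PASSES. Names are hypothetical.
public class ExpectedFailure {

    /** Thrown when a test that was expected to fail actually passes. */
    public static class UnexpectedPassException extends RuntimeException {
        public UnexpectedPassException(String msg) { super(msg); }
    }

    /**
     * Runs {@code testBody}; any thrown failure is the expected state
     * and is swallowed. If the body completes normally, the test has
     * started passing and the build should be made to notice.
     */
    public static void assertStillFailing(String name, Runnable testBody) {
        try {
            testBody.run();
        } catch (Throwable expected) {
            // Still failing, as recorded: nothing to do.
            return;
        }
        throw new UnexpectedPassException(
                "Test '" + name + "' passes now; move it out of 'failing/'");
    }
}
```

A real integration would more likely live in a JUnit 5 extension triggered by an annotation, so the test body itself stays untouched; this helper just shows the control flow being asked for.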
EDIT: a reference to Python's @unittest.expectedFailure, which seems to be about what I'd like: https://stackoverflow.com/questions/8017514/tdd-practice-distinguishing-between-genuine-failures-and-unimplemented-features