Fix failing test: test_nonexistant_db #5027
Conversation
I think it'd be good to understand what is really going on here; this seems a lot like some global state not being properly restored after a preceding test. Or maybe some intricacy of stdin/stdout handling is not working as intended.
In general, it is quite unclear to me which error path the test is actually supposed to hit, because there are two possibilities:
- the library directory doesn't even exist (this is what leads to the prompt you see, and given the invalid path I'd say this is expected)
- the library directory exists, but opening the DB fails (maybe the file isn't there, or it is corrupt; this raises `UserError`)

Arguably, there should be tests for both cases.
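To make the two error paths concrete, here is a toy stand-in for the library-opening logic (the names `open_library` and `UserError` are illustrative assumptions, not beets' actual API):

```python
import io
import os
import sqlite3
import sys


class UserError(Exception):
    """Stand-in for beets' ui.UserError."""


def open_library(path, stdin=None):
    """Toy sketch of library opening; illustrative only, not beets code."""
    stdin = stdin if stdin is not None else sys.stdin
    directory = os.path.dirname(path)
    if not os.path.isdir(directory):
        # Path 1: the library directory doesn't exist -> prompt the user
        # (this is the prompt the invalid /xxx/yyy path triggers).
        answer = stdin.readline().strip()
        if answer.lower() != "y":
            raise UserError("library directory does not exist")
        os.makedirs(directory)
    try:
        # Path 2: the directory exists, but opening the DB fails
        # (missing or corrupt file).
        conn = sqlite3.connect(path)
        conn.execute("SELECT name FROM sqlite_master")
        return conn
    except sqlite3.Error as exc:
        raise UserError(str(exc))
```

A pair of tests would then feed a canned answer for path 1 and point path 2 at a corrupt file, asserting `UserError` in both cases.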
test/test_ui.py (Outdated)

    with control_stdin("y"):
        config.write("library: /xxx/yyy/not/a/real/path")
Is this actually doing anything? From a quick glance at the code, there should be no way this could prompt for user input.
Right; I think we probably don't need the `control_stdin` wrapper here.
I accidentally clobbered my old commit and history with a botched `git push -f`, so I can't see what I had before, but this is what it should have been:
    with self.write_config_file() as config:
        config.write("library: /xxx/yyy/not/a/real/path")

    with self.assertRaises(ui.UserError):
        with control_stdin("n"):
            self.run_command("test", lib=None)
Without it, I don't see anything globally set that's mocking stdin. With `control_stdin` here, the call to `input` is handled without needing real user input, thereby making the test pass. I'm not sure how this test was passing before without it. I re-added it and it's passing now.
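For reference, what a `control_stdin`-style wrapper buys us can be sketched with just the standard library: replace `sys.stdin` with an in-memory buffer so the `input()` call reads a canned "n" instead of blocking. (This mimics my understanding of beets' test helper; it is not the actual implementation.)

```python
import io
from unittest.mock import patch


def confirm():
    # Hypothetical stand-in for the prompt that test_nonexistant_db hits.
    return input("create library directory? [y/n] ")


# With stdin patched, input() reads the canned answer instead of blocking.
with patch("sys.stdin", io.StringIO("n\n")):
    assert confirm() == "n"
```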
Just to make sure, I rebased on commit c1a232ec of `master`, and it's working. Let me push this change again, this time correctly 🙂
Thanks! It's likely this does address the proximate problem, but I think it might be nice to do a little extra homework here to identify the hidden test-ordering dependency. In particular, we could run the tests in different orders/subsets to identify which test is causing this one to pass when it fails alone; that could point to some other test that is (worrisomely) not resetting its changes to global state.
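A toy demonstration (not from the beets suite) of the kind of ordering dependency suspected here: one test replaces `sys.stdin` and forgets to restore it, so a later test that needs input "passes" when run after it, yet blocks or hits EOF when run alone.

```python
import io
import sys


def leaky_test():
    sys.stdin = io.StringIO("y\ny\n")  # mutates global state...
    assert input() == "y"              # ...and never restores sys.stdin


def dependent_test():
    # Only works if some earlier test left a fake stdin behind;
    # run alone, this would block on real stdin or raise EOFError.
    return input()
```

Shuffling the order so `dependent_test` runs first would expose the dependency immediately.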
> Right; I think we probably don't need the `control_stdin` wrapper here.
Sounds good to me; I think there might be some other hidden problems that reordering will bring out. When I shuffled the test order around for a trial, the …

Unfortunately, I didn't save the test order for this run, but I'll be playing around and learning more about the beets test suite ordering to see what's going on... Also, I'm using …
I do plan to do a test overhaul 'soon'. I'll keep this in mind.
What do you have in mind for the overhaul (how big is the scope), and why is it needed? I only have ~5 hours a week to contribute, but I'm interested in helping; this seems like the perfect opportunity for me to learn more about the codebase.
(whoops, accidentally closed, reopening until I finish fixing the bug)
Hey guys, I'm wondering how this test was ever passing if it's supposed to fail on stdin? Here's the call stack for the failing test: in all cases I can see, it ends up calling Python's …
A lot of things. We have spotty test coverage, with a lot of things not being tested at all. We also have some complicated tests that could be improved with pytest to make setup and teardown easier. I haven't done a full inventory; I'll be doing it piecemeal, file by file. I just need time/energy/motivation.
Resolves #5027: failing test: test_nonexistant_db

This test was failing because it was requesting input from the user on stdin. This diff mocks stdin with a canned response.
Indeed; I do think that's what's suspicious here… it seems like some kind of input-wrapping state is being held over from some other test and is therefore suppressing the failure here. I could be wrong about this, but I think we actually have two problems:

Does that seem right?
Instead of restoring `sys.stdin` to `sys.__stdin__`, restore it to whatever it was before we installed our dummy I/O hooks. This is relevant in pytest, for example, which installs its *own* `sys.stdin`, which we were then clobbering. This was leading to the suppression of test failures observed in #5021 and addressed in #5027.
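The fix this commit describes can be sketched as a save-and-restore around the dummy stdin: restore `sys.stdin` to whatever it was beforehand, not to `sys.__stdin__`, so a test runner's own replacement stdin (as pytest installs) survives. (Illustrative code, not the actual beets diff.)

```python
import io
import sys
from contextlib import contextmanager


@contextmanager
def dummy_stdin(text):
    previous = sys.stdin            # whatever the test runner installed
    sys.stdin = io.StringIO(text)   # canned response for input()
    try:
        yield
    finally:
        sys.stdin = previous        # NOT sys.__stdin__
```

Restoring to `sys.__stdin__` would instead discard the runner's replacement, which is exactly the clobbering described above.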
Description
Fixes #5021
The `test_nonexistant_db` test in `test_ui` would attempt to read from `stdin` under certain conditions (particularly when run directly, i.e. `pytest -k nonex test/test_ui.py`). This diff fixes the issue.
Note: I didn't really figure out what the root cause of the initial issue was (since the tests could sometimes pass when run with the entire test suite), but if this works on other folks' machines, then hey, good enough I suppose...