Long wait on "Mapping to class/routine coverage" 15mins+ on local machine #56
Comments
@isc-tleavitt are you or a colleague able to shed any light on this please? Thanks 👍
@stevelee12 sorry for the delay. What IRIS version are you running on? Can you paste in the query plan you're getting on your system for the slow query?
One possible thought here: in our CI processes we run a cleanup step before each build to clear out previous data (note: this will delete EVERYTHING from previous TestCoverage runs; see the sketch below). Running that could help with performance if data from past runs is a factor. As a comparison point, I'm seeing this performance on one of our larger applications with a low-resourced build machine running IRIS for UNIX (Red Hat Enterprise Linux 8 for x86-64) 2022.1.2 (Build 574U) Fri Jan 13 2023 14:58:02 EST:
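Something along these lines (a sketch, not the exact CI step; the class names are assumed from the TestCoverage_Data tables discussed below, and %KillExtent() is the standard fast extent delete on persistent classes):

```objectscript
// Sketch: wipe ALL stored TestCoverage data from previous runs.
// %KillExtent() removes every object in the extent quickly, bypassing
// %Delete callbacks; class names assumed from the TestCoverage_Data.* tables.
Do ##class(TestCoverage.Data.Coverage).%KillExtent()
Do ##class(TestCoverage.Data.CodeUnitMap).%KillExtent()
Do ##class(TestCoverage.Data.Run).%KillExtent()
```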
Codebase size is fairly comparable (not enough smaller to explain a 250x slowdown, and we have much higher coverage too):
Gives:
The operative query:
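Roughly, a hypothetical sketch of its shape (the column names FromHash/ToHash are inferred from the HashForward/HashReverse index names mentioned later in this thread, not copied from the source):

```sql
-- Hypothetical sketch of the mapping query's shape: for each line-level hash
-- mapping, join source coverage to the mapped (target) coverage row for the
-- same run and test path. FromHash/ToHash are assumed column names.
SELECT COUNT(*)
FROM TestCoverage_Data.CodeUnitMap map
JOIN TestCoverage_Data.Coverage source
  ON source.Hash = map.FromHash
JOIN TestCoverage_Data.Coverage target
  ON target.Hash = map.ToHash
 AND target.Run = source.Run
 AND target.TestPath = source.TestPath
```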
Returns in under a second with query plan:
IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2022.1.5 (Build 940U) Thu Apr 18 2024 14:30:11 EDT
Not sure if it's relevant, but the code coverage I'm analysing is 100% .mac routines rather than .cls classes.
@stevelee12 can you snag the query plan and see if it's the same?
I forgot to add: I tried running the SQL in the terminal. The query executes, but when I try to do RS.Next() on the first row it hangs.
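For context, the pattern in the terminal was roughly this (a sketch using the classic %ResultSet dynamic-SQL API; a stand-in query is shown rather than the full mapping statement). Execute() returns quickly because IRIS fetches lazily, so a bad plan's work only starts on the first Next():

```objectscript
// Sketch: run an ad hoc query from the terminal and fetch rows.
Set rs = ##class(%ResultSet).%New("%DynamicQuery:SQL")
Do rs.Prepare("SELECT COUNT(*) FROM TestCoverage_Data.CodeUnitMap")
Do rs.Execute()
While rs.Next() {          // under the bad plan, this first fetch hangs
    Write rs.GetData(1), !
}
```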
I quit the tests early with Ctrl+C; the query plan is still the same as above, but executing it does not return the way yours does. Happy to show you on a Teams call on Monday or any day next week if you're available?
@stevelee12 please drop me an email:
This query plan is meaningfully different, and I think I see the bad choice: for each routine line we're looping over all of the hashes for the given run and test path! That's a lot of silly extra work. TuneTable isn't much help here because we're starting out from nothing, but we might be able to nudge the query optimizer in the right direction with a %IGNOREINDEX hint. Unfortunately, we need to use TestCoverage_Data.Coverage.MeaningfulCoverageData on the outer loop. The best hope is that ignoring TestCoverage_Data.CodeUnitMap.HashReverse would get it to use HashForward, and to do so first.
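That is, something like this (illustrative only, and assuming I have the qualified-index-name syntax right; %IGNOREINDEX goes right after FROM, ahead of the table list, and column names are assumed as before):

```sql
-- Illustrative: tell the optimizer not to consider the HashReverse index,
-- hoping it picks HashForward instead.
SELECT COUNT(*)
FROM %IGNOREINDEX TestCoverage_Data.CodeUnitMap.HashReverse
     TestCoverage_Data.CodeUnitMap map
JOIN TestCoverage_Data.Coverage source
  ON source.Hash = map.FromHash
```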
Ah, actually we can use %NOINDEX in an ON clause too; I just thought to look for that:
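For example (illustrative; %NOINDEX prefixes an individual condition and stops the optimizer from satisfying that one condition via an index):

```sql
-- Illustrative: suppress index use for a single join condition.
SELECT COUNT(*)
FROM TestCoverage_Data.CodeUnitMap map
JOIN TestCoverage_Data.Coverage source
  ON %NOINDEX source.Hash = map.FromHash
```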
@stevelee12 - rather than meeting, I'm asking @isc-shuliu to put up a PR with the query optimizer keywords to fix the issue; if that doesn't resolve it we can meet.
Proper optimization strategy: rewrite the query to change the join order, and use the %INORDER query optimizer hint.
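A sketch of what that looks like; the specific order shown is an assumption based on the plan discussion above (MeaningfulCoverageData on the outer loop, then HashForward), with column names assumed as before:

```sql
-- Illustrative: %INORDER forces the optimizer to join the tables in the
-- order they are listed in the FROM clause.
SELECT COUNT(*)
FROM %INORDER TestCoverage_Data.Coverage source
JOIN TestCoverage_Data.CodeUnitMap map
  ON map.FromHash = source.Hash
JOIN TestCoverage_Data.Coverage target
  ON target.Hash = map.ToHash
 AND target.Run = source.Run
 AND target.TestPath = source.TestPath
```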
@stevelee12 thank you for confirming! I've merged and we'll release 4.0.5 today.
@stevelee12 we've released 4.0.5 here and via Open Exchange/IPM.
Hi @isc-tleavitt
@stevelee12 you're completely right - filed #58 to fix this. There's a new artifact; that'll be the right one.
As per the subject, the "Mapping to class/routine coverage" process is taking a long time to complete. Running locally takes 15-20 minutes on average (over 50 minutes in Azure DevOps).
Run locally:

I've put a couple of debug lines at various points in TestCoverage.Data.Run.MapRunCoverage() to track timings, flagged with the original COS comments where possible; here are my findings:
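The debug lines were along these lines (a sketch of the kind of instrumentation, not the exact lines; $ZHOROLOG gives elapsed process time in seconds):

```objectscript
// Sketch: elapsed-time instrumentation around one phase of MapRunCoverage.
Set start = $ZHOROLOG
// ... the "Mapping to class/routine coverage" SQL runs here ...
Write "Mapping to class/routine coverage took ", $ZHOROLOG - start, "s", !
```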
Size of tables:
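(The counts can be reproduced with something like the following; a sketch against the two tables driving the slow mapping query:)

```sql
-- Sketch: row counts for the tables involved in the mapping step.
SELECT (SELECT COUNT(*) FROM TestCoverage_Data.Coverage)    AS CoverageRows,
       (SELECT COUNT(*) FROM TestCoverage_Data.CodeUnitMap) AS CodeUnitMapRows
```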



Running the SQL statement as a straight COUNT(*) (without the INSERT) in the SMP just sits waiting forever:
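That is, something of this shape (assuming the hypothetical reconstruction of the mapping query above, with the INSERT stripped so only the join is counted):

```sql
-- Sketch: the straight count, same joins as the mapping query but no INSERT.
SELECT COUNT(*)
FROM TestCoverage_Data.CodeUnitMap map
JOIN TestCoverage_Data.Coverage source
  ON source.Hash = map.FromHash
JOIN TestCoverage_Data.Coverage target
  ON target.Hash = map.ToHash
 AND target.Run = source.Run
 AND target.TestPath = source.TestPath
```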

However, when I remove the join back to "TestCoverage_Data.Coverage target", the query returns instantly:
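(The same sketch with the target join dropped; this version comes back immediately:)

```sql
-- Sketch: identical count with the join back to "Coverage target" removed.
SELECT COUNT(*)
FROM TestCoverage_Data.CodeUnitMap map
JOIN TestCoverage_Data.Coverage source
  ON source.Hash = map.FromHash
```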

Can anyone help me with this please?
Thanks as always :)
EDIT:

The straight count did eventually return a result, after 46 minutes: