WARN: Unexpected downgrade detected: skipping #29919
Comments
This GitHub Action example was autoclosed: ayushmanchhabra/vsx#554 Related log:
|
This appears to be a Docker downgrade: #29855 |
Another GitHub Action example: etrias-nl/php-dev#467 |
We are facing the same problem on our self-hosted instance, with Docker images from Docker Hub and our own GitLab registry. We hadn't noticed any downgrade before 06/22, but now we get ~1 per day.
|
@rarkins I use the defaults with the Renovate Docker image, so just the disk-based package cache. |
Debug logs from the buildkitd downgrade I mentioned:
|
Docker versioning hasn't changed in 7 months: https://github.com/renovatebot/renovate/tree/main/lib/modules/versioning/docker Here's the recent changes to the common lookup logic: https://github.com/renovatebot/renovate/commits/main/lib/workers/repository/process/lookup Can anyone narrow down the release range in which this would have started? |
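For readers unfamiliar with docker versioning, here is a minimal, hypothetical TypeScript sketch of the general idea (numeric, dot-separated tag parts compared left to right after stripping a leading "v"). It is an illustration only, not the code in lib/modules/versioning/docker:

```typescript
// Hypothetical illustration only - not Renovate's actual docker versioning code.
// Docker-style tags are compared numerically, part by part, after stripping a leading "v".
function parseTag(tag: string): number[] | null {
  const normalized = tag.replace(/^v/, '');
  if (!/^\d+(\.\d+)*$/.test(normalized)) {
    return null; // non-numeric tags (e.g. "latest") are not comparable here
  }
  return normalized.split('.').map(Number);
}

function compareTags(a: string, b: string): number {
  const pa = parseTag(a) ?? [];
  const pb = parseTag(b) ?? [];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

// With a well-behaved comparator, "v2" can never be considered newer than "v4":
console.log(compareTags('v2', 'v4') < 0); // true
```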
We're seeing a reasonably high number of these which autoclose themselves. That's kind of good of course, but it also means it's related to some type of temporary data problem, which is harder to diagnose. |
But it still decreases my confidence in ever using automerge with Renovate :s. I had to request a retry on the PRs to have Renovate figure out the update wasn't needed. Does having the preset to separate minor and major updates have something to do with this? |
Since we are self-hosted, I can see the requests made to our GitLab registry. Notice that the response is always the exact same number of bytes because nothing has changed, but Renovate still did a downgrade at
|
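For anyone who wants to reproduce that observation, here is a rough sketch of polling a registry's tag list and comparing the raw responses across runs. The /v2/<name>/tags/list endpoint is the standard Docker Registry HTTP API; the registry and repository names below are placeholders:

```typescript
import { createHash } from 'node:crypto';

// Sketch: fetch the tag list twice and confirm the raw responses are identical,
// mirroring the "same number of bytes" observation above. Requires Node 18+ (global fetch).
// REGISTRY and REPO are placeholders for your own registry and image.
const REGISTRY = 'https://registry.example.com';
const REPO = 'group/project/image';

async function tagListDigest(): Promise<string> {
  const res = await fetch(`${REGISTRY}/v2/${REPO}/tags/list`);
  const body = await res.text();
  return createHash('sha256').update(body).digest('hex');
}

async function main(): Promise<void> {
  const first = await tagListDigest();
  const second = await tagListDigest();
  console.log(first === second ? 'responses identical' : 'responses differ');
}

void main();
```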
This version was fine for us, so we moved back to this |
This PR should hopefully avoid these happening, but not solve the root cause: #29921 |
@TWiStErRob could you save the full log - or Job IDs - of the run where it was autoclosed and the one before it, when it was created? I'd like to see if there are any helpful indicators |
Yep, @rarkins, I was looking at that; here are the files. I was diffing T and T+3; a few observations:
|
@TWiStErRob thanks for the logs and detailed descriptions. The T-3 one also has 1 GraphQL query. The first GraphQL query should be the initRepo() one, so it implies that no GitHub tags/releases were queried (the results came from cache). This indicates that caching alone doesn't cause it, although it doesn't rule out that something about caching may contribute to it. It just doesn't happen every time the result comes from cache. From a quick code inspection I couldn't figure out which cache period applied here. The fact that self-hosted users are also seeing this seems to imply:
|
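To make the cache reasoning above concrete, here is a generic, hypothetical sketch of a TTL package cache (not Renovate's actual cache code): on a hit, the datasource result is returned without any network query, which is why a run with only the initRepo() GraphQL call can still produce lookup results.

```typescript
// Hypothetical sketch of a TTL package cache: on a cache hit, getReleases()
// returns stored data and no network query is made - which is why a run with
// only the initRepo() GraphQL call can still produce (possibly stale) results.
type Releases = { versions: string[] };

const cache = new Map<string, { expires: number; data: Releases }>();

async function getReleases(
  key: string,
  ttlMinutes: number,
  fetcher: () => Promise<Releases>,
): Promise<Releases> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.data; // cache hit: no query sent
  }
  const data = await fetcher();
  cache.set(key, { expires: Date.now() + ttlMinutes * 60_000, data });
  return data;
}
```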
And what about non-Docker downgrades, with GitHub Actions? I just got another one right now: oxsecurity/megalinter#3715 |
GitHub Actions have already been mentioned multiple times above. They use docker versioning. |
Renamed the issue to be less "technically correct" (i.e. that it's limited to Docker versioning) so that it's less confusing for most. It seems the problem is limited to Docker or GitHub Actions, both of which use versioning=docker. |
FYI the workaround will be deployed to the hosted app today |
With the workaround released, we have noticed some warnings in our logs. So first of all, the workaround does work, which is good :) I noticed a very curious pattern. I don't know if it's important, but I thought I would share it. Apparently when the bug triggers, the version that is picked is in the middle of the list of versions. In this case, there are 121 versions and the 60th was picked:
In another case, there were 340 versions, and the 168th was picked. |
I also went spelunking through Renovate's recent changes and found two things of note:
Again, I don't know if either of these is relevant, but I thought I'd share them, just in case. It does not explain why this only happens on some Renovate runs. Maybe sometimes the container registry responds with the tags in a different order than the other runs? The response size would still be the same. |
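To illustrate that order hypothesis: a comparator that is not transitive (the one below is purely hypothetical and not Renovate's) can pick a different "latest" tag for the same set of tags depending on the order the registry returns them, while the response size stays identical:

```typescript
// Purely hypothetical comparator, NOT Renovate's code: numeric when both tags
// parse as plain numbers, lexicographic otherwise. This makes it non-transitive:
// "10" > "2" (numeric), "10-rc" > "10" (lexicographic), but "2" > "10-rc" (lexicographic).
function cmp(a: string, b: string): number {
  if (/^\d+$/.test(a) && /^\d+$/.test(b)) {
    return Number(a) - Number(b);
  }
  return a < b ? -1 : a > b ? 1 : 0;
}

// Picking the "highest" tag then depends on the order the registry returned them,
// even though the set of tags (and so the response size) is identical.
const pickLatest = (tags: string[]): string =>
  tags.reduce((best, t) => (cmp(t, best) > 0 ? t : best));

console.log(pickLatest(['2', '10', '10-rc'])); // "10-rc"
console.log(pickLatest(['10-rc', '2', '10'])); // "10"
```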
Interesting. I actually just modified the log message in #29978 to remove the full allVersions as I thought it could be too long/annoying. Glad you spotted that quirk first. |
Since no one has posted it yet, here is the new log message. Note that
|
Hi @MarcWort, could you please check whether the following DEBUG message is present or absent for the same run containing the WARN message:
|
Sure, this message did not appear on any downgrade. We had 2 downgrades since updating to 37.422.1. |
We experience the opposite direction for the aws-machine-image datasource & versioning for some reason. For the past 2 weeks, legitimate updates have been autoclosed later as an unexpected downgrade. Might be related to the fix in #29921 |
@otbe can you create a minimal reproduction of that? |
@rarkins sadly it does not work on public GitHub repos / the GitHub Renovate app because of missing permissions to describe images, but this would be an example repo: https://github.com/otbe/aws-downgrade That's the log I get on our infrastructure:
|
Saw it on our Helm chart repo for the Renovate image:
https://developer.mend.io/github/renovatebot/helm-charts/-/job/c23bc731-28f2-4222-89b3-4a44a521c064 |
We are still blocked on this - not sure why it's happening. Any suggestions or new observations are appreciated. |
I've had this with
{
  "packageName": "actions/cache",
  "currentValue": "v4",
  "compareValue": "v4",
  "currentVersion": "v4",
  "update": {
    "bucket": "major",
    "newVersion": "v2",
    "newValue": "v2",
    "newDigest": "8492260343ad570701412c2f464a5877dc76bace",
    "releaseTimestamp": "2023-03-14T07:34:53.000Z",
    "newMajor": 2,
    "newMinor": null,
    "newPatch": null,
    "updateType": "patch"
  },
  "allVersionsLength": 57,
  "filteredReleaseVersions": [
    "v2"
  ]
} |
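The log above is an instance of the new WARN message. As a rough, hypothetical sketch of what the "suppress and warn" behaviour amounts to conceptually (not the actual change in #29921/#29978, which lives in Renovate's lookup code), the idea is simply to refuse an update whose version sorts below the current one:

```typescript
// Simplified, hypothetical sketch of the workaround's idea - not the actual Renovate code.
// Extract the leading major number from tags like "v4" or "4.1.2" and refuse to
// propose an update whose version sorts below the current one.
function majorOf(value: string): number | null {
  const match = /^v?(\d+)/.exec(value);
  return match ? Number(match[1]) : null;
}

function checkUpdate(currentValue: string, newValue: string): boolean {
  const current = majorOf(currentValue);
  const proposed = majorOf(newValue);
  if (current !== null && proposed !== null && proposed < current) {
    // e.g. currentValue "v4", newValue "v2" as in the log above
    console.warn('WARN: Unexpected downgrade detected: skipping', { currentValue, newValue });
    return false; // skip instead of opening a downgrade PR
  }
  return true;
}

checkUpdate('v4', 'v2'); // logs the warning and returns false
```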
Hey @allanlewis, thank you for reporting. Could you please tell whether this run also contained |
Yes, there are 12 of those. I can see that some do reference dependencies with known security vulnerabilities; why would that be relevant? |
We had a hypothesis about how these lines could be the root cause of the problem: the last edit of that if-statement correlates with the first occurrences of the bug, so we have indirect confirmation, though we're not 100% sure yet. |
Hi @zharinov, |
I believe at least one cause of downgrades can be attributed to reusing a branch when the target branch updates. That also explains the fact that the problem goes away when ticking the 'rebase' checkbox. Here is the discussion with the reproduction included. Here is another candidate; the maintainer's answer states that it's only relevant to a group with a regex manager, but I'm fairly certain the regex manager isn't needed to reproduce it. As a side note, I'm not sure why Renovate warns you that the updates might break if you don't pin your dependencies in Poetry, but then uses |
This problem was never explained, and it seems to have disappeared as mysteriously as it came. The Mend App hasn't had this log message in at least 7 days; unfortunately, we didn't notice exactly when it stopped, so we couldn't identify which release it was. |
Describe the proposed change(s).
Renovate in some cases is creating PRs to update dependencies where it's actually a downgrade. It appears to be isolated to docker versioning, which is used for the docker datasource and also for GitHub Actions. Discussion: #29901
Unfortunately we are not yet able to reproduce it.
Update: we have added code to suppress such downgrades and log a message (WARN: Unexpected downgrade detected: skipping) instead, so I have updated the title of this issue to match.