feat(ec): do not allow upgrading more than 1 k8s minor #5313
Conversation
currentVersion, err := extractKubeVersion(params.CurrentECVersion)
if err != nil {
	return errors.Wrap(err, "failed to extract current kube version")
}
updateVersion, err := extractKubeVersion(params.UpdateECVersion)
if err != nil {
	return errors.Wrap(err, "failed to extract update kube version")
}
We're erroring out if we fail to extract a valid kube version, effectively halting the upgrade process altogether. So my questions would be:
- Could there be EC releases out there without the +k8s-<version> version syntax?
- Should we be more cautious here and instead skip the check altogether if we fail to extract either of the versions?
New versions should all have the k8s-<version> suffix. I think this looks good for now.
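For reference, a minimal sketch of how such an extraction could work, assuming the EC version embeds the Kubernetes version as a +k8s-<version> suffix (e.g. "1.14.2+k8s-1.29"); the Masterminds semver / pkg/errors imports and the exact suffix format are assumptions, not necessarily what this PR ships:

package upgradeservice

import (
	"strings"

	"github.com/Masterminds/semver/v3"
	"github.com/pkg/errors"
)

// extractKubeVersion is a sketch only: it assumes the EC version string
// carries the Kubernetes version after a "+k8s-" marker.
func extractKubeVersion(ecVersion string) (*semver.Version, error) {
	_, kubePart, found := strings.Cut(ecVersion, "+k8s-")
	if !found {
		return nil, errors.Errorf("no k8s version found in %q", ecVersion)
	}
	kubeVersion, err := semver.NewVersion(kubePart)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to parse k8s version %q", kubePart)
	}
	return kubeVersion, nil
}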
func TestUpdateWithinKubeRange(t *testing.T) {
	tests := []struct {
		name           string
		params         types.UpgradeServiceParams
		expectError    bool
		expectedErrMsg string
	}{
I don't think we have a good way to test the bootstrap and Serve methods right now, particularly because of the large volume of side effects the methods seem to have. I've opted for some unit tests here but curious to hear your thoughts:

kots/pkg/upgradeservice/server.go, lines 23 to 29 in 1153828:

func Serve(params types.UpgradeServiceParams) error {
	fmt.Printf("Starting KOTS Upgrade Service version %s on port %s\n", buildversion.Version(), params.Port)

	// cleanup on shutdown
	defer cleanup(params)

	if err := bootstrap(params); err != nil {
I think a unit test where you have one is the best option at this point.
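For illustration, a rough sketch of how one such table-driven case could run, assuming testify's require package and the types.UpgradeServiceParams fields seen in the diff; the fixture version strings, the expected error text, and the updateWithinKubeRange function name are hypothetical placeholders for whatever the PR actually tests:

func TestUpdateWithinKubeRange(t *testing.T) {
	tests := []struct {
		name           string
		params         types.UpgradeServiceParams
		expectError    bool
		expectedErrMsg string
	}{
		{
			// Hypothetical fixture: jumping two kube minors should be rejected.
			name: "more than one kube minor",
			params: types.UpgradeServiceParams{
				CurrentECVersion: "1.10.0+k8s-1.29",
				UpdateECVersion:  "1.12.0+k8s-1.31",
			},
			expectError:    true,
			expectedErrMsg: "more than one minor version",
		},
		{
			// Hypothetical fixture: a single-minor bump should pass.
			name: "one kube minor",
			params: types.UpgradeServiceParams{
				CurrentECVersion: "1.10.0+k8s-1.29",
				UpdateECVersion:  "1.11.0+k8s-1.30",
			},
			expectError: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// updateWithinKubeRange is a placeholder name for the function under test.
			err := updateWithinKubeRange(tt.params)
			if tt.expectError {
				require.Error(t, err)
				require.Contains(t, err.Error(), tt.expectedErrMsg)
			} else {
				require.NoError(t, err)
			}
		})
	}
}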
if currentVersion.Major() != updateVersion.Major() {
	return errors.Errorf("major version mismatch: current %s, update %s", currentVersion.String(), updateVersion.String())
}
if updateVersion.Minor() > currentVersion.Minor()+1 {
do you want to prevent downgrades here as well? or is that handled somewhere else?
TIL we support downgrading the kube version 👀 Nvm, I get what you mean now.
Done via f06ade3
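For context, a sketch of what the combined check could look like once a downgrade guard is added; the function name and error wording are assumptions, and only the major/minor comparisons mirror the diff above:

// checkKubeVersionSkew is a sketch of the full check with a downgrade guard,
// operating on the semver values returned by extractKubeVersion.
func checkKubeVersionSkew(currentVersion, updateVersion *semver.Version) error {
	if currentVersion.Major() != updateVersion.Major() {
		return errors.Errorf("major version mismatch: current %s, update %s", currentVersion.String(), updateVersion.String())
	}
	if updateVersion.Minor() < currentVersion.Minor() {
		// downgrade guard, in the spirit of what f06ade3 added
		return errors.Errorf("cannot downgrade kubernetes: current %s, update %s", currentVersion.String(), updateVersion.String())
	}
	if updateVersion.Minor() > currentVersion.Minor()+1 {
		return errors.Errorf("cannot skip more than one kubernetes minor: current %s, update %s", currentVersion.String(), updateVersion.String())
	}
	return nil
}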
Co-authored-by: Ethan Mosbaugh <[email protected]>
Co-authored-by: Alex Parker <[email protected]>
Mind updating the screenshot in the description with what the error now looks like?
Missed it, sorry. Yeah sure.
@ajp-io updated the screenshot.
What this PR does / why we need it:
This is a temporary measure while we work on releasing 1.31 (see replicatedhq/embedded-cluster#2148), to avoid having customers jump more than one Kubernetes minor version (e.g. moving from 1.29 to 1.31).
What we do is return an error that gets exposed in the UI when the user triggers a deploy. This is how it currently shows up after a user presses deploy under the upgrade:

Which issue(s) this PR fixes:
https://app.shortcut.com/replicated/story/113724/release-ec-with-support-for-1-31
Does this PR require a test?
Yes
Does this PR require a release note?
Does this PR require documentation?
Maybe?