Problem statement
One of the perks of terraform is the ability to recreate identical environments, as everything is defined in code.
As part of this, I would expect that if I apply terraform one day to create one env, I should be able to re-create that env by re-applying that same terraform a few days later (within a reasonable timeframe).
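To make that concrete, the configuration in question looks roughly like this (a minimal sketch; the argument names follow my reading of the vultr provider docs, and the region/plan values are placeholders rather than my real setup):

```hcl
# Minimal sketch of a pinned VKE cluster, assuming the vultr_kubernetes
# resource and these argument names; region/plan values are placeholders.
resource "vultr_kubernetes" "example" {
  region  = "ams"
  label   = "example-cluster"
  version = "v1.29.1+1" # pinned; this is what stops applying a week later

  node_pools {
    node_quantity = 3
    plan          = "vc2-2c-4gb"
    label         = "example-pool"
  }
}
```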
This does not currently seem to be the case:
One week ago, I created a kubernetes cluster on v1.29.1+1, which was the latest back then.
Today, I try to recreate this cluster, but I get the following error: VKE server error: Could not add Vultr Kubernetes Engine: Invalid K8 version.
When checking vultr-cli kubernetes versions, I see:
VERSIONS
v1.29.2+1
v1.28.7+1
v1.27.11+1
My understanding of this is that VKE only allows for creating clusters on the latest possible patch version. Is that correct?
This is not only problematic for terraform usage and environment reproducibility, but it also raises stability issues - what if a cluster needs to be recreated on an older patch version, because the latest patch has a bug or an unexpected breaking change?
If VKE only allows for cluster operators to pin to the minor version (1.29), why expose the patch versions at all?
Solutions
VKE should provide a longer list of kubernetes versions that clusters can be created on; ideally, every released patch version should remain available for as long as that minor version is supported. This would make terraform far more reproducible; deprecating these versions is technically a breaking change!
VKE could provide an option to subscribe to a version "stream", e.g. "rapid", "regular", and "stable", or even just to a minor kubernetes version. This could allow for automatic patch updates, even if they were rolling upgrades over a longer time rather than surge upgrades.
The Vultr terraform provider could add a kubernetes version data source, filterable by a prefix, to check that versions are valid (or even auto-update!), similar to what digitalocean does in their provider; a sketch of what this could look like is below. I would expect a version change on a live cluster to trigger a kubernetes surge upgrade, similar to what would happen when an upgrade is applied through the UI.
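For reference, the DigitalOcean pattern looks roughly like this; I'm recalling their docs from memory, so treat the exact attribute names as approximate:

```hcl
# Sketch of the digitalocean provider's versions data source, filtered by
# a minor-version prefix; attribute names recalled from their docs.
data "digitalocean_kubernetes_versions" "example" {
  version_prefix = "1.29."
}

resource "digitalocean_kubernetes_cluster" "example" {
  name    = "example-cluster"
  region  = "ams3"
  version = data.digitalocean_kubernetes_versions.example.latest_version

  node_pool {
    name       = "default"
    size       = "s-2vcpu-4gb"
    node_count = 3
  }
}
```

Filtering by a minor-version prefix like this keeps the pinned minor version reproducible while still resolving to whichever patch release the platform currently accepts.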
Alternatives
Manually checking the available kubernetes versions at every terraform run.
(apologies in advance for the long post!)
Thanks for the write-up! I agree with all of this but I also think it's important to differentiate between k8s versions when a patch is made. When a version is patched, it's usually for a good reason and I think that's why you cannot deploy an older version. As for what I think should be done, I've been thinking about building a data source for the k8s versions so I'm going to do that no matter what. That, at least, can be accomplished wholly within the terraform provider. Your other proposed solutions would require significant alteration within the platform and some careful consideration.
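Just to sketch the rough shape I have in mind (the data source name and attributes below are hypothetical, none of this exists in the provider yet):

```hcl
# Hypothetical usage of a future Vultr k8s versions data source; the
# "vultr_kubernetes_versions" name and its attributes are illustrative only.
data "vultr_kubernetes_versions" "current" {
  version_prefix = "v1.29" # hypothetical prefix filter, as proposed above
}

resource "vultr_kubernetes" "example" {
  region  = "ams"
  label   = "example-cluster"
  # resolve the pinned minor version to whatever patch VKE currently accepts
  version = data.vultr_kubernetes_versions.current.latest_version

  node_pools {
    node_quantity = 3
    plan          = "vc2-2c-4gb"
    label         = "example-pool"
  }
}
```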
I absolutely agree that different k8s versions should be differentiated, even when it's just for a patch! We're on the same page there :)
But that's why I opened the issue; the current behaviour forces users to disregard the patch version, since they can only create clusters on the "latest" patch, which means that in practice, users only really control the minor version, even if the patch version is "exposed" in the API.
In an ideal world, yes - software should always be kept up to date. However, the reality is that a patch version can still introduce unintentional bugs, which is why leaving version selection to users makes sense.
That's why I'm asking to reconsider the hard-deprecation policy that is currently in place for patch versions; I'm aware that this is probably out of scope for the terraform provider, but as the provider is an official consumer of the vultr API, I figured this was the closest I could get to a feature request. I guess what I'm asking for here is for a vultr-insider to pass this feedback on to whichever team is responsible for VKE+the Vultr API :) Thank you!
After all, Debian and Ubuntu package repositories allow you to install specific patch versions; why shouldn't Vultr? (Unless of course I'm missing some internal context, which is entirely possible!)