Upstream initial fixes #7
base: master
Conversation
* ServiceAccounts were created without the server's CA certificate included.
* Privileged pods were not permitted. Allowing them can be bad for security, but this can be further restricted using PodSecurity, and some privileged pods are necessary on occasion.
* The kubernetes.default.svc.cluster.local certificate should contain the IP address of that service (the first available IP address in the ClusterIP CIDR), since some services will try to connect to the apiserver via its IP address.
* OpenSSL is useful for debugging certificates, e.g. printing out their X.509 information in plain text.
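As a concrete illustration of that last point, here is a hedged sketch of minting a throwaway certificate whose SAN list includes both the cluster DNS name and the ClusterIP, then dumping its X.509 information in plain text. The file paths are illustrative, and `-addext` requires OpenSSL 1.1.1 or newer:

```shell
# Generate a self-signed certificate with the apiserver's DNS name and
# ClusterIP in the Subject Alternative Name extension.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt -days 1 \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:kubernetes.default.svc.cluster.local,IP:10.32.0.1"

# Print the certificate in plain text; the SAN section should list both
# the DNS name and the IP address.
openssl x509 -in /tmp/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The same `openssl x509 -noout -text` invocation works on any PEM certificate, which makes it handy for checking whether a cluster certificate actually carries the names clients will connect with.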
```diff
@@ -16,6 +16,7 @@ let
     # Alternative names remain, as they might be useful for debugging purposes
     getAltNames "controlplane" ++
     getAltNames "loadbalancer" ++
+    lib.singleton "10.32.0.1" ++
```
Nixpkgs' standard library doesn't contain any functions for manipulating IPv4/IPv6 addresses and CIDRs, so I think this being hardcoded is acceptable until a better solution is found or such functions are added.
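For reference, the hardcoded address is simply the first usable host in the service CIDR. A rough POSIX-shell sketch of that computation (the `first_service_ip` helper is hypothetical, not part of this PR):

```shell
# Compute the first usable host address in an IPv4 CIDR using only
# POSIX shell arithmetic. This is the address Kubernetes conventionally
# assigns to the kubernetes.default service.
first_service_ip() {
  cidr=$1
  ip=${cidr%/*}
  prefix=${cidr#*/}
  # Split the dotted quad into $1..$4.
  oldifs=$IFS; IFS=.; set -- $ip; IFS=$oldifs
  n=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  first=$(( (n & mask) + 1 ))
  echo "$(( (first >> 24) & 255 )).$(( (first >> 16) & 255 )).$(( (first >> 8) & 255 )).$(( first & 255 ))"
}

first_service_ip 10.32.0.0/24   # prints 10.32.0.1
```

Something equivalent written in Nix would let the module derive the address from the configured ClusterIP CIDR instead of hardcoding it.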
Sorry, I kinda forgot about this for a while. Thank you for your changes! This looks great in general, although I'm unsure about the blanket …
In my case, I needed privileged pods for the Cilium CNI to run. This was a few weeks ago, so I don't remember exactly which component of Cilium needed a privileged pod or why. I didn't bother packaging it the way flannel is packaged in this project (run outside the cluster), because that's not generally a supported way to run a CNI, and my personal goal isn't to spin up a cluster that has no pods in it.
It could be done using the addon manager, sort of like how the nixpkgs kubernetes module does things, but from the discussion I've been reading, this addon manager is deprecated and scheduled for removal, so I didn't bother investing any time in it. All it is, anyway, is a bash script that applies resources to the cluster using an RBAC role.
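The core of such an addon manager can be sketched in a few lines of shell. The directory path and the `apply_addons` name below are made up for illustration, and the real addon manager also handles pruning and periodic reconciliation:

```shell
# Minimal addon-manager-style loop: apply every manifest found in an
# addons directory to the cluster. In a real deployment kubectl would be
# pointed at a kubeconfig bound to a dedicated RBAC role.
apply_addons() {
  addon_dir=${1:-/etc/kubernetes/addons}
  for manifest in "$addon_dir"/*.yaml; do
    # Skip the literal glob when the directory is empty.
    [ -e "$manifest" ] || continue
    kubectl apply -f "$manifest"
  done
}
```

Running this from a systemd timer or a simple `while` loop would approximate the deprecated addon manager's reconcile behavior without depending on it.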
Adapts fixes from #7.

* Set `caFile` globally
* Add `10.32.0.1` to TLS cert names
* Add `serviceAccountKeyFile` for controller manager
Hey, thanks for the hard work putting out this proof of concept. I'm attempting to build better Kubernetes infrastructure than what is offered in Nixpkgs, and using this as groundwork is giving me a lot of inspiration. While I'm aiming at something more production-grade than a toy cluster, I started out with this and fixed a few issues I identified while trying to deploy applications, such as the Cilium CNI, into the cluster after the initial bootstrap.
Issues identified and fixed: