SDK upgrade #111
base: master
Conversation
Force-pushed from 6b66841 to c923de5
Whoa! This is a massive PR, in a positive way. I'll try to allocate some time this week to review it and build it locally. Thanks!
FYI, I rebased the PR against the current master.
deploy/kustomization.yaml
Outdated
Why was the entire deploy directory removed? Moving the CRDs and the role to another location is fine, but there were many more manifests in this directory.
We're deploying the Helm chart internally, but I added the default configuration from the SDK skeleton, with some defaults matching what I would expect from the README and the previous manifests.
Force-pushed from e843503 to a44b73e
Add configurable prefix for DB names
Force-pushed from 45f9337 to 95d57d4
I rebased the branch against the latest master.
I have ported the code from the current master to a clean project created with the current Kubebuilder (targeting controller-runtime 0.15 and Kubernetes 1.27).
This is basically #100, but with tests passing against
export ENVTEST_K8S_VERSION=1.26.x; . <(bin/setup-envtest use -p env $ENVTEST_K8S_VERSION); ginkgo run --flake-attempts=2 internal/controller/
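For readers unfamiliar with envtest, the one-liner above can be unpacked step by step. This is a sketch, not a verified script: it assumes the repository's bin/ directory contains the setup-envtest and ginkgo binaries (as the original command implies), and it depends on network access to download the envtest assets, so it is environment-specific.

```shell
# Pin the envtest Kubernetes version (".x" lets setup-envtest pick the
# latest patch release of the 1.26 series).
export ENVTEST_K8S_VERSION=1.26.x

# setup-envtest downloads the test binaries (kube-apiserver, etcd, kubectl)
# for that version; with "-p env" it prints shell "export" statements
# (e.g. KUBEBUILDER_ASSETS=...), which we source into the current shell.
. <(bin/setup-envtest use -p env $ENVTEST_K8S_VERSION)

# Run the controller test suite with Ginkgo; --flake-attempts=2 retries
# a failing spec once before marking it as failed.
ginkgo run --flake-attempts=2 internal/controller/
```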
The second commit also addresses #110.