Add helm chart #2208
Conversation
Welcome @stevehipwell!
/assign @Raffo
Hi,
Left some comments.
Force-pushed from 742ca7f to 6a95d39.
Seems like the linter is failing 😞
@njuettner the Kind action failed (see helm/kind-action#41 for why it didn't error), which caused the later actions to fail. If you have the permissions, could you re-run the action?
@Raffo could you re-run the lint-test action for the above-mentioned reason?
Force-pushed from 6a95d39 to f492149.
@Raffo @njuettner I've pushed again so it looks like the actions will need to be enabled again.
@stevehipwell I enabled CI, but the linter is still unhappy.
@Raffo I can't get to the logs, but was the Kind cluster creation successful or did it never become healthy?
I got to the logs, and it looks like it was Kind. Could you re-run the action?
@stevehipwell kind being unkind uh 😅 rerunning!
And it failed again. I think the build deserves a deeper look at why this is happening.
Hi guys, https://github.com/kubernetes-sigs/external-dns/pull/2208/checks?check_run_id=3320498211 shows `0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate`. I'm not familiar with the cluster where you are running this, but it sounds like it is spawning a new node to run this pod and you are not waiting long enough for the node to become ready.
If I remember correctly, it's possible to increase the timeout on the Kind action. I run a number of charts with this exact config and hadn't seen this issue until recently; I think I've only ever seen it twice in well over 50 runs.
@stevehipwell you can use act to simulate GHA runs locally. Also, we found the k3d project to be more lightweight for spawning k8s clusters in CI. Here is the action for k3d, JFYI.
Thanks @k0da, as I said above this pattern has been working for years and hundreds of chart PRs so I've no idea why it's suddenly got brittle. |
Signed-off-by: Steve Hipwell <[email protected]>
Force-pushed from f492149 to 124fd70.
@Raffo I've updated the Kind action version (I thought I'd already done this) and added a longer wait for the cluster to be ready, which should solve this.
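For illustration, a change along those lines might look something like the following step in the lint-test workflow (the pinned action version and the wait duration here are assumptions, not necessarily what was actually pushed):

```yaml
# Fragment of a lint-test job's steps: create the Kind cluster and wait
# longer for the node to become Ready before installing the chart.
# The action version and the 120s wait are illustrative assumptions.
- name: Create Kind cluster
  uses: helm/kind-action@v1.2.0
  with:
    wait: 120s
```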
@stevehipwell CI is green and the chart content looks all right to me. However, I think we need to remove/change the part on releasing the chart: we have a consolidated release process described in https://github.com/kubernetes-sigs/external-dns/blob/master/docs/release.md and we would need to make creating the release part of it. I would also love to avoid having multiple GitHub releases for what are code and Helm changes. Would you be able to propose a change to incorporate that?
@Raffo unfortunately the chart releaser will create a GitHub release for each chart release, and there is no way to change this behaviour. You'll see in the release action that the suffix `-helm-chart` is added to distinguish the release. Would it be easier to merge this PR and add the documentation separately, as the first release only requires the gh-pages branch to be created with GitHub Pages enabled and the PR to be accepted?
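For context, a release step along these lines might look roughly as follows; the action version, config file name, and release-name template are assumptions for illustration rather than the exact contents of this PR:

```yaml
# Fragment of a release workflow job: package charts/ and publish them to
# the gh-pages branch, creating one GitHub release per chart version.
# The cr.yaml config and its release-name-template are assumptions; a
# template such as "{{ .Name }}-helm-chart-{{ .Version }}" would produce
# the -helm-chart suffix mentioned above.
- name: Run chart-releaser
  uses: helm/chart-releaser-action@v1.2.1
  with:
    charts_dir: charts
    config: cr.yaml
  env:
    CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```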
@stevehipwell would it be beneficial to include the external-dns CRD? It appears that Bitnami includes the CRD in their Helm chart. However, they are doing it wrong IMO; according to the Helm docs, it should not be templated.
@onedr0p if the CRDs are fully supported then they should be in the `crds/` chart folder, but as they're not in the standard manifest install I've not added them. However, if you read the Helm 3 docs you will see that although CRD support is partially implemented, you shouldn't use it and should disable installing CRDs with the `--skip-crds` flag.
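For reference only, if the CRD were ever added via the Helm 3 `crds/` folder being discussed, it would look roughly like the sketch below (abbreviated and illustrative; this PR deliberately does not include it):

```yaml
# charts/external-dns/crds/dnsendpoint.yaml (illustrative path; not part of this PR).
# Helm 3 installs files under crds/ verbatim: they are never templated or
# upgraded by Helm, and installation can be skipped with `helm install --skip-crds`.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dnsendpoints.externaldns.k8s.io
spec:
  group: externaldns.k8s.io
  scope: Namespaced
  names:
    kind: DNSEndpoint
    listKind: DNSEndpointList
    plural: dnsendpoints
    singular: dnsendpoint
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          # Schema abbreviated for illustration; the real CRD defines the full spec.
          type: object
          x-kubernetes-preserve-unknown-fields: true
```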
This sounds good to me.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: njuettner, Raffo, stevehipwell. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@Raffo @njuettner would either of you be willing to sponsor me as a member of the Kubernetes SIGs GitHub organisation? |
@stevehipwell what would be the need? Approving the Helm chart PRs?
@Raffo being in the owners file to get notified of chart PRs to review would be the main reason. I maintain multiple Kubernetes SIGs charts so it'd be easier if I was a member.
Description
Adds a Helm chart based on the official manifests in the repo; the Helm repo will be hosted by GitHub Pages at https://kubernetes-sigs.github.io/external-dns/.
Adds required GitHub Actions CI.
- Changes to `/charts/external-dns/**` will trigger the lint and test action (a sketch of this trigger follows below)
- Changes to `/charts/external-dns/**` on the master branch will trigger a release
- The `gh-pages` branch needs to be set up so the release action can use it as the Helm repo

The chart is fully documented, but I've not linked it into any of the other documentation as maybe this should be a soft launch? Otherwise, could someone point me in the right direction to put the documentation in the correct place.
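As an illustration of the path filtering described above, the lint-and-test workflow would look roughly like this (the file name, action versions, and target branch are assumptions rather than the exact contents of the PR):

```yaml
# .github/workflows/lint-test.yaml (sketch). A release workflow would use a
# push trigger on master with the same paths filter instead of pull_request.
name: Lint and Test Charts
on:
  pull_request:
    paths:
      - "charts/external-dns/**"
jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # chart-testing compares against the target branch
      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2.1.0
      - name: Lint charts
        run: ct lint --target-branch master
```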
Fixes #1941
Checklist