
dce-addon/insight-agent chart has repeated operator.insight.io/managed-by field in template charts/kube-prometheus-stack/templates/exporters/core-dns/servicemonitor.yaml #48

Open
DonaldKellett opened this issue Nov 7, 2023 · 1 comment

The dce-addon/insight-agent chart renders a repeated operator.insight.io/managed-by label in the template charts/kube-prometheus-stack/templates/exporters/core-dns/servicemonitor.yaml, which causes HelmRelease reconciliation failures with Flux v2, as shown in the screenshot below:

[screenshot: terminal showing the Flux v2 HelmRelease reconciliation failure]

Chart details

  • Repository: dce-addon (https://release.daocloud.io/chartrepo/addon)
  • Chart name: insight-agent
  • Chart version: 0.21.1 (affects 0.19.x as well)

Steps to reproduce

Unfortunately, some templates appear to rely on data from the running cluster, so helm template cannot render the chart offline.

Install DCE5 Community with dce5-installer according to the official instructions, then run:

helm -n insight-system get manifest insight-agent > insight-agent.yaml.txt

Now inspect the output of the command in insight-agent.yaml.txt (attached below).

insight-agent.yaml.txt

In the YAML document whose source is charts/kube-prometheus-stack/templates/exporters/core-dns/servicemonitor.yaml, observe the duplicated operator.insight.io/managed-by label:

---
# Source: insight-agent/charts/kube-prometheus-stack/templates/exporters/core-dns/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: insight-agent-kube-prometh-coredns
  namespace: insight-system
  labels:
    app: kube-prometheus-stack-coredns
    operator.insight.io/managed-by: insight
    
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: insight-agent
    app.kubernetes.io/version: "45.28.1"
    app.kubernetes.io/part-of: kube-prometheus-stack
    chart: kube-prometheus-stack-45.28.1
    release: "insight-agent"
    heritage: "Helm"
    operator.insight.io/managed-by: insight
spec:
  jobLabel: jobLabel
  
  selector:
    matchLabels:
      app: kube-prometheus-stack-coredns
      release: "insight-agent"
  namespaceSelector:
    matchNames:
      - "kube-system"
  endpoints:
  - port: http-metrics
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

For reference, here is the values.yaml file used (password REDACTED for security reasons):

global:
  exporters:
    auditLog:
      host: insight-opentelemetry-collector.insight-system.svc.cluster.local
    logging:
      host: 192.168.86.201
      password: REDACTED
      port: 31824
      scheme: https
      user: elastic
    metric:
      host: vminsert-insight-victoria-metrics-k8s-stack.insight-system.svc.cluster.local
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      replicas: 1
opentelemetry-collector:
  replicaCount: 1
DonaldKellett (author) commented:

On further inspection, this issue occurs in multiple servicemonitor.yaml files under charts/kube-prometheus-stack/templates/exporters/, likely due to a common cause.
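The affected templates can be enumerated from the manifest captured with helm get manifest above, by splitting on document separators and checking each document's "# Source:" comment. A minimal sketch, assuming the insight-agent.yaml.txt file from the reproduction steps; the affected_sources helper and the inline sample paths other than the core-dns one are illustrative.

```python
import re

def affected_sources(manifest_text: str,
                     label: str = "operator.insight.io/managed-by") -> list[str]:
    """List each '# Source:' template whose document repeats `label`."""
    affected = []
    for doc in manifest_text.split("\n---\n"):
        # A strict parser would reject any document carrying the label twice.
        if doc.count(f"{label}:") > 1:
            m = re.search(r"# Source:\s*(\S+)", doc)
            affected.append(m.group(1) if m else "<unknown source>")
    return affected

# Small inline sample; the second source path is made up for illustration.
sample_manifest = """\
---
# Source: insight-agent/charts/kube-prometheus-stack/templates/exporters/core-dns/servicemonitor.yaml
metadata:
  labels:
    operator.insight.io/managed-by: insight
    operator.insight.io/managed-by: insight
---
# Source: insight-agent/templates/other.yaml
metadata:
  labels:
    operator.insight.io/managed-by: insight
"""
print(affected_sources(sample_manifest))

# Usage against the attached manifest:
# with open("insight-agent.yaml.txt") as f:
#     for src in affected_sources(f.read()):
#         print(src)
```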
