
[Bug]: Enabling rancher after previously disabling it does not work #1466

Closed

hco opened this issue Aug 31, 2024 · 1 comment

Labels
bug Something isn't working

hco commented Aug 31, 2024

Description

I tried to enable rancher before, but then realized that my k3s was too new. By now it should be supported, so I tried to re-enable rancher.
After re-enabling rancher by setting enable_rancher to true, nothing happened except for k3s_kustomization_backup.yaml being updated.

My feeling is that it should have installed rancher using helm, but helm ls --all-namespaces does not show it. kubectl get helmcharts.helm.cattle.io --all-namespaces also does not contain anything rancher-related.
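For reference, these are the exact checks I ran; neither shows anything rancher-related:

# Look for a rancher release installed via helm in any namespace
helm ls --all-namespaces

# Look for HelmChart custom resources managed by the in-cluster helm controller
kubectl get helmcharts.helm.cattle.io --all-namespaces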

Kube.tf file

locals {
  hcloud_token = "xxxxxxxxxxx"
}

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token

  source = "kube-hetzner/kube-hetzner/hcloud"
  version = "2.14.4"
  ssh_public_key = var.ssh_public_key
  ssh_private_key = var.ssh_private_key
  ssh_additional_public_keys = ["foobar"]
  network_region = "eu-central" # change to `us-east` if location is ash
  control_plane_nodepools = [
    {
      name        = "control-plane-fsn1",
      server_type = "cx21",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
    },
    {
      name        = "control-plane-nbg1",
      server_type = "cx21",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 1
    },
    {
      name        = "control-plane-hel1",
      server_type = "cx21",
      location    = "hel1",
      labels      = [],
      taints      = [],
      count       = 1
    }
  ]

  agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cpx11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 0
    },
    {
      name        = "agent-large",
      server_type = "cpx21",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 0
    },
    {
      name        = "agent-cpx31",
      server_type = "cpx31",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count = 4
    },
  ]

  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"

  initial_k3s_channel = "v1.29"

  dns_servers = [
    "1.1.1.1",
    "8.8.8.8",
    "2606:4700:4700::1111",
  ]

  use_control_plane_lb = true
  enable_rancher = true
  rancher_hostname = "rancher.foo.bar.baz"
}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}

variable "ssh_private_key" {
  sensitive = true
  default   = ""
}

variable "ssh_public_key" {
  sensitive = false
  default   = ""
}

Screenshots

No response

Platform

mac

hco added the bug label Aug 31, 2024
mysticaltech (Collaborator) commented

@hco We use the rancher helm controller, whose CRDs live within the cluster. Just run kubectl get crds -A | grep helm to find them, query them, and delete the ones for rancher.
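A sketch of that cleanup, assuming the stale Rancher objects are HelmChart resources in the kube-system namespace (the resource name rancher and the namespace below are assumptions; use whatever the get command actually lists):

# Find the helm controller CRDs (helmcharts.helm.cattle.io etc.)
kubectl get crds -A | grep helm

# Query the HelmChart resources across all namespaces
kubectl get helmcharts.helm.cattle.io --all-namespaces

# Assumed name/namespace: delete the stale rancher HelmChart so it can be recreated
kubectl delete helmchart.helm.cattle.io rancher -n kube-system

Once the stale objects are gone, re-applying with enable_rancher = true should let the controller recreate them.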
