This repository has been archived by the owner on Aug 19, 2024. It is now read-only.

Rolling deployment restart doesn't restart the pod #281

Closed
masayag opened this issue Mar 27, 2024 · 4 comments
Labels
jira Issue will be sync'ed to Red Hat JIRA

Comments

@masayag
Contributor

masayag commented Mar 27, 2024

It seems that with the RHDH 1.1 operator, rolling out a restart of the deployment doesn't affect the running pod:

# replicaset before issuing a rollout restart of the deployment
→ oc get -n rhdh-operator replicasets.apps 
NAME                             DESIRED   CURRENT   READY   AGE
backstage-backstage-76cf6bb88b   1         1         1       21h
rhdh-operator-6d9786bbf9         1         1         1       21h

# pods before issuing a rollout restart of the deployment
→ oc get -n rhdh-operator pods
NAME                                   READY   STATUS    RESTARTS   AGE
backstage-backstage-76cf6bb88b-q95qw   1/1     Running   0          21h
backstage-psql-backstage-0             1/1     Running   0          21h
rhdh-operator-6d9786bbf9-58krh         2/2     Running   0          21h

# executing a restart
→ oc rollout restart deployment -n rhdh-operator backstage-backstage 
deployment.apps/backstage-backstage restarted

# pod didn't restart
→ oc get -n rhdh-operator pods
NAME                                   READY   STATUS    RESTARTS   AGE
backstage-backstage-76cf6bb88b-q95qw   1/1     Running   0          21h
backstage-psql-backstage-0             1/1     Running   0          21h
rhdh-operator-6d9786bbf9-58krh         2/2     Running   0          21h

# replicaset created, but not advanced
→ oc get -n rhdh-operator replicasets.apps 
NAME                             DESIRED   CURRENT   READY   AGE
backstage-backstage-76cf6bb88b   1         1         1       21h
backstage-backstage-7d8dd84c56   0         0         0       13s
rhdh-operator-6d9786bbf9         1         1         1       21h
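A quick way to confirm the rollout is actually stuck (rather than just slow) is `oc rollout status` with a timeout; it exits non-zero if the new ReplicaSet never advances. The deployment and namespace names below are taken from the output above.

```shell
# Waits for the rollout to complete; exits non-zero after the timeout
# if the new ReplicaSet never scales up
oc rollout status deployment/backstage-backstage -n rhdh-operator --timeout=60s
```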

More info for the new ReplicaSet:

→ oc describe rs backstage-backstage-7d8dd84c56 -n rhdh-operator 
Name:           backstage-backstage-7d8dd84c56
Namespace:      rhdh-operator
Selector:       pod-template-hash=7d8dd84c56,rhdh.redhat.com/app=backstage-backstage
Labels:         pod-template-hash=7d8dd84c56
                rhdh.redhat.com/app=backstage-backstage
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 12
Controlled By:  Deployment/backstage-backstage
Replicas:       0 current / 0 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       pod-template-hash=7d8dd84c56
                rhdh.redhat.com/app=backstage-backstage
  Annotations:  kubectl.kubernetes.io/restartedAt: 2024-03-27T22:43:23+02:00
  Init Containers:
   install-dynamic-plugins:
    Image:      registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:3d620b8f73dfd9d79fb1eaddcf97096299e61d167ff6c871758badbf8218b915
    Port:       <none>
    Host Port:  <none>
    Command:
      ./install-dynamic-plugins.sh
      /dynamic-plugins-root
    Limits:
      cpu:                1
      ephemeral-storage:  5Gi
      memory:             2560Mi
    Environment:
      NPM_CONFIG_USERCONFIG:  /opt/app-root/src/.npmrc.dynamic-plugins
    Mounts:
      /dynamic-plugins-root from dynamic-plugins-root (rw)
      /opt/app-root/src/.npmrc.dynamic-plugins from dynamic-plugins-npmrc (ro,path=".npmrc")
      /opt/app-root/src/dynamic-plugins.yaml from dynamic-plugins-rhdh (ro,path="dynamic-plugins.yaml")
  Containers:
   backstage-backend:
    Image:      registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:3d620b8f73dfd9d79fb1eaddcf97096299e61d167ff6c871758badbf8218b915
    Port:       7007/TCP
    Host Port:  0/TCP
    Args:
      --config
      dynamic-plugins-root/app-config.dynamic-plugins.yaml
      --config
      /opt/app-root/src/app-config-rhdh.yaml
      --config
      /opt/app-root/src/app-config-auth.gh.yaml
      --config
      /opt/app-root/src/app-config-catalog.yaml
    Limits:
      cpu:                1
      ephemeral-storage:  5Gi
      memory:             2560Mi
    Liveness:             http-get http://:7007/healthcheck delay=60s timeout=2s period=10s #success=1 #failure=3
    Readiness:            http-get http://:7007/healthcheck delay=30s timeout=2s period=10s #success=2 #failure=3
    Environment Variables from:
      backstage-psql-secret-backstage  Secret  Optional: false
      backstage-backend-auth-secret    Secret  Optional: false
    Environment:
      APP_CONFIG_backend_listen_port:  7007
    Mounts:
      /opt/app-root/src/app-config-auth.gh.yaml from app-config-rhdh-auth (rw,path="app-config-auth.gh.yaml")
      /opt/app-root/src/app-config-catalog.yaml from app-config-rhdh-catalog (rw,path="app-config-catalog.yaml")
      /opt/app-root/src/app-config-rhdh.yaml from app-config-rhdh (rw,path="app-config-rhdh.yaml")
      /opt/app-root/src/dynamic-plugins-root from dynamic-plugins-root (rw)
  Volumes:
   dynamic-plugins-root:
    Type:          EphemeralVolume (an inline specification for a volume that gets created and deleted with the pod)
    StorageClass:  
    Volume:        
    Labels:            <none>
    Annotations:       <none>
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
   dynamic-plugins-npmrc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dynamic-plugins-npmrc
    Optional:    true
   dynamic-plugins-rhdh:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dynamic-plugins-rhdh
    Optional:  false
   app-config-rhdh:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      app-config-rhdh
    Optional:  false
   app-config-rhdh-auth:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      app-config-rhdh-auth
    Optional:  false
   app-config-rhdh-catalog:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      app-config-rhdh-catalog
    Optional:  false
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  2m20s  replicaset-controller  Created pod: backstage-backstage-7d8dd84c56-mqhx9
  Normal  SuccessfulDelete  2m19s  replicaset-controller  Deleted pod: backstage-backstage-7d8dd84c56-mqhx9
@github-actions github-actions bot added the jira Issue will be sync'ed to Red Hat JIRA label Mar 27, 2024
@gazarenkov
Member

Hi @masayag

Thanks for the feedback!
I can confirm that oc rollout restart deployment does not work as expected: it does not recreate the pod, so if you change an associated ConfigMap or Secret the change is not picked up. This needs some further investigation, so if you find additional info please share it.
In the meantime, if you need to apply changes to ConfigMaps/Secrets (until we implement #236), one of the following can be used:

  • delete the pod and let the ReplicaSet recreate it — the simplest option for a single replica, IMO
  • scale the Backstage CR to 0 and then back up using the Spec's replicas field
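The two workarounds above can be sketched as follows. The pod label selector is taken from the ReplicaSet output above; the CR name (`backstage`) and the exact path of the replicas field are assumptions — check `oc explain backstage.spec` for your operator version before using the patch commands.

```shell
# Workaround 1: delete the pod; the ReplicaSet recreates it immediately
oc delete pod -n rhdh-operator -l rhdh.redhat.com/app=backstage-backstage

# Workaround 2: scale the Backstage CR down and back up.
# NOTE: the CR name and the replicas field path are assumptions here.
oc patch backstage backstage -n rhdh-operator --type merge \
  -p '{"spec": {"application": {"replicas": 0}}}'
oc patch backstage backstage -n rhdh-operator --type merge \
  -p '{"spec": {"application": {"replicas": 1}}}'
```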

@rm3l
Member

rm3l commented Mar 29, 2024

IIRC, kubectl rollout restart adds the kubectl.kubernetes.io/restartedAt annotation to the pod template, which normally causes the Pod to restart, but my guess is that the Operator overwrites these annotations when it reconciles the Deployment.
As an additional workaround, I usually delete the Deployment and let the Operator recreate it. But this workaround (same as the #281 (comment) above) would unfortunately cause some downtime, I guess.
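For context, the restart mechanism is just a patch on the pod template: a manual equivalent of `oc rollout restart` would look roughly like the command below (the timestamp is the one visible in the ReplicaSet describe output above). If the operator then reconciles the Deployment back to its own desired spec, the annotation is stripped and no new pods roll out — which matches the behavior seen here.

```shell
# Roughly what `oc rollout restart` does under the hood: bump a timestamp
# annotation on the pod template so the Deployment creates a new ReplicaSet
oc patch deployment backstage-backstage -n rhdh-operator --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2024-03-27T22:43:23+02:00"}}}}}'
```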

@gazarenkov
Member

Good point, @rm3l !
Yes, I think it comes down to the fact that version 1.1 (0.1) watches the Deployment.
Just tested oc rollout restart deployment with the next version and it works fine.

@gazarenkov
Member

Works on 1.2.
Closing this issue; please feel free to reopen if needed.
