
Minikube started from a GCE disk image #19587

Closed
divya-kumari-27 opened this issue Sep 9, 2024 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@divya-kumari-27

divya-kumari-27 commented Sep 9, 2024

What Happened?

Is it possible to have a stable minikube cluster created with the following steps? (A rough command-level sketch follows the list.)

  1. Create a VM instance on Google Compute Engine
  2. On Ubuntu 20.04 LTS, start minikube (--api-server=VM_IP is passed in the args)
  3. Deploy services
  4. Stop minikube
  5. Turn off the GCE VM
  6. Create a disk image
  7. Create a new instance from the created disk image
  8. Restart minikube with minikube start (--api-server=<new VM_IP> is passed in the args)
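
For reference, here is a rough sketch of those steps as commands. The instance names, zone, and the --apiserver-ips flag are placeholders and my assumptions; in my actual setup I only pass --api-server=VM_IP as noted above.

# create the VM (names and zone are placeholders)
$ gcloud compute instances create minikube-host --zone=us-central1-a \
    --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud

# on the VM: start minikube, deploy services, then stop it
$ minikube start --driver=docker --apiserver-ips=<VM_IP>
$ minikube stop

# back on the workstation: stop the VM, create a disk image, boot a new instance from it
$ gcloud compute instances stop minikube-host --zone=us-central1-a
$ gcloud compute images create minikube-image --source-disk=minikube-host --source-disk-zone=us-central1-a
$ gcloud compute instances create minikube-host-2 --image=minikube-image --zone=us-central1-a

# on the new VM: restart minikube, passing the new VM IP
$ minikube start --driver=docker --apiserver-ips=<NEW_VM_IP>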

After doing the above steps, I can see my service pods up and running, but some of the kube-system pods go into an error state. Once the pods enter the error state, deleting them does not fix the problem.

Is there a way to recover this cluster?

$ kubectl get pods -A
NAMESPACE       NAME                                                              READY   STATUS             RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-sw8gt                              0/1     Completed          0          28d
ingress-nginx   ingress-nginx-admission-patch-khgpv                               0/1     Completed          0          28d
ingress-nginx   ingress-nginx-controller-5fc9586f46-hl2v9                         1/1     Running            0          28d
kube-system     coredns-558bd4d5db-bz2tz                                          0/1     Error              0          28d
kube-system     etcd-minikube                                                     1/1     Running            1          28d
kube-system     kube-apiserver-minikube                                           0/1     Running            1          28d
kube-system     kube-controller-manager-minikube                                  0/1     Running            2          28d
kube-system     kube-proxy-557r6                                                  0/1     Error              0          28d
kube-system     kube-scheduler-minikube                                           1/1     Running            1          28d
kube-system     storage-provisioner                                               0/1     Error              1          28d

Attach the log file

* ==> storage-provisioner [51246a3fb3be] <==
* I0909 09:57:17.937640       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0909 09:57:17.951859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0909 09:57:17.952583       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0909 09:57:35.490100       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0909 09:57:35.490163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3cb096e-3877-4319-8669-5d648956ae54", APIVersion:"v1", ResourceVersion:"2399198", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_803f2b0d-6636-4b15-9fab-01d23ee5c582 became leader
I0909 09:57:35.490255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_803f2b0d-6636-4b15-9fab-01d23ee5c582!
I0909 09:57:35.592663       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_803f2b0d-6636-4b15-9fab-01d23ee5c582!



* ==> kube-scheduler [f9cddea0fb85] <==
* I0804 11:45:32.015790       1 serving.go:347] Generated self-signed cert in-memory
I0804 11:45:32.365669       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0804 11:45:32.365705       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0804 11:45:32.365714       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0804 11:45:32.365737       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0804 11:45:32.365759       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0804 11:45:32.365765       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0804 11:45:32.365907       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0804 11:45:32.365966       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0804 11:45:32.466200       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0804 11:45:32.466246       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0804 11:45:32.466327       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 


I can share more logs, if running the minikube cluster from a VM created out of a disk image is possible at all; some commands I could use to gather them are sketched below.
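
For example, these are the kinds of commands I would run to collect more detail (pod names taken from the output above):

$ minikube logs --file=minikube-logs.txt
$ kubectl -n kube-system describe pod coredns-558bd4d5db-bz2tz
$ kubectl -n kube-system logs kube-proxy-557r6 --previous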

Operating System

Ubuntu

Driver

Docker

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) label Dec 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) label and removed the lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) label Jan 7, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Feb 6, 2025
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
