No-downtime rollout #299
Comments
I am also curious about upgrades with no downtime. @maelp, do you mean when it's running inside Docker and you want to update process-compose? Would appreciate the context.
I think that kamal-proxy is a decent alternative to the Wowu one. It's a Golang system with its own HTTP proxy that allows you to upgrade a Docker container; when it upgrades, it does a blue/green upgrade, draining the existing service and then diverting traffic to the replacement service. This is required so you have no downtime in terms of connections. It has a "rollout" command that does this: https://github.com/basecamp/kamal-proxy/tree/main/internal/cmd shows that command. I use this, as does a very large community in the Ruby world.

You can also layer Caddy on top for extra features if you really need them, like automatic SSL certificates, Cloudflare DNS, and certificate storage for a cluster (which is required, by the way, so Let's Encrypt etc. don't ban you). https://caddyserver.com/docs/json/storage/ is the list of Caddy storage modules; S3 is a decent choice.

As for process-compose draining and swapping a sub-process such that there is no downtime at all, there are a few Golang packages that can do this and might be inspiration. This Cloudflare article explains how, with 4 Golang packages that do it: https://blog.cloudflare.com/graceful-upgrades-in-go/. Process-compose would obviously be the supervisor, and I think a small amount of code would allow it to do the binary swap without losing any connections or downtime.
I was thinking more of having process-compose run either a Docker container or a normal process, but sending traffic through a proxy, so that when we launch a new process (which can take a bit of time) there's no downtime: we just start the new process and then switch the proxy.
But then you will have slow startup, because Docker has to start the image. I might not be understanding, but if you want a quick startup and shutdown experience, you want Docker to be always up, and just get PC to manipulate which binaries it is running. To have no downtime, I would have thought it would be this way:

Host:
- Guest (inside the Docker): 0, 1 is the initial deployment and always on.
- Proxy is your HTTP proxy, like Caddy or Kamal.
- Docker is your standard Docker, which you need of course for security. The image has PC and the binaries inside it.

You can then ask Docker to run binary X by using the `docker exec` command to tell process-compose to start that binary, if not already started. The proxy (like Caddy) then changes its config to expose the port that the binary uses via Docker.

Have you used kamal-proxy or Caddy? I have used Kamal for 2 days, and Caddy for many years. I have been thinking about doing this for a while, but never got time to prototype it.
Yes, exactly: that would be the process, a proxy in front of either a Docker container or a regular host process, which could be switched. I'm just saying it would be nice if all of that was handled by process-compose, e.g. this could be wrapped in a
Yes 👍 That's all very possible, I think, with PC. Don't forget connection draining. It will mean that we have 2 Docker containers. Kamal will flip them, and then PC just does its normal thing. The 2nd Docker is started, with PC starting up everything. The 1st Docker, with its PC and binary(s), is drained: Kamal lets new connections through to the 2nd Docker, whilst letting existing connections complete in the 1st. Kamal finally kills the 1st Docker. https://github.com/basecamp/kamal-proxy/

I think the logic of this flipping could be applied to PC itself. That's however tricky on Windows, as the mechanism in non-Docker environments is to do a process fork, and that's tricky on Windows. On Unix it's easy and can definitely be done by PC. https://github.com/jpillora/overseer does what kamal-proxy does, in that it does this flipping. (Blue/green upgrades is when it's done across many servers; I should make that clear.)

I have a basic prototype with Kamal on my laptop and have been trying to find time to try integrating process-compose. The nice thing is that Kamal throws events over JSON-RPC as it goes through its stages. It's not a black box.
kamal-proxy has now gained this ability. It's cleanly done and can do zero-downtime updates of your Docker containers. I am testing it now, with Tofu and NATS, so that it works globally across many servers and data centers. So we should look into doing it with process-compose as you suggested, @maelp. I really liked your vision on this and would be happy to help. NATS would allow global oversight of what's running where, at what version, etc.
Would be lovely if some people were motivated to add this!
I think it's not a big effort to add this. If it's not wanted in the project, making a Golang project that imports process-compose would be pretty easy and simple.
@gedw99 that would be lovely! Having a nice setup for process-compose + kamal no-downtime rollout would be great, whether it's integrated in process-compose or as an external "plugin" / "template". I'd love to see it when you do it :)
Feature Request
Docker Compose and docker-rollout (https://github.com/Wowu/docker-rollout) allow for no-downtime updates. Can it be done equivalently with process-compose (I guess by integrating with some proxy like Traefik)?
Use Case:
Update a service without downtime
Proposed Change:
Add a `process-compose rollout` feature similar to docker-rollout
Who Benefits From The Change(s)?
Devops wanting to update a service without downtime
Alternative Approaches
Use docker-rollout with Docker, or kill and restart the service with downtime