Show Remote SSH Output panel on workspace start #627
base: main
Conversation
Force-pushed from bbb0391 to 92b1788
Force-pushed from 92b1788 to 719de6f
Force-pushed from 719de6f to fe6d1dc
I have not had a chance to try it yet, but it is looking really good!
Looking really good! I think we just need to tweak how we handle the timeout and error states, and then we are good to go.
| case "timeout": | ||
| throw new Error( | ||
| `${workspaceName}/${currentAgent.name} ${currentAgent.status}`, | ||
| ); |
I think timeout means the agent is taking longer than expected to connect, but we can keep waiting because it might eventually connect.
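Something like this maybe, where "timeout" (and "connecting") just wait for the next change event instead of rejecting; this is only a sketch, and the surrounding resolve/reject wiring is assumed from the promise in this PR:
```ts
// Hypothetical sketch: only "disconnected" is fatal; "timeout" and
// "connecting" are transient, so we wait for the next change event.
switch (currentAgent.status) {
  case "connected":
    resolve(currentAgent);
    break;
  case "connecting":
  case "timeout":
    // Taking longer than expected, but the agent may still come up.
    break;
  case "disconnected":
    reject(
      new Error(`${workspaceName}/${currentAgent.name} ${currentAgent.status}`),
    );
    break;
}
```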
| case "off": | ||
| case "start_error": | ||
| case "start_timeout": | ||
| case "shutdown_error": | ||
| case "shutdown_timeout": | ||
| case "shutting_down": | ||
| throw new Error( | ||
| `${workspaceName}/${currentAgent.name} lifecycle state: ${currentAgent.lifecycle_state}`, | ||
| ); |
We should still connect in the case of start_error; it could be that their script has an error or a bug, but we can still connect to the agent.
start_timeout means their script is taking a really long time, but we can still wait; it might eventually complete.
With the shutdown_* ones, the workspace is shutting down. I wonder if we should wait for the workspace to shut down and then ask the user if they want to start it? Not sure if off means a similar thing.
edit: if we incorporate workspace build status checks then we might be able to handle shutdown/off for free; I imagine in these cases the workspace goes to stopping and stopped.
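Roughly something like this, as a sketch only (which states should resolve versus keep waiting is exactly what is being discussed here):
```ts
// Hypothetical sketch of the lifecycle handling described above.
switch (currentAgent.lifecycle_state) {
  case "ready":
  case "start_error":   // startup script failed, but the agent is connectable
  case "start_timeout": // startup script is slow, but still connectable
    resolve(currentAgent);
    break;
  case "shutting_down":
  case "shutdown_timeout":
  case "shutdown_error":
  case "off":
    // Rely on the workspace build status going to stopping/stopped
    // instead of failing here; keep waiting for the next change event.
    break;
  default:
    // created/starting: keep waiting.
    break;
}
```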
So for shutdown_* we can rely on the workspace shutting down. Does that mean the state machine for the agent should be a no-op here? I don't know if the workspace can be running while the agent is shutting down, for example; in that case we'd be stuck doing nothing...
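If that combination is possible, maybe a guard like this so we fail fast instead of hanging; purely hypothetical, and the shutdownStates list plus the surrounding reject wiring are assumptions:
```ts
// Hypothetical: if the build stays "running" while the agent reports a
// shutdown_* state, waiting on build status alone would hang, so bail out.
const shutdownStates = ["shutting_down", "shutdown_timeout", "shutdown_error"];
if (
  workspace.latest_build.status === "running" &&
  shutdownStates.includes(currentAgent.lifecycle_state)
) {
  reject(
    new Error(
      `${workspaceName}/${currentAgent.name} is shutting down while the workspace is running`,
    ),
  );
}
```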
```ts
): Promise<WorkspaceAgent> {
  return new Promise<WorkspaceAgent>((resolve, reject) => {
    const updateEvent = monitor.onChange.event((workspace) => {
      if (workspace.latest_build.status !== "running") {
```
What do you think about moving the build status checks into ensureAgentReady as well and then calling it on the change event? So we would have something like:
```ts
new Promise((resolve, reject) => {
  monitor.onChange.event((workspace) => {
    try {
      if (ensureAgentReady(workspace)) {
        resolve(workspace);
      }
    } catch (error) {
      reject(error);
    }
  });
});
```
And ensureAgentReady would merge in that switch we have on workspace.latest_build.status.
Partially because otherwise we have to duplicate checks. For example, we need to allow connecting with start_error, start_timeout, or ready, so we would have those checks once in the switch, and then once again in the checker. We are also duplicating the agent and workspace build status checks which, granted, is simpler since they only have a single connectable state right now, but it could be nice if they were derived from a single spot.
The socket handling and logging could be a bit tricky though, maybe we share one progress notification and terminal to make it easier.
Not saying we should block on this though, just a suggestion.
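A rough sketch of what that merged ensureAgentReady could look like; the Workspace/WorkspaceAgent shapes and the findAgent helper are assumptions here, not the PR's actual API:
```ts
// Hypothetical merged check: build status first, then agent lifecycle.
// Returning undefined means "keep waiting"; throwing means "fail the connect".
function ensureAgentReady(workspace: Workspace): WorkspaceAgent | undefined {
  switch (workspace.latest_build.status) {
    case "running":
      break; // build is up; fall through to the agent checks
    case "pending":
    case "starting":
      return undefined; // still coming up; wait for the next change event
    default:
      // stopping, stopped, failed, canceling, canceled, deleting, deleted
      throw new Error(
        `${workspace.name} build status: ${workspace.latest_build.status}`,
      );
  }

  const agent = findAgent(workspace); // assumed helper that picks the agent
  switch (agent.lifecycle_state) {
    case "ready":
    case "start_error":
    case "start_timeout":
      return agent; // connectable states per the discussion above
    default:
      return undefined; // created/starting/shutdown_*: keep waiting
  }
}
```
That way the connectable states live in one spot, and the change-event handler stays as the thin try/resolve/reject wrapper sketched above.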
Closes #626