Description
There are scenarios where it's desirable to use a multi-stage container build so that a single Dockerfile can be set up to work one way in run mode (e.g. assuming a bind mount to the app files and enabling a dev server for hot reload) and another way for publish (copying the app files and starting without hot reload). Currently, the only way to change the build stage is to pass it directly into the call to `AddDockerfile`; an extension method that does this can't be written in the app because `DockerBuildAnnotation` is internal.

Right now, one instead has to conditionally call a different overload of `AddDockerfile` itself, as well as likely conditionally call other extension methods, e.g.:
```csharp
var pythonInference =
    (builder.ExecutionContext.IsRunMode
        ? builder.AddDockerfile("python-inference", "../PythonInference", null, "base")
        : builder.AddDockerfile("python-inference", "../PythonInference"))
    .WithHttpEndpoint(targetPort: 62394, env: "UVICORN_PORT")
    .WithContainerRuntimeArgs("--gpus=all")
    .WithLifetime(ContainerLifetime.Persistent);

if (builder.ExecutionContext.IsRunMode)
{
    pythonInference
        .WithBindMount("../PythonInference", "/app")
        .WithArgs("--reload");
}
```
Alternatively, the build stage can be determined first and then passed to the longest overload:

```csharp
var stage = builder.ExecutionContext.IsRunMode ? "base" : "publish";

var pythonInference = builder.AddDockerfile("python-inference", "../PythonInference", null, stage)
    .WithHttpEndpoint(targetPort: 62394, env: "UVICORN_PORT")
    .WithContainerRuntimeArgs("--gpus=all")
    .WithLifetime(ContainerLifetime.Persistent);

if (builder.ExecutionContext.IsRunMode)
{
    pythonInference
        .WithBindMount("../PythonInference", "/app")
        .WithArgs("--reload");
}
```
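Until a built-in API exists, the stage-per-mode selection can be factored into a small app-local helper. This is only a sketch; `AddDockerfileWithStagePerMode` is a hypothetical name, and it simply forwards to the existing `AddDockerfile` overload used above:

```csharp
// Hypothetical app-local helper: picks the build stage based on the execution
// mode and forwards to the existing AddDockerfile overload.
public static class DockerfileBuilderExtensions
{
    public static IResourceBuilder<ContainerResource> AddDockerfileWithStagePerMode(
        this IDistributedApplicationBuilder builder,
        string name,
        string contextPath,
        string runStage,
        string publishStage)
    {
        // Choose the stage once, then call the overload that accepts it.
        var stage = builder.ExecutionContext.IsRunMode ? runStage : publishStage;
        return builder.AddDockerfile(name, contextPath, null, stage);
    }
}
```

This keeps the conditional in one place, but it still can't be applied after the resource is created the way other `With*` methods can.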
Proposal
App host code would be cleaner with a `WithBuildStage` method that could be called conditionally, e.g.:
```csharp
// Add the python inference app
var pythonInference = builder.AddDockerfile("python-inference", "../PythonInference")
    .WithHttpEndpoint(targetPort: 62394, env: "UVICORN_PORT")
    .WithContainerRuntimeArgs("--gpus=all")
    .WithLifetime(ContainerLifetime.Persistent);

if (builder.ExecutionContext.IsRunMode)
{
    // Configure the python inference app for development
    pythonInference
        .WithBuildStage("base")
        .WithBindMount("../PythonInference", "/app")
        .WithArgs("--reload");
}
```
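Inside the framework, where `DockerBuildAnnotation` is accessible, the proposed method might look roughly like the sketch below. The `WithBuildStage` name and shape are the proposal itself, not an existing API, and the sketch assumes the annotation exposes (or could expose) a settable stage property:

```csharp
// Sketch of the proposed extension, assuming access to the internal
// DockerBuildAnnotation and a settable Stage property on it.
public static IResourceBuilder<T> WithBuildStage<T>(
    this IResourceBuilder<T> builder, string stage)
    where T : ContainerResource
{
    if (builder.Resource.TryGetLastAnnotation<DockerBuildAnnotation>(out var buildAnnotation))
    {
        // Override the stage chosen (or defaulted) when AddDockerfile was called.
        buildAnnotation.Stage = stage;
    }

    return builder;
}
```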
For reference, here's the multi-stage Dockerfile used in the examples above:

```dockerfile
# Start from the python image
FROM python:3.12.5-slim AS base

# Set the working directory; the app files will be bind-mounted here during dev
WORKDIR /app

# Copy the requirements file
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Ensure the server is accessible from outside the container
ENV UVICORN_HOST=0.0.0.0

# Set the entry point to run the application
ENTRYPOINT ["python", "-m", "uvicorn", "main:app"]

# Ensure CMD is empty
CMD []

# Create the publish stage
FROM base AS publish

# Copy application files from the host as there's no bind-mount
COPY . .
```