
Deployment

You can manage your deployments with the standard Kubernetes CLI kubectl, e.g.

kubectl apply -f my_ml_deployment.yaml
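The manifest you apply is a SeldonDeployment custom resource. Below is a minimal sketch of what my_ml_deployment.yaml might contain; the deployment name, model name, and image are placeholder assumptions, not values from this repo:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: my-ml-deployment          # placeholder name
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier          # must match the container name below
        type: MODEL
      componentSpecs:
        - spec:
            containers:
              - name: classifier
                image: my-registry/my-model:0.1   # placeholder image reference
```

After applying, the Seldon operator creates the underlying Deployment and Service objects, which you can inspect with the usual kubectl get commands.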

Production Integration

For production settings you will want to incorporate your ML infrastructure and ML code into a continuous integration and deployment (CI/CD) pipeline. One realization of such a pipeline is shown below:

[Figure: Production Pipelines]

The pipeline consists of:

  • A model code repo (in Git) where training and runtime ML components are stored
  • A continuous integration pipeline that trains and tests the model and wraps it (using Seldon's built-in wrappers or custom wrappers)
  • An image repository where the final runtime inference model image is stored
  • A Git repo for the infrastructure, storing the ML deployment graph described as a SeldonDeployment
  • A tool that either monitors the infrastructure repo and applies its changes to the production Kubernetes cluster, or lets DevOps push updated infrastructure manually
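The CI half of the steps above could be sketched as pipeline configuration. The snippet below is a hypothetical GitHub-Actions-style illustration, not part of Seldon itself; the registry, image, and script names are assumptions:

```yaml
# Hypothetical CI pipeline: train, test, wrap, and publish the inference image
name: model-ci
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Train and test the model
        run: |
          pip install -r requirements.txt   # assumed project layout
          python train.py
          python -m pytest tests/
      - name: Build runtime inference image   # the wrapping step
        run: docker build -t my-registry/my-model:${GITHUB_SHA} .
      - name: Push image to registry
        run: docker push my-registry/my-model:${GITHUB_SHA}
```

A separate step (or a GitOps tool watching the infrastructure repo) would then update the image tag in the SeldonDeployment manifest and apply it to the cluster.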