This is a BentoML example project that demonstrates how to build an image generation inference API server with the SDXL-Lightning model, a lightning-fast text-to-image model capable of generating high-quality 1024px images in a few inference steps.
See the BentoML examples repository for a full list of BentoML example projects.
To run the Service locally, we recommend an Nvidia GPU with at least 16 GB of VRAM.
git clone https://github.com/bentoml/BentoDiffusion.git
cd BentoDiffusion/sdxl-lightning
# Recommend Python 3.11
pip install -r requirements.txt
export HF_TOKEN=<your-api-key>
We have defined a BentoML Service in service.py (a simplified sketch is shown below). Run bentoml serve in your project directory to start the Service.
$ bentoml serve
2024-01-18T18:31:49+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:SDXLLightning" listening on http://localhost:3000 (Press CTRL+C to quit)
Loading pipeline components...: 100%
The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways, for example:
CURL
curl -X 'POST' \
  'http://localhost:3000/txt2img' \
  -H 'accept: image/*' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
    "num_inference_steps": 1,
    "guidance_scale": 0
  }'
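The endpoint returns the generated image as raw bytes; append curl's -o flag (for example, -o output.png, where the filename is arbitrary) to the command above to save the response to disk.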
BentoML client
import bentoml
with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    result = client.txt2img(
        prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
        num_inference_steps=1,
        guidance_scale=0.0
    )
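Depending on your BentoML version, result may be a PIL image or a pathlib.Path pointing to a file the client has already downloaded. A small, version-agnostic way to persist it (the filename is arbitrary):

import pathlib

if isinstance(result, pathlib.Path):
    # Newer clients download the response and return a path to it.
    pathlib.Path("output.png").write_bytes(result.read_bytes())
else:
    # Older clients may return a PIL.Image.Image directly.
    result.save("output.png")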
After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up if you don't have a BentoCloud account yet.
Make sure you have logged in to BentoCloud.
bentoml cloud login
Deploy it to BentoCloud.
bentoml deploy
Once the application is up and running on BentoCloud, you can access it via the exposed URL.
Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
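A typical flow, assuming a bentofile.yaml is present in the project directory (the image tag below is illustrative; use the tag printed by bentoml build):

bentoml build
bentoml containerize sdxl_lightning:latest
docker run --gpus all -p 3000:3000 sdxl_lightning:latest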