Stable Diffusion XL Turbo¶
Stable Diffusion XL Turbo (SDXL Turbo) is a distilled version of SDXL 1.0 and is capable of creating images in a single step, with improved real-time text-to-image output quality and sampling fidelity.
This document demonstrates how to serve SDXL Turbo with BentoML.
The resulting inference API accepts custom parameters for image generation. For example, you can send a query containing the following:
```json
{
  "guidance_scale": 0,
  "num_inference_steps": 1,
  "prompt": "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
}
```
Example output:

(Example image omitted: a generated shot of a baby raccoon wearing an intricate priest robe.)
This example is ready for quick deployment and scaling on BentoCloud. With a single command, you get a production-grade application with fast autoscaling, secure deployment in your cloud, and comprehensive observability.

Code explanations¶
You can find the source code on GitHub. Below is a breakdown of the key code implementations within this project.
Define the SDXL Turbo model ID. You can switch to any other diffusion model as needed.
service.py¶

```python
MODEL_ID = "stabilityai/sdxl-turbo"
```
Use the `@bentoml.service` decorator to define a BentoML Service, where you can customize how the model will be served. The decorator lets you set configurations like the timeout and the GPU resources to use on BentoCloud. Note that SDXL Turbo requires at least an NVIDIA L4 GPU for optimal performance.

service.py¶

```python
@bentoml.service(
    traffic={"timeout": 300},
    resources={
        "gpu": 1,
        "gpu_type": "nvidia-l4",
    },
)
class SDXLTurbo:
    model_path = bentoml.models.HuggingFaceModel(MODEL_ID)
    ...
```
Within the class, load the model from Hugging Face and define it as a class variable. The `HuggingFaceModel` method provides an efficient mechanism for loading AI models to accelerate model deployment on BentoCloud, reducing image build time and cold start time.

The `@bentoml.service` decorator also allows you to define the runtime environment for a Bento, the unified distribution format in BentoML. A Bento is packaged with all the source code, Python dependencies, model references, and environment setup, making it easy to deploy consistently across different environments. Here is an example:
service.py¶

```python
my_image = bentoml.images.PythonImage(python_version="3.11") \
    .requirements_file("requirements.txt")


@bentoml.service(
    image=my_image,  # Apply the specifications
    ...
)
class SDXLTurbo:
    ...
```
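The `requirements.txt` file referenced above lists the project's Python dependencies. The authoritative, pinned list lives in the repository; a plausible minimal set for this example (an assumption, not a copy of the actual file) looks like:

```text
bentoml
torch
diffusers
transformers
accelerate
pillow
```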
Use the `@bentoml.api` decorator to define an API endpoint for image generation inference. The `txt2img` method is an endpoint that takes a text prompt, a number of inference steps, and a guidance scale as inputs. It uses the model pipeline to generate an image based on the given prompt and parameters.

service.py¶

```python
# Module-level imports and the sample prompt used in the endpoint signature
from typing import Annotated

import bentoml
from annotated_types import Ge, Le
from PIL.Image import Image

sample_prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."


# (decorated with @bentoml.service as shown above)
class SDXLTurbo:
    model_path = bentoml.models.HuggingFaceModel(MODEL_ID)

    def __init__(self) -> None:
        from diffusers import AutoPipelineForText2Image
        import torch

        # Load the model
        self.pipe = AutoPipelineForText2Image.from_pretrained(
            self.model_path,
            torch_dtype=torch.float16,
            variant="fp16",
        )

        # Move the pipeline to GPU
        self.pipe.to(device="cuda")

    @bentoml.api
    def txt2img(
        self,
        prompt: str = sample_prompt,
        num_inference_steps: Annotated[int, Ge(1), Le(10)] = 1,
        guidance_scale: float = 0.0,
    ) -> Image:
        image = self.pipe(
            prompt=prompt,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
        ).images[0]
        return image
```
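For reference, the endpoint is a thin wrapper around the diffusers pipeline. The same generation step can be reproduced with diffusers alone, outside BentoML (a minimal sketch; the output filename is arbitrary):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL Turbo pipeline in half precision
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Single-step generation with guidance disabled, as the model expects
image = pipe(
    prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```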
Try it out¶
You can run this example project on BentoCloud, or serve it locally, containerize it as an OCI-compliant image, and deploy it anywhere.
BentoCloud¶
BentoCloud provides fast and scalable infrastructure for building and scaling AI applications with BentoML in the cloud.
Install BentoML and log in to BentoCloud through the BentoML CLI. If you don’t have a BentoCloud account, sign up here for free.
```bash
pip install bentoml
bentoml cloud login
```
Clone the BentoDiffusion repository and deploy the project.
```bash
git clone https://github.com/bentoml/BentoDiffusion.git
cd BentoDiffusion/sdxl-turbo
bentoml deploy
```
Once it is up and running on BentoCloud, you can call the endpoint in the following ways:
Create a BentoML client to call the endpoint. Make sure you replace the Deployment URL with your own on BentoCloud. Refer to Obtain the endpoint URL for details.
```python
import bentoml
from pathlib import Path

# Define the path to save the generated image
output_path = Path("generated_image.png")

with bentoml.SyncHTTPClient("https://sdxl-turbo-nmsx-e3c1c7db.mt-guc1.bentoml.ai") as client:
    result = client.txt2img(
        guidance_scale=0,
        num_inference_steps=1,
        prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
    )

# The result should be a PIL.Image object
result.save(output_path)
print(f"Image saved at {output_path}")
```
Alternatively, send a request with curl. Make sure you replace the Deployment URL with your own on BentoCloud. Refer to Obtain the endpoint URL for details.
```bash
curl -s -X POST \
  'https://sdxl-turbo-nmsx-e3c1c7db.mt-guc1.bentoml.ai/txt2img' \
  -H 'Content-Type: application/json' \
  -d '{
    "guidance_scale": 0,
    "num_inference_steps": 1,
    "prompt": "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
  }' \
  -o output.jpg
```
Note
SDXL Turbo is capable of performing inference with just a single step. Therefore, setting `num_inference_steps` to `1` is typically sufficient for generating high-quality images. Additionally, you need to set `guidance_scale` to `0` to deactivate it, as the model was trained without it. See the official release notes to learn more.

To make sure the Deployment automatically scales within a certain replica range, add the scaling flags:
```bash
bentoml deploy --scaling-min 0 --scaling-max 3  # Set your desired count
```
If it’s already deployed, update its allowed replicas as follows:
```bash
bentoml deployment update <deployment-name> --scaling-min 0 --scaling-max 3  # Set your desired count
```
For more information, see how to configure concurrency and autoscaling.
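Concurrency can also be declared on the Service itself so the autoscaler knows how many requests each replica is expected to handle. A minimal sketch (the concurrency value here is an illustrative assumption, not a tuned recommendation):

```python
@bentoml.service(
    traffic={
        "timeout": 300,
        "concurrency": 5,  # Assumed target: concurrent requests per replica
    },
    resources={"gpu": 1, "gpu_type": "nvidia-l4"},
)
class SDXLTurbo:
    ...
```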
Local serving¶
BentoML allows you to run and test your code locally, so that you can quickly validate your code with local compute resources.
Clone the repository and choose your desired project.
```bash
git clone https://github.com/bentoml/BentoDiffusion.git
cd BentoDiffusion/sdxl-turbo

# Recommend Python 3.11
pip install -r requirements.txt
```
Serve it locally.
```bash
bentoml serve
```
Note
To run this project with SDXL Turbo, you need an NVIDIA GPU with at least 12 GB of VRAM.
Visit http://localhost:3000 in your browser to try the endpoint, or send API requests to it directly.
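Once the server is up, the same BentoML client pattern from the BentoCloud section works locally; just point it at the local address (a minimal sketch):

```python
import bentoml

# Connect to the local server started by `bentoml serve`
with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    image = client.txt2img(
        prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
        num_inference_steps=1,
        guidance_scale=0,
    )
    image.save("output.png")
```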
For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
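The typical flow uses the standard BentoML CLI (a sketch; the Bento tag below is a placeholder for the actual tag printed by `bentoml build`):

```bash
# Build a Bento from the project directory
bentoml build

# Produce an OCI-compliant image from the built Bento
bentoml containerize sdxl_turbo:latest

# Run the image with GPU access (requires the NVIDIA Container Toolkit)
docker run --gpus all -p 3000:3000 sdxl_turbo:latest
```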