XGBoost¶
XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework.
This document explains how to serve and deploy an XGBoost model for predicting breast cancer with BentoML.
You can query the exposed prediction endpoint with breast tumor data. For example:
{
"data": [
[1.308e+01, 1.571e+01, 8.563e+01, 5.200e+02, 1.075e-01, 1.270e-01,
4.568e-02, 3.110e-02, 1.967e-01, 6.811e-02, 1.852e-01, 7.477e-01,
1.383e+00, 1.467e+01, 4.097e-03, 1.898e-02, 1.698e-02, 6.490e-03,
1.678e-02, 2.425e-03, 1.450e+01, 2.049e+01, 9.609e+01, 6.305e+02,
1.312e-01, 2.776e-01, 1.890e-01, 7.283e-02, 3.184e-01, 8.183e-02]
]
}
Expected output:
[[0.02664177 0.9733583 ]] # 2.66% chance malignant and 97.34% chance benign (class 0 is malignant in scikit-learn's encoding)
This example is ready for quick deployment and scaling on BentoCloud. With a single command, you get a production-grade application with fast autoscaling, secure deployment in your cloud, and comprehensive observability.

Code explanations¶
You can find the source code on GitHub. Below is a breakdown of the key code implementations within this project.
save_model.py¶
This example uses the scikit-learn framework to load and preprocess the breast cancer dataset, which is then converted into an XGBoost-compatible format (DMatrix) to train the machine learning model.
import typing as t
from sklearn.datasets import load_breast_cancer
from sklearn.utils import Bunch
import xgboost as xgb

# Load the data
cancer: Bunch = t.cast("Bunch", load_breast_cancer())
cancer_data = t.cast("ext.NpNDArray", cancer.data)
cancer_target = t.cast("ext.NpNDArray", cancer.target)
dt = xgb.DMatrix(cancer_data, label=cancer_target)

# Specify model parameters
param = {
    "max_depth": 3,
    "eta": 0.3,
    "objective": "multi:softprob",
    "num_class": 2,
}

# Train the model
model = xgb.train(param, dt)
After training, use the bentoml.xgboost.save_model API to save the model to the BentoML Model Store, a local directory for storing and managing models. You can retrieve this model later in other services to run predictions.
import bentoml
# Specify the model name and the model to be saved
bentoml.xgboost.save_model("cancer", model)
To verify that the model has been successfully saved, run:
$ bentoml models list
Tag Module Size Creation Time
cancer:xa2npbboccvv7u4c bentoml.xgboost 23.17 KiB 2024-06-19 07:51:21
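You can also look up the saved model programmatically. A minimal sketch using bentoml.models.get (the printed tag is illustrative):

import bentoml

# Resolve "latest" to the most recently saved version of "cancer"
bento_model = bentoml.models.get("cancer:latest")
print(bento_model.tag)  # e.g. cancer:xa2npbboccvv7u4c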
test.py¶
To ensure that the saved model works correctly, try loading it and running a prediction:
import bentoml
import xgboost as xgb
# Load the model by setting the model tag
booster = bentoml.xgboost.load_model("cancer:xa2npbboccvv7u4c")
# Predict using a sample
res = booster.predict(
    xgb.DMatrix([[
        1.308e+01, 1.571e+01, 8.563e+01, 5.200e+02, 1.075e-01, 1.270e-01,
        4.568e-02, 3.110e-02, 1.967e-01, 6.811e-02, 1.852e-01, 7.477e-01,
        1.383e+00, 1.467e+01, 4.097e-03, 1.898e-02, 1.698e-02, 6.490e-03,
        1.678e-02, 2.425e-03, 1.450e+01, 2.049e+01, 9.609e+01, 6.305e+02,
        1.312e-01, 2.776e-01, 1.890e-01, 7.283e-02, 3.184e-01, 8.183e-02,
    ]])
)
print(res)
Expected result:
[[0.02664177 0.9733583 ]]
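Each row of the output is a probability distribution over the two classes produced by the multi:softprob objective. In scikit-learn's encoding of this dataset, class 0 is malignant and class 1 is benign. A small sketch (not part of the original test script) that reuses res from above to recover a class label:

import numpy as np

# Pick the highest-probability class per sample:
# 0 = malignant, 1 = benign in scikit-learn's label encoding
labels = np.argmax(res, axis=1)
print(labels)  # [1] for the sample above, i.e. benign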
service.py¶
The service.py file is where you define the serving logic and expose the model as a web service.
import bentoml
import numpy as np
import xgboost as xgb
import os


@bentoml.service(
    resources={"cpu": "2"},
    traffic={"timeout": 10},
)
class CancerClassifier:
    # Declare the model as a class variable
    bento_model = bentoml.models.BentoModel("cancer:latest")

    def __init__(self):
        self.model = bentoml.xgboost.load_model(self.bento_model)

        # Check resource availability
        if os.getenv("CUDA_VISIBLE_DEVICES") not in (None, "", "-1"):
            self.model.set_param({"predictor": "gpu_predictor", "gpu_id": 0})  # type: ignore (incomplete XGBoost types)
        else:
            nthreads = os.getenv("OMP_NUM_THREADS")
            if nthreads:
                nthreads = max(int(nthreads), 1)
            else:
                nthreads = 1
            self.model.set_param(
                {"predictor": "cpu_predictor", "nthread": nthreads}
            )

    @bentoml.api
    def predict(self, data: np.ndarray) -> np.ndarray:
        return self.model.predict(xgb.DMatrix(data))
The Service code:
- Uses the @bentoml.service decorator to define a BentoML Service. Optionally, you can set additional configurations like resource allocation on BentoCloud and traffic timeout.
- Retrieves the model from the Model Store and defines it as a class variable.
- Checks resource availability, such as GPUs and the number of threads.
- Uses the @bentoml.api decorator to expose the predict function as an API endpoint, which takes a NumPy array as input and returns a NumPy array. Note that the input data is converted into a DMatrix, which is the data structure XGBoost uses for datasets.
The @bentoml.service decorator also allows you to define the runtime environment for a Bento, the unified distribution format in BentoML. A Bento is packaged with all the source code, Python dependencies, model references, and environment setup, making it easy to deploy consistently across different environments.
Here is an example:
my_image = bentoml.images.Image(python_version="3.11") \
    .python_packages("xgboost", "scikit-learn")

@bentoml.service(
    image=my_image,  # Apply the specifications
    ...
)
class CancerClassifier:
    ...
Try it out¶
You can run this example project on BentoCloud, or serve it locally, containerize it as an OCI-compliant image and deploy it anywhere.
BentoCloud¶
BentoCloud provides fast and scalable infrastructure for building and scaling AI applications with BentoML in the cloud.
Install the dependencies and log in to BentoCloud through the BentoML CLI. If you don’t have a BentoCloud account, sign up here for free.
# Recommend Python 3.11
pip install bentoml xgboost scikit-learn
bentoml cloud login
Clone the repository.
git clone https://github.com/bentoml/BentoXGBoost.git
cd BentoXGBoost
Train and save the XGBoost model to the BentoML Model Store.
python3 save_model.py
Deploy the Service to BentoCloud.
bentoml deploy
Once it is up and running, you can call the endpoint in the following ways:
Create a BentoML client to call the endpoint. Make sure you replace the Deployment URL with your own on BentoCloud. Refer to Obtain the endpoint URL for details.
import bentoml

with bentoml.SyncHTTPClient("https://cancer-classifier-33e8-e3c1c7db.mt-guc1.bentoml.ai") as client:
    result = client.predict(
        data=[
            [1.308e+01, 1.571e+01, 8.563e+01, 5.200e+02, 1.075e-01, 1.270e-01,
             4.568e-02, 3.110e-02, 1.967e-01, 6.811e-02, 1.852e-01, 7.477e-01,
             1.383e+00, 1.467e+01, 4.097e-03, 1.898e-02, 1.698e-02, 6.490e-03,
             1.678e-02, 2.425e-03, 1.450e+01, 2.049e+01, 9.609e+01, 6.305e+02,
             1.312e-01, 2.776e-01, 1.890e-01, 7.283e-02, 3.184e-01, 8.183e-02]
        ],
    )
    print(result)
Alternatively, send an HTTP request with curl. As above, make sure you replace the Deployment URL with your own on BentoCloud.
curl -X 'POST' \
    'https://cancer-classifier-33e8-e3c1c7db.mt-guc1.bentoml.ai/predict' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "data": [
            [1.308e+01, 1.571e+01, 8.563e+01, 5.200e+02, 1.075e-01, 1.270e-01,
             4.568e-02, 3.110e-02, 1.967e-01, 6.811e-02, 1.852e-01, 7.477e-01,
             1.383e+00, 1.467e+01, 4.097e-03, 1.898e-02, 1.698e-02, 6.490e-03,
             1.678e-02, 2.425e-03, 1.450e+01, 2.049e+01, 9.609e+01, 6.305e+02,
             1.312e-01, 2.776e-01, 1.890e-01, 7.283e-02, 3.184e-01, 8.183e-02]
        ]
    }'
To make sure the Deployment automatically scales within a certain replica range, add the scaling flags:
bentoml deploy --scaling-min 0 --scaling-max 3 # Set your desired count
If it’s already deployed, update its allowed replicas as follows:
bentoml deployment update <deployment-name> --scaling-min 0 --scaling-max 3 # Set your desired count
For more information, see how to configure concurrency and autoscaling.
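Concurrency-based autoscaling is driven by the Service configuration itself. A minimal sketch, assuming a target of 32 in-flight requests per replica (the number is illustrative, not a recommendation):

@bentoml.service(
    resources={"cpu": "2"},
    traffic={
        "timeout": 10,
        "concurrency": 32,  # illustrative target; tune for your workload
    },
)
class CancerClassifier:
    ...

With concurrency set, BentoCloud adds or removes replicas to keep each one close to the target number of concurrent requests.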
Local serving¶
BentoML allows you to run and test your code locally, so that you can quickly validate your code with local compute resources.
Clone the project repository and install the dependencies.
git clone https://github.com/bentoml/BentoXGBoost.git
cd BentoXGBoost

# Recommend Python 3.11
pip install bentoml xgboost scikit-learn
Train and save the model to the BentoML Model Store.
python3 save_model.py
Serve it locally.
bentoml serve
Visit or send API requests to http://localhost:3000.
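For a quick local test, you can reuse the Python client from the BentoCloud section, pointed at localhost. This sketch pulls a sample from the training dataset instead of hard-coding the 30 feature values:

import bentoml
from sklearn.datasets import load_breast_cancer

# Take one row (30 features) from the dataset as test input
sample = load_breast_cancer().data[:1].tolist()

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    result = client.predict(data=sample)
    print(result)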
For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
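A minimal sketch of that workflow, assuming Docker is installed (the Bento tag below is illustrative; bentoml build prints the actual tag):

# Package the Service, its dependencies, and the model reference into a Bento
bentoml build

# Generate an OCI-compliant image from the Bento
bentoml containerize cancer_classifier:latest

# Run the container, exposing the default port 3000
docker run --rm -p 3000:3000 cancer_classifier:latest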