Detectron

About this page

This is the API reference for Detectron in BentoML. Please refer to the Detectron guide for more information about how to use Detectron in BentoML.

bentoml.detectron.save_model(name: Tag | str, checkpointables: Engine.DefaultPredictor | nn.Module, config: Config.CfgNode | None = None, *, signatures: ModelSignaturesType | None = None, labels: dict[str, str] | None = None, custom_objects: dict[str, t.Any] | None = None, external_modules: t.List[ModuleType] | None = None, metadata: dict[str, t.Any] | None = None) → bentoml.Model

Save a model instance to the BentoML model store.

Parameters:
  • name – Name for the given model instance. This should pass the Python identifier check.

  • checkpointables – The model instance to be saved. Could be a detectron2.engine.DefaultPredictor or a torch.nn.Module.

  • config – Optional CfgNode for the model. Required when checkpointables is a torch.nn.Module.

  • signatures – Methods to expose for running inference on the target model. Signatures are used when creating Runner instances for serving the model with a Service.

  • labels – User-defined labels for managing models, e.g. team=nlp, stage=dev.

  • custom_objects – Custom objects to be saved with the model. An example is {"my-normalizer": normalizer}. Custom objects are currently serialized with cloudpickle, but this implementation is subject to change.

  • external_modules – User-defined additional Python modules to be saved alongside the model or custom objects, e.g. a tokenizer module, preprocessor module, or model configuration module.

  • metadata – Custom metadata for given model.

Returns:

A BentoML Model containing the saved model instance. The model's tag has the format name:version, where name is the user-defined model name and version is generated automatically.

Return type:

bentoml.Model

Examples:

import bentoml
import detectron2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

model_url = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(model_url))
# set threshold for this model
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(model_url)
cloned = cfg.clone()
cloned.MODEL.DEVICE = "cpu"

bento_model = bentoml.detectron.save_model('mask_rcnn', build_model(cloned), config=cloned)

import bentoml
import detectron2
from detectron2.engine import DefaultPredictor
from detectron2 import model_zoo
from detectron2.config import get_cfg

model_url = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(model_url))
# set threshold for this model
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(model_url)
cloned = cfg.clone()
cloned.MODEL.DEVICE = "cpu"

predictor = DefaultPredictor(cloned)
bento_model = bentoml.detectron.save_model('mask_rcnn', predictor)
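The signatures, labels, and metadata arguments can also be supplied explicitly when saving. A minimal sketch; the dictionary contents below are illustrative assumptions, not values required by the API:

```python
# Illustrative values for save_model's bookkeeping arguments (assumptions,
# not required defaults): expose __call__ as a non-batchable signature.
signatures = {"__call__": {"batchable": False}}
labels = {"team": "cv", "stage": "dev"}
metadata = {"score_threshold": 0.5}

# With a predictor in scope (as in the example above), these would be
# passed through as keyword arguments:
# bentoml.detectron.save_model(
#     "mask_rcnn", predictor,
#     signatures=signatures, labels=labels, metadata=metadata,
# )
```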
bentoml.detectron.load_model(bento_model: str | Tag | Model, device_id: str = 'cpu') → Engine.DefaultPredictor | nn.Module

Load the detectron2 model from the BentoML local model store with the given name.

Parameters:
  • bento_model – Either the tag of the model to get from the store, or a BentoML Model instance to load the model from.

  • device_id – The device to load the model to. Defaults to “cpu”.

Returns:

  • detectron2.engine.DefaultPredictor if the checkpointables was saved as a DefaultPredictor.

  • torch.nn.Module if the checkpointables was saved as an nn.Module.

Return type:

One of the following

Example:

import bentoml
predictor = bentoml.detectron.load_model('predictor:latest')

model = bentoml.detectron.load_model('model:latest')
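When the loaded object is a DefaultPredictor, it consumes a single image as an HWC uint8 NumPy array in BGR channel order. A hedged sketch of preparing such an input; the shape here is an arbitrary example:

```python
import numpy as np

# A dummy 480x640 BGR image in the height x width x channel, uint8
# layout that detectron2's DefaultPredictor expects for one image.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# With a predictor loaded as above, inference would look like:
# outputs = predictor(image)
# outputs["instances"] then holds the predicted Instances object.
```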
bentoml.detectron.get(tag_like: str | Tag) → Model

Get the BentoML model with the given tag.

Parameters:

tag_like – The tag of the model to retrieve from the model store.

Returns:

A BentoML Model with the matching tag.

Return type:

Model

Example:

import bentoml
# target model must be from the BentoML model store
model = bentoml.detectron.get("mask_rcnn:latest")
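tag_like accepts either a bare model name or a name:version pair. A small sketch of the string format only (plain string handling for illustration, not the actual Tag parser):

```python
# The "name:version" tag format accepted by get(); omitting the version
# (or using "latest") resolves to the most recently saved version.
tag_like = "mask_rcnn:latest"
name, _, version = tag_like.partition(":")
```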