LLM

This section provides example projects for large language models.

Inference: vLLM
Deploy an LLM application with vLLM and BentoML.
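
As a rough illustration of the pattern, the sketch below wraps a vLLM engine in a BentoML service that exposes a text-generation endpoint. The model ID, resource settings, and API shape are assumptions for this sketch, not taken from the example project.

```python
import bentoml
from vllm import LLM, SamplingParams

# Assumed model ID for illustration; the example project may use a different model.
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"

@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class LLMService:
    def __init__(self) -> None:
        # Load the model once per worker; vLLM manages GPU memory and batching.
        self.engine = LLM(model=MODEL_ID)

    @bentoml.api
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        params = SamplingParams(max_tokens=max_tokens)
        # vLLM returns one RequestOutput per prompt; take the first completion.
        outputs = self.engine.generate([prompt], params)
        return outputs[0].outputs[0].text
```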

Agent: Function calling
Deploy an AI agent capable of calling user-defined functions.
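
A minimal sketch of the idea, assuming the deployed model sits behind an OpenAI-compatible endpoint: the model is given a JSON schema for a user-defined function, and the client executes that function when the model requests it. The endpoint URL, model name, and `get_weather` tool are placeholders, not the ones used in the example project.

```python
import json
from openai import OpenAI

# Assumed endpoint and model name for illustration.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")

def get_weather(city: str) -> str:
    # Stand-in for a user-defined function the agent is allowed to call.
    return f"It is sunny in {city}."

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; run it and use the result.
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(get_weather(**args))
else:
    print(message.content)
```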

LLM safety: ShieldGemma
Deploy an AI assistant that uses ShieldGemma to filter out harmful input before it is processed further.
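
The filtering step could look roughly like the sketch below, which scores an incoming message with ShieldGemma before it reaches the assistant. The prompt wording and threshold are assumptions; the scoring follows the Yes/No token-probability approach described in the ShieldGemma model card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "google/shieldgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def is_harmful(user_message: str, threshold: float = 0.5) -> bool:
    # ShieldGemma is asked to answer Yes/No on whether the input violates a guideline.
    # This guideline text is a simplified placeholder, not the project's actual policy.
    prompt = (
        "You are a policy expert trying to help determine whether a user prompt "
        "violates the defined safety policies.\n\n"
        f"Human Question: {user_message}\n\n"
        "Guideline: The prompt shall not contain or seek generation of harmful content.\n\n"
        "Does the human question violate the above guideline? "
        "Your answer must start with 'Yes' or 'No'.\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    vocab = tokenizer.get_vocab()
    # Compare the probabilities of the "Yes" and "No" tokens at the final position.
    yes_no = logits[0, -1, [vocab["Yes"], vocab["No"]]]
    p_violation = torch.softmax(yes_no, dim=0)[0].item()
    return p_violation > threshold

# Only forward the request to the assistant if the safety check passes.
if not is_harmful("How do I bake bread?"):
    print("Safe to forward to the assistant.")
```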

RAG: Document ingestion and search
Deploy a RAG application with custom open-source models.
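
At its core, the ingestion-and-search half of a RAG pipeline can be sketched with an open-source embedding model and cosine-similarity retrieval. The embedding model, documents, and in-memory index below are placeholders, not the components used in the example project.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed open-source embedding model for illustration.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

documents = [
    "BentoML packages models and code into deployable services.",
    "vLLM provides high-throughput inference for large language models.",
    "RAG retrieves relevant documents and passes them to an LLM as context.",
]

# Ingestion: embed every document once and keep the normalized vectors.
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def search(query: str, k: int = 2) -> list[str]:
    # Search: rank documents by cosine similarity (dot product of unit vectors).
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would then be placed into the LLM prompt as context.
print(search("How do I serve a model?"))
```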