
CV model on Triton

1. Train and Save Your PyTorch Model

First, you need to have a trained PyTorch model. Once the model is trained, save it using PyTorch's torch.save() function.

import torch

# Assuming `model` is your trained PyTorch model.
# torch.save(model, ...) stores the full model object, so the model's class
# definition must be importable when the file is loaded later.
model_path = "path/to/your/model.pth"
torch.save(model, model_path)

2. Convert the Model to ONNX Format

Triton Inference Server supports multiple model formats including ONNX, TensorFlow, and TensorRT. ONNX (Open Neural Network Exchange) is a popular choice for its cross-platform and cross-framework compatibility.

To convert a PyTorch model to ONNX:

import torch
import torch.onnx

# Load the trained model (the model's class definition must be importable here)
model = torch.load("path/to/your/model.pth")
model.eval()

# Create dummy input in the shape the model expects
dummy_input = torch.randn(1, 3, 224, 224)  # example input shape for an image model

# Export the model; the input/output names should match the ones used later in config.pbtxt
onnx_model_path = "path/to/exported/model.onnx"
torch.onnx.export(
    model,
    dummy_input,
    onnx_model_path,
    input_names=["images0"],
    output_names=["output0"],
)
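
Before moving on, it can be worth sanity-checking the exported file. Below is a minimal sketch, assuming the optional onnx and onnxruntime packages are installed (neither is required by Triton itself):

import numpy as np
import onnx
import onnxruntime as ort

# Check that the exported graph is structurally valid
onnx_model = onnx.load("path/to/exported/model.onnx")
onnx.checker.check_model(onnx_model)

# Run a quick test inference with ONNX Runtime using the same dummy shape
session = ort.InferenceSession("path/to/exported/model.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])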

3. Convert the ONNX Model to TensorRT Engine

TensorRT is NVIDIA's high-performance deep learning inference SDK, which optimizes models for efficient deployment on NVIDIA GPUs. To leverage TensorRT's optimizations, convert your ONNX model to a TensorRT engine (.engine) file.

caution

TensorRT engines are hardware-specific and optimized for the GPU on which they are generated. This means that if you're deploying your model across different GPU architectures (e.g., deploying on a different NVIDIA GPU model or machine), you must regenerate the TensorRT engine for each different GPU.

  • For example, an engine generated on an NVIDIA H100 may not work optimally or at all on an NVIDIA A100. To ensure compatibility and performance, always generate a new .engine file on the target GPU machine.

To convert the ONNX model to a TensorRT engine, use the trtexec command-line tool. trtexec provides a convenient interface for converting and optimizing ONNX models; run the following command:

trtexec --onnx=path/to/your/model.onnx --saveEngine=path/to/output/model.engine --explicitBatch

  • --onnx: Path to the ONNX model you want to convert.
  • --saveEngine: Path where the TensorRT engine file (.engine) will be saved.
  • --explicitBatch: Use explicit batch size mode, which is required for many ONNX models.

You can also specify additional arguments such as precision modes (fp16, int8, etc.); run trtexec --help to see all available options:

trtexec --onnx=path/to/your/model.onnx --saveEngine=path/to/output/model.engine --explicitBatch --fp16 
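
If you prefer to build the engine from Python instead of the trtexec CLI, the sketch below shows the same conversion with the TensorRT Python bindings. This is a minimal example assuming a TensorRT 8.x installation; the paths and the FP16 flag mirror the trtexec commands above:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit batch mode, equivalent to --explicitBatch
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the ONNX model into the TensorRT network definition
parser = trt.OnnxParser(network, logger)
with open("path/to/your/model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional, equivalent to --fp16

# Build and serialize the engine, then write it to disk
serialized_engine = builder.build_serialized_network(network, config)
with open("path/to/output/model.engine", "wb") as f:
    f.write(serialized_engine)

As with trtexec, this must be run on the target GPU machine, since the resulting engine is hardware-specific.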

4. Create a Configuration File for Triton

Triton requires a config.pbtxt file to understand how to serve your model. This file includes information about the model name, platform (ONNX, TensorRT, etc.), input and output tensors, and other settings.

Here's an example config.pbtxt for a TensorRT Engine model:

name: "model_name"
platform: "tensorrt_plan"
max_batch_size: 1
input [
{
name: "images0" # Name of the input layer in your model
data_type: TYPE_FP32
dims: [3, 224, 224] # Replace with the input dimensions of your model
}
]
output [
{
name: "output0" # Name of the output layer in your model
data_type: TYPE_FP32
dims: [6, 17661] # Replace with the output dimensions of your model
}
]

You can use netron.app to inspect your model's specifics (input and output names, dims, etc.).
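
Alternatively, you can read the input and output names and shapes directly from the ONNX file with the onnx package. This is a minimal sketch; the names in the comments are simply the ones used in the example config above:

import onnx

model = onnx.load("path/to/exported/model.onnx")

def describe(tensors):
    # Print each tensor's name and its (possibly symbolic) dimensions
    for t in tensors:
        dims = [d.dim_value or d.dim_param for d in t.type.tensor_type.shape.dim]
        print(t.name, dims)

describe(model.graph.input)   # e.g. images0 [1, 3, 224, 224]
describe(model.graph.output)  # e.g. output0 [1, 6, 17661]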

For more information on model configuration, visit the official documentation.

5. Deploy the Model on Triton

Place your TensorRT engine file and the config.pbtxt file in a directory inside Triton's model repository. The directory structure should look like this:

└──/models-repository
   └──/model_name
      ├──config.pbtxt
      └──/1
         └──model.plan

info

To use the generated .engine model, you need to rename the file to model.plan.
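
As a convenience, the layout above can be created with a few lines of Python. This is a sketch only; the paths are placeholders from the earlier steps and should be adjusted to your setup:

import shutil
from pathlib import Path

# Model repository layout expected by Triton
model_dir = Path("models-repository") / "model_name"
(model_dir / "1").mkdir(parents=True, exist_ok=True)

# Copy the configuration next to the version directories
shutil.copy("config.pbtxt", model_dir / "config.pbtxt")

# Triton expects TensorRT engines to be named model.plan
shutil.copy("path/to/output/model.engine", model_dir / "1" / "model.plan")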

6. Start the Triton Server

To start your Triton Inference Server, please refer to How to run.
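
Once the server is up, you can verify the deployment with the Triton client library (pip install tritonclient[http]). The sketch below assumes the server is reachable on the default HTTP port 8000 and that the model uses the example names and shapes from the config above:

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()
assert client.is_model_ready("model_name")

# Build a request matching the config: shape [batch, 3, 224, 224], FP32
image = np.random.randn(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("images0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

response = client.infer(model_name="model_name", inputs=[infer_input])
output = response.as_numpy("output0")
print(output.shape)  # e.g. (1, 6, 17661)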