Update README.MD

Signed-off-by: Bubbles The Dev <[email protected]>
KernFerm authored Oct 2, 2024
1 parent a205534 commit 7bce947
Showing 1 changed file with 23 additions and 8 deletions.
31 changes: 23 additions & 8 deletions README.MD
@@ -34,27 +34,42 @@ pip3 install torch==2.4.1+cu118 torchvision==0.19.1+cu118 torchaudio==2.4.1+cu118
- **commands-to-export.txt:** A file containing useful commands for exporting your YOLO model.
- **export.py:** The Python script responsible for handling the export process.

## Exporting YOLO Models (NVIDIA and AMD GPUs)

### Export to TensorRT Engine (For NVIDIA GPUs)

To export your YOLO model to a TensorRT engine (for NVIDIA GPUs only), use the following command:

```
python .\export.py --weights ./"your_model_path.pt" --include engine --half --imgsz 320 320 --device 0
```
- Replace `"your_model_path"` with the path to your YOLO `.pt` file.
- The `--half` flag enables half-precision inference for faster performance and lower memory usage.
- `--imgsz 320 320` sets the image size to 320x320 pixels for export.
- `--device 0` specifies the GPU device ID. TensorRT engines must be built on an NVIDIA GPU, so `--device cpu` does not apply to this export.
- **Note**: TensorRT is only compatible with **NVIDIA GPUs**.
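To confirm the engine was built correctly, you can deserialize it with the TensorRT Python API. A minimal sketch, assuming the TensorRT Python bindings are installed and the output file is named `best.engine` (a hypothetical name; adjust to your actual path):

```
# Sketch: deserialize an exported TensorRT engine to verify it loads.
# best.engine is a hypothetical file name -- use your export's output.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("best.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print("Engine deserialized:", engine is not None)
```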

### Export to ONNX

To export your YOLO model to ONNX format, use the following command:

```
python .\export.py --weights ./"your_model_path.pt" --include onnx --half --imgsz 320 320 --device 0
```
- Replace `"your_model_path"` with your YOLO `.pt` model.
- The `--half` flag enables half-precision inference (if supported).
- `--imgsz 320 320` sets the image size to 320x320 pixels.
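To verify the exported model, ONNX Runtime can run a dummy forward pass. A minimal sketch, assuming an `onnxruntime-gpu` install and an output file named `best.onnx` (hypothetical; adjust to your path); the `1x3x320x320` float16 input matches the `--half --imgsz 320 320` flags above:

```
# Sketch: load the exported ONNX model and run a dummy inference.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "best.onnx",  # hypothetical output name; adjust to your path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]
# --half exports float16 weights, so feed a float16 input tensor.
x = np.zeros((1, 3, 320, 320), dtype=np.float16)
outputs = sess.run(None, {inp.name: x})
print([o.shape for o in outputs])
```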

### Export for AMD GPU

To export your YOLO model for an AMD GPU, use the following command:

```
python .\export.py --weights .\your_model_path.pt --include onnx --imgsz 320 320
```
- Replace `"your_model_path"` with the path to your YOLO `.pt` file.
- This command will export the model in the ONNX format for AMD GPU inference.
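ONNX Runtime does not use CUDA on AMD hardware, so inference needs an AMD-capable execution provider. A sketch, assuming the `onnxruntime-directml` package on Windows (or a ROCm build of ONNX Runtime on Linux) and the hypothetical file name `best.onnx`:

```
# Sketch: open the exported model with an AMD-capable execution provider.
import onnxruntime as ort

# DirectML on Windows (pip install onnxruntime-directml);
# use ["ROCMExecutionProvider"] instead on Linux with a ROCm build.
sess = ort.InferenceSession("best.onnx", providers=["DmlExecutionProvider"])
print("Active providers:", sess.get_providers())
```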

## Troubleshooting

- If you encounter issues during export, ensure that your `CUDA`, `cuDNN`, and `TensorRT` versions are compatible with the version of `PyTorch` you are using.
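A quick way to print the versions in play (a sketch; the `tensorrt` import only applies on NVIDIA setups):

```
# Sketch: print the versions relevant to export compatibility.
import torch

print("torch:", torch.__version__)
print("CUDA (built against):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())

try:
    import tensorrt
    print("TensorRT:", tensorrt.__version__)
except ImportError:
    print("TensorRT: not installed")
```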