From 7bce947435f98e846a2ddda2a9cb28f0b78b2011 Mon Sep 17 00:00:00 2001
From: Bubbles The Dev <152947339+KernFerm@users.noreply.github.com>
Date: Wed, 2 Oct 2024 03:59:04 -0400
Subject: [PATCH] Update README.MD

Signed-off-by: Bubbles The Dev <152947339+KernFerm@users.noreply.github.com>
---
 README.MD | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/README.MD b/README.MD
index f1322df..be11671 100644
--- a/README.MD
+++ b/README.MD
@@ -34,27 +34,42 @@ pip3 install torch==2.4.1+cu118 torchvision==0.19.1+cu118 torchaudio==2.4.1+cu118
 - **commands-to-export.txt:** A file containing useful commands for exporting your YOLO model.
 - **export.py:** The Python script responsible for handling the export process.
 
-## Exporting Your YOLO Model
+## Exporting YOLO Models (NVIDIA and AMD GPUs)
 
 ### Export to TensorRT Engine (For NVIDIA GPUs)
-- To export your YOLO model to a TensorRT engine, use the following command:
+
+To export your YOLO model to a TensorRT engine (for NVIDIA GPUs only), use the following command:
+
 ```
-python .\export.py --weights ./.pt --include engine --half --imgsz 320 320 --device 0
+python .\export.py --weights .\your_model_path.pt --include engine --half --imgsz 320 320 --device 0
 ```
-- Replace `` with the path to your YOLO `.pt` file.
-- The `--half` flag enables half-precision inference.
+- Replace `your_model_path.pt` with the path to your YOLO `.pt` file.
+- The `--half` flag enables half-precision inference for faster performance and lower memory usage.
 - `--imgsz 320 320` sets the image size to 320x320 pixels for export.
 - `--device 0` specifies the GPU device ID (use `--device cpu` for CPU-based inference).
+- **Note**: TensorRT is only compatible with **NVIDIA GPUs**.
 
 ### Export to ONNX
-- To export your YOLO model to ONNX format, use the following command:
+
+To export your YOLO model to ONNX format, use the following command:
+
 ```
-python .\export.py --weights ./.pt --include onnx --half --imgsz 320 320
+python .\export.py --weights .\your_model_path.pt --include onnx --half --imgsz 320 320 --device 0
 ```
-- As above, replace `` with your YOLO `.pt` model.
+- Replace `your_model_path.pt` with the path to your YOLO `.pt` model.
 - The `--half` flag enables half-precision inference (if supported).
 - `--imgsz 320 320` sets the image size to 320x320 pixels.
 
+### Export for AMD GPU
+
+To export your YOLO model for an AMD GPU, use the following command:
+
+```
+python .\export.py --weights .\your_model_path.pt --include onnx --imgsz 320 320
+```
+- Replace `your_model_path.pt` with the path to your YOLO `.pt` file.
+- This command exports the model in ONNX format for AMD GPU inference.
+
 ## Troubleshooting
 - If you encounter issues during export, ensure that your `CUDA`, `cuDNN`, and `TensorRT` versions are compatible with the version of `PyTorch` you are using.
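
A quick way to confirm that an ONNX export from the commands above actually loads and runs is a short ONNX Runtime check. The following is a minimal sketch, not part of this patch: it assumes `onnxruntime` is installed (`pip install onnxruntime`) and that `export.py` wrote a file named `your_model_path.onnx`; for a `--half` export the model input may expect float16 rather than float32.

```python
# Minimal sketch: sanity-check an exported ONNX model with ONNX Runtime.
# Assumes `pip install onnxruntime` and an export named your_model_path.onnx
# (both names are illustrative, substitute your own).
import numpy as np
import onnxruntime as ort

# CPUExecutionProvider keeps the check GPU-agnostic (works on NVIDIA and AMD hosts).
session = ort.InferenceSession("your_model_path.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

# Dummy NCHW batch matching --imgsz 320 320; use np.float16 for --half exports.
dummy = np.zeros((1, 3, 320, 320), dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print("output shapes:", [o.shape for o in outputs])
```

If the session builds and `run` returns tensors of sensible shape, the export itself is sound, and any remaining issues are on the inference side (provider, drivers, or dtype).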