ComfyUI Depth Anything TensorRT


This repo provides a ComfyUI custom node implementation of Depth-Anything-Tensorrt (v1/v2) in Python for ultra-fast depth map generation (up to 14x faster than comfyui_controlnet_aux).

⭐ Support

If you like my projects and wish to see updates and new features, please consider supporting me. It helps a lot!

ComfyUI-Depth-Anything-Tensorrt ComfyUI-Upscaler-Tensorrt ComfyUI-Dwpose-Tensorrt ComfyUI-Rife-Tensorrt

ComfyUI-Whisper ComfyUI_InvSR ComfyUI-FLOAT ComfyUI-Thera ComfyUI-Video-Depth-Anything ComfyUI-PiperTTS

Special thanks to livepeer.org for supporting the project!


⏱️ Performance (Depth Anything V1)

Note: The following results were benchmarked on FP16 engines inside ComfyUI

| Device   | Model            | Model Input (WxH) | Image Resolution (WxH) | FPS  |
|----------|------------------|-------------------|------------------------|------|
| RTX 4090 | Depth-Anything-S | 518x518           | 1280x720               | 35   |
| RTX 4090 | Depth-Anything-B | 518x518           | 1280x720               | 33   |
| RTX 4090 | Depth-Anything-L | 518x518           | 1280x720               | 24   |
| H100     | Depth-Anything-L | 518x518           | 1280x720               | 125+ |

⏱️ Performance (Depth Anything V2)

Note: The following results were benchmarked on FP16 engines inside ComfyUI

| Device | Model            | Model Input (WxH) | Image Resolution (WxH) | FPS |
|--------|------------------|-------------------|------------------------|-----|
| H100   | Depth-Anything-S | 518x518           | 1280x720               | 213 |
| H100   | Depth-Anything-B | 518x518           | 1280x720               | 180 |
| H100   | Depth-Anything-L | 518x518           | 1280x720               | 109 |

🚀 Installation

Navigate to the ComfyUI `/custom_nodes` directory and run:

```bash
git clone https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt.git
cd ./ComfyUI-Depth-Anything-Tensorrt
pip install -r requirements.txt
```

🛠️ Building TensorRT Engine

There are two ways to build TensorRT engines:

Method 1: Using the EngineBuilder Node

  1. Insert the node via Right Click -> tensorrt -> Depth Anything Engine Builder
  2. Select the model version (v1 or v2) and size (small, base, or large)
  3. Optionally customize the engine name, FP16 setting, and ONNX path
  4. Run the workflow to build the engine

The ONNX model is downloaded automatically and the engine is built in the specified location. Refresh the webpage or press 'r' on your keyboard, and the new engine will appear in the Depth Anything Tensorrt node.

Method 2: Manual Building

  1. Download one of the available ONNX models.
  2. Run the export script (a minimal sketch of such a script is shown after this list), e.g.:

```bash
python export_trt.py --onnx-path ./depth_anything_vitl14-fp16.onnx --trt-path ./depth_anything_vitl14-fp16.engine
```

  3. Place the exported engine inside the ComfyUI `/models/tensorrt/depth-anything` directory.
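For reference, the conversion step above roughly amounts to the following. This is a minimal sketch assuming the TensorRT 10.x Python API; the repo's actual export_trt.py may handle more options (engine naming, progress display, etc.) and can differ in detail.

```python
# Hypothetical minimal ONNX -> TensorRT export, loosely mirroring the command above.
import argparse
import tensorrt as trt

def build_engine(onnx_path: str, trt_path: str, fp16: bool = True) -> None:
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(0)  # explicit batch is the default in TRT 10
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"Failed to parse {onnx_path}: {parser.get_error(0)}")

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)  # 4 GiB workspace
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    # Build and serialize the engine to disk
    serialized = builder.build_serialized_network(network, config)
    with open(trt_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("--onnx-path", required=True)
    ap.add_argument("--trt-path", required=True)
    args = ap.parse_args()
    build_engine(args.onnx_path, args.trt_path)
```

Note that TensorRT engines are specific to the GPU and TensorRT version they were built with, so they must be built on the machine that will run them.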

☀️ Usage

  • Insert the node via Right Click -> tensorrt -> Depth Anything Tensorrt
  • Choose the appropriate engine from the dropdown
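Under the hood, the node runs the serialized engine with the TensorRT runtime. The following is a rough, standalone sketch of that inference path (the engine filename, tensor indices, and dummy input are assumptions for illustration; the node's actual implementation may differ), using torch tensors for GPU buffers:

```python
# Sketch: run a built Depth Anything engine directly in Python, assuming TensorRT 10.x
# and a static 1x3x518x518 input. Not the node's exact code.
import tensorrt as trt
import torch

logger = trt.Logger(trt.Logger.INFO)
with open("depth_anything_vitl14-fp16.engine", "rb") as f:  # illustrative filename
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

in_name, out_name = engine.get_tensor_name(0), engine.get_tensor_name(1)
torch_dtype = {trt.DataType.HALF: torch.float16, trt.DataType.FLOAT: torch.float32}

# Dummy input; a real pipeline resizes/normalizes the frame to 518x518 first.
# (For dynamic-shape engines, call context.set_input_shape(in_name, x.shape) before this.)
x = torch.rand(1, 3, 518, 518, device="cuda").to(torch_dtype[engine.get_tensor_dtype(in_name)])
y = torch.empty(tuple(context.get_tensor_shape(out_name)),
                dtype=torch_dtype[engine.get_tensor_dtype(out_name)], device="cuda")

# Bind device pointers by tensor name and launch inference on a CUDA stream
context.set_tensor_address(in_name, x.data_ptr())
context.set_tensor_address(out_name, y.data_ptr())
stream = torch.cuda.Stream()
context.execute_async_v3(stream.cuda_stream)
stream.synchronize()

print("depth map:", y.shape)  # y holds the raw depth prediction
```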

🤖 Environment tested

  • Ubuntu 22.04 LTS, CUDA 12.3, TensorRT 10.0.1, Python 3.10, RTX 4090 GPU
  • Windows (Not tested)

📝 Changelog

  • 20/05/2025

  • 02/07/2024

    • Add Depth Anything V2 onnx models + benchmarks
    • Merge PR for engine caching in memory by BuffMcBigHuge
  • 26/04/2024

    • Update to tensorrt 10.0.1
    • Massive code refactor, remove trtexec, remove pycuda, show engine building progress
    • Update and standardise engine directory and node category for upcoming tensorrt custom nodes suite
  • 7/04/2024

    • Fix image resize bug during depth map post processing
  • 30/03/2024

    • Fix CUDNN_STATUS_MAPPING_ERROR
  • 27/03/2024

    • Major refactor and optimisation (remove subprocess)

👏 Credits
