
pykeio/ort


ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.

Originally based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper for Microsoft's ONNX Runtime library, but it also supports alternative pure-Rust runtimes.

ort with ONNX Runtime is super quick, and it supports almost any hardware accelerator you can think of. Even so, it's lightweight enough to run on your users' devices.
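
For instance, here's a minimal sketch of opting into an accelerator by registering an execution provider. This assumes a recent ort v2 release candidate with the crate's `cuda` feature enabled (module paths have moved between versions), and `model.onnx` is a placeholder path; if the provider can't be loaded at runtime, ort falls back to CPU.

```rust
use ort::{execution_providers::CUDAExecutionProvider, session::Session};

fn main() -> ort::Result<()> {
    // Try to register the CUDA execution provider; ort logs a warning
    // and falls back to CPU if it can't be loaded at runtime.
    let session = Session::builder()?
        .with_execution_providers([CUDAExecutionProvider::default().build()])?
        .commit_from_file("model.onnx")?; // placeholder path

    println!("loaded model with {} input(s)", session.inputs.len());
    Ok(())
}
```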

When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.
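
A minimal end-to-end sketch of what that looks like, again assuming a recent ort v2 release candidate (the `ort::inputs!` macro and tensor-extraction APIs have changed slightly between versions). The path, the input/output names, and the 1×3×224×224 shape are placeholders that must match your model:

```rust
use ort::{session::Session, value::Tensor};

fn main() -> ort::Result<()> {
    // Load a model previously exported to ONNX (placeholder path).
    let session = Session::builder()?.commit_from_file("model.onnx")?;

    // Build a dummy 1x3x224x224 f32 tensor; "input" is a placeholder
    // name that must match the model's declared input.
    let input = Tensor::<f32>::from_array(([1_usize, 3, 224, 224], vec![0.0_f32; 1 * 3 * 224 * 224]))?;
    let outputs = session.run(ort::inputs!["input" => input])?;

    // Read the result back as a flat f32 slice ("output" is a placeholder too).
    let (shape, data) = outputs["output"].try_extract_tensor::<f32>()?;
    println!("output shape {shape:?}, first value {}", data[0]);
    Ok(())
}
```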

📖 Documentation

🤔 Support

🌠 Sponsor ort


💖 Projects using ort

Open a PR to add your project here 🌟

  • Bloop uses ort to power their semantic code search feature.
  • edge-transformers uses ort for accelerated transformer model inference at the edge.
  • Ortex uses ort for safe ONNX Runtime bindings in Elixir.
  • Supabase uses ort to remove cold starts for their edge functions.
  • Lantern uses ort to provide embedding model inference inside Postgres.
  • Magika uses ort for content type detection.
  • sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using ort.
  • Ahnlich uses ort to power their AI proxy for semantic search applications.
  • Spacedrive is a cross-platform file manager with AI features powered by ort.
  • BoquilaHUB uses ort for local AI deployment in biodiversity conservation efforts.
  • FastEmbed-rs uses ort to generate vector embeddings and rerank locally.
  • Aftershoot uses ort to power AI-assisted image editing workflows.
  • Valentinus uses ort to provide embedding model inference inside LMDB.
  • retto uses ort for reliable, fast ONNX inference of PaddleOCR models on desktop and WASM platforms.
  • oar-ocr is a comprehensive OCR library built in Rust, using ort for efficient inference.
  • Text Embeddings Inference (TEI) uses ort to deliver high-performance ONNX Runtime inference for text embedding models.
  • Flow-Like uses ort to enable local ML inference inside its typed workflow engine.

License

ort is dual-licensed under the Apache License 2.0 (LICENSE-APACHE) and the MIT License (LICENSE-MIT).
