
Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this pinned list)

    Python · 61.6k stars · 10.9k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.2k stars · 271 forks

  3. recipes Public

    Common recipes to run vLLM

    Jupyter Notebook · 197 stars · 68 forks
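
For quick orientation, the pinned vLLM engine can be driven from Python for offline batched inference. This is a minimal sketch, assuming `pip install vllm` and using `facebook/opt-125m` purely as a small example checkpoint (any Hugging Face causal LM identifier works):

```python
# Minimal offline-inference sketch for vLLM (assumes `pip install vllm`).
# The model name is only a small example checkpoint, not a recommendation.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "vLLM is a high-throughput engine for",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")              # loads weights and pre-allocates the KV cache
outputs = llm.generate(prompts, sampling_params)  # requests are batched by the engine internally

for output in outputs:
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
```

The same engine also powers an OpenAI-compatible HTTP server (`vllm serve <model>`), which several of the repositories below (hardware plugins, production-stack, recipes) build on.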

Repositories

Showing 10 of 26 repositories
  • vllm-spyre Public

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 37 stars · Apache-2.0 license · 26 forks · 5 open issues · 17 pull requests · Updated Oct 31, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a hedged quantization sketch follows this repository list)

    Python · 2,169 stars · Apache-2.0 license · 271 forks · 56 open issues (11 need help) · 42 pull requests · Updated Oct 31, 2025
  • tpu-inference Public

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 138 stars · Apache-2.0 license · 21 forks · 9 open issues (1 needs help) · 52 pull requests · Updated Oct 31, 2025
  • guidellm Public

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 675 stars · Apache-2.0 license · 92 forks · 83 open issues (5 need help) · 23 pull requests · Updated Oct 31, 2025
  • speculators Public

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 62 stars · Apache-2.0 license · 11 forks · 5 open issues (2 need help) · 16 pull requests · Updated Oct 31, 2025
  • ci-infra Public

    This repo hosts the code for the vLLM CI and performance benchmark infrastructure.

    HCL · 23 stars · Apache-2.0 license · 43 forks · 0 open issues · 20 pull requests · Updated Oct 31, 2025
  • vllm-project.github.io Public

    JavaScript · 22 stars · 36 forks · 0 open issues · 2 pull requests · Updated Oct 31, 2025
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 61,579 stars · Apache-2.0 license · 10,946 forks · 1,880 open issues (27 need help) · 1,225 pull requests · Updated Oct 31, 2025
  • production-stack Public

    vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization

    Python · 1,896 stars · Apache-2.0 license · 312 forks · 86 open issues (3 need help) · 58 pull requests · Updated Oct 31, 2025
  • vllm-gaudi Public

    Community-maintained hardware plugin for vLLM on Intel Gaudi

    Python · 15 stars · Apache-2.0 license · 59 forks · 1 open issue · 61 pull requests · Updated Oct 31, 2025
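
The llm-compressor card above mentions applying compression algorithms for vLLM deployment; below is a hedged sketch of its one-shot quantization flow. It assumes `pip install llmcompressor`, the `oneshot` entry point and `GPTQModifier` recipe class shown in the project's README (exact import paths can differ between releases), and uses `TinyLlama/TinyLlama-1.1B-Chat-v1.0` and the `open_platypus` calibration set purely as illustrative names:

```python
# Hedged sketch: one-shot W8A8 quantization with llm-compressor for serving in vLLM.
# Assumes `pip install llmcompressor`; model and dataset names are illustrative, and
# import paths may vary between llm-compressor releases (see the repository README).
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(
    targets="Linear",     # quantize the Linear layers
    scheme="W8A8",        # 8-bit weights and activations
    ignore=["lm_head"],   # keep the output head unquantized
)

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",            # calibration data
    recipe=recipe,
    output_dir="TinyLlama-1.1B-W8A8",   # compressed checkpoint directory
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The resulting directory is a regular Hugging Face-style checkpoint that vLLM can load directly, e.g. `LLM(model="TinyLlama-1.1B-W8A8")`.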