@neuralmagic

Neural Magic

Neural Magic empowers developers to optimize and deploy LLMs at scale. Our model compression and acceleration tools deliver top performance with vLLM.

Pinned

  1. nm-vllm-certs (Public)

    General information, model certifications, and benchmarks for nm-vllm enterprise distributions

    11 stars · 2 forks

  2. deepsparse (Public)

    Sparsity-aware deep learning inference runtime for CPUs

    Python · 3.1k stars · 183 forks

  3. sparseml (Public)

    Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

    Python · 2.1k stars · 152 forks

  4. docs (Public)

    Top-level directory for documentation and general content

    MDX · 121 stars · 7 forks

  5. sparsezoo (Public)

    Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes

    Python · 383 stars · 26 forks

  6. guidellm (Public)

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 283 stars · 34 forks

Repositories: 70 total