Note: This repository was archived by the owner on Mar 4, 2026. It is now read-only.
Latest Intel® AI Reference Model Optimizations for Intel® Xeon® Scalable Processors
This document links to step-by-step instructions for using the latest reference model Docker containers to run optimized, open-source deep learning training and inference workloads with the PyTorch* and TensorFlow* frameworks on Intel® Xeon® Scalable processors.
Note: The containers below are tuned to demonstrate the best performance with Intel® Extension for PyTorch* and Intel® Optimization for TensorFlow*, and they are not intended for production use.
Use cases
The tables below link to documentation on how to run each use case in a Docker container. These containers were validated on a Linux host.
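As a rough sketch of the workflow the linked guides follow, the general pattern is to pull a use-case container and run it with any host directories the workload needs mounted in. The image name, mount points, and environment variables below are placeholders; substitute the values given in the documentation for the specific use case you are running.

```shell
# Placeholder image tag -- use the exact image listed in the use-case table.
IMAGE=intel/example-reference-model:latest

# Pull the prebuilt container to the Linux host.
docker pull ${IMAGE}

# Run the workload, mounting a host directory for datasets/output.
# The actual flags, volumes, and entrypoint vary per use case; see the
# linked instructions for the authoritative command line.
docker run --rm \
  -v "$PWD/output:/output" \
  ${IMAGE}
```

This is only an illustration of the container workflow, not a runnable command for any specific model; each use-case page specifies its own image, environment variables, and launch script.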