**Authors:** `Oguz Ulgen <https://github.com/oulgen>`_ and `Sam Larsen <https://github.com/masnesral>`_

Introduction
------------------

PyTorch Compiler implements several caches to reduce compilation latency.

This recipe demonstrates how you can configure various parts of the caching in ``torch.compile``.

Prerequisites
-------------------

Before starting this recipe, make sure that you have the following:

* Basic understanding of ``torch.compile``. See:

  * `torch.compiler API documentation <https://pytorch.org/docs/stable/torch.compiler.html#torch-compiler>`__
  * `Introduction to torch.compile <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`__
  * `Compile Time Caching in torch.compile <https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html>`__

* PyTorch 2.4 or later

Inductor Cache Settings
----------------------------

Most of these caches are in-memory, only used within the same process, and are transparent to the user. An exception is caches that store compiled FX graphs (``FXGraphCache``, ``AOTAutogradCache``). These caches allow Inductor to avoid recompilation across process boundaries when it encounters the same graph with the same Tensor input shapes (and the same configuration). The default implementation stores compiled artifacts in the system temp directory. An optional feature also supports sharing those artifacts within a cluster by storing them in a Redis database.

There are a few settings relevant to caching and to FX graph caching in particular.
The settings are accessible via the environment variables listed below, or can be hard-coded in Inductor's config file.
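
For example, here is a minimal sketch of both routes. The ``fx_graph_cache`` attribute shown for the config route is Inductor's knob corresponding to ``TORCHINDUCTOR_FX_GRAPH_CACHE``; treat the exact attribute name as an assumption to verify against your PyTorch version:

.. code-block:: python

    import os

    # Route 1: environment variable, set before the first torch.compile call.
    os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

    # Route 2: flip the corresponding knob in Inductor's config from Python.
    import torch._inductor.config as inductor_config

    inductor_config.fx_graph_cache = True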

TORCHINDUCTOR_FX_GRAPH_CACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This setting enables the local FX graph cache feature, which stores artifacts in the host’s temp directory. Setting it to ``1`` enables the feature while any other value disables it. By default, the disk location is per username, but users can enable sharing across usernames by specifying ``TORCHINDUCTOR_CACHE_DIR`` (below).

TORCHINDUCTOR_AUTOGRAD_CACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This setting extends ``FXGraphCache`` to store cached results at the ``AOTAutograd`` level, rather than at the Inductor level. Setting it to ``1`` enables this feature, while any other value disables it.
By default, the disk location is per username, but users can enable sharing across usernames by specifying ``TORCHINDUCTOR_CACHE_DIR`` (below).
``TORCHINDUCTOR_AUTOGRAD_CACHE`` requires ``TORCHINDUCTOR_FX_GRAPH_CACHE`` to work. The same cache dir stores cache entries for ``AOTAutogradCache`` (under ``{TORCHINDUCTOR_CACHE_DIR}/aotautograd``) and ``FXGraphCache`` (under ``{TORCHINDUCTOR_CACHE_DIR}/fxgraph``).
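
As a minimal sketch, enabling both caches together could look like this (the variables must be set before the first compilation in the process):

.. code-block:: python

    import os

    # AOTAutogradCache is only effective when FXGraphCache is also enabled.
    os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
    os.environ["TORCHINDUCTOR_AUTOGRAD_CACHE"] = "1"

    # Entries then land under {TORCHINDUCTOR_CACHE_DIR}/fxgraph and
    # {TORCHINDUCTOR_CACHE_DIR}/aotautograd, respectively.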

TORCHINDUCTOR_CACHE_DIR
~~~~~~~~~~~~~~~~~~~~~~~~

This setting specifies the location of all on-disk caches. By default, the location is in the system temp directory under ``torchinductor_<username>``, for example, ``/tmp/torchinductor_myusername``.

Note that if ``TRITON_CACHE_DIR`` is not set in the environment, Inductor sets the ``Triton`` cache directory to this same temp location, under the Triton sub-directory.
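
For instance, a sketch that points all on-disk caches at a shared location (the path is illustrative):

.. code-block:: python

    import os

    # Redirect Inductor's on-disk caches; the path below is an example only.
    os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/shared/inductor_cache"

    # Unless TRITON_CACHE_DIR is set explicitly, the Triton cache will live
    # in a triton sub-directory under the location above.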

TORCHINDUCTOR_FX_GRAPH_REMOTE_CACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This setting enables the remote FX graph cache feature. The current implementation uses ``Redis``. ``1`` enables caching, and any other value disables it. The following environment variables configure the host and port of the Redis server:

* ``TORCHINDUCTOR_REDIS_HOST`` (defaults to ``localhost``)
* ``TORCHINDUCTOR_REDIS_PORT`` (defaults to ``6379``)
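
A minimal sketch of enabling the remote FX graph cache against a non-default Redis server (the host value is illustrative):

.. code-block:: python

    import os

    os.environ["TORCHINDUCTOR_FX_GRAPH_REMOTE_CACHE"] = "1"
    os.environ["TORCHINDUCTOR_REDIS_HOST"] = "redis.example.com"  # illustrative
    os.environ["TORCHINDUCTOR_REDIS_PORT"] = "6379"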

.. note::

    If Inductor locates a remote cache entry, it stores the compiled artifact in the local on-disk cache; that local artifact is served on subsequent runs on the same machine.

TORCHINDUCTOR_AUTOGRAD_REMOTE_CACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similar to ``TORCHINDUCTOR_FX_GRAPH_REMOTE_CACHE``, this setting enables the remote ``AOTAutogradCache`` feature. The current implementation uses Redis. Setting it to ``1`` enables caching, while any other value disables it. The following environment variables are used to configure the host and port of the ``Redis`` server:

* ``TORCHINDUCTOR_REDIS_HOST`` (defaults to ``localhost``)
* ``TORCHINDUCTOR_REDIS_PORT`` (defaults to ``6379``)

``TORCHINDUCTOR_AUTOGRAD_REMOTE_CACHE`` requires ``TORCHINDUCTOR_FX_GRAPH_REMOTE_CACHE`` to be enabled in order to function. The same Redis server can be used to store both AOTAutograd and FXGraph cache results.
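
For example, a sketch enabling the remote ``AOTAutogradCache`` together with its prerequisite:

.. code-block:: python

    import os

    # The remote AOTAutogradCache only functions when the remote FX graph
    # cache is enabled; both use the same Redis host/port settings.
    os.environ["TORCHINDUCTOR_FX_GRAPH_REMOTE_CACHE"] = "1"
    os.environ["TORCHINDUCTOR_AUTOGRAD_REMOTE_CACHE"] = "1"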

TORCHINDUCTOR_AUTOTUNE_REMOTE_CACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This setting enables a remote cache for ``TorchInductor``’s autotuner. Similar to the remote FX graph cache, the current implementation uses Redis. Setting it to ``1`` enables caching, while any other value disables it. The same host / port environment variables mentioned above apply to this cache.

TORCHINDUCTOR_FORCE_DISABLE_CACHES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set this value to ``1`` to disable all Inductor caching. This setting is useful for tasks like experimenting with cold-start compile times or forcing recompilation for debugging purposes.
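
For example, a sketch for measuring cold-start compile time (the model and timing code are illustrative):

.. code-block:: python

    import os
    import time

    # Must be set before the first compilation in the process.
    os.environ["TORCHINDUCTOR_FORCE_DISABLE_CACHES"] = "1"

    import torch

    @torch.compile
    def fn(x):
        return x.relu() + 1

    start = time.perf_counter()
    fn(torch.randn(64, 64))  # always pays the full compilation cost
    print(f"cold-start compile + first run: {time.perf_counter() - start:.2f}s")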

Conclusion
-------------

In this recipe, we have learned how to configure PyTorch Compiler's caching mechanisms. Additionally, we explored the various settings and environment variables that allow users to configure and optimize these caching features according to their specific needs.

Compile Time Caching in ``torch.compile``
-------------------------------------------

PyTorch Compiler provides several caching offerings to reduce compilation latency.
This recipe will explain these offerings in detail to help users pick the best option for their use case.

Check out `Compile Time Caching Configurations <https://pytorch.org/tutorials/recipes/torch_compile_caching_configuration_tutorial.html>`__ for how to configure these caches.

Also check out our caching benchmark at `PT CacheBench Benchmarks <https://hud.pytorch.org/benchmark/llms?repoName=pytorch%2Fpytorch&benchmarkName=TorchCache+Benchmark>`__.

Prerequisites
-------------------

Before starting this recipe, make sure that you have the following:

* Basic understanding of ``torch.compile``. See:

  * `torch.compiler API documentation <https://pytorch.org/docs/stable/torch.compiler.html#torch-compiler>`__
  * `Introduction to torch.compile <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`__
  * `Triton language documentation <https://triton-lang.org/main/index.html>`__

* PyTorch 2.4 or later

Caching Offerings
---------------------

``torch.compile`` provides the following caching offerings:

* End to end caching (also known as ``Mega-Cache``)
* Modular caching of ``TorchDynamo``, ``TorchInductor``, and ``Triton``

It is important to note that caching validates that cache artifacts are used with the same PyTorch and Triton versions, as well as the same GPU when the device is set to ``cuda``.

End to end caching (``Mega-Cache``)
-------------------------------------

End to end caching, from here onwards referred to as ``Mega-Cache``, is the ideal solution for users looking for a portable caching solution that can be stored in a database and can later be fetched, possibly on a separate machine.

``Mega-Cache`` provides two compiler APIs:

* ``torch.compiler.save_cache_artifacts()``
* ``torch.compiler.load_cache_artifacts()``

The intended use case is as follows: after compiling and executing a model, the user calls ``torch.compiler.save_cache_artifacts()``, which returns the compiler artifacts in a portable form. Later, potentially on a different machine, the user may call ``torch.compiler.load_cache_artifacts()`` with these artifacts to pre-populate the ``torch.compile`` caches in order to jump-start their cache.

Consider the following example. First, compile and save the cache artifacts.

.. code-block:: python

    import torch

    # NOTE: dtype and device are not defined in the original snippet; the
    # values below are illustrative assumptions to make the example runnable.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float32

    @torch.compile
    def fn(x, y):
        return x.sin() @ y

    a = torch.rand(100, 100, dtype=dtype, device=device)
    b = torch.rand(100, 100, dtype=dtype, device=device)

    result = fn(a, b)

    artifacts = torch.compiler.save_cache_artifacts()

    # Now, potentially store these artifacts in a database

Later, you can jump-start the cache as follows:

.. code-block:: python

    # Potentially download/fetch the artifacts from the database
    assert artifacts is not None
    artifact_bytes, cache_info = artifacts

    # Pre-populate the torch.compile caches with the saved artifacts.
    torch.compiler.load_cache_artifacts(artifact_bytes)

This operation populates all the modular caches that will be discussed in the next section, including ``PGO``, ``AOTAutograd``, ``Inductor``, ``Triton``, and ``Autotuning``.

Modular caching of ``TorchDynamo``, ``TorchInductor``, and ``Triton``
-----------------------------------------------------------------------

The aforementioned ``Mega-Cache`` is composed of individual components that can be used without any user intervention. By default, PyTorch Compiler comes with local on-disk caches for ``TorchDynamo``, ``TorchInductor``, and ``Triton``. These caches include:

* ``FXGraphCache``: A cache of graph-based IR components used in compilation.
* ``TritonCache``: A cache of Triton-compilation results, including ``cubin`` files generated by ``Triton`` and other caching artifacts.
* ``InductorCache``: A bundle of ``FXGraphCache`` and ``Triton`` cache.
* ``AOTAutogradCache``: A cache of joint graph artifacts.
* ``PGO-cache``: A cache of dynamic shape decisions to reduce the number of recompilations.

All these cache artifacts are written to ``TORCHINDUCTOR_CACHE_DIR``, which by default will look like ``/tmp/torchinductor_myusername``.
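
As an illustrative sketch (not Inductor's internal API), the default location combines the system temp directory with the current username:

.. code-block:: python

    import getpass
    import os
    import tempfile

    # Mirrors the documented default of /tmp/torchinductor_<username>.
    default_cache_dir = os.path.join(
        tempfile.gettempdir(), f"torchinductor_{getpass.getuser()}"
    )
    print(default_cache_dir)  # e.g. /tmp/torchinductor_myusername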

Remote Caching
----------------

We also provide a remote caching option for users who would like to take advantage of a Redis-based cache. Check out `Compile Time Caching Configurations <https://pytorch.org/tutorials/recipes/torch_compile_caching_configuration_tutorial.html>`__ to learn more about how to enable Redis-based caching.

Conclusion
-------------

In this recipe, we have learned that PyTorch Inductor's caching mechanisms significantly reduce compilation latency by utilizing both local and remote caches, which operate seamlessly in the background without requiring user intervention.