feat: add support for custom compile options in torch_xla.compile and PJRT backend (#9575)
Issue description: #9555
This change introduces the ability to pass custom compile options from
Python down to the PJRT backend, allowing users to fine-tune XLA
compilation behavior without modifying core code. Key changes:
* Python API
  * Added a `custom_compile_options` parameter to `torch_xla.compile` for passing compile-time options as a dict (supports `bool`, `float`, `int`, and `str` values).
  * Added a `torch_xla.set_custom_compile_options()` utility for setting compile options globally.
  * Added the internal binding `_XLAC._set_custom_compile_options()`.
* C++ Runtime
  * Added a `SetCustomCompileOptions()` virtual method to `ComputationClient` and implemented it in `PjRtComputationClient`.
  * `PjRtComputationClient` now stores `custom_compile_options_` and injects them into `xla::CompileOptions.env_option_overrides` during compilation.
  * Options are stringified before being passed to XLA for compatibility.
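The stringification step in the last bullet can be sketched in Python. The exact rendering the backend uses is not specified in this PR, so the choices below (lowercase booleans in XLA flag style, numbers via `str()`) are assumptions for illustration:

```python
def stringify_option(value):
    # Check bool before int: bool is a subclass of int in Python,
    # and we assume XLA-style lowercase "true"/"false" rendering.
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return value
    raise TypeError(f"unsupported option type: {type(value).__name__}")

# All values become strings before reaching env_option_overrides.
overrides = {
    key: stringify_option(value)
    for key, value in {"flag_a": True, "flag_b": 0.5, "flag_c": "auto"}.items()
}
```

Doing the conversion once at the runtime boundary keeps the Python-side dict typed (`bool`/`float`/`int`/`str`) while giving XLA the uniform string representation it expects.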
Motivation: This enables advanced users to pass through backend-specific
tuning flags (e.g., enabling experimental optimizations, toggling
partitioning strategies) without hardcoding them, improving flexibility
for research and debugging workflows.
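Putting the pieces together, a usage sketch based on the API described above. The `validate_options` helper and the option names are hypothetical, added only to illustrate the supported value types; the `torch_xla` calls themselves need an XLA device, so they are shown as comments:

```python
# Hypothetical helper mirroring the documented constraint that option
# values must be bool, float, int, or str.
def validate_options(options):
    allowed = (bool, float, int, str)
    for key, value in options.items():
        if not isinstance(key, str) or not isinstance(value, allowed):
            raise TypeError(f"unsupported option: {key!r}={value!r}")
    return options

# Example flag names are placeholders, not taken from the PR.
options = validate_options({
    "example_experimental_flag": True,
    "example_numeric_knob": 2,
})

# With torch_xla installed, the options would be passed as described:
# import torch_xla
# compiled = torch_xla.compile(model, custom_compile_options=options)
# or set globally for subsequent compilations:
# torch_xla.set_custom_compile_options(options)
```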