I am trying to implement logic in a neural network that dynamically routes a sample based on some condition. I built a dummy example of what the network should look like, and I would like to export this model to MLIR. When I try to do so using torch-mlir, I get an error. I would like to know whether the torch.cond operator is not supported or whether my implementation is just wrong.
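For context, here is a minimal sketch of the kind of conditional model and export call described above. This is illustrative only, not the original reproducer; the CondModel structure, the layer sizes, and the predicate are assumptions. Exporting a model along these lines should exercise the same torch.cond path and fail with the traceback below:

import torch
import torch.nn as nn
from torch_mlir import fx

class CondModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Two branches with identical output shapes, as torch.cond requires.
        self.branch_a = nn.Conv2d(3, 8, 3, padding=1)
        self.branch_b = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        def true_fn(x):
            return self.branch_a(x)

        def false_fn(x):
            return self.branch_b(x)

        # Data-dependent predicate that selects the branch at runtime.
        pred = x.mean() > 0
        return torch.cond(pred, true_fn, false_fn, (x,))

cond_model = CondModel()
module = fx.export_and_import(cond_model, torch.ones(1, 3, 32, 32), output_type="torch")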
module = export_and_import(cond_model, torch.ones(1, 3, 32, 32), output_type="torch")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jhassani/miniconda3/lib/python3.12/site-packages/torch_mlir/fx.py", line 111, in export_and_import
fx_importer.import_frozen_program(
File "/home/jhassani/miniconda3/lib/python3.12/site-packages/torch_mlir/extras/fx_importer.py", line 901, in import_frozen_program
return self.import_stateless_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jhassani/miniconda3/lib/python3.12/site-packages/torch_mlir/extras/fx_importer.py", line 947, in import_stateless_graph
node_importer.import_nodes(
File "/home/jhassani/miniconda3/lib/python3.12/site-packages/torch_mlir/extras/fx_importer.py", line 1462, in import_nodes
self._import_hop(loc, node, target)
File "/home/jhassani/miniconda3/lib/python3.12/site-packages/torch_mlir/extras/fx_importer.py", line 1566, in _import_hop
raise NotImplementedError(
NotImplementedError: Higher-order operation 'cond' not implemented in the FxImporter (tried '_import_hop_cond')
Additional information
torch version: 2.7.0.dev20250210+cpu
torchvision version: torchvision-0.22.0.dev20250210+cpu
torch_mlir version: 20250127.357
I ran the code and got a similar error message. Correct me if I'm wrong:
I think the problem is in how Torch-MLIR's FxImporter (and the underlying TorchDynamo integration) currently handles higher-order operations like torch.cond with nested (inline) branch functions. The importer dispatches each higher-order op to a dedicated handler, and there is no handler for cond yet, so Torch-MLIR currently isn't set up to handle models like the example above.
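A quick way to confirm this is to inspect the exported FX graph before handing it to Torch-MLIR; cond shows up as a call_function node whose target is a higher-order operator (the cond_model name below refers to the hypothetical sketch above):

import torch
from torch._ops import HigherOrderOperator

ep = torch.export.export(cond_model, (torch.ones(1, 3, 32, 32),))
for node in ep.graph_module.graph.nodes:
    # Higher-order ops such as cond appear as call_function nodes whose
    # target is a HigherOrderOperator rather than a regular ATen op.
    if node.op == "call_function" and isinstance(node.target, HigherOrderOperator):
        print(node.target)

This is the node that the FxImporter routes to _import_hop, which then looks for a _import_hop_cond handler and raises NotImplementedError because none exists.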
Yes, I think so. After working on this, the best workaround I found is to split the model into two models and write the "if" logic at the application level. This is not ideal for me because I want to compile the IR afterwards, but it solves the issue.
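A rough sketch of that workaround, with hypothetical BranchA/BranchB modules standing in for the two halves of the real model (the names, layer contents, and route helper are assumptions):

import torch
import torch.nn as nn
from torch_mlir import fx

class BranchA(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

class BranchB(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

example = torch.ones(1, 3, 32, 32)

# Each branch exports cleanly on its own because neither contains torch.cond.
module_a = fx.export_and_import(BranchA(), example, output_type="torch")
module_b = fx.export_and_import(BranchB(), example, output_type="torch")

# The data-dependent routing stays in application code; run_a and run_b stand
# for whatever executables the two modules are compiled into downstream.
def route(x, run_a, run_b):
    return run_a(x) if x.mean() > 0 else run_b(x)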