
Unreasonably slow just-in-time compilation of permutedims when the array dimension is high #42438

@GiggleLiu

Description


Permuting the dimensions of high-dimensional tensors is a common operation in tensor networks.

julia> using Random, Profile

julia> (n=24; t = randn(fill(2, n)...); @time permutedims(t, randperm(n)));
 39.144106 seconds (1.21 M allocations: 184.261 MiB, 0.09% gc time, 99.45% compilation time)

julia> (n=24; t = randn(fill(2, n)...); @time permutedims(t, randperm(n)));
  0.194273 seconds (9 allocations: 128.001 MiB, 25.46% gc time)

julia> (n=28; t = randn(fill(2, n)...); @time permutedims(t, randperm(n)));
657.982409 seconds (1.04 M allocations: 2.044 GiB, 0.00% gc time, 99.20% compilation time)

julia> (n=28; t = randn(fill(2, n)...); @time permutedims(t, randperm(n)));
  6.854463 seconds (3.84 k allocations: 2.000 GiB, 1.76% gc time, 0.07% compilation time)

This is much longer than TensorOperations.tensorcopy (~6 s) on the same input.
Strikingly, from my observations the compilation time of permutedims scales exponentially with the tensor rank!
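One way to probe this scaling (a sketch; the rank range is illustrative, and timings will vary by machine) is to time the first call at several ranks. Each rank n is a distinct Array{Float64,n} type, so every iteration triggers a fresh compile:

```julia
using Random

# Measure the first-call (compilation-dominated) time of permutedims
# for a few ranks; each rank n is a distinct Array{Float64,n} type,
# so every iteration forces a fresh compilation.
for n in 16:2:22
    t = randn(fill(2, n)...)
    p = randperm(n)
    stats = @timed permutedims(t, p)   # first call includes compile time
    println("rank = $n: first call took $(round(stats.time; digits = 3)) s")
end
```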

Related code block (from the @generated permutedims! implementation in Base, which uses Base.Cartesian to build the loop nest):

@nloops($N, i, P,
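For context, @nloops from Base.Cartesian expands at macro-expansion time into one explicit for-loop per dimension, so every rank hands the compiler a freshly generated loop nest. A minimal illustration (not the Base permutedims! code):

```julia
using Base.Cartesian

# @nloops 3 i A begin ... end expands into 3 nested for-loops over
# axes(A); @nref 3 A i expands to A[i_1, i_2, i_3]. A rank-N version
# would generate an N-deep nest for the compiler to process.
function sumall(A::Array{T,3}) where T
    s = zero(T)
    @nloops 3 i A begin
        s += @nref 3 A i
    end
    return s
end

sumall(ones(2, 2, 2))   # → 8.0
```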

Julia version: 1.7.0-beta4

The problem might be related to compiling the generated multi-dimensional loop nest.
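As a possible workaround (not a fix for the compiler issue), a lazy PermutedDimsArray reindexes without copying, so constructing it is O(1) and does not require a rank-specialized copy kernel. A sketch:

```julia
using Random

n = 10
t = randn(fill(2, n)...)
p = randperm(n)

# Lazy permuted view: no data is moved and no rank-specialized
# kernel needs to be compiled just to reindex the tensor.
v = PermutedDimsArray(t, Tuple(p))

@assert size(v) == ntuple(i -> size(t, p[i]), n)
# The all-ones index picks out the same element under any permutation.
@assert v[ones(Int, n)...] == t[ones(Int, n)...]
```

Whether a later copy of the view avoids the same compilation cost is a separate question; the view itself sidesteps it entirely when only reindexed access is needed.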
