Define tensor_product #2

Merged · 17 commits · Mar 26, 2025

Conversation

@ogauthe ogauthe commented Mar 24, 2025

First step of the refactor detailed in ITensor/BlockSparseArrays.jl#57

`OneToOne`, `tensor_product`, `dual` and `flip` are now defined in this lightweight package.

As we may get rid of LabelledNumbers.jl, I avoided having it as a dependency.

codecov bot commented Mar 24, 2025

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

Thanks for integrating Codecov - We've got you covered ☂️

@mtfishman

Thanks @ogauthe. Do you have a use case for dual, dag, and flip being in this package? I think I brought it up because I thought it may be useful for ITensor/TensorAlgebra.jl#39 but for now I went with a different design where it isn't needed. I would lean towards holding off on that for now until we have a compelling use case.

@mtfishman

mtfishman commented Mar 24, 2025

Also, do we plan to define ⊗(r1::AbstractGradedUnitRange, r2::AbstractGradedUnitRange) as fusion_product(r1, r2)? That seems like a good definition, but it makes me wonder how it would fit into the current design. If we want that, maybe a design could be:

```julia
⊗(r1::AbstractUnitRange, r2::AbstractUnitRange, r_rest::AbstractUnitRange...) = ⊗(⊗(r1, r2), r_rest...)

# Fallback definition
⊗(r1::AbstractUnitRange, r2::AbstractUnitRange) = tensor_product(r1, r2)
```

Then in GradedUnitRangesTensorProductsExt we could define:

```julia
TensorProducts.:⊗(r1::AbstractGradedUnitRange, r2::AbstractGradedUnitRange) = fusion_product(r1, r2)
```

@ogauthe

ogauthe commented Mar 24, 2025

I have no strong opinion on where dual should live; I put it here since you mentioned it. I will remove it.

@mtfishman

Relatedly, after we discussed the idea of introducing this package, it made me wonder how `FusionStyle` fits in. `fusedims` is based on the `FusionStyle` of the array being fused, which affects how the axes get fused together.

@ogauthe

ogauthe commented Mar 24, 2025

I now allow specializing into `fusion_product`.

I am unsure about `FusionStyle`. As we discussed in ITensor/BlockSparseArrays.jl#57, `FusionStyle` cannot be only about axes; it has to care about arrays. So I think it is better to keep it in TensorAlgebra, where `fusedims` is defined.

@mtfishman

> I am unsure about FusionStyle. As we discussed in ITensor/BlockSparseArrays.jl#57, FusionStyle cannot be only about axes; it has to care about arrays. So I think it is better to keep it in TensorAlgebra, where fusedims is defined.

I agree, ideally we wouldn't add that complexity here. The use case I have in mind is a symmetric tensor (say, a BlockSparseArray with graded axes) where you want to perform fusedims but not fuse the common sectors. That could be a reasonable way to do a tensor contraction that takes advantage of more sparsity, which can be helpful for contracting MPO tensors, which are often very sparse. But in that case, fusedims could call out to sector_product to fuse the axes, as opposed to ⊗ or fusion_product.
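To illustrate the distinction being discussed, here is a toy sketch (hypothetical, not the package API: axes are modeled as vectors of `sector => multiplicity` pairs, and sectors fuse additively as in an abelian group):

```julia
# Toy model: an axis is a vector of (sector => multiplicity) pairs.
# sector_product keeps every sector pair separate, preserving more
# block sparsity; fusion_product merges repeated fused sectors.
sector_product(a, b) = [(sa, sb) => ma * mb for (sa, ma) in a for (sb, mb) in b]

function fusion_product(a, b)
  merged = Dict{Int,Int}()
  for (s, m) in sector_product(a, b)
    fused = s[1] + s[2]  # abelian toy fusion: charges add
    merged[fused] = get(merged, fused, 0) + m
  end
  return sort(collect(merged); by=first)
end

a = [0 => 1, 1 => 2]
b = [0 => 1, 1 => 1]
sector_product(a, b)  # 4 separate blocks
fusion_product(a, b)  # merged: [0 => 1, 1 => 3, 2 => 2]
```

In this sketch, `sector_product` yields one block per pair of input sectors, while `fusion_product` collapses blocks with the same fused sector, trading sparsity for fewer, larger blocks.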

@mtfishman changed the title from "define tensor_product and dual" to "Define tensor_product" on Mar 24, 2025
```julia
# This file defines the struct OneToOne.
# OneToOne represents the range `1:1` or `Base.OneTo(1)`.

struct OneToOne{T} <: AbstractUnitRange{T} end
```
Collaborator

Is it obvious that we want this to be a separate type? Why not just use `Base.OneTo(1)`?
I wouldn't expect this to be performance critical, and this is simply more work for the compiler?

Collaborator Author

I think a dedicated OneToOne can be useful as a dummy axis for the tensor product of zero axes.
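As a sketch of that use (hypothetical, mirroring the struct above, not the package source): the zero-argument tensor product can return `OneToOne()` as the trivial axis, playing the role of the identity element for `⊗`.

```julia
# Sketch: OneToOne as the trivial axis returned by an empty tensor product.
struct OneToOne{T} <: AbstractUnitRange{T} end
OneToOne() = OneToOne{Int}()

Base.first(::OneToOne) = 1
Base.last(::OneToOne) = 1

# The tensor product of zero axes is the trivial axis of length 1.
tensor_product() = OneToOne()
```

Here `length(tensor_product()) == 1`, so it acts as a neutral placeholder when no axes are fused.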

```julia
# This file defines an interface for the tensor product of two axes:
# https://en.wikipedia.org/wiki/Tensor_product

⊗() = tensor_product()
```
Collaborator

This could also be an alias:

Suggested change:
```diff
-⊗() = tensor_product()
+const ⊗ = tensor_product
```

Then you could define methods for either, instead of both.

Collaborator Author

This is not compatible with `⊗` being either `tensor_product` or `fusion_product` depending on the input.
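A minimal sketch of that point (hypothetical stand-in types, not the package source): if `⊗` were `const ⊗ = tensor_product`, every method added to `⊗` would be a method of `tensor_product` itself, so `⊗` could not fall back to `tensor_product` for plain ranges while dispatching to `fusion_product` for graded ones.

```julia
# Hypothetical sketch: ⊗ as its own function with a tensor_product fallback.
tensor_product(r1, r2) = (:tensor, r1, r2)
fusion_product(r1, r2) = (:fusion, r1, r2)

struct GradedRange end  # stand-in for AbstractGradedUnitRange

# Fallback: plain inputs go through tensor_product.
⊗(r1, r2) = tensor_product(r1, r2)

# Specialization: graded inputs fuse instead, without touching tensor_product.
⊗(r1::GradedRange, r2::GradedRange) = fusion_product(r1, r2)
```

With this, `⊗(1:2, 1:3)` calls `tensor_product` while `⊗(GradedRange(), GradedRange())` calls `fusion_product`, and `tensor_product(GradedRange(), GradedRange())` itself is unchanged — which would be impossible if `⊗` were an alias.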

@ogauthe

ogauthe commented Mar 26, 2025

Having `Bool` as the default type parameter for `OneToOne` leads to several issues (the default `blockaxes` crashes and needed an overwrite; other BlockArrays functions crash). `Base.OneTo` has a dedicated line to throw when one tries to construct `OneTo(true)`. I think we should follow the same convention and use `Int` as the default.

@mtfishman

> Having Bool as the default type parameter for OneToOne leads to several issues (the default blockaxes crashes and needed an overwrite; other BlockArrays functions crash). Base.OneTo has a dedicated line to throw when one tries to construct OneTo(true). I think we should follow the same convention and use Int as the default.

Fine with me.

@ogauthe

ogauthe commented Mar 26, 2025

I think this is ready to merge.

@mtfishman

> I think this is ready to merge.

Looks good to me as well, I'll merge.

@mtfishman mtfishman merged commit 6eb616f into ITensor:main Mar 26, 2025
11 checks passed
@ogauthe ogauthe deleted the tensor_product branch March 26, 2025 13:52