This repository was archived by the owner on Apr 14, 2025. It is now read-only.

Adapt to TensorProducts.jl #24

Merged 9 commits on Mar 27, 2025

Conversation

@ogauthe (Collaborator) commented Mar 26, 2025

This PR adapts GradedUnitRanges to TensorProducts.jl, as explained in ITensor/BlockSparseArrays.jl#57.

codecov bot commented Mar 26, 2025

Codecov Report

Attention: Patch coverage is 88.88889% with 1 line in your changes missing coverage. Please review.

Project coverage is 77.53%. Comparing base (c88635b) to head (0fa9aa5).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
src/fusion.jl 88.88% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #24      +/-   ##
==========================================
+ Coverage   76.70%   77.53%   +0.83%     
==========================================
  Files           9        8       -1     
  Lines         382      365      -17     
==========================================
- Hits          293      283      -10     
+ Misses         89       82       -7     
Flag Coverage Δ
docs 0.00% <0.00%> (ø)


@mtfishman (Member)

@ogauthe this looks good to me, though do you know what's going on with the BlockSparseArrays test failure?

@ogauthe (Collaborator, Author) commented Mar 26, 2025

I find the distinction between tensor_product and fusion_product to be a source of complexity, especially since ⊗ is sometimes an alias for the first and sometimes for the second. The three functions end up being very similar, yet their slight differences lead to code duplication and confusion.

The only case where the distinction matters is in fusedims and splitdims for abelian symmetries, in order to get the permutation of the blocks. I think we should instead have an explicit unsorted_tensor_product for these specific cases, intended for internal use only. Then we could remove fusion_product and define it as an alias for tensor_product (this more or less means the current fusion_product becoming tensor_product, and the current tensor_product becoming an internal unsorted_tensor_product).
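A minimal Julia sketch of this proposed reorganization, using toy integer ranges in place of graded unit ranges (the names follow the comment above, but the bodies are illustrative stand-ins, not the actual GradedUnitRanges.jl implementations):

```julia
# Illustrative sketch only; not the actual GradedUnitRanges.jl definitions.

# Internal, unsorted variant, used only by fusedims/splitdims of abelian
# symmetries to recover the permutation of the blocks.
unsorted_tensor_product(a, b) = [x * y for y in b for x in a]

# Public entry point: fuse, then sort the blocks.
tensor_product(a, b) = sort(unsorted_tensor_product(a, b))

# fusion_product is kept only as a plain alias of tensor_product.
const fusion_product = tensor_product
```

Under this scheme the sorted behavior becomes the default, and callers that need the raw block order reach for the internal helper explicitly.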

@ogauthe (Collaborator, Author) commented Mar 26, 2025

@ogauthe this looks good to me, though do you know what's going on with the BlockSparseArrays test failure?

It tries to load ⊗ and tensor_product from GradedUnitRanges, where they no longer belong. Basically, BlockSparseArrays.jl also needs its own PR; I am working on it.

@mtfishman (Member)

The only case where the distinction matters is in fusedims and splitdims for abelian symmetries, in order to get the permutation of the blocks. I think we should instead have an explicit unsorted_tensor_product for these specific cases, intended for internal use only. Then we could remove fusion_product and define it as an alias for tensor_product (this more or less means the current fusion_product becoming tensor_product, and the current tensor_product becoming an internal unsorted_tensor_product).

That sounds reasonable to me.

@mtfishman (Member)

@ogauthe this looks good to me, though do you know what's going on with the BlockSparseArrays test failure?

It tries to load ⊗ and tensor_product from GradedUnitRanges, where they no longer belong. Basically, BlockSparseArrays.jl also needs its own PR; I am working on it.

In that case it sounds like this PR should be marked as breaking.

@mtfishman (Member)

The only case where the distinction matters is in fusedims and splitdims for abelian symmetries, in order to get the permutation of the blocks. I think we should instead have an explicit unsorted_tensor_product for these specific cases, intended for internal use only. Then we could remove fusion_product and define it as an alias for tensor_product (this more or less means the current fusion_product becoming tensor_product, and the current tensor_product becoming an internal unsorted_tensor_product).

That sounds reasonable to me.

Also, it sounds like with that plan we could define const ⊗ = tensor_product, as @lkdvos suggested in ITensor/TensorProducts.jl#2.

@ogauthe (Collaborator, Author) commented Mar 27, 2025

BlockSparseArrays is currently a test dependency of GradedUnitRanges. This makes testing a breaking release impossible, since GradedUnitRanges is also a (weak) dependency of BlockSparseArrays. We need to move the test for dag to BlockSparseArrays.
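For concreteness, a hypothetical Project.toml excerpt for the change being discussed (the actual file will list other test dependencies; only the Test stdlib UUID here is real):

```toml
# Project.toml (hypothetical excerpt): BlockSparseArrays dropped from the
# test target so a breaking release can be tested independently.
[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Test"]
```

Removing the entry from both `[extras]` and the `test` target is what breaks the circular test-time dependency described above.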

@mtfishman (Member)

Agreed, we should remove BlockSparseArrays as a test dependency.

@ogauthe (Collaborator, Author) commented Mar 27, 2025

Not sure about the Documentation and Literate Check errors. They complain about version compatibility. Are they expected to fail for a breaking release?

@mtfishman (Member) commented Mar 27, 2025

Not sure about Documentation and Literate Check errors. They complain about version compatibility. Are they expected to fail for a breaking release?

They use docs/Project.toml, which also needs to be bumped to GradedUnitRanges v0.2. It is a bit annoying; ideally we wouldn't need that compat entry, but without it the CompatHelper workflow will make PRs adding it, which can't be disabled right now, and CompatHelper is helpful for keeping the other entries up to date.
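A sketch of the compat entry in question (the surrounding file contents are assumed, not taken from the repository):

```toml
# docs/Project.toml (excerpt): compat bound bumped for the breaking release
[compat]
GradedUnitRanges = "0.2"
```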

@mtfishman (Member)

Looks like a nice simplification of the code. Ready to merge?

@mtfishman mtfishman merged commit a8c095e into ITensor:main Mar 27, 2025
13 checks passed
@ogauthe ogauthe deleted the adapt_TensorProducts branch March 27, 2025 15:37