Description
On the road to minimizing allocations and reusing memory where possible, I'd like to convert most of the functions to work in place whenever possible, from higher-level functions like `apply` down to lower-level functions like the various implementations of `factorize`.
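The allocating-versus-in-place distinction can be illustrated with NumPy (only an analogy; the actual functions in question, `apply` and `factorize`, are Julia and operate on ITensors):

```python
import numpy as np

a = np.random.rand(4, 4)
b = np.random.rand(4, 4)

# Allocating version: every call creates a fresh output array.
c = np.matmul(a, b)

# In-place style: write the result into a preallocated buffer,
# so repeated calls reuse the same memory instead of allocating.
out = np.empty((4, 4))
np.matmul(a, b, out=out)

assert np.allclose(c, out)
```

In a hot loop, the second style removes one allocation per call, which is exactly the kind of saving this issue is aiming for across the library.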
Where to start
Most of the time these functions need to group indices, so the contraction of a `CombinerTensor` with a `DenseTensor` is very common. As currently implemented, however, it always creates a new tensor, even when that is not necessary. For this reason I created a macro `@Combine!` that handles such a product by overwriting the `ITensor` with `DenseStorage`: it returns a new `ITensor` with the combined indices, but one that shares the same storage as the starting tensor. This macro could be used (in the future) every time an N-dimensional tensor must be reduced to a 2-dimensional tensor to perform low-level calculations.
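The key point, that grouping the indices of a contiguous dense tensor can be a metadata change rather than a copy, can be sketched in NumPy terms (an analogy only; the actual `@Combine!` macro operates on ITensor storage in Julia):

```python
import numpy as np

# A rank-3 "tensor" with indices (i, j, k) of dimensions (2, 3, 4).
t = np.arange(24, dtype=float).reshape(2, 3, 4)

# "Combine" the first two indices (i, j) into a single index of
# dimension 6. For contiguous dense storage this is just a reshape:
# no element is copied, only the shape metadata changes.
m = t.reshape(6, 4)

# Both objects view the same underlying buffer...
assert np.shares_memory(t, m)

# ...so writing through the matrix view is visible in the original tensor.
m[0, 0] = -1.0
assert t[0, 0, 0] == -1.0
```

This is the storage-sharing behavior described above: the combined 2-dimensional view can be handed to low-level (e.g. matrix-factorization) routines without allocating a second copy of the data.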
My Implementation
I have started this project on a personal fork of the library; for the moment I have only created the macro `@Combine!`. This is the repo I'm working on.
I wanted to ask for advice both on the implementation and on how to manage the repository: may I rebase the branch here, or is it better to keep it on a fork, and at what point would it make sense to open a pull request? Thank you in advance for the suggestions.