Conversation

@sanderdemeyer (Contributor) commented Nov 12, 2025

This implements the exponential of a matrix for both BLASFloats and BigFloats.

I have named these functions exponential and exponential!, instead of the usual exp and exp! from LinearAlgebra. Extending those methods while keeping the current structure built on @algdef and @functiondef results in naming conflicts. The default for BLASFloats is to use LinearAlgebra.exp!. In TensorKit, we can still stick to the exp naming convention.

codecov bot commented Nov 12, 2025

Codecov Report

❌ Patch coverage is 93.98496% with 8 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/implementations/exponential.jl | 93.69% | 7 Missing ⚠️ |
| src/interface/exponential.jl | 88.88% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| ext/MatrixAlgebraKitGenericSchurExt.jl | 100.00% <100.00%> (ø) |
| src/MatrixAlgebraKit.jl | 100.00% <ø> (ø) |
| src/interface/matrixfunctions.jl | 68.96% <100.00%> (ø) |
| src/interface/exponential.jl | 88.88% <88.88%> (ø) |
| src/implementations/exponential.jl | 93.69% <93.69%> (ø) |

... and 4 files with indirect coverage changes


end

function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
D, V = eigh_full(A, alg.eigh_alg)
Member:

Why not allocate D and V yourself and then use eigh_full!, so that you don't have to copy A?

Member:

I think we have actually needed this quite frequently; how about we add the signature to the @functiondef implementations and have this everywhere?

$f!(A, alg::AbstractAlgorithm) = $f!(A, initialize_output($f!, A, alg), alg)
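(For concreteness, for f = exponential this generic definition would expand to something like the sketch below; initialize_output and AbstractAlgorithm are the package's existing names.)

```julia
# Hypothetical expansion of the generated method for f = exponential:
exponential!(A, alg::AbstractAlgorithm) =
    exponential!(A, initialize_output(exponential!, A, alg), alg)
```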

Member:

(recommend doing this in a separate PR, easier to get merged 😉 )

Contributor Author:

For me, it is definitely fine to leave it like this and think about it in a separate PR.

Member:

Let me rephrase: I mean to first add that and then rebase this on top.


function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
D, V = eigh_full(A, alg.eigh_alg)
copyto!(expA, V * Diagonal(exp.(diagview(D))) * inv(V))
Member:

Reduced allocation strategy. Suggested change:

```diff
- copyto!(expA, V * Diagonal(exp.(diagview(D))) * inv(V))
+ iV = inv(V)
+ map!(exp, diagview(D))
+ mul!(expA, rmul!(V, D), iV)
```

@sanderdemeyer (Contributor Author), Nov 13, 2025:

It has to be map!(exp, diagview(D), diagview(D)) instead of map!(exp, diagview(D)), but good suggestion otherwise. I have also added it for the ExponentialViaEig.
EDIT: the two-argument map!(exp, diagview(D)) only exists from Julia 1.12 onwards. That's why I will keep the version with 3 arguments.
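(For reference, the pieces above assembled into one sketch, assuming, as in this PR, that eigh_full returns D as a Diagonal so that diagview and rmul! apply:)

```julia
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
    D, V = eigh_full(A, alg.eigh_alg)
    iV = inv(V)                          # inverse of the eigenvector matrix
    map!(exp, diagview(D), diagview(D))  # exponentiate the eigenvalues in place (3-arg form)
    mul!(expA, rmul!(V, D), iV)          # expA = V * exp(D) * V⁻¹ without extra temporaries
    return expA
end
```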

Member:

Why not just diagview(D) .= exp.(diagview(D))?

Contributor Author:

Is that more efficient than the current code? If not, I'd prefer to keep it that way, since it feels a bit more natural to me.

  • change some input tests
  • remove redundant comment
  • include ComplexF16 in tests
  • fix unchanged test names and docs
  • improve allocations
@lkdvos (Member) left a comment:

Overall I'm not fully convinced by the interface of exponential(!); especially in its current form and implementation, this looks slightly strange.

LinearAlgebra uses an in-place version, i.e. exp! reuses the input array for its result, and looking at the different implementations you have here, it is not obvious that trying to fit this into an exponentiate!(A, expA, alg) signature is really helping us - on the contrary, all this is really doing is creating an additional copy at the end just to make sure the result is allocated in the provided output.
As we discussed for your previous PR, this really is not the purpose of being able to provide the output argument.

For the algorithms, thinking a bit ahead, it might be appropriate to just call these something along the lines of matrix functions via eig, since presumably these approaches are actually generic for all of these implementations.

end

function MatrixAlgebraKit.default_exponential_algorithm(E::Type{T}; kwargs...) where {T <: StridedMatrix{<:Union{BigFloat, Complex{BigFloat}}}}
return ExponentialViaEig(GS_QRIteration(; kwargs...))
Member:

Suggested change:

```diff
- return ExponentialViaEig(GS_QRIteration(; kwargs...))
+ eig_alg = MatrixAlgebraKit.default_eig_alg(E; kwargs...)
+ return ExponentialViaEig(eig_alg)
```

or something similar


function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
D, V = eigh_full(A, alg.eigh_alg)
iV = inv(V)
Member:

Suggested change:

```diff
- iV = inv(V)
+ iV = adjoint(V)
```

for the hermitian case this shouldn't be a full inverse, but you then do have to adapt the remainder of the implementation to avoid overwriting it

Member:

Yes I guess that is my fault; I didn't think of adjoint(V) in the Hermitian case when I suggested this last night. I think one can do something like

```julia
diagview(D) .= exp.(diagview(D) ./ 2)
rmul!(V, D)
mul!(expA, V, adjoint(V))
```
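(The trick works because for Hermitian A one has exp(A) = V·e^D·V† = (V·e^{D/2})·(V·e^{D/2})†, with e^{D/2} real, so a single scaled copy of V suffices. Assembled into a sketch with the PR's names:)

```julia
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
    D, V = eigh_full(A, alg.eigh_alg)
    diagview(D) .= exp.(diagview(D) ./ 2)  # halve in the exponent: one factor per side
    rmul!(V, D)                            # V ← V * exp(D/2), scales columns in place
    mul!(expA, V, adjoint(V))              # (V e^{D/2}) (V e^{D/2})† = V e^D V†
    return expA
end
```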

Comment on lines 33 to 35
function initialize_output(::typeof(exponential!), A::Diagonal, ::DiagonalAlgorithm)
return similar(A)
end
Member:

Suggested change:

```diff
- function initialize_output(::typeof(exponential!), A::Diagonal, ::DiagonalAlgorithm)
-     return similar(A)
- end
+ initialize_output(::typeof(exponential!), A::Diagonal, ::DiagonalAlgorithm) = A
```

For diagonal, we try to avoid allocations as much as possible.
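(A matching kernel could then run fully in place. The method below is a hypothetical sketch, not code from the PR, and assumes diagview also applies to Diagonal:)

```julia
function exponential!(A::Diagonal, expA::Diagonal, ::DiagonalAlgorithm)
    diagview(expA) .= exp.(diagview(A))  # elementwise exponential of the diagonal;
    return expA                          # with the suggestion above, expA === A
end
```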

# Implementation
# --------------
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaLA)
copyto!(expA, LinearAlgebra.exp(A))
Member:

Suggested change:

```diff
- copyto!(expA, LinearAlgebra.exp(A))
+ check_input(exponential!, A, expA, alg)
+ copyto!(expA, LinearAlgebra.exp(A))
```

return expA
end

function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaEigh)
Member:

also missing check_input here


# Implementation
# --------------
function exponential!(A::AbstractMatrix, expA::AbstractMatrix, alg::ExponentialViaLA)
Member:

This implementation is a bit strange; LinearAlgebra actually uses exp!(A) as its expert driver, so at least it is allowed to destroy the input matrix. Additionally, you should also try to avoid allocating expA, and simply have expA === A.
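(One way to realize this, sketched under the assumption that initialize_output hands back A itself; note that LinearAlgebra.exp! is unexported internal API:)

```julia
function exponential!(A::StridedMatrix{<:LinearAlgebra.BlasFloat}, expA::AbstractMatrix, alg::ExponentialViaLA)
    check_input(exponential!, A, expA, alg)
    expA === A || copy!(expA, A)     # copy only if a distinct output was provided
    return LinearAlgebra.exp!(expA)  # expert driver: overwrites its argument with exp(A)
end
```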

# -------
function initialize_output(::typeof(exponential!), A::AbstractMatrix, ::AbstractAlgorithm)
n = size(A, 1) # square check will happen later
expA = similar(A, (n, n))
Member:

For eigenvalue-based decompositions, are there considerations about obtaining complex eigenvalues from real matrices, and what this means for the output? I guess the output for a real matrix can always be made real, but does that work with the current implementation and finite numeric precision?

Member:

That's a good comment; I think this is one of the main reasons why the matrix exponential is typically not computed via the eigenvalue decomposition in the general (non-Hermitian) case. Also, the eigenvalue decomposition could be (close to) unstable, in the case of Jordan blocks.

@sanderdemeyer (Contributor Author), Nov 13, 2025:

I hadn't thought of this, and indeed the hermitian tests don't capture it either. I can change the current code to

```julia
map!(exp, diagview(D), diagview(D))
rmul!(V, D)
copyto!(expA, real(V * iV))
```

to accommodate this. This is not the most elegant solution, but I don't think we can get around this problem for the non-hermitian case.
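(A quick demonstration of the issue with plain LinearAlgebra.eigen, independent of the PR's eig_full: the imaginary part is pure roundoff and must be dropped explicitly.)

```julia
using LinearAlgebra

A = [0.0 -1.0; 1.0 0.0]                        # real matrix with eigenvalues ±im
vals, vecs = eigen(A)                          # complex eigenvalues and eigenvectors
expA = vecs * Diagonal(exp.(vals)) / vecs      # complex-valued result
maximum(abs, imag(expA))                       # ≈ 1e-16: real only up to roundoff
real(expA) ≈ [cos(1) -sin(1); sin(1) cos(1)]   # the exact exponential (rotation by 1 rad)
```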

Member:

As a general note: copyto! really means a memcopy of the data, and has no notion of axes. Almost everywhere you really want copy! instead.

Specifically here, at least it should be:

expA .= real.(V * iV)

otherwise you again have the issue of first allocating a completely new array to then immediately copy it.
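(The distinction in a nutshell, plain Julia, nothing PR-specific:)

```julia
a = zeros(4); b = ones(2, 2)
copyto!(a, b)  # succeeds: element-by-element memcopy, shape is ignored
copy!(a, b)    # throws: copy! insists that the axes of source and destination match
```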

@sanderdemeyer (Contributor Author) commented Nov 13, 2025

> Overall I'm not fully convinced by the interface of exponential(!); especially in its current form and implementation, this looks slightly strange.
>
> LinearAlgebra uses an in-place version, i.e. exp! reuses the input array for its result, and looking at the different implementations you have here, it is not obvious that trying to fit this into an exponentiate!(A, expA, alg) signature is really helping us - on the contrary, all this is really doing is creating an additional copy at the end just to make sure the result is allocated in the provided output. As we discussed for your previous PR, this really is not the purpose of being able to provide the output argument.

The idea of putting it in this framework is to allow for different sorts of algorithms, i.e. ones that would also work for BigFloats. I get that in the BLASFloat case, we should avoid extra allocations, but could you elaborate on your suggestion for ExponentialViaLA? Do you just want to get rid of the preallocated output?

> For the algorithms, thinking a bit ahead, it might be appropriate to just call these something along the lines of matrix functions via eig, since presumably these approaches are actually generic for all of these implementations.

I may be missing your point here, but wasn't that the idea behind the ExponentialViaEigh and ExponentialViaEig naming conventions? Or do you really want separate names for each eig_alg?

@lkdvos (Member) commented Nov 13, 2025

> The idea of putting it in this framework is to allow for different sorts of algorithms, i.e. ones that would also work for BigFloats. I get that in the BLASFloat case, we should avoid extra allocations, but could you elaborate on your suggestion for ExponentialViaLA? Do you just want to get rid of the preallocated output?

I am indeed referring to the preallocated output, not the algorithms part. It really only makes sense to have the option of giving a preallocated output if we are actually able to use it, and for the current implementations you have, this is not saving us any work; rather, it is increasing it, because you add an extra allocation at the beginning and an extra copy at the end.

While it is definitely possible to have initialize_output(exponentiate!, A) = A to just compute the result in place, I still wonder whether it is useful to implement the boilerplate for fitting it into a framework that we are mostly going to bypass anyway.

> I may be missing your point here, but wasn't that the idea behind the ExponentialViaEigh and ExponentialViaEig naming conventions? Or do you really want separate names for each eig_alg?

Sorry I should have explained that better, I meant that I would like to avoid having to also define SqrtViaEig, SinViaEig, ..., and simply have some algorithm that signifies "solve this by doing an eigenvalue decomposition".
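(Sketching that idea with hypothetical names; only eig_full, diagview, mul!/rmul!, and AbstractAlgorithm are existing names here, and MatrixFunctionViaEig is the name later adopted in the commits below:)

```julia
struct MatrixFunctionViaEig{A} <: AbstractAlgorithm
    eig_alg::A
end

# One shared kernel; each matrix function then becomes a one-liner:
function matrixfunction!(f, A::AbstractMatrix, fA::AbstractMatrix, alg::MatrixFunctionViaEig)
    D, V = eig_full(A, alg.eig_alg)
    map!(f, diagview(D), diagview(D))  # apply f to the eigenvalues
    mul!(fA, rmul!(V, D), inv(V))      # fA = V * f(D) * V⁻¹
    return fA
end

exponential!(A, expA, alg::MatrixFunctionViaEig) = matrixfunction!(exp, A, expA, alg)
```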

@sanderdemeyer (Contributor Author) commented Nov 13, 2025

> I am indeed referring to the preallocated output, not the algorithms part. It really only makes sense to have the option of giving a preallocated output if we are actually able to use it, and for the current implementations you have, this is not saving us any work; rather, it is increasing it, because you add an extra allocation at the beginning and an extra copy at the end.
>
> While it is definitely possible to have initialize_output(exponentiate!, A) = A to just compute the result in place, I still wonder whether it is useful to implement the boilerplate for fitting it into a framework that we are mostly going to bypass anyway.

Is your suggestion then to skip the whole @functiondef and other general frameworks we have for defining exponentiate and exponentiate!, or to keep the current framework but just remove the preallocated output argument?

> Sorry I should have explained that better, I meant that I would like to avoid having to also define SqrtViaEig, SinViaEig, ..., and simply have some algorithm that signifies "solve this by doing an eigenvalue decomposition".

Okay, I see. I agree and will change this.

@Jutho (Member) commented Nov 13, 2025

Regarding the algorithm names, I agree with Lukas and also think we want a general MatrixFunctionViaEig, though it is useful to know which function exactly, because some functions map real matrices to real matrices (even if complex eigenvalues are involved) and others do not. I think these are the two major categories, so we could just have two different algorithms for that, but I don't know good names for them.

Regarding the role of the output arguments, I only partially agree. The whole point of why we started MatrixAlgebraKit.jl is that in TensorKit we first want to define the output tensor, and then compute the result block per block, storing each block in the corresponding block of the output tensor. Ideally, yes, the computation is such that we also use that output data as storage during the computation, in such a way that the end result "naturally" ends up there, but if that is difficult, a final copy! can still be useful. We can then always try to improve this later on behind the scenes, but at least TensorKit can be agnostic about this.

Note that the LinearAlgebra exp! is also cheating: it typically also ends up just copying the final result back into A. There is no way to actually compute exp(A) fully in place. The typical approach via Padé approximations has a ton of allocations. I think I actually have a slightly allocation-friendlier implementation lying around from some things we did in CMPSKit.jl.

@Jutho (Member) commented Nov 13, 2025

Regardless of the comment about general matrix functions, it is a fact that the exponential is by far the most useful and common one that we need, so I am also not opposed to first thinking carefully about this one, and having some part of the implementation be specific for matrix exponentials.

In particular, one important consideration that we might want to include in this design, specific to our use case, is that we might also be interested in computing exp(-im * δt * H) for real-time evolution, where it is probably useful to use eigh for the Hermitian matrix H, even though the end result will be complex. Julia has a cis function with cis(x) = exp(im * x), but unfortunately it comes without a minus sign in the argument. But maybe cis(-δt * H) is not too bad.
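(The Hermitian-H case described here, sketched in plain LinearAlgebra; the result is unitary even though H is real:)

```julia
using LinearAlgebra

X = randn(4, 4); H = Hermitian(X + X')          # a real symmetric test Hamiltonian
δt = 0.1
vals, vecs = eigen(H)                           # real eigenvalues, orthonormal eigenvectors
U = vecs * Diagonal(cis.(-δt .* vals)) * vecs'  # U = exp(-im * δt * H)
U * U' ≈ I                                      # true: unitary up to roundoff
```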

@lkdvos (Member) commented Nov 13, 2025

To comment on the TensorKit interaction: I definitely agree with the purpose, but this is not actually the design we currently ended up with.
Our interface very clearly states that we may use the provided output, but are in no way bound to do so.
This is reflected in the TensorKit implementation, see e.g. https://github.com/QuantumKitHub/TensorKit.jl/blob/feab2072b2909b9a127d871ef58c36e83623c2fc/src/factorizations/matrixalgebrakit.jl#L39
I'm of course happy to rediscuss that design decision, but I'm actually quite happy with how it is shaping up.

So basically there are two comments I have:

On the one hand, there is the question about whether or not there are implementations that benefit from providing an additional output array.
From looking at the implementations here, this is not obvious at all; actually, it seems like instead we are simply allocating an additional array at the end to copy the results into, which we might as well have reused A for, since we already "destroyed" that data anyways.
I am not saying that the implementations have to be fully allocation-free (the LAPACK routines still have workspaces too), I just think that if providing the output is a way to avoid having to do an allocation, it really shouldn't be increasing the allocations instead.
If we can't come up with a case where having this additional array actually helps us out, it might be reasonable to simply define exponentiate!(A, alg) -> expA where expA === A instead, which would still work for TensorKit in the exact same way.
I definitely agree that this can be achieved by hacking it into the current interface as well, by simply having initialize_output(exponentiate!, A) = A; I just wanted to open up the discussion about whether or not that is something we like, or think this is just boilerplate we aren't using.
Giving it some thought, I'm definitely okay with the initialize_output(...) = A default and going from there, i.e. keeping @functiondef and "hacking" this into that.

On the other hand, given that interface, if there is no way of naturally making the output end up in the provided destination, I would really like to avoid ending up with a final copy!(provided_dest, computed_exp) at the end, even as a "for now" implementation that we can improve on later.
Given that we have to check for in-placeness in TensorKit anyways, I would in this case much rather have initialize_output() = nothing, and simply return the computed_exp instead.
This is the exact same comment and solution as for the generic linear algebra wrappers.

@Jutho (Member) commented Nov 13, 2025

I guess I am a bit confused, because most of the implementations now do actually perform the final step of the calculation in such a way that the result is directly stored in the output array, no? It is only the algorithm that goes via Base/LinearAlgebra that requires the extra copy! step, and that implementation I will happily replace with my own Padé implementation.

But it is also true that, by the time the final step of the calculation is reached, the memory of A, which has already been destroyed and no longer contains active information, can be reclaimed to store the result, so initialize_output(...) = A could indeed be the right design choice.

sanderdemeyer and others added 4 commits November 19, 2025 15:07
  • change name to `MatrixFunctionViaEig` etc
  • change `decompositions` to `matrixfunctions`
  • add default algorithm for Diagonal matrices
  • add input checks
  • add @testthrows to catch non-hermitian matrices being given to MatrixFunctionViaEigh
  • change default exponential algorithm to e.g. `MatrixFunctionViaEig` of the default `eig_alg`
github-actions bot commented Nov 20, 2025

Your PR no longer requires formatting changes. Thank you for your contribution!

@sanderdemeyer (Contributor Author) commented Nov 26, 2025

The last few commits added the functions exponentiali and exponentiali!, for which exponentiali(t, A) = exponential(im * t * A) holds. This can be useful for hermitian matrices A, where you still want to use the hermitian eigendecomposition of A, even though the matrix im * t * A is not itself hermitian.
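(The identity from this comment as an illustrative check, using the PR's function names; the test matrix is hypothetical:)

```julia
X = randn(ComplexF64, 3, 3); A = (X + X') / 2  # a Hermitian test matrix
t = 0.3
exponentiali(t, A) ≈ exponential(im * t * A)   # definitional identity
# Internally, the hermitian eigendecomposition of A can still be used,
# even though im * t * A itself is skew-hermitian rather than hermitian.
```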

@sanderdemeyer sanderdemeyer requested review from Jutho and lkdvos December 2, 2025 09:41
@lkdvos (Member) commented Dec 3, 2025

Could we revisit the names for a second?
I think exponentiali looks a bit off: Julia tends to use im, and the combination of li! at the end, all characters that look very similar, makes it a bit hard to read too.
Just throwing some thoughts out there:

  • Can we get away with a single function, and distinguish between the two cases using dispatch?
  • Base uses cis (cos + i sin), is there a way to use something similar?
  • It seems strange to support a scalar only in the imaginary case, and not for a real case, I would do either both or neither.
  • There are still a number of suggestions and unaddressed comments, unless the GitHub view is misleading me?

@sanderdemeyer (Contributor Author):

  • For me, exponentialim! would also be fine, although I really don't have any strong preference (you can assume this will be the case for all future naming discussions). If you have suggestions and/or a strong preference, feel free to either comment here or just push it directly.
  • The code works perfectly fine for both real and complex scalars. The reason we multiply with i explicitly is that for real scalars there would be no benefit in using the two-argument method over just using exponential(t*A). The only benefit of the second method is the ability to use the hermitian eigendecomposition for the exponential of skew-hermitian matrices. The reason why two different names are used is to make it fit with the current framework of @functiondef, where we need an @functiondef for exponential and an @functiondef n_args = 2 for exponentiali, which do not work together. The current framework just doesn't fit nicely with defining such constructions, and rather than throwing it all away and implementing it all ourselves for this function (which would not fit nicely with the rest of this package), defining the functions with two different names (1) allows us to use the current structure of this package, and (2) fits nicely with the argument of adding the imaginary unit. I would be okay with just not defining exponential(t, A), but @Jutho argued that this could be useful for real-time evolution (e.g. with cluster expansions), which is why I agree with him that this is the best approach.

There are a few comments that occur twice in the view, which I have reacted to once. The comments that I have taken into account and reacted to with a 'thumbs up' I haven't resolved, to make sure you saw they were taken into account. There are, I think, two comments left:

  • the one where you said "Let me rephrase: I mean to first add that and then rebase this on top." I don't really get what you mean with this comment. If you mean to first merge another PR (that hasn't been opened yet), I would really prefer to switch the order in view of time, since I would like to have the exponential function in the next version of TensorKit.
  • the one where you said "This implementation is a bit strange, LinearAlgebra actually uses exp!(A) as its expert driver, so at least this is allowed to destroy the input matrix, but additionally you should also try and avoid allocating expA, and simply have expA === A". GitHub marks this comment as outdated, and I don't see which lines you commented on (it is also already 3 weeks old). So could you either clarify what you mean or, as I said before, just push the desired changes directly if you have strong opinions on them.

@Jutho (Member) commented Dec 3, 2025

I think I finally managed to work through (some of) my backlog of PRs to review, so this one is among the next on my list. I hope to go through it (and if necessary commit changes myself directly) before the end of the week.
