@@ -7,15 +7,17 @@ using ImageCore.MappedArrays: of_eltype
 
 """
 Although stored as an array, image can also be viewed as a function from discrete grid space
-Zᴺ to continuous space R (or C if it is complex value). This module provides the discrete
+Zᴺ to continuous space R if it is a gray image, to C if it is a complex-valued image
+(e.g., MRI raw data), to Rᴺ if it is a colorant image, etc.
+This module provides the discrete
 version of gradient-related operators by viewing image arrays as functions.
 
 This module provides:
 
 - forward/backward difference [`fdiff`](@ref) are the Images-flavor of `Base.diff`
 - gradient operator [`fgradient`](@ref) and its adjoint via keyword `adjoint=true`.
-- divergence operator [`fdiv`](@ref) is the negative sum of the adjoint gradient operator of
-  given vector fields.
+- divergence operator [`fdiv`](@ref) computes the sum of discrete derivatives of vector
+  fields.
 - laplacian operator [`flaplacian`](@ref) is the divergence of the gradient fields.
 
 Every function in this module has its in-place version.
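A minimal usage sketch of the operators listed above (this assumes the ImageBase package, which ships this `FiniteDiff` module, is installed; the array values are illustrative):

```julia
# Hedged sketch: assumes ImageBase is available and that fdiff/fgradient/
# flaplacian keep the signatures documented in this module.
using ImageBase.FiniteDiff: fdiff, fgradient, flaplacian

X = Float64[1 2; 3 5]

dX = fdiff(X, dims=1)   # forward difference along the first dimension
∇X = fgradient(X)       # tuple of gradient fields, one array per dimension
ΔX = flaplacian(X)      # divergence of the gradient fields

# sanity check: the gradient (hence laplacian) of a constant image is zero
all(iszero, flaplacian(ones(4, 4)))
```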
@@ -184,10 +186,11 @@ flaplacian(X::AbstractArray) = flaplacian!(similar(X, maybe_floattype(eltype(X))
 
 The in-place version of the laplacian operator [`flaplacian`](@ref).
 
-!!! tips Non-allocating
-    This function will allocate a new set of memories to store the intermediate
-    gradient fields `∇X`, if you pre-allcoate the memory for `∇X`, then this function
-    will use it and is thus non-allcating.
+!!! tip Avoiding allocations
+    The two-argument method will allocate memory to store the intermediate
+    gradient fields `∇X`. If you call this repeatedly with images of consistent size and type,
+    consider using the three-argument form with pre-allocated memory for `∇X`,
+    which will eliminate allocation by this function.
 """
 flaplacian!(out, X::AbstractArray) = fdiv!(out, fgradient(X))
 flaplacian!(out, ∇X::Tuple, X::AbstractArray) = fdiv!(out, fgradient!(∇X, X))
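The pre-allocation pattern recommended in the tip can be sketched as follows (sizes are hypothetical; assumes ImageBase is installed):

```julia
using ImageBase.FiniteDiff: flaplacian!

X = rand(Float64, 64, 64)
out = similar(X)
∇X = (similar(X), similar(X))  # one pre-allocated gradient buffer per dimension

# the three-argument form writes the intermediate gradient into ∇X and the
# result into out, so repeated calls on same-sized images allocate nothing new
flaplacian!(out, ∇X, X)
```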
@@ -206,7 +209,7 @@ Mathematically, the adjoint operator ∂ᵢ' of ∂ᵢ is defined as `<∂ᵢu,
 
 See also the in-place version [`fgradient!(X)`](@ref) to reuse the allocated memory.
 """
-function fgradient(X::AbstractArray{T,N}; adjoint=false) where {T,N}
+function fgradient(X::AbstractArray{T,N}; adjoint::Bool=false) where {T,N}
     fgradient!(ntuple(i->similar(X, maybe_floattype(T)), N), X; adjoint=adjoint)
 end
 
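The adjoint identity `<∂ᵢu, v> = <u, ∂ᵢ'v>` quoted in the hunk header above can be checked numerically; a sketch, assuming ImageBase is installed and `fgradient`'s default periodic boundary:

```julia
using ImageBase.FiniteDiff: fgradient
using LinearAlgebra: dot

u, v = rand(4, 4), rand(4, 4)
∂u  = fgradient(u)                 # forward-difference fields ∂ᵢu
∂ᵀv = fgradient(v; adjoint=true)   # adjoint fields ∂ᵢ'v

# the defining identity <∂ᵢu, v> == <u, ∂ᵢ'v> holds per dimension,
# up to floating-point roundoff
dot(∂u[1], v) ≈ dot(u, ∂ᵀv[1])
```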
@@ -218,12 +221,15 @@ The in-place version of (adjoint) gradient operator [`fgradient`](@ref).
 The input `∇X = (∂₁X, ∂₂X, ..., ∂ₙX)` is a tuple of arrays that are similar to `X`, i.e.,
 `eltype(∂ᵢX) == eltype(X)` and `axes(∂ᵢX) == axes(X)` for all `i`.
 """
-function fgradient!(∇X::NTuple{N, <:AbstractArray}, X; adjoint=false) where N
+function fgradient!(∇X::NTuple{N, <:AbstractArray}, X; adjoint::Bool=false) where N
     all(v->axes(v) == axes(X), ∇X) || throw(ArgumentError("All axes of vector fields ∇X and X should be the same."))
     for i in 1:N
         if adjoint
             # the negative adjoint of gradient operator for forward difference is the backward difference
+            # see also
+            # Getreuer, Pascal. "Rudin-Osher-Fatemi total variation denoising using split Bregman." _Image Processing On Line_ 2 (2012): 74-95.
             fdiff!(∇X[i], X, dims=i, rev=true)
+            # TODO (johnnychen94): ideally we can avoid flipping the signs for better performance.
             @. ∇X[i] = -∇X[i]
         else
             fdiff!(∇X[i], X, dims=i)
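The axes check at the top of `fgradient!` means the pre-allocated buffers must match `X` exactly; a sketch of the buffer-reuse pattern (hypothetical sizes, assuming ImageBase is installed):

```julia
using ImageBase.FiniteDiff: fgradient!

X = rand(4, 4)
∇X = (similar(X), similar(X))
fgradient!(∇X, X)                # forward-difference gradient, in place
fgradient!(∇X, X; adjoint=true)  # adjoint fields, reusing the same buffers

# buffers whose axes differ from X are rejected with an ArgumentError
bad = (rand(3, 4), rand(4, 4))
# fgradient!(bad, X)  # throws ArgumentError
```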