Add LazyKernelMatrix and lazykernelmatrix #515
base: master
@@ -0,0 +1,109 @@
""" | ||
lazykernelmatrix(κ::Kernel, x::AbstractVector) -> AbstractMatrix | ||
|
||
Construct a lazy representation of the kernel `κ` for each pair of inputs in `x`. | ||
|
||
The result is a matrix with the same entries as [`kernelmatrix(κ, x)`](@ref) but where the | ||
entries are not computed until they are needed. | ||
""" | ||
lazykernelmatrix(κ::Kernel, x) = lazykernelmatrix(κ, x, x) | ||

Review comments on this line:

It would be good to optimize this for the symmetric case, IMO, similar to …

The optimization for the symmetric case is only calculating half the matrix, let's say the top half, and then redirecting all queries from the bottom half to the top half. (Actually Distances simply copies the top half into the bottom half.) Since this is lazy, there is probably no point in this optimization, because you do not do the calculation from the start. And when you call getindex, it does not matter whether you calculate the element in the top or the bottom half.

At least with other lazy iterators in Base it's a common pattern to …

Good point, I'll see if there are ops we can optimize without too much extra code complexity.

Is there an interface for that? I mean, you could use LinearAlgebra.Symmetric, since that just wraps the original matrix afaik, but it also simply redirects queries, so collect would still cause two calculations since you do a calculation per query. I mean, you could just specialize collect, I guess.
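
As a rough illustration of the `collect` specialization floated in the thread above (not part of this diff; whether the extra complexity is worth it is exactly what is being discussed), something along these lines would evaluate only the upper triangle when both input vectors are the same object:

# Sketch only, not in the PR: specialize collect for the symmetric case so that
# materializing the lazy matrix evaluates each kernel pair once and mirrors it.
function Base.collect(K::LazyKernelMatrix{T}) where {T}
    if K.x === K.y  # inputs are literally the same vector, so K is symmetric
        n = length(K.x)
        M = Matrix{T}(undef, n, n)
        for j in 1:n, i in 1:j
            M[i, j] = K[i, j]   # one kernel evaluation per pair
            M[j, i] = M[i, j]   # mirror into the lower triangle
        end
        return M
    else
        return Matrix(K)        # general case: fall back to the dense kernelmatrix path
    end
end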

"""
    lazykernelmatrix(κ::Kernel, x::AbstractVector, y::AbstractVector) -> AbstractMatrix

Construct a lazy representation of the kernel `κ` for each pair of inputs in `x` and `y`.

The result is a matrix with the same entries as [`kernelmatrix(κ, x, y)`](@ref) but where
the entries are not computed until they are needed.
"""
lazykernelmatrix(κ::Kernel, x, y) = LazyKernelMatrix(κ, x, y)

"""
    LazyKernelMatrix(κ::Kernel, x[, y])
    LazyKernelMatrix{T<:Real}(κ::Kernel, x, y)

Construct a lazy representation of the kernel `κ` for each pair of inputs in `x` and `y`.

Instead of constructing this directly, it is better to call
[`lazykernelmatrix(κ, x[, y])`](@ref lazykernelmatrix).
"""
struct LazyKernelMatrix{T<:Real,Tk<:Kernel,Tx<:AbstractVector,Ty<:AbstractVector} <:
       AbstractMatrix{T}
    kernel::Tk
    x::Tx
    y::Ty
    function LazyKernelMatrix{T}(κ::Tk, x::Tx, y::Ty) where {T<:Real,Tk<:Kernel,Tx,Ty}
        Base.require_one_based_indexing(x)
        Base.require_one_based_indexing(y)
        return new{T,Tk,Tx,Ty}(κ, x, y)
    end
    function LazyKernelMatrix{T}(κ::Tk, x::Tx) where {T<:Real,Tk<:Kernel,Tx}
        Base.require_one_based_indexing(x)
        return new{T,Tk,Tx,Tx}(κ, x, x)
    end
end
function LazyKernelMatrix(κ::Kernel, x::AbstractVector, y::AbstractVector)
    # evaluate once to get eltype
    T = typeof(κ(first(x), first(y)))
    return LazyKernelMatrix{T}(κ, x, y)
end
LazyKernelMatrix(κ::Kernel, x::AbstractVector) = LazyKernelMatrix(κ, x, x)

Base.Matrix(K::LazyKernelMatrix) = kernelmatrix(K.kernel, K.x, K.y)
function Base.AbstractMatrix{T}(K::LazyKernelMatrix) where {T}
    return LazyKernelMatrix{T}(K.kernel, K.x, K.y)
end

Base.size(K::LazyKernelMatrix) = (length(K.x), length(K.y))

Base.axes(K::LazyKernelMatrix) = (axes(K.x, 1), axes(K.y, 1))

function Base.getindex(K::LazyKernelMatrix{T}, i::Int, j::Int) where {T}
    return T(K.kernel(K.x[i], K.y[j]))
end
# Non-scalar indexing and views return another lazy kernel matrix over the selected inputs.
for f in (:getindex, :view)
    @eval begin
        function Base.$f(
            K::LazyKernelMatrix{T},
            I::Union{Colon,AbstractVector},
            J::Union{Colon,AbstractVector},
        ) where {T}
            return LazyKernelMatrix{T}(K.kernel, $f(K.x, I), $f(K.y, J))
        end
    end
end

Base.zero(K::LazyKernelMatrix{T}) where {T} = LazyKernelMatrix{T}(ZeroKernel(), K.x, K.y)
Base.one(K::LazyKernelMatrix{T}) where {T} = LazyKernelMatrix{T}(WhiteKernel(), K.x, K.y)

function Base.:*(c::S, K::LazyKernelMatrix{T}) where {T,S<:Real}
    R = typeof(oneunit(S) * oneunit(T))
    return LazyKernelMatrix{R}(c * K.kernel, K.x, K.y)
end
Base.:*(K::LazyKernelMatrix, c::Real) = c * K
Base.:/(K::LazyKernelMatrix, c::Real) = K * inv(c)
Base.:\(c::Real, K::LazyKernelMatrix) = inv(c) * K

# Adding a UniformScaling stays lazy when `x` and `y` coincide (the scaling is folded into
# the kernel as `λ * WhiteKernel()`); otherwise the matrix is materialized first.
function Base.:+(K::LazyKernelMatrix{T}, C::UniformScaling{S}) where {T,S<:Real}
    if isequal(K.x, K.y)
        R = typeof(zero(T) + zero(S))
        return LazyKernelMatrix{R}(K.kernel + C.λ * WhiteKernel(), K.x, K.y)
    else
        return Matrix(K) + C
    end
end
function Base.:+(C::UniformScaling{S}, K::LazyKernelMatrix{T}) where {T,S<:Real}
    if isequal(K.x, K.y)
        R = typeof(zero(T) + zero(S))
        return LazyKernelMatrix{R}(C.λ * WhiteKernel() + K.kernel, K.x, K.y)
    else
        return C + Matrix(K)
    end
end
function Base.:+(K1::LazyKernelMatrix, K2::LazyKernelMatrix)
    if isequal(K1.x, K2.x) && isequal(K1.y, K2.y)
        return LazyKernelMatrix(K1.kernel + K2.kernel, K1.x, K1.y)
    else
        return Matrix(K1) + Matrix(K2)
    end
end
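
For context, a rough usage sketch of the proposed API (not part of the diff). The kernel and inputs below are placeholders, and the snippet assumes `lazykernelmatrix` ends up exported:

using KernelFunctions, LinearAlgebra

k = SqExponentialKernel()           # placeholder kernel
x = [randn(3) for _ in 1:1_000]     # placeholder inputs

K = lazykernelmatrix(k, x)   # no kernel evaluations yet
K[17, 42]                    # evaluates k(x[17], x[42]) on demand
Kv = K[1:10, :]              # slicing returns another lazy kernel matrix
Kjit = K + 1e-6 * I          # stays lazy: the jitter is folded into the kernel
Kdense = Matrix(K)           # materializes via kernelmatrix(k, x, x)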

Review comments:

Do we have to export both? Is lazykernelmatrix sufficient maybe?

Probably.