
Remove specialized Vector and Matrix constructors #57692

Open
jishnub wants to merge 1 commit into master
Conversation

jishnub (Member) commented Mar 9, 2025

The following work even without the specialized methods:

julia> Vector{Int}(undef, 1)
1-element Vector{Int64}:
 1

julia> Matrix{Int}(undef, 1, 1)
1×1 Matrix{Int64}:
 1

Also, this converts the dims to Ints earlier in the nothing/missing constructors, so potentially fewer methods need to be compiled.
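
As a rough sketch of that pattern (a hypothetical helper, not the actual Base definition), converting the dims to Int before allocating means the subsequent fill!-based code only has to be compiled for Int dims:

# Hypothetical sketch only; the real nothing/missing constructors live in Base and may differ.
function fill_with_missing(::Type{T}, dims::Integer...) where {T}
    A = Array{T}(undef, map(Int, dims))  # normalize the dims to Int up front
    fill!(A, missing)                    # the missing initializer then fills the array
    return A
end

fill_with_missing(Union{Int,Missing}, 1, 2)  # returns a 1×2 matrix filled with missing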

jishnub added the arrays label Mar 9, 2025
oscardssmith (Member)

looks reasonable to me

giordano (Contributor)

Is there any performance penalty?

jishnub (Member, Author) commented Mar 10, 2025

The performance seems similar for well-inferred calls; for badly inferred calls, it becomes less predictable.
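
(The @b macro in the timings below is, I assume, the benchmark macro from Chairmarks.jl, where @b setup f reports the time to run f on the setup value.)

# Assumed benchmarking setup (Chairmarks.jl), shown only to make the snippets below reproducible.
using Chairmarks
@b rand(100) sum   # @b setup f benchmarks f(setup), here summing a random vector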

For array creation (note that there's quite a bit of variation in the numbers for the badly inferred cases, of the order of 10-15 ns in each case):

julia> g1(d) = Array{Int,1}(undef, d...)
g1 (generic function with 2 methods)

julia> g2(d) = Array{Int,2}(undef, d...)
g2 (generic function with 2 methods)

julia> @b (1,) g1
20.321 ns (2 allocs: 64 bytes) # v"1.13.0-DEV.186"
18.651 ns (2 allocs: 64 bytes) # This PR

julia> @b Int[1] g1
178.221 ns (2 allocs: 64 bytes) # v"1.13.0-DEV.186"
200.421 ns (2 allocs: 64 bytes) # This PR

julia> @b Any[1] g1
164.103 ns (2 allocs: 64 bytes) # v"1.13.0-DEV.186"
178.190 ns (2 allocs: 64 bytes) # This PR

julia> @b (1,1) g2
19.079 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
20.287 ns (2 allocs: 80 bytes) # This PR

julia> @b [1,1] g2
205.400 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
211.217 ns (2 allocs: 80 bytes) # This PR

julia> @b Any[1,1] g2
174.455 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
185.758 ns (2 allocs: 80 bytes) # This PR

Overall, it does seem like this makes array creation slower if the dimensions are badly inferred. Such cases are probably not very common, though. I'm unsure why there's this slowdown.
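
One way to see the difference (a sketch using the g1 defined above) is to check what inference knows about the dims argument in each case:

# @code_warntype is exported by InteractiveUtils, which is loaded by default in the REPL.
using InteractiveUtils
@code_warntype g1((1,))     # d::Tuple{Int64}: the number of dims is part of the type, so the splat is static
@code_warntype g1(Int[1])   # d::Vector{Int64}: the splat length is only known at run time, so the constructor call is dispatched dynamically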

For the missing initializer, where there is a fill! involved as well:

julia> f1(d) = Vector{Union{Int,Missing}}(missing, d...)
f1 (generic function with 1 method)

julia> f2(d) = Matrix{Union{Int,Missing}}(missing, d...)
f2 (generic function with 1 method)

julia> @b (1,) f1
21.629 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
23.324 ns (2 allocs: 80 bytes) # this PR

julia> @b [1] f1
193.886 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
248.468 ns (2 allocs: 80 bytes) # This PR

julia> @b Any[1] f1
201.938 ns (2 allocs: 80 bytes) # v"1.13.0-DEV.186"
263.851 ns (2 allocs: 80 bytes) # This PR

julia> @b (1,1) f2
23.350 ns (2 allocs: 96 bytes) # v"1.13.0-DEV.186"
23.350 ns (2 allocs: 96 bytes) # This PR

julia> @b [1,1] f2
537.571 ns (3 allocs: 128 bytes) # v"1.13.0-DEV.186"
329.024 ns (3 allocs: 128 bytes) # This PR

julia> @b Any[1,1] f2
500.643 ns (3 allocs: 128 bytes) # v"1.13.0-DEV.186"
273.977 ns (3 allocs: 128 bytes) # This PR

In this case, the vector construction is noticeably slower in badly inferred cases, whereas the matrix construction is substantially faster.
