
Basic scalar rand() support #481

Open
dnadlinger opened this issue Nov 13, 2024 · 1 comment

Comments

@dnadlinger

For use with https://github.com/SciML/PSOGPU.jl, it would be great to have basic support for on-device rand(). A simple MWE, adapted from #406, would be

using Metal

function kernel(state)
    idx = thread_position_in_grid_1d()
    state[idx] = rand(Float32)
    return
end

n = 10
state = Metal.ones(n)
@metal threads=n kernel(state)

which currently fails with

julia> @metal threads=n kernel(state)
ERROR: InvalidIRError: compiling MethodInstance for kernel(::MtlDeviceVector{Float32, 1}) resulted in invalid LLVM IR
Reason: unsupported call to an unknown function (call to julia.get_pgcstack)

Would a solution such as in https://github.com/JuliaGPU/CUDA.jl/blob/master/src/device/random.jl be appropriate? What would be the most appropriate way to store the state in Metal?
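For context, one way to avoid the state-storage question entirely is a stateless, counter-based scheme: each thread derives its value from (seed, thread id, counter), so no mutable RNG state needs to live in device memory. A minimal, hypothetical sketch of that mixing step in plain Julia (not Metal.jl API, and not what CUDA.jl literally does — its device RNG keeps a small per-warp state):

```julia
# Invertible 32-bit integer mixer (xorshift/odd-multiply rounds, similar in
# spirit to PCG/Murmur-style finalizers). UInt32 arithmetic wraps in Julia,
# which is exactly what we want here.
function mix32(x::UInt32)
    x ⊻= x >> 16
    x *= 0x7feb352d
    x ⊻= x >> 15
    x *= 0x846ca68b
    x ⊻= x >> 16
    return x
end

# Combine seed, thread index, and a per-call counter into a Float32 in [0, 1).
# Taking the top 24 bits keeps the integer-to-Float32 conversion exact.
function device_rand(seed::UInt32, tid::UInt32, counter::UInt32)
    h = mix32(seed ⊻ mix32(tid ⊻ mix32(counter)))
    return Float32(h >> 8) * Float32(exp2(-24))
end
```

In an actual kernel, `tid` would come from `thread_position_in_grid_1d()` and `counter` from a loop index or a per-launch value; `mix32` and `device_rand` are illustrative names, not existing functions.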

@maleadt
Member

maleadt commented Nov 13, 2024

> Would a solution such as in https://github.com/JuliaGPU/CUDA.jl/blob/master/src/device/random.jl be appropriate?

Yes, although ideally it should be implemented in GPUArrays.jl now that it's based on KernelAbstractions.jl. This may require some improvements to KA.jl though.
