running example population-test with GPU fails. #26
Comments
I'll take a look at this sometime this week. Did you use a released version of the package or the master branch? This repo is mid-refactor so I haven't done the due diligence of making sure every component is working.
Hi @darsnack, sorry I didn't see your reply straight away. I installed via the git master branch. Should I try the released package instead?

I also noticed that there is a method signature for the function `lif!` that requires `CuVector` keyword arguments:

```julia
lif!(t::CuVector{<:Real}, I::CuVector{<:Real}, V::CuVector{<:Real}; vrest::CuVector{<:Real}, R::CuVector{<:Real}, tau::CuVector{<:Real})
```

https://github.com/darsnack/SpikingNN.jl/blob/master/src/models/lif.jl#L100

In my example I have not actually supplied those arguments as `CuVector`s:
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L12

Also, I should have clarified that I am on master. Since using master I have noticed a branch called refactor/gpu-support and another branch called benchmarking. I also noticed that the benchmarking branch has the classic Brunel model, which is cool.

It's a great repository, by the way; it potentially has a good balance of features and examples. I am also using WaspNet.jl and SpikingNeuralNetworks.jl, and I can't yet figure out which is the best spiking neural network package. I am busy optimizing SNNs with genetic algorithms in Julia, using the ISI spike distance of the raster plots, and I might end up making example optimizations that involve any or all of the three simulators.
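To illustrate the keyword-argument typing noted above, here is a minimal sketch, using a hypothetical `f!` rather than the real `lif!`. Keyword arguments annotated as `CuVector` do not take part in dispatch, but the annotation is still enforced at the call site, so passing a scalar (or a plain `Array`) for them raises a `TypeError`.

```julia
using CUDA

# Hypothetical stand-in for lif!: the keyword argument is annotated as a CuVector,
# so the annotation is enforced when the function is called.
f!(V::CuVector{<:Real}; tau::CuVector{<:Real}) = (V ./= tau; V)

V = CUDA.zeros(Float64, 4)
f!(V; tau = CUDA.ones(Float64, 4))  # OK: tau is a CuVector
f!(V; tau = 1.0)                    # TypeError: tau is expected to be a CuVector
```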
Cool, glad to see someone trying this code out in a different use case than mine. The branch that I'm currently using for my research is
Srm0-test.jl runs on the GPU if you make the following modifications:

```julia
using CUDA
using Adapt   # added: `adapt` is provided by Adapt.jl

CUDA.allowscalar(false)

# simple CPU <-> GPU conversion helpers
cpu(x) = x
gpu(x) = x
cpu(x::CuArray) = adapt(Array, x)
gpu(x::Array) = CuArray(x)
```

Next, in the file tests/Srm0-test.jl, wrap every SpikingNN element with the `gpu` helper defined above:

```julia
using SpikingNN
using Plots
# SRM0 params
η₀ = 5.0
τᵣ = 1.0
vth = 0.5
# Input spike train params
rate = 0.01
T = 15
∂t = 0.01
n = convert(Int, ceil(T / ∂t))
srm = gpu(Neuron(QueuedSynapse(Synapse.Alpha()), SRM0(η₀, τᵣ), Threshold.Ideal(vth)))
input = gpu(ConstantRate(rate))
spikes = excite!(srm, input, n)
# callback to record voltages
voltages = gpu(Float64[])
record = function ()
    push!(voltages, getvoltage(srm))
end
# simulate
@time output = simulate!(srm, n; dt = ∂t, cb = record, dense = true)
# plot raster plot
raster_plot = rasterplot(∂t .* spikes, ∂t .* output, label = ["Input", "Output"], xlabel = "Time (sec)",
title = "Raster Plot (\\alpha response)")
xlims!(0, T)
# plot dense voltage recording
plot(∂t .* collect(1:n), voltages,
title = "SRM Membrane Potential with Varying Presynaptic Responses", xlabel = "Time (sec)", ylabel = "Potential (V)", label = "\\alpha response")
# resimulate using presynaptic response
voltages = gpu(Float64[])
srm = gpu(Neuron(QueuedSynapse(Synapse.EPSP(ϵ₀ = 2, τm = 0.5, τs = 1)), SRM0(η₀, τᵣ), Threshold.Ideal(vth)))
excite!(srm, spikes)
@time simulate!(srm, n; dt = ∂t, cb = record, dense = true)
# plot voltages with response function
voltage_plot = plot!(∂t .* collect(1:n), voltages, label = "EPSP response")
xlims!(0, T)
plot(raster_plot, voltage_plot, layout = grid(2, 1))
xticks!(0:T)
```

This code executes using CuArrays. Note that no networks are simulated here; network simulation breaks due to an array broadcasting error that I don't understand.
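As a point of reference (not taken from SpikingNN itself; the array names here are assumptions), a minimal sketch of the kind of mixed CPU/GPU operation that triggers this sort of broadcasting or scalar-indexing error: when the weights live on the GPU but a state vector stays on the CPU, the generic fallbacks either index the CuArray element by element, which `CUDA.allowscalar(false)` turns into an error, or fail to compile a GPU broadcast kernel.

```julia
using CUDA
CUDA.allowscalar(false)

w = CUDA.rand(Float32, 100, 100)  # weights on the GPU
v = rand(Float32, 100)            # state vector left on the CPU

w * v            # errors: the generic matrix-vector fallback scalar-indexes the CuArray
w .+ v'          # errors: broadcasting a CuArray against a plain CPU Array cannot be compiled into a kernel

w * CuArray(v)   # works once both operands are on the same device
```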
Hi there,
To test whether I could simulate spiking neural networks on the GPU, I modified the population-test.jl file (now called population-gpu-test.jl).
To make the example a bit less trivial, I made the neuron population 100 by creating a square neuron weight matrix, as shown here:
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L12
I also tried to make the neuron activity a bit more balanced by distributing the inputs so that only 1/3 of the inputs are strong:
https://github.com/russelljjarvis/SpikingNN.jl/blob/master/examples/population-gpu-test.jl#L35-L37
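As a hedged, illustrative sketch of the two modifications described above (the real code is at the linked lines of population-gpu-test.jl; the variable names and the specific rate values here are assumptions, not the actual ones):

```julia
n = 100                 # population size
weights = rand(n, n)    # square n-by-n weight matrix, one row/column per neuron

# make the activity more balanced: only roughly a third of the inputs are strong
input_rates = [i <= n ÷ 3 ? 0.9 : 0.01 for i in 1:n]
```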
All of these modifications work with the CPU version on line 16, but they break with the GPU version instead.
See the stack trace below.