PINNErrorVsTime Benchmark Updates #1159
base: master
Conversation
@ChrisRackauckas I got an error on running the iterations which said that maxiters should be greater than 1000, so I set all maxiters to 1100. The decision was a bit arbitrary, but is that a good number? |
https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/ It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help. |
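For context, "cut off" here means the training is stopped from the callback once the benchmark's time budget is exceeded. A minimal sketch of that pattern (variable names assumed, not taken from the benchmark code):

```julia
# Optimization.solve halts when the callback returns true.
startTime = time_ns()
cb = function (p, l)
    elapsed = (time_ns() - startTime) / 1e9  # nanoseconds to seconds
    return elapsed > time_budget             # `time_budget` (seconds) is an assumed name
end
```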
Yes. Actually I set it to that number just to get rid of that error |
Wait what's the error? |
the error went away on running it again |
what error? |
maxiters should be a number greater than 1000 |
can you please just show the error... |
[screenshot of the error message] |
I see, that's for the sampling algorithm. You should only need that on Cuhre? |
Yes. But since Cuhre was the first one in the line, I thought setting it to 1100 just for Cuhre would not solve the problem, so I set it to 1100 for all of them |
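For reference, a minimal sketch of raising maxiters only for the Cuhre-based strategy (keyword arguments assumed to match the benchmark's QuadratureTraining setup), since CubaCuhre is the sampling algorithm that enforces maxiters >= 1000:

```julia
# Only the Cuhre strategy needs the larger budget; the tolerances here are placeholders.
quad_cuhre = NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(),
                                          reltol = 1e-4, abstol = 1e-5, maxiters = 1100)
```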
The CI has passed here, and all the code seems to run perfectly. Can you please review? |
@ArnoStrouwen SciML/Integrals.jl#124 can you remind me what the purpose behind this was? |
I don't remember myself, but that PR links to: |
Uninitialized memory in the original C: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community: that's the classic method everyone points to when they say "all of the old stuff is robust" 😅 |
Can you force latest majors and make sure the manifest resolves? |
I force-bumped to the latest versions and resolved the manifests, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature for a while to resolve them. The manifest resolved, but adding both of them back poses some more version conflicts |
Can you share the resolution errors? |
@ChrisRackauckas These are the resolution errors that occur |
Oh those were turned into extensions. Change |
Sure !! 🫡 |
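For context, IntegralsCuba.jl and IntegralsCubature.jl were folded into Integrals.jl as package extensions, so the sub-packages are dropped and the solvers become available once Cuba.jl and Cubature.jl themselves are loaded, matching the `using` block that appears later in this thread:

```julia
using Integrals       # core interface (IntegralProblem, solve, ...)
using Cuba, Cubature  # loading these activates Integrals.jl's extensions,
                      # exposing CubaCuhre(), CubatureJLh(), CubatureJLp(), ...
```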
What is the stack trace? Try to locate the error |
[screenshot of the stack trace] |
Maps in some way to Quadrature Training |
The error `AssertionError: f isa IntegralFunction` probably indicates that the function passed to Integrals.jl during the quadrature training process in NeuralPDE.jl does not conform to the expected type IntegralFunction. |
@ChrisRackauckas @sathvikbhagavan, from what I saw of this error, it comes from the following function:
This possibly suggests three things:
|
After removing this algorithm, the training worked fine and started generating results |
Does it fail only with |
Yes @sathvikbhagavan, it fails only for HCubatureJL. It's working for the other algorithms, and pretty fast |
ok, I think I know why - |
Oh, I see. I got your point. I will modify the code and see if this works. Thank you so much for your guidance and explanation! |
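For context on the explanation above: in Integrals.jl, HCubatureJL only accepts a plain IntegralFunction, while the batch-capable algorithms (such as CubatureJLh) also accept a BatchIntegralFunction, which is what NeuralPDE's quadrature training appears to build. A minimal sketch of the distinction (illustrative integrand, not from the benchmark):

```julia
using Integrals, Cubature

# Non-batched: the integrand receives a single point x (a vector).
f_scalar = IntegralFunction((x, p) -> sum(sin.(x)))

# Batched: the integrand receives a matrix whose columns are points
# and must return one value per column.
f_batch = BatchIntegralFunction((x, p) -> vec(sum(sin.(x); dims = 1)))

lb, ub = zeros(2), ones(2)
solve(IntegralProblem(f_scalar, (lb, ub)), HCubatureJL(); reltol = 1e-3)  # works
solve(IntegralProblem(f_batch, (lb, ub)), CubatureJLh(); reltol = 1e-3)   # works (batch-capable)
# solve(IntegralProblem(f_batch, (lb, ub)), HCubatureJL())  # AssertionError: f isa IntegralFunction
```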
@sathvikbhagavan I think we can replace it with CubatureJLh instead, because they are almost the same except that HCubatureJL uses a separate library |
It's alright, for the time being, |
Okay then, @sathvikbhagavan. What I am thinking is that I will remove HCubatureJL for now and raise an issue on NeuralPDE.jl to add this support. Your thoughts on this? |
@ChrisRackauckas, I implemented some code changes and removed HCubatureJL because NeuralPDE.jl doesn't support it as of now. The graph turns out to be something like this: |
That still looks really wonky |
Yeah, it is. Need to see what else can be done. But that's sort of hit-and-trial, I think |
@ChrisRackauckas I went through the code and found that there's some problem with the loss function calculations. I fixed them using the code from the documentation, but now it says the loss function is not compatible with NeuralPDE for the following reason (type parameters condensed):

```
MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float32})
Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{…}})(x::Matrix{Float32}, θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float32}, θ::Optimization.OptimizationState{…}, phi::NeuralPDE.Phi{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{…}, u::NeuralPDE.var"#7#8", x::Matrix{Float32}, εs::Vector{Vector{Float32}}, order::Int64, θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:382
  [4] macro expansion
    @ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
  [6] macro expansion
    @ .\none:0 [inlined]
  [7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{…}, ::Matrix{Float32}, ::Optimization.OptimizationState{…}, ::NeuralPDE.Phi{…}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{…}, ::NeuralPDE.var"#7#8", ::Nothing)
    @ NeuralPDE .\none:0
  [8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{…})(::Matrix{Float32}, ::Optimization.OptimizationState{…}, ::NeuralPDE.Phi{…}, ::Function, ::Function, ::Function, ::Nothing)
    @ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
  [9] (::NeuralPDE.var"#197#198"{…})(cord::Matrix{Float32}, θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
 [10] (::NeuralPDE.var"#78#79"{…})(θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
 [11] (::NeuralPDE.var"#263#284"{…})(pde_loss_function::NeuralPDE.var"#78#79"{…})
    @ NeuralPDE .\none:0
  ...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\sYmAV\src\solve.jl:95
 [22] allen_cahn(strategy::QuadratureTraining{Float64, CubaCuhre}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W0sZmlsZQ==.jl:116
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W2sZmlsZQ==.jl:4
```
|
This comes from the Phi layer in NeuralPDE |
What changes did you make? Which line is erroring? Please be specific and verbose when you report an issue |
I'm sorry if I wasn't clear and verbose at this point. The issue is the following:

```julia
using NeuralPDE
using Integrals, Cubature, Cuba
using ModelingToolkit, Optimization, OptimizationOptimJL
using Lux, Plots
using DelimitedFiles
using QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum

function allen_cahn(strategy, minimizer, maxIters)
    ## DECLARATIONS
    @parameters t x1 x2 x3 x4
    @variables u(..)

    Dt = Differential(t)
    Dxx1 = Differential(x1)^2
    Dxx2 = Differential(x2)^2
    Dxx3 = Differential(x3)^2
    Dxx4 = Differential(x4)^2

    # Discretization
    tmax = 1.0
    x1width = 1.0
    x2width = 1.0
    x3width = 1.0
    x4width = 1.0

    tMeshNum = 10
    x1MeshNum = 10
    x2MeshNum = 10
    x3MeshNum = 10
    x4MeshNum = 10

    dt = tmax / tMeshNum
    dx1 = x1width / x1MeshNum
    dx2 = x2width / x2MeshNum
    dx3 = x3width / x3MeshNum
    dx4 = x4width / x4MeshNum

    domains = [t ∈ Interval(0.0, tmax),
        x1 ∈ Interval(0.0, x1width),
        x2 ∈ Interval(0.0, x2width),
        x3 ∈ Interval(0.0, x3width),
        x4 ∈ Interval(0.0, x4width)]

    ts = 0.0:dt:tmax
    x1s = 0.0:dx1:x1width
    x2s = 0.0:dx2:x2width
    x3s = 0.0:dx3:x3width
    x4s = 0.0:dx4:x4width

    # Operators
    Δu = Dxx1(u(t, x1, x2, x3, x4)) + Dxx2(u(t, x1, x2, x3, x4)) +
         Dxx3(u(t, x1, x2, x3, x4)) + Dxx4(u(t, x1, x2, x3, x4)) # Laplacian

    # Equation
    eq = Dt(u(t, x1, x2, x3, x4)) - Δu - u(t, x1, x2, x3, x4) +
         u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) ~ 0 # Allen-Cahn equation

    initialCondition = 1 / (2 + 0.4 * (x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4)) # see PNAS paper

    bcs = [u(0, x1, x2, x3, x4) ~ initialCondition] # from literature

    ## NEURAL NETWORK
    n = 10 # number of neurons
    chain = Lux.Chain(Lux.Dense(5, n, Lux.σ), Lux.Dense(n, n, Lux.σ), Lux.Dense(n, 1)) # neural network built with Lux

    indvars = [t, x1, x2, x3, x4] # physically independent variables
    depvars = [u(t, x1, x2, x3, x4)] # dependent (target) variable

    dim = length(domains)

    losses = []
    error = []
    times = []

    dx_err = 0.2
    error_strategy = GridTraining(dx_err)

    discretization_ = PhysicsInformedNN(chain, error_strategy)
    @named pde_system_ = PDESystem(eq, bcs, domains, indvars, depvars)
    prob_ = discretize(pde_system_, discretization_)

    function loss_function_(θ, p)
        return prob_.f.f(θ, nothing)
    end

    cb_ = function (p, l)
        deltaT_s = time_ns() # start a clock when the callback begins; this will also measure the computation of the uniform errors
        ctime = time_ns() - startTime - timeCounter # the time to use for the time benchmark plot
        append!(times, ctime / 10^9) # conversion from nanoseconds to seconds
        append!(losses, l)
        loss_ = loss_function_(p, nothing)
        append!(error, loss_)
        timeCounter = timeCounter + time_ns() - deltaT_s # timeCounter sums all delays due to the callback functions of the previous iterations
        #if (ctime/10^9 > time) # if the time limit is exceeded, stop the training
        #    return true # stop the minimizer and continue from line 142
        #end
        return false
    end

    @named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)

    discretization = NeuralPDE.PhysicsInformedNN(chain, strategy)
    prob = NeuralPDE.discretize(pde_system, discretization)

    timeCounter = 0.0
    startTime = time_ns() # fix initial time (t=0) before starting the training
    res = Optimization.solve(prob, minimizer, callback = cb_, maxiters = maxIters)

    phi = discretization.phi
    params = res.minimizer

    # Model prediction
    domain = [ts, x1s, x2s, x3s, x4s]
    u_predict = [reshape([first(phi([t, x1, x2, x3, x4], res.minimizer; device = cdev)) for x1 in x1s for x2 in x2s for x3 in x3s for x4 in x4s],
        (length(x1s), length(x2s), length(x3s), length(x4s))) for t in ts] # matrix of the model's predictions

    return [error, params, domain, times, losses]
end
```

Initially the plot looked really wonky, and that was because the loss function wasn't being calculated correctly. So I picked up the loss function from the initial implementation that was already present in the SciMLBenchmarks.jl documentation. The error occurred when I added the following lines and made the corresponding changes:

```julia
function loss_function_(θ, p)
    return prob_.f.f(θ, nothing)
end
```

The error is this (type parameters condensed):

```
MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float64})
Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{…}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{…}, phi::NeuralPDE.Phi{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{…}, u::NeuralPDE.var"#7#8", x::Matrix{Float64}, εs::Vector{Vector{Float64}}, order::Int64, θ::Optimization.OptimizationState{…})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:382
  ... (the remaining frames are identical to the stack trace above, with Matrix{Float64} in place of Matrix{Float32})
 [22] allen_cahn(strategy::QuadratureTraining{Float64, CubaCuhre}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W0sZmlsZQ==.jl:116
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W2sZmlsZQ==.jl:4
```

This tells me there's a method error on the matrix input. The stack trace suggests there is a problem with the Phi layer in NeuralPDE.jl, specifically coming from pinn_types.jl line 42, but I'm not sure what changes are needed to fix it. |
Try calling the loss function like |
Yeah, this worked. Thank you so much. Also, out of curiosity, is there any reference for this, somewhere I can learn this from? This was not that easy for me to observe 😅 |
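For future readers, the clue is visible in the stack trace itself: the θ argument is printed as an Optimization.OptimizationState{...} rather than a parameter vector, because newer Optimization.jl versions pass the optimization state as the callback's first argument; the current parameters live in its `u` field. A minimal sketch of the presumed fix (assuming that is what was suggested above):

```julia
cb_ = function (p, l)
    # p is an Optimization.OptimizationState; the raw parameter vector is p.u
    loss_ = loss_function_(p.u, nothing)
    append!(error, loss_)
    append!(losses, l)
    return false
end
```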
This is the graph generated this time |
I removed the HCubatureJL algorithm due to its incompatibility with NeuralPDE.jl |
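For reference, a sketch of what the resulting strategy list might look like with HCubatureJL dropped (names and tolerances are assumptions, not taken from the final diff):

```julia
strategies = [
    NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    NeuralPDE.QuadratureTraining(quadrature_alg = CubatureJLh(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    NeuralPDE.QuadratureTraining(quadrature_alg = CubatureJLp(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    NeuralPDE.GridTraining(0.1),
    NeuralPDE.StochasticTraining(400),
    NeuralPDE.QuasiRandomTraining(400),
]
```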
Checklist
- [ ] I am following the contributor guidelines, in particular the SciML Style Guide and COLPRAC.