PINNErrorVsTime Benchmark Updates #1159

Open · wants to merge 22 commits into base: master
Conversation

ParamThakkar123 (Contributor)

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API


@ParamThakkar123 (Contributor Author)

@ChrisRackauckas I got an error when running the iterations saying that maxiters must be greater than 1000, so I set all maxiters to 1100. The choice was a bit arbitrary; is that a good number?

@ChrisRackauckas (Member)

https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/

It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

@ParamThakkar123 (Contributor Author) commented Jan 19, 2025

> https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/
>
> It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

Yes. I actually set it to that number just to get rid of that error.

@ChrisRackauckas (Member)

Wait, what's the error?

@ParamThakkar123 (Contributor Author)

> Wait, what's the error?

The error went away when I ran it again.

@ChrisRackauckas (Member)

What error?

@ParamThakkar123 (Contributor Author)

maxiters should be a number greater than 1000

@ChrisRackauckas (Member)

Can you please just show the error...

@ParamThakkar123 (Contributor Author)

AssertionError: maxiters for CubaCuhre(0, 0, 0) should be larger than 1000

Stacktrace: [1] __solvebp_call(prob::IntegralProblem{false, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, NeuralPDE.var"#integrand#109"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p),
NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}}, Vector{Float64}, @Kwargs{}}, alg::CubaCuhre,
sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, lb::Vector{Float64}, ub::Vector{Float64}, p::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}; reltol::Float64, abstol::Float64, maxiters::Int64) @ IntegralsCuba C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:139 [2] __solvebp_call @ C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:134 [inlined] [3] #__solvebp_call#4 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:95 [inlined] [4] __solvebp_call @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:94 [inlined] [5] #rrule#5 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:17 [inlined] [6] rrule @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:14 [inlined] [7] rrule @ C:\Users\Hp.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144 [inlined] [8] chain_rrule_kw @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined] [9] macro expansion @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined] [10] _pullback @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91 [inlined] [11] solve! @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:84 [inlined] ... @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:162 [55] solve(::OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0ca831bf, 0x51abc8eb, 0xda1f388f, 0xf472bcea, 0x7492cfcb), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), 
typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}, ::Optimisers.Adam; kwargs::@Kwargs{callback::var"#11#18"{var"#loss_function#17"{OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x700712a1, 0x6c1c91c2, 0xa5bfc01b, 0x66a91103, 0x0d12fcff), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), 
typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}}, Vector{Any}, Vector{Any}, Vector{Any}}, maxiters::Int64}) @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:83 [56] allen_cahn(strategy::QuadratureTraining{CubaCuhre, Float64}, minimizer::Optimisers.Adam, maxIters::Int64) @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:105

@ChrisRackauckas (Member)

I see, that's for the sampling algorithm. You should only need that on Cuhre?

@ParamThakkar123 (Contributor Author)

> I see, that's for the sampling algorithm. You should only need that on Cuhre?

Yes. But since Cuhre was first in line, I thought setting it to 1100 just for Cuhre would not solve the problem, so I set it to 1100 for all of them.
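For reference, a minimal sketch of the change described here, matching the QuadratureTraining call shape quoted later in this thread (the exact strategy list and kwargs in the benchmark may differ):

```julia
using NeuralPDE, Integrals, Cuba  # Cuba provides CubaCuhre through Integrals

# The Cuba wrappers assert maxiters > 1000 for their algorithms, so 1100
# clears the check; the same value is applied uniformly to every strategy.
strategy = NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(),
                                        reltol = 1e-4, abstol = 1e-4,
                                        maxiters = 1100, batch = 0)
```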

@ParamThakkar123 (Contributor Author)

The CI has passed here, and all the code seems to run correctly. Can you please review?

@ChrisRackauckas (Member)

@ArnoStrouwen SciML/Integrals.jl#124: can you remind me what the purpose behind this was?

@ArnoStrouwen (Member)

I don't remember myself, but that PR links to:
SciML/Integrals.jl#47

@ChrisRackauckas (Member)

Uninitialized memory in the original C: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community; that's the classic method everyone points to when they say "all of the old stuff is robust" 😅

@ChrisRackauckas (Member)

Can you force latest majors and make sure the manifest resolves?

@ParamThakkar123 (Contributor Author)

I force-bumped to the latest versions and resolved the manifest, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature for a while to resolve them. The manifest resolved, but adding both of them back poses some more version conflicts.

@ChrisRackauckas (Member)

Can you share the resolution errors?

@ParamThakkar123 (Contributor Author)

[screenshots of the resolution errors]

@ChrisRackauckas These are the resolution errors that occur.

@ChrisRackauckas (Member)

Oh, those were turned into extensions. Change using IntegralsCuba, IntegralsCubature into using Integrals, Cuba, Cubature, and change the dependencies to depend directly on Cuba and Cubature.
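A sketch of that migration, assuming the rest of the code keeps using the same solver types (the Pkg commands are illustrative):

```julia
# Before: the retired glue packages
# using IntegralsCuba, IntegralsCubature

# After: load Integrals together with the solver libraries themselves;
# the corresponding package extensions activate automatically, and
# CubaCuhre(), CubatureJLh(), etc. remain available as before.
using Integrals, Cuba, Cubature

# In the Pkg REPL, swap the dependencies accordingly:
# pkg> rm IntegralsCuba IntegralsCubature
# pkg> add Cuba Cubature
```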

@ParamThakkar123 (Contributor Author)

Sure!! 🫡

@sathvikbhagavan (Member)

What is the stack trace? Try to locate the error.

@ParamThakkar123 (Contributor Author)

AssertionError: f isa IntegralFunction

Stacktrace:
  [1] __solvebp_call(cache::Integrals.IntegralCache{false, BatchIntegralFunction{false, SciMLBase.FullSpecialize, Integrals.var"#26#28"{typeof(Integrals.t2ujac), Vector{Float64}, Vector{Float64}, NeuralPDE.var"#integrand#111"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, DataType, CPUDevice}}, Nothing}, Tuple{Vector{Float64}, Vector{Float64}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, @Kwargs{}, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, Integrals.ReCallVJP{Integrals.ZygoteVJP}, @Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, Nothing}, alg::HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, domain::Tuple{Vector{Float64}, Vector{Float64}}, p::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}; reltol::Float64, abstol::Float64, maxiters::Int64)
    @ Integrals C:\Users\Hp\.julia\packages\Integrals\e3NH3\src\Integrals.jl:197
  [2] rrule(::typeof(Integrals.__solvebp), cache::Integrals.IntegralCache{false, BatchIntegralFunction{false, SciMLBase.FullSpecialize, Integrals.var"#26#28"{typeof(Integrals.t2ujac), Vector{Float64}, Vector{Float64}, NeuralPDE.var"#integrand#111"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, DataType, CPUDevice}}, Nothing}, Tuple{Vector{Float64}, Vector{Float64}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, @Kwargs{}, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, Integrals.ReCallVJP{Integrals.ZygoteVJP}, @Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, Nothing}, alg::HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, domain::Tuple{Vector{Float64}, Vector{Float64}}, p::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}; kwargs::@Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64})
    @ IntegralsZygoteExt C:\Users\Hp\.julia\packages\Integrals\e3NH3\ext\IntegralsZygoteExt.jl:53
  [3] kwcall(::@NamedTuple{reltol::Float64, abstol::Float64, maxiters::Int64}, ::typeof(ChainRulesCore.rrule), ::Zygote.ZygoteRuleConfig{Zygote.Context{false}}, ::Function, ::Integrals.IntegralCache{false, BatchIntegralFunction{false, SciMLBase.FullSpecialize, Integrals.var"#26#28"{typeof(Integrals.t2ujac), Vector{Float64}, Vector{Float64}, NeuralPDE.var"#integrand#111"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, DataType, CPUDevice}}, Nothing}, Tuple{Vector{Float64}, Vector{Float64}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, @Kwargs{}, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, Integrals.ReCallVJP{Integrals.ZygoteVJP}, @Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, Nothing}, ::HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, ::Integrals.ReCallVJP{Integrals.ZygoteVJP}, ::Tuple{Vector{Float64}, Vector{Float64}}, ::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}})
    @ ChainRulesCore C:\Users\Hp\.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144
  [4] chain_rrule_kw
    @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined]
  [6] _pullback(::Zygote.Context{false}, ::typeof(Core.kwcall), ::@NamedTuple{reltol::Float64, abstol::Float64, maxiters::Int64}, ::typeof(Integrals.__solvebp), ::Integrals.IntegralCache{false, BatchIntegralFunction{false, SciMLBase.FullSpecialize, Integrals.var"#26#28"{typeof(Integrals.t2ujac), Vector{Float64}, Vector{Float64}, NeuralPDE.var"#integrand#111"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, DataType, CPUDevice}}, Nothing}, Tuple{Vector{Float64}, Vector{Float64}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, @Kwargs{}, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, Integrals.ReCallVJP{Integrals.ZygoteVJP}, @Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, Nothing}, ::HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, ::Integrals.ReCallVJP{Integrals.ZygoteVJP}, ::Tuple{Vector{Float64}, Vector{Float64}}, ::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}})
    @ Zygote C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91
  [7] _apply(::Function, ::Vararg{Any})
    @ Core .\boot.jl:838
  [8] adjoint
    @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\lib\lib.jl:202 [inlined]
  [9] _pullback
    @ C:\Users\Hp\.julia\packages\ZygoteRules\CkVIK\src\adjoint.jl:67 [inlined]
 [10] #__solve#50
    @ C:\Users\Hp\.julia\packages\Integrals\e3NH3\src\Integrals.jl:69 [inlined]
 [11] _pullback(::Zygote.Context{false}, ::Integrals.var"##__solve#50", ::@Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, ::typeof(Integrals.__solve), ::Integrals.IntegralCache{false, BatchIntegralFunction{false, SciMLBase.FullSpecialize, Integrals.var"#26#28"{typeof(Integrals.t2ujac), Vector{Float64}, Vector{Float64}, NeuralPDE.var"#integrand#111"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, DataType, CPUDevice}}, Nothing}, Tuple{Vector{Float64}, Vector{Float64}}, ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, @Kwargs{}, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, Integrals.ReCallVJP{Integrals.ZygoteVJP}, @Kwargs{reltol::Float64, abstol::Float64, maxiters::Int64}, Nothing}, ::HCubatureJL{typeof(LinearAlgebra.norm), Nothing}, ::Integrals.ReCallVJP{Integrals.ZygoteVJP}, ::Tuple{Vector{Float64}, Vector{Float64}}, ::ComponentArrays.ComponentVector{Float32, Vector{Float32}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}})
    @ Zygote C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\Pma4a\src\solve.jl:95
 [52] allen_cahn(strategy::QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}, minimizer::Optimisers.Adam, maxIters::Int64, time_limit::Float64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:122
 [53] allen_cahn(strategy::QuadratureTraining{Float64, HCubatureJL{typeof(LinearAlgebra.norm), Nothing}}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:11

@ParamThakkar123 (Contributor Author)

The error maps in some way to QuadratureTraining.

@ParamThakkar123 (Contributor Author)

The error AssertionError: f isa IntegralFunction probably indicates that the function passed to Integrals.jl during the quadrature training process in NeuralPDE.jl does not conform to the expected type IntegralFunction.

@ParamThakkar123 (Contributor Author)

(Same AssertionError: f isa IntegralFunction stack trace as posted above.)

@ChrisRackauckas @sathvikbhagavan From what I saw of this error, it comes from the following call:

NeuralPDE.QuadratureTraining(quadrature_alg = HCubatureJL(), reltol = 1e-4, abstol = 1e-4, maxiters = 1100, batch = 0),

This possibly suggests three things:

  1. Either this strategy is not compatible with this equation
  2. The function is too complex due to high dimensionality (but it worked before the version bump, so this might not be the case)
  3. There's some type mismatch or instability in the code

@ParamThakkar123 (Contributor Author) commented Mar 5, 2025

After removing this algorithm, the training worked fine and it has started generating results.

@sathvikbhagavan (Member)

Does it fail only with HCubatureJL? What about other equations? Do they work with this algorithm?

@ParamThakkar123 (Contributor Author)

> Does it fail only with HCubatureJL? What about other equations? Do they work with this algorithm?

Yes @sathvikbhagavan, it fails only for HCubatureJL. It's working with the other algorithms, and pretty fast.

@sathvikbhagavan (Member) commented Mar 6, 2025

OK, I think I know why: HCubatureJL does not support batching (https://docs.sciml.ai/Integrals/stable/solvers/IntegralSolvers/), but in NeuralPDE, QuadratureTraining is implemented using a BatchIntegralFunction; that's why it errors.
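For context, a minimal sketch of the distinction, assuming the current Integrals.jl interface (NeuralPDE's internal wrapping is more involved):

```julia
using Integrals, Cubature  # Cubature provides CubatureJLh via an extension

# Unbatched: f maps one point x (a vector) to a scalar; HCubatureJL accepts this.
f(x, p) = sum(sin, x)
prob = IntegralProblem(IntegralFunction(f), (zeros(2), ones(2)))
solve(prob, HCubatureJL(); reltol = 1e-4, abstol = 1e-4)

# Batched: bf receives a matrix whose columns are evaluation points and
# returns one value per column, which is how QuadratureTraining wraps the
# PDE residual. HCubatureJL's solver asserts `f isa IntegralFunction`, so
# this errors there, while CubatureJLh evaluates the batch directly.
bf(x, p) = vec(sum(sin.(x); dims = 1))
bprob = IntegralProblem(BatchIntegralFunction(bf), (zeros(2), ones(2)))
# solve(bprob, HCubatureJL())  # AssertionError: f isa IntegralFunction
solve(bprob, CubatureJLh(); reltol = 1e-4, abstol = 1e-4)
```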

@ParamThakkar123 (Contributor Author)

> OK, I think I know why: HCubatureJL does not support batching (https://docs.sciml.ai/Integrals/stable/solvers/IntegralSolvers/), but in NeuralPDE, QuadratureTraining is implemented using a BatchIntegralFunction; that's why it errors.

Oh, I see. I got your point. I will modify the code and see if this works. Thank you so much for your guidance and explanation!

@ParamThakkar123 (Contributor Author)

> OK, I think I know why: HCubatureJL does not support batching (https://docs.sciml.ai/Integrals/stable/solvers/IntegralSolvers/), but in NeuralPDE, QuadratureTraining is implemented using a BatchIntegralFunction; that's why it errors.

@sathvikbhagavan I think we can replace it with CubatureJLh instead, because the two are almost the same except that HCubatureJL uses a separate library.
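That is, the swap in the strategy list would look something like this (kwargs copied from the call quoted earlier in the thread; a sketch, not the final benchmark code):

```julia
# NeuralPDE.QuadratureTraining(quadrature_alg = HCubatureJL(), reltol = 1e-4,
#                              abstol = 1e-4, maxiters = 1100, batch = 0)  # removed
NeuralPDE.QuadratureTraining(quadrature_alg = CubatureJLh(), reltol = 1e-4,
                             abstol = 1e-4, maxiters = 1100, batch = 0)
```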

@sathvikbhagavan (Member)

It's alright; for the time being, HCubatureJL can be removed, as supporting it requires changes in NeuralPDE.

@ParamThakkar123 (Contributor Author)

> It's alright; for the time being, HCubatureJL can be removed, as supporting it requires changes in NeuralPDE.

Okay then @sathvikbhagavan. What I'm thinking is that I will remove HCubatureJL for now and raise an issue on NeuralPDE.jl to add this support. Your thoughts on this?

@ParamThakkar123 (Contributor Author)

@ChrisRackauckas, I implemented some code changes and removed HCubatureJL because NeuralPDE.jl doesn't support it as of now. The graph turns out to be something like this:

[plot of the error-vs-time results after the changes]

@ChrisRackauckas (Member)

That still looks really wonky

@ParamThakkar123 (Contributor Author)

> That still looks really wonky

Yeah, it is. Need to see what else can be done. But that's sort of hit and trial, I think.

@ParamThakkar123 (Contributor Author)

@ChrisRackauckas I went through the code and found that there's some problem with the loss function calculations. I fixed them using the code from the documentation, but now it says the loss function is not compatible with NeuralPDE for the following reason:

MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float32})

Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float32}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float32}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, u::NeuralPDE.var"#7#8", x::Matrix{Float32}, εs::Vector{Vector{Float32}}, order::Int64, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:382
  [4] macro expansion
    @ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
  [6] macro expansion
    @ .\none:0 [inlined]
  [7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, ::Matrix{Float32}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, ::NeuralPDE.var"#7#8", ::Nothing)
    @ NeuralPDE .\none:0
  [8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr})(::Matrix{Float32}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::Function, ::Function, ::Function, ::Nothing)
    @ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
  [9] (::NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing})(cord::Matrix{Float32}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
 [10] (::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float32}})(θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
 [11] (::NeuralPDE.var"#263#284"{Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}})(pde_loss_function::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0b1316ee, 0xc582a8dd, 0x69f01bfe, 0x18cd3792, 0xe80c1816), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float32}})
    @ NeuralPDE .\none:0
...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\sYmAV\src\solve.jl:95
 [22] allen_cahn(strategy::QuadratureTraining{Float64, CubaCuhre}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W0sZmlsZQ==.jl:116
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W2sZmlsZQ==.jl:4

@ParamThakkar123
Copy link
Contributor Author

This comes from the Phi layer in NeuralPDE.

@sathvikbhagavan
Copy link
Member

What changes did you make? Which line is erroring? Please be specific and verbose when you report an issue.

@ParamThakkar123
Copy link
Contributor Author

What changes did you make? Which line is erroring? Please be specific and verbose when you report an issue.

@sathvikbhagavan

I'm sorry I wasn't clear and verbose earlier. The issue is the following:
I made some changes to the code which I haven't pushed yet. Here is the code:

using NeuralPDE
using Integrals, Cubature, Cuba
using ModelingToolkit, Optimization, OptimizationOptimJL
using Lux, Plots
using DelimitedFiles
using QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum

function allen_cahn(strategy, minimizer, maxIters)

    ##  DECLARATIONS
    @parameters t x1 x2 x3 x4
    @variables u(..)

    Dt = Differential(t)
    Dxx1 = Differential(x1)^2
    Dxx2 = Differential(x2)^2
    Dxx3 = Differential(x3)^2
    Dxx4 = Differential(x4)^2


    # Discretization
    tmax = 1.0
    x1width = 1.0
    x2width = 1.0
    x3width = 1.0
    x4width = 1.0

    tMeshNum = 10
    x1MeshNum = 10
    x2MeshNum = 10
    x3MeshNum = 10
    x4MeshNum = 10

    dt = tmax / tMeshNum
    dx1 = x1width / x1MeshNum
    dx2 = x2width / x2MeshNum
    dx3 = x3width / x3MeshNum
    dx4 = x4width / x4MeshNum

    domains = [t ∈ Interval(0.0, tmax),
        x1 ∈ Interval(0.0, x1width),
        x2 ∈ Interval(0.0, x2width),
        x3 ∈ Interval(0.0, x3width),
        x4 ∈ Interval(0.0, x4width)]

    ts = 0.0:dt:tmax
    x1s = 0.0:dx1:x1width
    x2s = 0.0:dx2:x2width
    x3s = 0.0:dx3:x3width
    x4s = 0.0:dx4:x4width

    # Operators
    Δu = Dxx1(u(t, x1, x2, x3, x4)) + Dxx2(u(t, x1, x2, x3, x4)) + Dxx3(u(t, x1, x2, x3, x4)) + Dxx4(u(t, x1, x2, x3, x4)) # Laplacian


    # Equation
    eq = Dt(u(t, x1, x2, x3, x4)) - Δu - u(t, x1, x2, x3, x4) + u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) ~ 0  #ALLEN CAHN EQUATION

    initialCondition = 1 / (2 + 0.4 * (x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4)) # see PNAS paper

    bcs = [u(0, x1, x2, x3, x4) ~ initialCondition]  #from literature

    ## NEURAL NETWORK
    n = 10   #neuron number
    chain = Lux.Chain(Lux.Dense(5, n, Lux.σ), Lux.Dense(n, n, Lux.σ), Lux.Dense(n, 1))   # fully connected neural network built with Lux

    indvars = [t, x1, x2, x3, x4]   # physically independent variables
    depvars = [u(t, x1, x2, x3, x4)]       #dependent (target) variable

    dim = length(domains)

    losses = []
    error = []
    times = []

    dx_err = 0.2

    error_strategy = GridTraining(dx_err)

    discretization_ = PhysicsInformedNN(chain, error_strategy)
    @named pde_system_ = PDESystem(eq, bcs, domains, indvars, depvars)
    prob_ = discretize(pde_system_, discretization_)

    function loss_function_(θ, p)
        return prob_.f.f(θ, nothing)
    end  

    cb_ = function (p, l)
        deltaT_s = time_ns() # Start a clock when the callback begins; this will also measure the time spent computing the uniform error

        ctime = time_ns() - startTime - timeCounter #This variable is the time to use for the time benchmark plot
        append!(times, ctime / 10^9) #Conversion nanosec to seconds
        append!(losses, l)
        loss_ = loss_function_(p, nothing)
        append!(error, loss_)
        timeCounter = timeCounter + time_ns() - deltaT_s #timeCounter sums all delays due to the callback functions of the previous iterations

        #if (ctime/10^9 > time) #if I exceed the limit time I stop the training
        #    return true #Stop the minimizer and continue from line 142
        #end

        return false
    end

    @named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)

    discretization = NeuralPDE.PhysicsInformedNN(chain, strategy)
    prob = NeuralPDE.discretize(pde_system, discretization)

    timeCounter = 0.0
    startTime = time_ns() #Fix initial time (t=0) before starting the training
    res = Optimization.solve(prob, minimizer, callback=cb_, maxiters=maxIters)

    phi = discretization.phi

    params = res.minimizer

    # Model prediction
    domain = [ts, x1s, x2s, x3s, x4s]

    u_predict = [reshape([first(phi([t, x1, x2, x3, x4], res.minimizer; device=cdev)) for x1 in x1s for x2 in x2s for x3 in x3s for x4 in x4s], (length(x1s), length(x2s), length(x3s), length(x4s))) for t in ts]  #matrix of model's prediction

    return [error, params, domain, times, losses]
end
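
For context, a minimal usage sketch of this function follows; the tolerances, learning rate, and iteration count are illustrative placeholders rather than the benchmark's actual settings:

import Optimisers

# Quadrature-based training strategy using Cuba's cuhre algorithm
strategy = NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(),
                                        reltol = 1e-4, abstol = 1e-4, maxiters = 1100)

# Run the Allen-Cahn benchmark with Adam and collect the traces
error, params, domain, times, losses = allen_cahn(strategy, Optimisers.Adam(0.01), 1100)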

Initially the plot looked really wonky, and that was because the loss function wasn't being calculated correctly. So I picked up the loss function from the original implementation already present in the SciMLBenchmarks.jl documentation: prob_.f.f is the full PDE loss assembled by discretize on the GridTraining problem, so evaluating it gives an error estimate on a fixed grid, independent of the training strategy. The error occurred when I added the following lines and made the corresponding changes:

function loss_function_(θ, p)
    return prob_.f.f(θ, nothing)
end

The error is this:

MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float64})

Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, u::NeuralPDE.var"#7#8", x::Matrix{Float64}, εs::Vector{Vector{Float64}}, order::Int64, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:382
  [4] macro expansion
    @ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
  [6] macro expansion
    @ .\none:0 [inlined]
  [7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, ::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, ::NeuralPDE.var"#7#8", ::Nothing)
    @ NeuralPDE .\none:0
  [8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr})(::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::Function, ::Function, ::Function, ::Nothing)
    @ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
  [9] (::NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing})(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
 [10] (::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})(θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
 [11] (::NeuralPDE.var"#263#284"{Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}})(pde_loss_function::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})
    @ NeuralPDE .\none:0
...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\sYmAV\src\solve.jl:95
 [22] allen_cahn(strategy::QuadratureTraining{Float64, CubaCuhre}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W0sZmlsZQ==.jl:116
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W2sZmlsZQ==.jl:4

This indicates a MethodError when the matrix of coordinates is passed through the network. The stacktrace suggests the problem comes from the Phi layer in NeuralPDE.jl, specifically pinn_types.jl line 42, but I'm not sure what changes are needed to fix it.

@sathvikbhagavan
Copy link
Member

Try calling the loss function as loss_ = loss_function_(p.u, nothing) and not loss_ = loss_function_(p, nothing). As the stacktrace shows, the callback's first argument p is an Optimization.OptimizationState, not the parameter vector itself, so the parameters have to be read from its u field.
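
A minimal sketch of the corrected callback (same variable names as the code above; the only change is pulling the parameter vector out of the state via p.u):

cb_ = function (p, l)
    deltaT_s = time_ns() # clock the callback itself so its cost can be subtracted from the benchmark time
    ctime = time_ns() - startTime - timeCounter
    append!(times, ctime / 10^9) # nanoseconds to seconds
    append!(losses, l)
    loss_ = loss_function_(p.u, nothing) # p is an Optimization.OptimizationState; p.u holds the parameters
    append!(error, loss_)
    timeCounter = timeCounter + time_ns() - deltaT_s
    return false
end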

@ParamThakkar123
Copy link
Contributor Author

Try calling the loss function as loss_ = loss_function_(p.u, nothing) and not loss_ = loss_function_(p, nothing).

Yeah, this worked. Thank you so much. I'm also curious: is there any reference where I can learn about this? It was not easy for me to spot 😅

@ParamThakkar123
Copy link
Contributor Author

@ChrisRackauckas

[attached image: the error-vs-time plot produced by the updated benchmark]

This is the graph generated this time

@ParamThakkar123
Copy link
Contributor Author

I removed the HCubatureJL algorithm due to its incompatibility with NeuralPDE.jl
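
As a sketch, the trimmed strategy list might look like the following; the exact tolerances and the set of remaining strategies are assumptions, since the final list isn't shown in this thread:

strategies = [
    NeuralPDE.QuadratureTraining(quadrature_alg = CubatureJLh(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    NeuralPDE.QuadratureTraining(quadrature_alg = CubatureJLp(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    NeuralPDE.QuadratureTraining(quadrature_alg = CubaCuhre(), reltol = 1e-4, abstol = 1e-5, maxiters = 1100),
    # HCubatureJL() was removed from this list: it is incompatible with NeuralPDE.jl
]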
