DeepONet support of multiple outputs #28
Comments
Hi, this is possible, and there are at least two different options to implement it. The first is to simply extend the dimension of the output space. In the example I uploaded, the output space is one-dimensional, but now we want a higher-dimensional space:
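(The snippet referenced here is not included in the thread. Below is a minimal plain-PyTorch sketch of the idea, a DeepONet whose branch and trunk combine into an L-dimensional output per query point. This is not the TorchPhysics API; all class names, layer sizes, and signatures are illustrative.)

```python
import torch
import torch.nn as nn

class MultiOutputDeepONet(nn.Module):
    """Illustrative DeepONet with an L-dimensional output at each trunk query.

    branch: maps the discretized input function (m sensor values) to p * L coefficients.
    trunk:  maps a query coordinate (e.g. time t) to p basis values.
    The prediction at each query is a per-channel dot product over the p basis
    functions, yielding L output values.
    """
    def __init__(self, m=100, p=32, L=5):
        super().__init__()
        self.p, self.L = p, L
        self.branch = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, p * L))
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, p))

    def forward(self, u, t):
        # u: (batch, m) sensor values of the input function
        # t: (n_t, 1) query times
        b = self.branch(u).view(-1, self.L, self.p)  # (batch, L, p)
        tr = self.trunk(t)                           # (n_t, p)
        # Contract over the shared basis dimension p -> (batch, n_t, L)
        return torch.einsum("blp,tp->btl", b, tr)
```

Note how the branch and trunk output layers grow with `p * L` and `p` respectively, which is the scaling issue mentioned below.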
This should create exactly what you want. But depending on L, the above approach leads to rather large output layers in the Trunk- and Branchnet.
Which of the two will work better for your problem, I can't say.
I tested your first option and it worked. But as you said, it has a scalability issue when the number of output locations is large. For the second option, I assume training is still in single-output mode, but the spatial-coordinate embedding would inform the Trunknet of the sensor location, right? Thanks,
Correct, just extend the input space of the Trunknet to include the space variable (sensor location), and it should work. See the second code snippet in my previous answer.
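(The second snippet is likewise missing from the thread. A hedged plain-PyTorch sketch of this option follows: the trunk takes a (t, x) pair, the model outputs a single scalar per query, and evaluating at an unseen x amounts to interpolation. Again, names and sizes are illustrative, not the TorchPhysics API.)

```python
import torch
import torch.nn as nn

class SpaceTimeDeepONet(nn.Module):
    """Illustrative DeepONet variant whose trunk input is (t, x), so a single
    scalar output covers all sensor locations; querying a new x interpolates."""
    def __init__(self, m=100, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, p))
        self.trunk = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, p))

    def forward(self, u, tx):
        # u:  (batch, m) sensor values of the input function
        # tx: (n_q, 2) query points, each row a (t, x) pair
        b = self.branch(u)   # (batch, p)
        tr = self.trunk(tx)  # (n_q, p)
        return b @ tr.T      # (batch, n_q) scalar prediction per query point
```

The output layers no longer depend on the number of sensors L, which is why this variant scales better when L is large.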
Hi @TomF98, your last comment is really intriguing because that's what I ultimately would like to do, i.e., interpolating to ungauged locations. But the question is how to model the inter-sensor relations using a DeepONet. Is it possible to do some sort of graph convolutional network within the TorchPhysics framework? Thanks.
Generally, arbitrary network structures can be used in TorchPhysics. You would just have to create a subclass of the Trunk- or Branchnet class. In particular, any neural network that is possible in PyTorch should be easy to implement. I would first keep it simple, try out the method mentioned above (using the space variable as an input to the Trunknet), and see how the DeepONet behaves. If many sensor locations are available, this may already lead to a good interpolation.
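(To illustrate the point about arbitrary network structures: in plain PyTorch terms, any module with matching input and output widths can serve as a trunk. The class below is a hypothetical example of such a drop-in trunk, a random-Fourier-feature embedding followed by an MLP; it is not part of TorchPhysics, and the subclassing details of TorchPhysics's Trunknet class are not shown here.)

```python
import torch
import torch.nn as nn

class FourierTrunk(nn.Module):
    """Hypothetical custom trunk: random Fourier features of the (t, x) query,
    followed by an MLP. Any module mapping (n_q, d_in) -> (n_q, p) could be
    substituted for the plain MLP trunk of a DeepONet."""
    def __init__(self, d_in=2, n_feat=64, p=32):
        super().__init__()
        # Fixed random projection used for the Fourier features
        self.register_buffer("B", torch.randn(d_in, n_feat))
        self.mlp = nn.Sequential(nn.Linear(2 * n_feat, 64), nn.Tanh(), nn.Linear(64, p))

    def forward(self, tx):
        z = tx @ self.B                                        # (n_q, n_feat)
        feats = torch.cat([torch.sin(z), torch.cos(z)], dim=-1)  # (n_q, 2*n_feat)
        return self.mlp(feats)                                 # (n_q, p)
```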
Hi @TomF98, I got this space+time approach to work, but in the course of doing that I discovered the trained DeepONet had a severe overfitting problem. Although this is commonly associated with FNNs, I wonder if you have specific advice in the context of DeepONet training. A.
Hi @dialuser, great to hear that the space+time approach works in general. Similarly, this can be implemented for DeepONets. But the current
Hi @TomF98, I discovered a data-processing error that had caused misalignment in my data. Now everything looks much better. Thank you. A
Hi,
In my use case, I'd like to output at multiple sensor locations. I wonder if this is supported by TorchPhysics' DeepONet implementation. The output variable would have dimensions [T, L], where T corresponds to the times defined in the Trunknet coordinates and L is the number of sensors. Thank you.
A.