Memory usage in colouring #1138
-
I've also tried disabling the generation of the colouring report, as suggested in #1134, but that doesn't really help. Disabling the interpolation (so that the forces in the model don't depend on the interpolated values, which is a special case) essentially eliminates the memory use, so I'm fairly sure this is the problem. Maybe the solution is to disable the automatic colouring and define the sparsity pattern manually, and more carefully?
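For what it's worth, here is a rough sketch of what "defining the sparsity pattern manually" could look like. This is a hypothetical illustration, not my actual model: it assumes an interpolation-style component where each output node depends only on a small stencil of neighbouring input nodes, and builds the `rows`/`cols` index arrays that OpenMDAO's `declare_partials` accepts for sparse partials. The node count and stencil width are made up for the example.

```python
import numpy as np

# Hypothetical interpolation-style sparsity: output i depends only on
# a small window of neighbouring inputs (stencil width is an assumption).
n_nodes = 1000
stencil = 4

rows, cols = [], []
for i in range(n_nodes):
    for j in range(stencil):
        rows.append(i)
        # clamp the stencil at the grid boundaries
        cols.append(min(max(i - stencil // 2 + j, 0), n_nodes - 1))
rows = np.asarray(rows)
cols = np.asarray(cols)

nnz = len(rows)            # nonzeros actually declared
dense = n_nodes * n_nodes  # entries implied by declare_partials('*', '*')
print(nnz, dense, nnz / dense)
```

These `rows`/`cols` arrays would then go into the component's `setup_partials` via `self.declare_partials(of, wrt, rows=rows, cols=cols, method='exact')`, so the coloring algorithm sees 0.4% of the entries instead of a dense block.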
-
Hello. The refinement algorithms currently in dymos are all heuristic and can run into memory trouble in some cases when the requested accuracy is not achieved. My first instinct would be to verify that so many segments are actually necessary: can you manually instantiate the phase with 40 segments and check whether the error is acceptable? I have an idea for a grid refinement algorithm that would allow a bit more control over how many segments can be added, using only h-refinement to preserve good coloring, but unfortunately that isn't high on my priority list at the moment.
-
I'm working on a trajectory optimisation problem with two parallel sets of four linked phases (one after another), which runs into memory issues when performing grid refinement and re-colouring at the end of a successful optimisation. A little more about my setup is given in a previous post: #1120.
Each phase is actually a tandem pair of phases, with the control inputs solved on the first and the dynamics on the second. My current initial grid is:
phase 1: control: 20 segments, 3rd order; dynamics: 75 segments, 3rd order
phase 2: control: 10 segments, 3rd order; dynamics: 15 segments, 3rd order
phase 3: control: 20 segments, 3rd order; dynamics: 75 segments, 3rd order
phase 4: control: 7 segments, 3rd order; dynamics: 10 segments, 3rd order
The program has no trouble running and solving the trajectory optimisation problem even with double this number of dynamics segments, but my computer runs out of memory and kills the process when doing a grid refinement and re-colouring with the settings above (and denser grids). I'm running this on WSL2 (Windows Subsystem for Linux) with 50 GB of assigned RAM (I have 64 GB, so I can't allocate much more than this) and a 220 GB swap partition (again, almost the maximum), so I'm surprised that this isn't enough memory to perform the colouring. I just wanted to check whether these issues are expected for this kind of problem, or whether something else seems wrong. One complication could be that the dynamics depend on forces calculated from interpolated values (via the Python package Interpax), so the chain of partial derivatives is at least a couple of links deep.
An example solve on a denser grid than the one described gave a Jacobian of size (24091, 9389), with 123 forward solves and 0 reverse solves. That is of course a very large number of partial derivatives to evaluate, so memory issues seem plausible, but are there any ways this memory demand can be spread out or reduced, just so the process doesn't crash? The partials are declared with declare_partials('*', '*', method='exact'), which I know doesn't help, but there is quite high coupling between the states, so this probably isn't too far from the truth. The constraint Jacobian should be much sparser, however. Aside from the colouring, everything runs fine, requiring much less memory and hardly touching the swap partition (at least pre-grid-refinement).
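A back-of-envelope estimate from the numbers above, to put "runs out of 50 GB" in context: one dense float64 copy of a 24091 × 9389 Jacobian is under 2 GB, so a crash at 50 GB suggests an algorithm holding many dense intermediates at once. The "copies" loop below is purely illustrative, not a claim about dymos internals.

```python
# Dense-memory estimate for the Jacobian reported above
# (24091 x 9389, float64).  The 'copies' multiplier is a hypothetical
# illustration of intermediate storage, not a dymos internal.
n_rows, n_cols = 24091, 9389
bytes_per_entry = 8  # float64

dense_bytes = n_rows * n_cols * bytes_per_entry
print(f"one dense copy: {dense_bytes / 1e9:.2f} GB")  # ~1.81 GB

for k in (1, 4, 16, 32):
    print(f"{k:2d} copies: {k * dense_bytes / 1e9:6.1f} GB")
```

At roughly 1.8 GB per dense copy, something on the order of 25 to 30 simultaneous dense intermediates would be needed to exhaust 50 GB of RAM, which is why a sparse partials declaration (rows/cols instead of `'*', '*'`) tends to matter so much here.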
Thank you for any help.