RSDK-12719: Lower the number of arm positions we cache. #5518
base: main
Conversation
var (
-	arm6JogRatios = []float64{360, 32, 8, 8, 4, 2}
+	arm6JogRatios = []float64{90, 16, 8, 8, 4, 2}
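For a rough sense of scale: if (and this is an assumption, not something the PR states) the number of cached arm positions grows with the product of the per-joint values, this change shrinks the cache by about 8x. A back-of-envelope sketch:

package main

import "fmt"

// Assumption for illustration only: cached arm positions scale with the
// product of the per-joint jog values. The actual cache-sizing logic in
// rdk may differ.
func product(xs []float64) float64 {
	p := 1.0
	for _, x := range xs {
		p *= x
	}
	return p
}

func main() {
	before := []float64{360, 32, 8, 8, 4, 2}
	after := []float64{90, 16, 8, 8, 4, 2}
	fmt.Printf("before: %.0f, after: %.0f, reduction: %.0fx\n",
		product(before), product(after), product(before)/product(after))
	// before: 5898240, after: 737280, reduction: 8x
}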
An "enhanced" version of this patch would query how much memory the machine has, so that we're more conservative on low-memory RPis while getting better precision on bigger boxes.
Happy to move this PR in that direction, or any other direction. I just wanted to get product feedback and a path forward for users who are unable to upgrade.
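A minimal sketch of what that query could look like, assuming a Linux target and reading MemAvailable from /proc/meminfo; the helper name, thresholds, and policy are illustrative, not part of this PR:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// availableMemoryBytes is a hypothetical helper: it parses MemAvailable
// from /proc/meminfo (Linux-only) and returns it in bytes.
func availableMemoryBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text()) // e.g. ["MemAvailable:", "7942120", "kB"]
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemAvailable not found in /proc/meminfo")
}

func main() {
	avail, err := availableMemoryBytes()
	if err != nil {
		fmt.Println("could not determine available memory:", err)
		return
	}
	// Hypothetical policy: coarser jog values (smaller cache) on
	// low-memory machines, finer ones where memory is plentiful.
	if avail < 4<<30 { // less than ~4 GiB available
		fmt.Println("low memory: use coarse jog values {90, 16, 8, 8, 4, 2}")
	} else {
		fmt.Println("ample memory: use fine jog values {360, 32, 8, 8, 4, 2}")
	}
}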
Who can't upgrade?
This will make a lot of things worse and isn't the direction I think we should go.
Perhaps we could just not cache at all if the system has less than some amount of RAM.
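If that threshold route were taken, one Linux-only way to read total system RAM is syscall.Sysinfo; the 2 GiB cutoff and the buildCache flag below are made-up placeholders, not anything in the PR:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Linux-only sketch. Threshold and behavior are hypothetical:
	// skip building the arm position cache on very small machines.
	var info syscall.Sysinfo_t
	if err := syscall.Sysinfo(&info); err != nil {
		fmt.Println("sysinfo failed:", err)
		return
	}
	totalRAM := uint64(info.Totalram) * uint64(info.Unit)

	buildCache := totalRAM >= 2<<30 // only cache with at least ~2 GiB of RAM
	fmt.Printf("total RAM: %d MiB, build cache: %v\n", totalRAM>>20, buildCache)
}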
lame - is it easy enough to see available memory?
[Benchmark results: Availability, Quality, Performance. The data was generated by running predefined scenes; result tables not included here.]
Starting a conversation via a PR. Apologies if this isn't the best place.
We got a report of crashing after upgrading to v101/v102. This was a pretty clear OOM on the first use of motion planning on a lite6 arm (read: building the cache). This was on one of the Nanos, I think, with 8GB of RAM that's shared with the GPU. I believe a CV model was also loaded.
The reason for this increased consumption was the finer-grained jog creating more cache entries. Here are some numbers for memory used by the cache (after forcing a GC) across different versions, with Alloc being actively used memory:

[memory numbers not shown]

On main, which removed the unused pose, memory was cut by ~35% from v101/v102, but it is still greatly increased from v100:

[memory numbers not shown]

This patch brings memory back in line with v100:

[memory numbers not shown]
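For reference, a minimal sketch of how numbers like these are typically gathered with the Go runtime (standard runtime.MemStats usage; not necessarily the exact harness used for the measurements above):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// ... build the arm position cache here ...

	runtime.GC() // force a collection so Alloc reflects only live memory

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// Alloc is the number of bytes of allocated heap objects still in use.
	fmt.Printf("Alloc = %.1f MiB\n", float64(m.Alloc)/(1<<20))
}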