Provide API to request memory from a specific block #92
I'm certainly open to it. How do you think that regions should be referenced in the API?
I had hoped you'd have some opinion on that :) I haven't looked at the code in two years, so without that context the simplest thing might be an integer index or the base address of the respective block, I presume. I'll propose something in the next few days if I find the time.
Just thinking out loud -- today I'm leaning toward something with a sense of "please allocate in this memory region" as the easiest thing to implement. Blocks will split and combine, which probably implies we need a secondary way of tracking which block goes with which region; keeping a sense of address ranges would be easier to tack on without changing the existing implementation.

My preference at the moment would be a structure with start and end address fields that defines an acceptable memory region. malloc_addblock could be amended to return this structure, which could then be passed to the new API to request memory from a specific region. That API would scan the freelist for a block that lies in the target range and is of sufficient size.

My normal tendency would be to make this more opaque and have the APIs return/accept a handle to a registered region, but I don't really see a reason not to support the use case of manually specifying a memory region, so I would be amenable to making this type public. We could also add a secondary API that accepts start/end addresses and deals with the internal structure format internally. I don't feel strongly about this.
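A minimal sketch of what that shape could look like for the freelist implementation. `malloc_region_t`, `malloc_addblock_ex`, and `malloc_from_region` are placeholder names for discussion, not existing libmemory APIs:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for a registered region; field names are illustrative. */
typedef struct
{
    uintptr_t start; /* first valid address in the region */
    uintptr_t end;   /* one past the last valid address */
} malloc_region_t;

/* Possible amendment to the existing malloc_addblock(void*, size_t):
 * return the descriptor so callers can target the region later. */
malloc_region_t malloc_addblock_ex(void* addr, size_t size);

/* Hypothetical region-targeted allocation: walk the freelist and pick the
 * first chunk that lies entirely inside [region->start, region->end) and is
 * large enough, splitting it as the normal allocator would. NULL on failure. */
void* malloc_from_region(const malloc_region_t* region, size_t size);

/* Predicate such an implementation would apply while walking the freelist. */
static inline bool block_in_region(const malloc_region_t* r,
                                   uintptr_t block_addr, size_t block_size)
{
    return (block_addr >= r->start) && ((block_addr + block_size) <= r->end);
}
```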
After looking at the code I am less convinced that this is a good idea, TBH. I am not sure if you were only referring to the malloc_freelist implementation; there, everything you said makes sense. I am not sure, though, that leaving the existing implementation untouched is the best way (yet): separating the list into a list of lists (or an array of lists, as for FreeRTOS) might also be beneficial for other use cases, e.g. if one wants to separate big from small objects for some reason.

The other implementations don't seem to accommodate the idea at all, because we cannot change their malloc implementations or add our own - or am I missing something? For example, in FreeRTOS the regions are gathered within libmemory, but after handing them over to the FreeRTOS heap, libmemory has no further control over which region an allocation comes from.

As of now I don't think libmemory is a good target for this approach, because of the differences in the existing implementations and the scope of the library. I'll take a look at other implementations. Feel free to close if you agree; I'll reopen or add a PR in case I change my mind. Sorry for the noise.
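For reference, if the FreeRTOS implementation sits on top of heap_5, the hand-over being described is roughly the flow below (addresses and sizes are placeholders). Once `vPortDefineHeapRegions()` has been called, FreeRTOS offers no public call to allocate from one specific region, which is the limitation mentioned above:

```c
#include "FreeRTOS.h"
#include <stdint.h>

/* Two memories; sizes and placement are placeholders for illustration. */
static uint8_t internal_sram_heap[32 * 1024];
static uint8_t external_psram_heap[512 * 1024];

void heap_init(void)
{
    /* heap_5 expects the regions sorted by start address and NULL-terminated. */
    static const HeapRegion_t regions[] = {
        { internal_sram_heap,  sizeof(internal_sram_heap)  },
        { external_psram_heap, sizeof(external_psram_heap) },
        { NULL, 0 }
    };

    vPortDefineHeapRegions(regions);

    /* From here on, pvPortMalloc() chooses whichever region has a fitting
     * free block; there is no API to request memory from a specific region,
     * so a wrapper like libmemory cannot steer the allocation either. */
}
```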
The library already supports multiple independent blocks of memory. In some systems it might be required to allocate certain buffers in specific locations. In my current use case I have an embedded system with various memories and a microcontroller with an integrated USB controller. The latter interacts with the CPU via a (complicated) shared-memory interface. Due to size constraints I have put the heap in an external memory (PSRAM) that is accessible to the CPU via normal load/store instructions, but apparently not to the USB controller. However, the vendor library uses the heap to allocate buffers for USB transfers.
(┛ಠ_ಠ)┛彡┻━┻
As of now, the project uses newlib and https://github.com/DRNadler/FreeRTOS_helpers/ but I might give implementing this in libmemory a shot if you think it is in scope. Any thoughts on the API are very welcome. Another embedded allocator that supports this feature is https://github.com/MaJerle/lwmem
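To make the use case concrete, usage could look roughly like this, reusing the hypothetical `malloc_region_t` / `malloc_addblock_ex` / `malloc_from_region` names sketched earlier in the thread (the linker symbols and sizes are placeholders, too):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical API from the sketch above -- not part of libmemory today. */
typedef struct { uintptr_t start; uintptr_t end; } malloc_region_t;
malloc_region_t malloc_addblock_ex(void* addr, size_t size);
void* malloc_from_region(const malloc_region_t* region, size_t size);

/* Placeholder linker symbols for the two memories. */
extern uint8_t __psram_heap_start[];
extern uint8_t __sram_heap_start[];

static malloc_region_t usb_capable_region;

void heap_setup(void)
{
    /* Bulk heap in external PSRAM -- not visible to the USB controller. */
    malloc_addblock_ex(__psram_heap_start, 512 * 1024);

    /* Small region in internal SRAM that the USB controller can access. */
    usb_capable_region = malloc_addblock_ex(__sram_heap_start, 16 * 1024);
}

/* The vendor USB stack's buffers must land in internal SRAM, so allocations
 * for it are directed at that region; everything else uses plain malloc(). */
void* usb_buffer_alloc(size_t size)
{
    return malloc_from_region(&usb_capable_region, size);
}
```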