Rewrite dramstore using the reconstructed task architecture #300
base: develop
Conversation
…uZhu1/unified-cache-management into develop_Klukowski_new_dram
class MemoryPool {
public:
    MemoryPool(size_t capacity, size_t blockSize)
        : pool_(new char[capacity]),
Better to use pinned memory here.
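A minimal sketch of what a pinned-memory pool could look like, assuming a CUDA runtime is the copy target; `cudaMallocHost`/`cudaFreeHost` replace the plain `new char[capacity]` allocation, and the member names simply mirror the snippet above rather than the PR's actual class:

```cpp
// Sketch only: pinned (page-locked) host memory for the pool, assuming CUDA.
#include <cuda_runtime.h>
#include <cstddef>
#include <new>

class MemoryPool {
public:
    MemoryPool(size_t capacity, size_t blockSize)
        : capacity_(capacity), blockSize_(blockSize)
    {
        // Page-locked memory lets later cudaMemcpyAsync calls run truly
        // asynchronously and overlap H2D transfers with compute.
        if (cudaMallocHost(reinterpret_cast<void**>(&pool_), capacity) != cudaSuccess) {
            throw std::bad_alloc();
        }
    }
    ~MemoryPool() { cudaFreeHost(pool_); }

private:
    char* pool_{nullptr};
    size_t capacity_;
    size_t blockSize_;
};
```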
return 0
capacity = int(config.get("capacity", 1073741824))  # Default 1GB
block_size = int(config.get("kv_block_size", 262144))  # Default 256KB
device_id = int(config.get("device_id", -1))
kv_block_size is a concept within NfsStore, representing the total size of all layers of kv-cache contained in a single block. It seems this value should be iosize, not kv_block_size.
// TODO: H2D
status = this->H2D(shard, device);
}
this->Done(shard, device, status.Success());
Since we use async copy here, it seems we shouldn't synchronize on every shard.
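A rough sketch of what batching could look like, assuming a CUDA backend: each shard's copy is enqueued with `cudaMemcpyAsync` on one stream, and the stream is synchronized once after the loop instead of per shard. The `Shard` struct and `CopyShardsAsync` name are placeholders for illustration, not types from this PR:

```cpp
// Sketch only: enqueue all H2D copies, then synchronize once for the batch.
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

struct Shard {         // placeholder for the PR's shard type
    void* hostPtr;
    void* devPtr;
    size_t bytes;
};

void CopyShardsAsync(const std::vector<Shard>& shards, cudaStream_t stream)
{
    for (const auto& shard : shards) {
        // With pinned host memory this call returns immediately.
        cudaMemcpyAsync(shard.devPtr, shard.hostPtr, shard.bytes,
                        cudaMemcpyHostToDevice, stream);
    }
    // One synchronization for the whole batch, not one per shard.
    cudaStreamSynchronize(stream);
}
```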
size_t capacity_;
size_t blockSize_;

std::unordered_map<std::string, char*> addressMap_;
Didn't find any lock here; should addressMap_ be protected against concurrent access?
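A minimal sketch of one way to guard addressMap_ with a std::mutex, assuming MemoryPool can be reached from multiple threads; the Lookup/Insert methods are hypothetical names for illustration, not the PR's actual API:

```cpp
// Sketch only: mutex-protected access to addressMap_.
#include <mutex>
#include <string>
#include <unordered_map>

class MemoryPool {
public:
    char* Lookup(const std::string& key)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        auto it = addressMap_.find(key);
        return it == addressMap_.end() ? nullptr : it->second;
    }

    void Insert(const std::string& key, char* addr)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        addressMap_[key] = addr;
    }

private:
    std::mutex mutex_;  // protects addressMap_
    std::unordered_map<std::string, char*> addressMap_;
};
```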
The memPool is based on PR #285, while the reconstructed dramstore is based on PR #296.
Related unit tests: ucm/store/test/case/infra/mem_pool_test.cc
Results of the performance test (dramstore_embed_and_fetch.py):
