
Conversation

@ChenyuZhu1 (Contributor) commented Oct 21, 2025

The memPool is based on PR #285, while the reconstructed dramstore is based on PR #296

Related unit tests: ucm/store/test/case/infra/mem_pool_test.cc

Results of the performance test (dramstore_embed_and_fetch.py):
[screenshot of performance test results]

@ChenyuZhu1 ChenyuZhu1 requested a review from qyh111 October 21, 2025 09:11
@ChenyuZhu1 ChenyuZhu1 requested a review from Wwwzff as a code owner October 22, 2025 07:35
class MemoryPool {
public:
    MemoryPool(size_t capacity, size_t blockSize)
        : pool_(new char[capacity]),
Contributor review comment:

Better to use pinned memory here.
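For reference, a minimal sketch of what a pinned-memory-backed pool could look like, assuming CUDA is the target runtime; the cudaHostAlloc-based class below is illustrative only and not the PR's actual code:

#include <cuda_runtime.h>
#include <cstddef>
#include <stdexcept>

// Illustrative only: a pool backed by pinned (page-locked) host memory,
// which enables genuinely asynchronous and faster host-to-device copies.
class PinnedMemoryPool {
public:
    PinnedMemoryPool(size_t capacity, size_t blockSize)
        : capacity_(capacity), blockSize_(blockSize)
    {
        if (cudaHostAlloc(reinterpret_cast<void**>(&pool_), capacity_, cudaHostAllocDefault) != cudaSuccess) {
            throw std::runtime_error("cudaHostAlloc failed");
        }
    }
    ~PinnedMemoryPool() { cudaFreeHost(pool_); }

private:
    char* pool_{nullptr};
    size_t capacity_;
    size_t blockSize_;
};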

return 0
capacity = int(config.get("capacity", 1073741824)) # Default 1GB
block_size = int(config.get("kv_block_size", 262144)) # Default 256KB
device_id = int(config.get("device_id", -1))
Contributor review comment:

kv_block_size is a concept within NfsStore, representing the total size of all KV-cache layers contained in a single block. It seems this value should be iosize rather than kv_block_size.

        // TODO: H2D
        status = this->H2D(shard, device);
    }
    this->Done(shard, device, status.Success());
Contributor review comment:

Since we use async copy here, it seems we shouldn't synchronize on every shard.
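To illustrate the point, a minimal sketch of enqueueing all copies on one stream and waiting once at the end, assuming CUDA streams back the async copy; Shard and CopyShardsAsync are illustrative names, not the PR's API:

#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

// Illustrative only: enqueue every shard's H2D copy on the same stream,
// then synchronize once for the whole batch instead of once per shard.
struct Shard {
    const void* host;
    void* device;
    size_t size;
};

void CopyShardsAsync(const std::vector<Shard>& shards, cudaStream_t stream)
{
    for (const auto& s : shards) {
        cudaMemcpyAsync(s.device, s.host, s.size, cudaMemcpyHostToDevice, stream);
    }
    cudaStreamSynchronize(stream);  // single wait for the whole batch
}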

    size_t capacity_;
    size_t blockSize_;

    std::unordered_map<std::string, char*> addressMap_;
Contributor review comment:

I didn't find any lock here.
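As an illustration, a minimal sketch of guarding addressMap_ with a std::mutex, assuming the pool can be accessed from multiple threads; Lookup and Insert are illustrative method names, not the PR's actual interface:

#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative only: every access to addressMap_ takes the same mutex.
class MemoryPoolIndex {
public:
    char* Lookup(const std::string& blockId)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        auto it = addressMap_.find(blockId);
        return it == addressMap_.end() ? nullptr : it->second;
    }

    void Insert(const std::string& blockId, char* addr)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        addressMap_[blockId] = addr;
    }

private:
    std::mutex mutex_;
    std::unordered_map<std::string, char*> addressMap_;
};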

