Async FilesystemStore #3931

Open: wants to merge 1 commit into main from the async-fsstore branch

Conversation

@joostjager (Contributor) commented Jul 15, 2025

A trial implementation of an async store with eventually consistent writes. It just uses tokio's spawn_blocking, because that is what tokio::fs would otherwise do as well. Using tokio::fs directly would make it complicated to reuse the sync code.

ldk-node try out: lightningdevkit/ldk-node@main...joostjager:ldk-node:async-fsstore
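The offloading pattern described above can be sketched with std only. This is a hypothetical analogue, not the PR's actual code: a plain OS thread stands in for tokio's blocking-thread pool, and `write_sync`/`write_offloaded` are illustrative names.

```rust
use std::thread;

// Stand-in for FilesystemStore's real sync write; just reports bytes written.
fn write_sync(_key: &str, value: &[u8]) -> std::io::Result<usize> {
    Ok(value.len())
}

// Move the blocking call off the current thread and wait for its result,
// analogous to `spawn_blocking(...).await` inside `Box::pin(async move { .. })`.
fn write_offloaded(key: String, value: Vec<u8>) -> std::io::Result<usize> {
    thread::spawn(move || write_sync(&key, &value)).join().expect("worker panicked")
}

fn main() {
    let n = write_offloaded("channel_monitor".to_string(), vec![0u8; 32]).unwrap();
    println!("{}", n);
}
```

In the real PR the sync write is the existing `FilesystemStore` code path, so the async wrapper adds no new IO logic of its own.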


ldk-reviews-bot commented Jul 15, 2025

👋 I see @tankyleo was un-assigned.
If you'd like another reviewer assignment, please click here.

@joostjager joostjager changed the title Async fsstore Async FilesystemStore Jul 15, 2025
@joostjager joostjager force-pushed the async-fsstore branch 4 times, most recently from 29b8bcf to 81ad668 Compare July 15, 2025 13:40
let this = Arc::clone(&self.inner);

Box::pin(async move {
tokio::task::spawn_blocking(move || {
Contributor

Mhh, so I'm not sure if spawning blocking tasks for every IO call is the way to go (see for example https://docs.rs/tokio/latest/tokio/fs/index.html#tuning-your-file-io: "To get good performance with file IO on Tokio, it is recommended to batch your operations into as few spawn_blocking calls as possible."). Maybe there are other designs that we should at least consider before moving forward with this approach. For example, we could create a dedicated pool of longer-lived worker task(s) that process a queue?

If we use spawn_blocking, can we give the user control over which runtime this exactly will be spawned on? Also, rather than just doing wrapping, should we be using tokio::fs?
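For reference, the queue-based alternative floated here could look roughly like the following std-only sketch. A real design would presumably use a long-lived tokio task and an async channel; the `Job` type, channel shape, and worker loop are all illustrative.

```rust
use std::sync::mpsc;
use std::thread;

// Each write job carries a one-shot channel to report its result.
enum Job {
    Write { value: Vec<u8>, done: mpsc::Sender<usize> },
    Shutdown,
}

// One long-lived worker drains the queue, so per-operation spawn overhead
// is paid once instead of once per IO call.
fn start_worker() -> (mpsc::Sender<Job>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel::<Job>();
    let handle = thread::spawn(move || {
        for job in rx {
            match job {
                Job::Write { value, done } => {
                    // A real worker would perform the filesystem write here.
                    let _ = done.send(value.len());
                },
                Job::Shutdown => break,
            }
        }
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = start_worker();
    let (done_tx, done_rx) = mpsc::channel();
    tx.send(Job::Write { value: vec![0u8; 8], done: done_tx }).unwrap();
    println!("{}", done_rx.recv().unwrap());
    tx.send(Job::Shutdown).unwrap();
    handle.join().unwrap();
}
```

A single queue also makes it straightforward to preserve per-key write ordering, which matters for eventually consistent stores.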

Contributor Author

> Mhh, so I'm not sure if spawning blocking tasks for every IO call is the way to go (see for example https://docs.rs/tokio/latest/tokio/fs/index.html#tuning-your-file-io: "To get good performance with file IO on Tokio, it is recommended to batch your operations into as few spawn_blocking calls as possible.").

If we should batch operations, I think the current approach is better than using tokio::fs, because it already batches the various operations inside kvstoresync::write.

Further batching probably needs to happen at a higher level in LDK, and might be a bigger change. Not sure if that is worth it just for FilesystemStore, especially when that store is not the preferred store for real-world usage.

> For example, we could create a dedicated pool of longer-lived worker task(s) that process a queue?

Isn't Tokio doing that already when a task is spawned?

> If we use spawn_blocking, can we give the user control over which runtime this exactly will be spawned on? Also, rather than just doing wrapping, should we be using tokio::fs?

With tokio::fs, the current runtime is used. I'd think that is then also sufficient if we spawn ourselves, without a need to specify which runtime exactly?

More generally, I think the main purpose of this PR is to show how an async kvstore could be implemented, and to have something for testing potentially. Additionally, if there are users that really want to use this type of store in production, they could. But I don't think it is something to spend too much time on. A remote database is probably the more important target to design for.

}

 /// Provides additional interface methods that are required for [`KVStore`]-to-[`KVStore`]
 /// data migration.
-pub trait MigratableKVStore: KVStore {
+pub trait MigratableKVStore: KVStoreSync {
Contributor

How will we solve this for a KVStore?

Contributor Author

I think this comment belongs in #3905?

We might not need to solve it now, as long as we still require a sync implementation alongside an async one? If we support async-only kvstores, then we can create an async version of this trait?
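If it ever comes to that, an async variant of the trait might look roughly like the following sketch. Everything here is hypothetical and not part of the LDK API: the trait name, the `AsyncResult` alias, and the `poll_once` helper (a minimal noop-waker poller, included only so the example runs without an async runtime).

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative future alias mirroring common boxed-future return types.
type AsyncResult<'a, T> = Pin<Box<dyn Future<Output = Result<T, std::io::Error>> + Send + 'a>>;

// Hypothetical async counterpart to MigratableKVStore.
trait MigratableKVStoreAsync {
    /// Returns all keys, so a migration can copy entries one by one.
    fn list_all_keys(&self) -> AsyncResult<'_, Vec<String>>;
}

struct DummyStore;

impl MigratableKVStoreAsync for DummyStore {
    fn list_all_keys(&self) -> AsyncResult<'_, Vec<String>> {
        Box::pin(async { Ok(vec!["monitor/0".to_string()]) })
    }
}

// Minimal poller for futures that are immediately ready (demo only).
fn poll_once<T>(mut fut: Pin<Box<dyn Future<Output = T> + Send + '_>>) -> T {
    fn noop(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => panic!("demo future should be ready"),
    }
}

fn main() {
    let keys = poll_once(DummyStore.list_all_keys()).unwrap();
    println!("{}", keys[0]);
}
```

As noted above, such a trait would only become necessary if async-only KVStores are supported; as long as a sync implementation is required alongside, the existing sync trait suffices.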

@joostjager
Contributor Author

Removed garbage collector, because we need to keep the last written version.

@joostjager joostjager self-assigned this Jul 17, 2025
@joostjager joostjager mentioned this pull request Jul 17, 2025
24 tasks
@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 97d6b3f to 02dce94 Compare July 23, 2025 18:11

codecov bot commented Jul 23, 2025

Codecov Report

Attention: Patch coverage is 95.20% with 6 lines in your changes missing coverage. Please review.

Project coverage is 88.94%. Comparing base (55baa15) to head (dd9e1b5).

Files with missing lines Patch % Lines
lightning-persister/src/fs_store.rs 95.20% 2 Missing and 4 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3931      +/-   ##
==========================================
- Coverage   88.94%   88.94%   -0.01%     
==========================================
  Files         173      173              
  Lines      123794   123902     +108     
  Branches   123794   123902     +108     
==========================================
+ Hits       110114   110199      +85     
- Misses      11227    11244      +17     
- Partials     2453     2459       +6     
Flag Coverage Δ
fuzzing 22.21% <ø> (ø)
tests 88.76% <95.20%> (-0.01%) ⬇️

@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from c061fcd to 2492508 Compare July 24, 2025 08:31
@joostjager joostjager marked this pull request as ready for review July 24, 2025 08:32
@ldk-reviews-bot ldk-reviews-bot requested a review from tankyleo July 24, 2025 08:32
@joostjager joostjager force-pushed the async-fsstore branch 2 times, most recently from 9938dfe to 7d98528 Compare July 24, 2025 09:39
@joostjager joostjager force-pushed the async-fsstore branch 4 times, most recently from d4b7869 to 38ab949 Compare July 25, 2025 13:04
@joostjager
Contributor Author

joostjager commented Jul 25, 2025

Updated code to not use an async wrapper, but conditionally expose the async KVStore trait on FilesystemStore.

I didn't yet update the ldk-node branch using this PR, because it seems many other things broke in main again.

@joostjager joostjager requested review from tnull and removed request for tankyleo July 25, 2025 13:51
@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

3 participants