Conversation

zhangchiqing (Member) commented Oct 9, 2025

Working towards #7910

⚠️ This PR is not a full refactor of storage operations using the functor pattern; rather, it’s intended to start a discussion around the pattern itself. Please refer to the comments for specific discussion points.
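For context on the pattern itself, here is a minimal sketch of its shape, inferred from the identifiers that appear in the review threads below (Functor, BindFunctors, WrapError). The real definitions live in storage/operation/functor.go, so treat this as an illustration rather than the PR's actual code:

```go
package operation

import (
	"fmt"

	"github.com/jordanschalm/lockctx"

	"github.com/onflow/flow-go/storage"
)

// Functor is a deferred database operation: constructing it captures (and may
// precompute) the inputs, while executing it performs the writes under the
// given lock context.
type Functor func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error

// BindFunctors composes several functors into one that runs them in order,
// stopping at the first error.
func BindFunctors(fs ...Functor) Functor {
	return func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error {
		for _, f := range fs {
			if err := f(lctx, rw); err != nil {
				return err
			}
		}
		return nil
	}
}

// WrapError annotates any error from the wrapped functor with a fixed
// message, giving callers context that the generic helpers lack.
func WrapError(msg string, f Functor) Functor {
	return func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error {
		if err := f(lctx, rw); err != nil {
			return fmt.Errorf("%s: %w", msg, err)
		}
		return nil
	}
}
```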

codecov-commenter commented Oct 9, 2025

Codecov Report

❌ Patch coverage is 37.05584% with 124 lines in your changes missing coverage. Please review.

| Files with missing lines        | Patch % | Lines                        |
|---------------------------------|---------|------------------------------|
| storage/operation/functor.go    | 37.85%  | 80 Missing and 7 partials ⚠️ |
| storage/operation/guarantees.go | 0.00%   | 24 Missing ⚠️                |
| storage/operation/approvals.go  | 0.00%   | 8 Missing ⚠️                 |
| storage/operation/payload.go    | 0.00%   | 5 Missing ⚠️                 |


 }
 deferredBlockPersist.AddNextOperation(func(lctx lockctx.Proof, blockID flow.Identifier, rw storage.ReaderBatchWriter) error {
-	return operation.IndexLatestSealAtBlock(lctx, rw.Writer(), blockID, latestSeal.ID())
+	return operation.IndexingLatestSealAtBlock(blockID, latestSeal.ID())(lctx, rw)
Member Author

We mentioned that any database operation requiring a lock context could be refactored using the functor pattern, but I think this case might be an exception—even after applying the refactor.

The functor isn’t particularly useful here since we don’t have the block ID until we start executing the deferred database operations.

I went ahead and refactored it anyway to illustrate my point, but in this case, it doesn’t provide any performance benefits over the original version and only adds unnecessary complexity.

Thoughts?
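For comparison, a hypothetical sketch of where the construct/execute split does pay off: expensive preparation runs once when the functor is built, before any lock is held, and execution only appends to the batch. Here encodeValue is a stand-in for whatever encoder the repo actually uses, and Set is assumed to be the writer's append primitive:

```go
func Inserting(key []byte, val interface{}) Functor {
	encoded, encErr := encodeValue(val) // hypothetical encoder; runs at construction time
	return func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error {
		if encErr != nil {
			return fmt.Errorf("could not encode value: %w", encErr)
		}
		return rw.Writer().Set(key, encoded) // execution only appends precomputed bytes
	}
}
```

In the deferred case above, blockID only becomes available when the deferred operation executes, so construction and execution collapse into the same moment and nothing can be precomputed.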

}

// make sure all payload guarantees are stored
for _, guarantee := range payload.Guarantees {
Member Author

Instead of storing each guarantee individually, we pass in all the guarantees to be stored together: rather than creating N functors, we create a single functor that inserts and indexes all N guarantees.
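A hedged sketch of that batched form: the IDs and keys are computed once at construction, and one functor handles all N guarantees. The combinators mirror the ones used later in this PR; makeGuaranteeKey and makeCollectionKey are hypothetical stand-ins for the real key-construction helpers:

```go
func InsertAndIndexGuarantees(guarantees []*flow.CollectionGuarantee) Functor {
	n := len(guarantees)
	guaranteeIDKeys := make([][]byte, 0, n)
	collectionIDKeys := make([][]byte, 0, n)
	guaranteeIDs := make([]flow.Identifier, 0, n)
	for _, guarantee := range guarantees {
		id := guarantee.ID() // hashing happens once, at construction time
		guaranteeIDs = append(guaranteeIDs, id)
		guaranteeIDKeys = append(guaranteeIDKeys, makeGuaranteeKey(id))
		collectionIDKeys = append(collectionIDKeys, makeCollectionKey(guarantee.CollectionID))
	}
	return WrapError("InsertAndIndexGuarantees failed", BindFunctors(
		HoldingLock(storage.LockInsertBlock),
		WrapError("insert guarantee failed", OverwritingMul(guaranteeIDKeys, guarantees)),
		WrapError("index guarantee failed", InsertingMulWithMismatchCheck(collectionIDKeys, guaranteeIDs)),
	))
}
```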

return fmt.Errorf("could not store guarantees: %w", err)
}

// make sure all payload seals are stored
Member Author

In this PR, I'm using guarantees as an example to apply the functor pattern and start the discussion. I'd like to get feedback first and settle on the functor pattern before applying it to the storing operations for the rest of the payload, such as seals, results, etc.

))
}

type CollectionGuaranteeWithID struct {
Member Author

Why create this struct?

Because we would like to insert N guarantees with one functor instead of N functors, and we need to bundle all the data before passing it to the functor.

Why place this struct here?

It might not be the best place, but at least it's close to where it is used (InsertAndIndexGuarantees). I'm open to ideas for a better home for this struct.
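A plausible shape for the struct, following the rationale above (the field names are assumptions, not the PR's actual code):

```go
// CollectionGuaranteeWithID bundles a guarantee with its precomputed ID so a
// single functor can consume all N pairs without re-hashing at execution time.
type CollectionGuaranteeWithID struct {
	GuaranteeID flow.Identifier           // must equal Guarantee.ID()
	Guarantee   *flow.CollectionGuarantee
}
```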

return nil
}
errmsg := fmt.Sprintf("InsertAndIndexResultApproval failed with approvalID %v, chunkIndex %v, resultID %v",
approvalID, chunkIndex, resultID)
Member Author

Regarding the error message: functors like Overwriting and InsertingWithMismatchCheck are too generic to carry operation-specific context, so I used WrapError to include more of it.
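For concreteness, a hedged sketch of how that wrapping could look inside InsertAndIndexResultApproval, with errmsg being the message assembled above (approvalKey and chunkKey are hypothetical key helpers, and the exact index layout is an assumption):

```go
return WrapError(errmsg, BindFunctors(
	HoldingLock(storage.LockIndexResultApproval),
	Overwriting(approvalKey, approval),               // store the approval itself
	InsertingWithMismatchCheck(chunkKey, approvalID), // index it by (resultID, chunkIndex)
))
```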

storing := operation.InsertAndIndexResultApproval(approval)

return func(lctx lockctx.Proof) error {
if !lctx.HoldsLock(storage.LockIndexResultApproval) {
Member Author

This check is redundant, because InsertAndIndexResultApproval will already check the lock.
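For reference, a sketch of the lock-checking combinator this thread implies: the check is itself a functor, so composed operations verify the lock once and call sites need not repeat it:

```go
func HoldingLock(lockID string) Functor {
	return func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error {
		if !lctx.HoldsLock(lockID) {
			return fmt.Errorf("missing required lock: %s", lockID)
		}
		return nil
	}
}
```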


// Upserting returns a functor, whose execution will append the given key-value-pair to the provided
// storage writer (typically a pending batch of database writes).
func Upserting(key []byte, val interface{}) func(storage.Writer) error {
Member Author

Replaced by functors

return WrapError("InsertAndIndexGuarantees failed", BindFunctors(
	HoldingLock(storage.LockInsertBlock),
	WrapError("insert guarantee failed", OverwritingMul(guaranteeIDKeys, guarantees)),
	WrapError("index guarantee failed", InsertingMulWithMismatchCheck(collectionIDKeys, guaranteeIDs)),
Member Author

Wrapping errors when inserting multiple records is a bit challenging; I couldn't find a clean way to attach context to each individual write, so I ended up wrapping the whole process instead.
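For comparison, a hypothetical sketch of the per-write alternative: threading the index through each write gives item-level context, at the cost of noisier code. Upserting is the existing helper shown earlier in this diff; the rest is illustrative:

```go
func OverwritingMul[T any](keys [][]byte, vals []T) Functor {
	return func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error {
		for i, key := range keys {
			if err := Upserting(key, vals[i])(rw.Writer()); err != nil {
				return fmt.Errorf("overwriting record %d of %d failed: %w", i+1, len(keys), err)
			}
		}
		return nil
	}
}
```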

// Caller must ensure guaranteeID equals guarantee.ID()
// Caller must acquire the [storage.LockInsertBlock] lock
// It returns [storage.ErrDataMismatch] if a different guarantee is already indexed for the collection
func InsertAndIndexGuarantee(guaranteeID flow.Identifier, guarantee *flow.CollectionGuarantee) Functor {
zhangchiqing (Member Author) commented Oct 9, 2025

This function is unused; it's kept only for reference for now.

I initially created it by combining InsertGuarantee and IndexGuarantee and refactoring them with the functor pattern. I later decided to use InsertAndIndexGuarantees instead; see the comments there for my motivation.

I kept this one here for reference since it's easier to understand, and will clean it up once the discussion of the functor pattern is settled.

@zhangchiqing zhangchiqing marked this pull request as ready for review October 10, 2025 15:42
@zhangchiqing zhangchiqing requested a review from a team as a code owner October 10, 2025 15:42