[Storage] Refactor storage operations with functors #8026
base: master
Conversation
}
deferredBlockPersist.AddNextOperation(func(lctx lockctx.Proof, blockID flow.Identifier, rw storage.ReaderBatchWriter) error {
-	return operation.IndexLatestSealAtBlock(lctx, rw.Writer(), blockID, latestSeal.ID())
+	return operation.IndexingLatestSealAtBlock(blockID, latestSeal.ID())(lctx, rw)
We mentioned that any database operation requiring a lock context could be refactored using the functor pattern, but I think this case might be an exception, even after applying the refactor.
The functor isn't particularly useful here, since we don't have the block ID until we start executing the deferred database operations.
I went ahead and refactored it anyway to illustrate my point, but in this case it doesn't provide any performance benefit over the original version and only adds unnecessary complexity.
Thoughts?
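For context on the pattern under discussion, here is a minimal, self-contained sketch of the functor idea: expensive preparation (key derivation, encoding) happens once at construction time, and the returned closure only appends the prepared write to a batch when executed. All names here (`Batch`, `Functor`, the key format) are simplified stand-ins for illustration, not the actual flow-go types.

```go
package main

import "fmt"

// Batch is a stand-in for a pending batch of database writes.
type Batch struct {
	ops []string
}

func (b *Batch) Append(op string) { b.ops = append(b.ops, op) }

// Functor is a deferred database operation: preparation happens when
// the functor is constructed; execution only appends to the batch.
type Functor func(b *Batch) error

// IndexingLatestSealAtBlock illustrates the refactored style from the
// diff above: the constructor captures sealID up front and returns a
// functor to be run later, once the batch is available.
func IndexingLatestSealAtBlock(blockID, sealID string) Functor {
	// preparation work would happen here, once, at construction time
	key := "seal-at-block/" + blockID
	return func(b *Batch) error {
		b.Append(key + " -> " + sealID)
		return nil
	}
}

func main() {
	b := &Batch{}
	fn := IndexingLatestSealAtBlock("block-1", "seal-9")
	if err := fn(b); err != nil {
		panic(err)
	}
	fmt.Println(b.ops[0]) // seal-at-block/block-1 -> seal-9
}
```

As the comment above notes, the split only pays off when construction can run earlier than execution; when both happen back to back (as with deferred operations that receive the block ID late), the pattern adds indirection without benefit.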
}

// make sure all payload guarantees are stored
for _, guarantee := range payload.Guarantees {
Instead of storing each individual guarantee, we pass in all the guarantees to be stored together, so that instead of creating N functors, we create only one functor to insert and index N guarantees.
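A minimal sketch of the batching idea described above: all keys are computed once during construction, and a single functor carries all N inserts and index writes. The types and key formats here are hypothetical placeholders, not the actual flow-go API.

```go
package main

import "fmt"

// batch is a stand-in for a pending set of database writes.
type batch struct{ writes []string }

// guarantee is a simplified stand-in for flow.CollectionGuarantee.
type guarantee struct{ id, collectionID string }

// functor is a deferred batch operation (simplified signature).
type functor func(b *batch) error

// insertAndIndexGuarantees prepares keys for all N guarantees up front
// and returns a single functor, rather than one functor per guarantee.
func insertAndIndexGuarantees(gs []guarantee) functor {
	// construction phase: compute all keys once
	keys := make([]string, len(gs))
	for i, g := range gs {
		keys[i] = "guarantee/" + g.id
	}
	// execution phase: one closure appends all N insert + index writes
	return func(b *batch) error {
		for i, g := range gs {
			b.writes = append(b.writes, keys[i])
			b.writes = append(b.writes, "collection/"+g.collectionID+" -> "+g.id)
		}
		return nil
	}
}

func main() {
	b := &batch{}
	fn := insertAndIndexGuarantees([]guarantee{{"g1", "c1"}, {"g2", "c2"}})
	if err := fn(b); err != nil {
		panic(err)
	}
	fmt.Println(len(b.writes)) // 4: two writes per guarantee
}
```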
	return fmt.Errorf("could not store guarantees: %w", err)
}

// make sure all payload seals are stored
In this PR, I'm using guarantees as an example to apply the functor pattern and raise the discussion. I'd like to get feedback first and settle on the functor pattern before applying it to the storing operations for the rest of the payload, such as seals, results, etc.
))
}

type CollectionGuaranteeWithID struct {
Why create this struct?
Because we would like to insert N guarantees with one functor instead of N functors, and we need to bundle all the data before passing it to the functor.
Why place this struct here?
It might not be the best place, but at least it's close to where it is used (InsertAndIndexGuarantees). I'm open to ideas for a better place for this struct.
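A sketch of the bundling idea, assuming the motivation is to compute each guarantee's content hash once at construction time rather than inside the functor. The types and the `id()` placeholder are hypothetical simplifications of `flow.CollectionGuarantee` and its `ID()` method.

```go
package main

import "fmt"

// collectionGuarantee is a simplified stand-in for flow.CollectionGuarantee.
type collectionGuarantee struct{ collectionID string }

// id mimics the (potentially expensive) content hash; a placeholder here.
func (g collectionGuarantee) id() string { return "hash(" + g.collectionID + ")" }

// guaranteeWithID bundles a guarantee with its precomputed ID, so the
// hash is computed once up front and the functor never recomputes it.
type guaranteeWithID struct {
	id        string
	guarantee collectionGuarantee
}

// bundle pairs each guarantee with its ID before handing the slice to
// a single insert-and-index functor.
func bundle(gs []collectionGuarantee) []guaranteeWithID {
	out := make([]guaranteeWithID, len(gs))
	for i, g := range gs {
		out[i] = guaranteeWithID{id: g.id(), guarantee: g}
	}
	return out
}

func main() {
	bundled := bundle([]collectionGuarantee{{"c1"}, {"c2"}})
	fmt.Println(bundled[0].id) // hash(c1)
}
```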
	return nil
}
errmsg := fmt.Sprintf("InsertAndIndexResultApproval failed with approvalID %v, chunkIndex %v, resultID %v",
	approvalID, chunkIndex, resultID)
Regarding the error message: general-purpose functors like Overwriting and InsertingWithMismatchCheck are too generic to carry the caller's context, so I used WrapError to include more of it.
storing := operation.InsertAndIndexResultApproval(approval)

return func(lctx lockctx.Proof) error {
	if !lctx.HoldsLock(storage.LockIndexResultApproval) {
This is redundant, because InsertAndIndexResultApproval already checks the lock.
// Upserting returns a functor, whose execution will append the given key-value-pair to the provided
// storage writer (typically a pending batch of database writes).
func Upserting(key []byte, val interface{}) func(storage.Writer) error {
Replaced by functors
return WrapError("InsertAndIndexGuarantees failed", BindFunctors(
	HoldingLock(storage.LockInsertBlock),
	WrapError("insert guarantee failed", OverwritingMul(guaranteeIDKeys, guarantees)),
	WrapError("index guarantee failed", InsertingMulWithMismatchCheck(collectionIDKeys, guaranteeIDs)),
Wrapping errors for inserting multiple records is a bit challenging; I couldn't find a cleaner way to provide context for each individual write, so I ended up just wrapping the whole process.
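A sketch of the composition style shown in the diff above, assuming a `BindFunctors`-like combinator that runs functors in order and aborts on the first error. All names and the simplified no-argument `functor` signature are stand-ins for illustration.

```go
package main

import "fmt"

type functor func() error

// bindFunctors chains several functors into one: each runs in order,
// and the first error aborts the chain.
func bindFunctors(fns ...functor) functor {
	return func() error {
		for _, fn := range fns {
			if err := fn(); err != nil {
				return err
			}
		}
		return nil
	}
}

// wrapError prefixes any error from the inner functor with context.
func wrapError(msg string, fn functor) functor {
	return func() error {
		if err := fn(); err != nil {
			return fmt.Errorf("%s: %w", msg, err)
		}
		return nil
	}
}

func main() {
	var log []string
	step := func(name string) functor {
		return func() error {
			log = append(log, name)
			return nil
		}
	}
	// mirrors the shape of the diff: one outer wrap, per-step inner wraps
	combined := wrapError("InsertAndIndexGuarantees failed", bindFunctors(
		step("hold lock"),
		wrapError("insert guarantee failed", step("insert")),
		wrapError("index guarantee failed", step("index")),
	))
	if err := combined(); err != nil {
		panic(err)
	}
	fmt.Println(log) // [hold lock insert index]
}
```

Note that the inner wraps only distinguish the insert step from the index step; attributing an error to one specific record among N would require wrapping inside the multi-record functor itself, which is the difficulty described above.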
// Caller must ensure guaranteeID equals guarantee.ID()
// Caller must acquire the [storage.LockInsertBlock] lock
// It returns [storage.ErrDataMismatch] if a different guarantee is already indexed for the collection
func InsertAndIndexGuarantee(guaranteeID flow.Identifier, guarantee *flow.CollectionGuarantee) Functor {
This function is unused; it's kept only for reference for now.
I initially created it by combining InsertGuarantee and IndexGuarantee and refactoring with the functor pattern, but later decided to use InsertAndIndexGuarantees instead; see the comments there for my motivation.
I kept this one here for reference since it's easier to understand, and will clean it up after the discussion of the functor pattern is settled.
Working towards #7910