HDDS-11784. Allow aborting FSO multipart uploads with missing parent directories #7700
base: master
Conversation
@sokui Thanks for the patch.

Please check the test failures.

`getMultipartKeyFSO` is used by a lot of MPU flows, and with this change `multipartInfoTable` would be accessed twice. I think we can create another function similar to `getMultipartKeyFSO` that takes the `OmMultipartKeyInfo`, and use it only for the abort case. This means switching the order in `S3MultipartAbortRequest`: first read from `multipartInfoTable`, then from `openFileTable`. All other implementations are welcome.
Also let's add a simple test as suggested in #7566 (comment)
For example, there is a directory "/a" and a pending MPU key with path "/a/mpu_key" that was initiated but has not been completed / aborted yet. After the MPU key is initiated, the directory "/a" is deleted; since mpu_key has not been completed yet and does not exist in fileTable, DIRECTORY_NOT_EMPTY will not be thrown in OMKeyDeleteRequestWithFSO#validateAndUpdateCache. Therefore mpu_key is orphaned, and when it is completed / aborted, it will fail in OMFileRequest#getParentID since the parent directory has been deleted.
My consideration is that `org.apache.hadoop.ozone.om.request.util.OMMultipartUploadUtils#getMultipartOpenKey` is used in multiple places, including `S3ExpiredMultipartUploadsAbortRequest` and `S3MultipartUploadAbortRequest`. By updating one place here, it benefits both (neither works currently). Secondly, if my current implementation of `getMultipartKeyFSO` is more reliable, there is no reason to limit this benefit to `S3MultipartUploadAbortRequest`; all the other places should use it as well.
My worry is that there might be some places where `OMMultipartUploadUtils#getMultipartKeyFSO` is expected to access the open key/file table, or places where the `multipartInfoTable` entry does not exist yet, which might result in an NPE that could crash the OM (we should handle the possible NPE). However, I'm OK with it as long as there are no test regressions.
For the tests, I will take a look at the failures. For the new test you suggested, do you know if there is a similar existing test that I can reference? I am not super familiar with the Ozone code base, so if there is no similar code, could you please show me a code snippet I can start with? Really appreciate it!
You can start with the `TestOzoneClientMultipartUploadWithFSO` integration test.
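A hedged starting point along those lines, reproducing the orphaned-MPU scenario described above. The client calls follow the `OzoneBucket` API used in that test class, but the test name is hypothetical and the exact overloads (`bucket` here is the test class's `OzoneBucket` fixture) should be verified:

```java
@Test
public void testAbortMultipartUploadWithMissingParentDirectory()
    throws Exception {
  String keyName = "a/mpu_key";
  bucket.createDirectory("a");

  // Initiate but never complete the MPU, so a pending entry exists only
  // in multipartInfoTable / openFileTable, not in fileTable.
  OmMultipartInfo multipartInfo = bucket.initiateMultipartUpload(keyName);
  String uploadId = multipartInfo.getUploadID();

  // Deleting "a" succeeds because mpu_key is not in fileTable yet, so
  // OMKeyDeleteRequestWithFSO does not raise DIRECTORY_NOT_EMPTY; the
  // pending upload is now orphaned.
  bucket.deleteDirectory("a", true);

  // Before the fix this failed in OMFileRequest#getParentID because the
  // parent directory no longer exists; with the fix it should succeed.
  bucket.abortMultipartUpload(keyName, uploadId);
}
```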
```java
final OmMultipartKeyInfo multipartKeyInfo =
    getMultipartInfoTable().get(nonFSOMultipartKey);
```
Please handle a null `OmMultipartKeyInfo` if the entry does not exist in `multipartInfoTable`, to prevent an NPE that might crash all OMs, since `RuntimeException` is not caught in `validateAndUpdateCache`.
Sure
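Something along these lines should work (a minimal sketch; the exact error code, message, and the surrounding variables such as `keyName` and `uploadId` are assumptions based on typical Ozone request handling):

```java
final OmMultipartKeyInfo multipartKeyInfo =
    omMetadataManager.getMultipartInfoTable().get(nonFSOMultipartKey);
if (multipartKeyInfo == null) {
  // Surface a proper OMException instead of letting an NPE escape
  // validateAndUpdateCache and crash the OM.
  throw new OMException(
      "Abort multipart upload failed: no such upload, key: " + keyName
          + ", uploadId: " + uploadId,
      OMException.ResultCodes.NO_SUCH_MULTIPART_UPLOAD_ERROR);
}
```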
```java
final String nonFSOMultipartKey =
    getMultipartKey(volume, bucket, key, uploadId);
```
Let's rename `nonFSOMultipartKey` to `multipartKey`. The name `nonFSOMultipartKey` is quite confusing since we are dealing with an FSO MPU key.
TBH, I think `multipartKey` is ambiguous. A `multipartKey` could be an FSO multipartKey or a non-FSO multipartKey; since we are dealing with FSO here, I intentionally named it `nonFSOMultipartKey` so that the reader understands we should not directly return this value. If you think `multipartKey` conventionally means a non-FSO multipartKey, I can rename it. Pls let me know. Thanks!
I started with a test using the following code, but it seems I cannot delete the directory. It gave me the following error:

I checked the

Just wondering: is it deleting a key or a directory? And if it is a directory, why did I get the above exception?
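For reference, the `OzoneBucket` client distinguishes the two operations. A hedged illustration (the exact semantics, particularly the non-recursive failure mode, are assumptions to verify against the code base):

```java
// Deletes a file (key) entry in an FSO bucket.
bucket.deleteKey("a/mpu_key");

// Deletes the directory "a"; with recursive=false this is expected to
// fail with DIRECTORY_NOT_EMPTY if "a" still has children.
bucket.deleteDirectory("a", false);

// Recursively deletes "a" and everything under it.
bucket.deleteDirectory("a", true);
```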
What changes were proposed in this pull request?
HDDS-11784: handle missing parent directories in the MPU abort and expired-MPU abort requests.

We observed many open keys (files) in our FSO-enabled Ozone cluster, all of them incomplete MPU keys. When I tried to abort an MPU using the s3 cli as below, I got an exception complaining that the parent directory was not found.
Exceptions in the log
This issue is caused by the missing parent directories. The cause and the solution are explained here: #7566 (comment).
There is another PR, which we closed, where we had some conversation about which approach should be taken; please see #7566.
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-11784
How was this patch tested?
It is tested by CI. We also validated it in our cluster.