v9fs: Add socket support #15597
base: master
Conversation
[Experimental Bot, please feedback here] This PR mostly meets the NuttX requirements, but could be improved by providing more specific details. Here's a breakdown:
Strengths:
Weaknesses & Suggestions for Improvement:
In short: The PR is on the right track, but needs more detail, particularly in the Impact and Testing sections, to fully meet the NuttX requirements. Be as explicit and thorough as possible to ensure reviewers can properly assess the changes.
@crafcat7 please fix the document error
Done, now it has passed the document check.
if (ret < 0 ||
    (!(flags & MSG_WAITALL) && ret < msg->msg_iov[i].iov_len))
update the nrecv if ret < iov_len
The current implementation is designed so that if an error occurs in one of the segments when receiving multiple iov segments, the data already received is returned.
If we are not in MSG_WAITALL, then as long as there is one recv, we can directly return ret.
In the case of multiple iov segments, when we ensure that ret >= 0, each iov segment is received only once and nrecv is returned.
ret already indicates that you have received some data, why not return this value to the user?
Assuming msg->msg_iovlen > 1 and MSG_WAITALL, nbytes will be returned only after all iov_lens are filled. If a recv error occurs in the middle, the nbytes received so far will be reported to the caller, because nbytes has been added to ret in the for loop.
If it is not MSG_WAITALL, we only need to receive once and report that to the caller directly, which is equivalent to the original method.
So I would like to ask: is your idea to fill each iov_len segment even when the MSG_WAITALL flag is not set?
@@ -88,11 +88,6 @@ ssize_t psock_recvmsg(FAR struct socket *psock, FAR struct msghdr *msg,
      return -EINVAL;
    }

  if (msg->msg_iovlen != 1)
Why not implement multi-message recv in this function?
It's work for a future improvement; your contribution is welcome.
Let us hold this commit. The protocol layer does not have a multi-segment data mapping implementation, so I think the multi-segment reception problem should be resolved in the VFS, rather than requiring similar changes for each protocol.
@@ -21,4 +21,9 @@ config V9FS_VIRTIO_9P
	depends on DRIVERS_VIRTIO
	default n

config V9FS_SOCKET_9P
Suggested change:
- config V9FS_SOCKET_9P
+ config V9FS_SOCKET
But virtio is V9FS_VIRTIO_9P.
The current naming format is V9FS_[TRANSPORT NAME]_9P. If we change to V9FS_SOCKET, then VIRTIO should also be changed to V9FS_VIRTIO.
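Following that convention, the new option would sit next to the existing V9FS_VIRTIO_9P roughly like this. This is a sketch only: the `depends on NET_TCP` line and the help text are assumptions, not taken from the PR:

```kconfig
config V9FS_SOCKET_9P
	bool "9P socket transport"
	depends on NET_TCP
	default n
	---help---
		Enable the socket-based transport for the 9P file system,
		allowing V9FS to reach a 9P server over TCP instead of virtio.
```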
Summary: 1. Add a new API for socket header parsing - v9fs_parse_size 2. Add a socket driver for 9pfs Signed-off-by: chenrun1 <[email protected]>
To prepare for supporting multiple iov in each protocol. Signed-off-by: Zhe Weng <[email protected]>
Summary: Implement multiple-iovlen recvmsg in inet_sockif; only TCP mode is supported at present. Signed-off-by: chenrun1 <[email protected]>
Signed-off-by: chenrun1 <[email protected]>
Summary
In #13001, I added a new distributed file system in NuttX, which only supports VIRTIO as a data transmission method.
In this PR, I implemented a socket-based transmission method, so that V9FS can access content on the host over a socket, which suits more scenarios.
I also added the V9FS document to explain how to use it.
In short:
- Implemented a socket-based v9fs driver
- Enhanced the networking code: TCP now supports multi-segment IOV sending
- Added a V9FS document
Impact
Testing
Build Host(s): Linux x86
Target(s): qemu
Passed local file system related tests