Hi, welcome to the SNIA Computational Storage Standards presentation for the Compute Memory and Storage Summit 2024. I am Bill Martin. I am co-chair of the SNIA Computational Storage TWG, as well as co-chair of the SNIA Technical Council. And I work for Samsung Semiconductors. And I'd like to turn it over to my co-presenter, Jason Molgaard.
Hello, everyone. I'm Jason Molgaard, and I'm also a co-chair with Bill on the SNIA Computational Storage Technical Workgroup and the SNIA Technical Council. I work for Solidigm.
All right, so briefly on the agenda, we'll talk about what's going on with SNIA computational storage standardization. We'll take a look at the SNIA computational storage architecture as well as the API, do a brief comparison between SNIA and NVMe computational storage to understand what's going on between the two organizations, and then take a quick look at a new initiative that we're working on: computational storage with SDXI.
So, on the current progress of the TWG, we've got two main documents that we've been developing for some time. We've got the architecture document: version 1.0 was released in August of 2022, and it was awarded Most Innovative Memory Technology at FMS in 2022. In the meantime, we've been working on a 1.1 with two key enhancements. We've got security enhancements for multiple tenants, and those changes have been incorporated into a draft. And we've got sequencing of commands, and we're finalizing the incorporation of that text into the 1.1 as well. On the API, the 1.0 was released in October of 2023, and it received the Most Innovative Memory Technology award at FMS last year as well. And we have a 1.1 under development, which is primarily enhancements and editorial corrections.
So now let's briefly look at the architecture.
SNIA has defined three different architectural models for computational storage, shown in the three diagrams on this slide. Over on the left, we have the computational storage processor, which has computational storage resources in a device connected to the fabric, but no actual device storage. In the middle, we have the computational storage drive, which has computational storage resources in what would be considered a more traditional solid state drive or storage device; there would actually be storage media in this device in addition to the compute resources. On the right, we have the computational storage array. This is like a traditional array that you're familiar with, but with computational storage resources, so it can perform compute in that array. The drives connected in that array may also be computational storage drives. And just a bit of nomenclature: CSx is the abbreviation for a computational storage device, and that is any one of the computational storage processor, computational storage drive, or computational storage array.
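As a rough illustration of that taxonomy (this is not code or terminology taken from the SNIA specification beyond the CSx/CSP/CSD/CSA names, and the attributes are assumptions for illustration), the three models can be thought of as variations on a common CSx abstraction:

```python
"""Illustrative model of the SNIA CS architectural taxonomy.

The class names mirror the SNIA terms (CSx, CSP, CSD, CSA); the attributes
are illustrative only and are not drawn from the specification text.
"""
from dataclasses import dataclass, field
from typing import List


@dataclass
class CSx:
    """Computational Storage Device: anything that holds compute resources."""
    name: str
    compute_resources: List[str] = field(default_factory=list)


@dataclass
class CSP(CSx):
    """Computational Storage Processor: compute resources, no device storage."""


@dataclass
class CSD(CSx):
    """Computational Storage Drive: compute resources plus storage media."""
    storage_media_gb: int = 0


@dataclass
class CSA(CSx):
    """Computational Storage Array: an array with compute resources whose
    member drives may themselves be CSDs."""
    member_drives: List[CSx] = field(default_factory=list)


# Any of the three concrete models can be handled through the CSx abstraction.
devices: List[CSx] = [
    CSP("csp0", compute_resources=["compression"]),
    CSD("csd0", compute_resources=["filter"], storage_media_gb=4096),
    CSA("csa0", compute_resources=["erasure-code"],
        member_drives=[CSD("csd1", compute_resources=["filter"],
                           storage_media_gb=8192)]),
]
for dev in devices:
    print(type(dev).__name__, dev.name, dev.compute_resources)
```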
As I mentioned, one of the enhancements that we've been working on is the sequencing of commands. The intent of sequencing is to enable sequences of CSFs to execute in succession. So the sequence would be invoked, it would execute a series of steps in order, and those steps are actual computational storage functions that we want to execute in quick succession with minimal host involvement. This is all handled by an aggregator CSF that we are defining and including in that 1.1. That aggregator CSF manages the execution of the sequence and tracks the completion of each CSF. It could be downloaded or pre-installed into your device, and it enables fixed sequences or variable sequences defined by parameters. Error handling will also be handled either by the aggregator or the host. So look for this coming in the 1.1 in the near future. So now I'm going to turn it over to Bill to talk a little bit about security.
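To make that flow concrete, here is a minimal sketch of how an aggregator CSF might run a fixed sequence of CSFs with a single host invocation. The aggregator class, the placeholder CSFs, and the error-handling policy are assumptions for illustration; the actual mechanism is defined in the architecture 1.1 draft.

```python
"""Illustrative sketch of an aggregator CSF running a fixed sequence of CSFs."""
from typing import Callable, List


class CSFError(Exception):
    """Raised when a CSF in the sequence fails."""


def decrypt(data: bytes) -> bytes:          # placeholder CSF
    return bytes(b ^ 0x5A for b in data)


def decompress(data: bytes) -> bytes:       # placeholder CSF
    return data * 2


def filter_records(data: bytes) -> bytes:   # placeholder CSF
    return data[: len(data) // 2]


class AggregatorCSF:
    """Runs CSFs in order on the device and tracks completion of each step.

    The host invokes the aggregator once; each step's output feeds the next
    step without further host involvement.
    """

    def __init__(self, sequence: List[Callable[[bytes], bytes]],
                 handle_errors_on_device: bool = True):
        self.sequence = sequence
        self.handle_errors_on_device = handle_errors_on_device
        self.completed: List[str] = []

    def execute(self, data: bytes) -> bytes:
        for csf in self.sequence:
            try:
                data = csf(data)
            except Exception as exc:
                if self.handle_errors_on_device:
                    raise CSFError(f"{csf.__name__} failed after "
                                   f"{self.completed}") from exc
                raise  # surface the raw error to the host instead
            self.completed.append(csf.__name__)
        return data


# A single host invocation kicks off the whole fixed sequence.
agg = AggregatorCSF([decrypt, decompress, filter_records])
result = agg.execute(b"\x10\x20\x30\x40")
print(agg.completed, len(result))
```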
Thank you, Jason. So just a brief overview: before getting to version 1.1, let me review the assumptions we made for what we put into version 1.0 for security. The assumptions were that the environment consists of a single physical host or virtual host with one or more computational storage devices, and that the host is responsible for the security of the ecosystem that the CSxes operate within. So we're not looking at trying to secure the environment; that's the host's responsibility. And the CSx security requirements are comparable to the security requirements common to SSDs and HDDs. Elevated privileges are necessary for the operations covered by the security considerations. Next slide, Jason.
And so what are we doing for version 1.1? This is really what we wanted to get to here: to show you where we're going. The assumptions have broadened for 1.1. The assumption now is an environment that consists of multiple physical hosts or multiple virtual hosts with one or more CSxes. The CSx security requirements are still comparable to the security requirements common to SSDs and HDDs in a multi-tenant environment. So we've moved up to the multi-tenant environment. What do we need? We need trust relationships between the host, whether it's a virtual host or a physical host, and the device. In order to build a trust relationship, you need identification: that's the exchange of identification information between the participating parties, the host and the device. You need authentication: authentication is done following identification, and it is the exchange of authentication information with the same element that the identification was done with. Authorization is done following authentication, to authorize specific actions on specific resources, and it may be done at a lower-level element than the element that was authenticated. And then finally, there's access control, which controls access to the elements of the CSx that are within the scope of the authorization; that may be access to a CSE, a CSEE, or a CSF. The key here is that different elements of the trust relationship may be at different levels. So identification and authentication may be at the CSx level, while authorization may be at a CSEE that is within the CSx, and access control may be at a CSF that's activated within the CSEE. All of those specific details are given in version 1.1 of the architecture document.
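A rough sketch of how those four steps can layer onto different elements, with identification and authentication at the CSx, authorization at a CSEE, and access control at a CSF. The data structures, tenant names, and policy checks below are illustrative assumptions, not the normative requirements in architecture 1.1.

```python
"""Illustrative model of the layered trust relationship described above."""
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class CSxTrust:
    """Identification and authentication happen at the CSx level."""
    identified_hosts: Set[str] = field(default_factory=set)
    authenticated_hosts: Set[str] = field(default_factory=set)
    # Authorization is granted per CSEE within the CSx ...
    csee_authorizations: Dict[str, Set[str]] = field(default_factory=dict)
    # ... and access control is enforced per CSF activated in a CSEE.
    csf_access: Dict[str, Set[str]] = field(default_factory=dict)

    def identify(self, host: str) -> None:
        self.identified_hosts.add(host)

    def authenticate(self, host: str, credential_ok: bool) -> None:
        if host in self.identified_hosts and credential_ok:
            self.authenticated_hosts.add(host)

    def authorize(self, host: str, csee: str) -> None:
        if host in self.authenticated_hosts:
            self.csee_authorizations.setdefault(csee, set()).add(host)

    def may_invoke(self, host: str, csee: str, csf: str) -> bool:
        """Access control: the host must hold authorization on the CSEE and
        access to the specific CSF."""
        return (host in self.csee_authorizations.get(csee, set())
                and host in self.csf_access.get(csf, set()))


trust = CSxTrust()
trust.identify("vm-tenant-a")
trust.authenticate("vm-tenant-a", credential_ok=True)
trust.authorize("vm-tenant-a", csee="csee0")
trust.csf_access["compress"] = {"vm-tenant-a"}
print(trust.may_invoke("vm-tenant-a", "csee0", "compress"))  # True
print(trust.may_invoke("vm-tenant-b", "csee0", "compress"))  # False
```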
The other document that we're working on is the API.
And so, for an overview of what's going on with the API: the API is one set of functions for all CSx types. The API is designed to hide the device details, hardware connectivity, and all of that, and simply present an interface that allows applications to operate utilizing a computational storage device through the SNIA CS API library. So it abstracts the device details like discovery, access, management, storage and memory access, and the download and execution of CSFs. If there are functions available on the device, then those functions are passed on down to the device. There are some things, like memory management, that are actually handled by the driver, and so the API works with the driver to provide the memory management. The biggest thing here is also that the API is operating system agnostic, so that you can utilize the interface for an application that runs on any OS. Each OS would then have its own library, applicable to that OS, that allows the application to operate on each of those operating systems.
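As an illustration of the kind of flow the library enables (discover a CSx, open it, allocate device memory, move data, execute a CSF), here is a sketch. Every name below is a hypothetical placeholder standing in for a library call; it is not the actual SNIA CS API interface, which is defined in the API 1.0 specification.

```python
"""Illustrative application flow against a CS API-style library."""


class HypotheticalCsLib:
    """Stand-in for an OS-specific CS API library implementation."""

    def discover_devices(self):
        return ["/dev/csx0"]

    def open_device(self, path):
        return {"path": path, "functions": ["filter"]}

    def alloc_device_memory(self, dev, nbytes):
        return bytearray(nbytes)          # memory management sits with the driver

    def copy_to_device(self, dev, buf, data):
        buf[: len(data)] = data

    def execute_csf(self, dev, csf_name, buf):
        if csf_name not in dev["functions"]:
            raise ValueError(f"CSF {csf_name!r} not available on {dev['path']}")
        return bytes(b for b in buf if b)  # placeholder "filter" result

    def close_device(self, dev):
        pass


cs_lib = HypotheticalCsLib()

# The application never touches device details; it only uses the library.
for path in cs_lib.discover_devices():
    dev = cs_lib.open_device(path)
    buf = cs_lib.alloc_device_memory(dev, 4096)
    cs_lib.copy_to_device(dev, buf, b"\x01\x00\x02\x00\x03")
    result = cs_lib.execute_csf(dev, "filter", buf)
    print(path, result)
    cs_lib.close_device(dev)
```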
Next, I'd like to talk briefly about how SNIA and NVMe computational storage work together.
So NVMe computational storage and the SNIA computational storage architecture both have released standards. NVMe computational storage was ratified in January of this year. It contains two different command sets: one is the computational programs command set, and the other is the subsystem local memory command set. But the key here is that NVMe computational storage implements the SNIA computational storage model. So if you look at the architecture, that architecture is reflected in what was done in NVMe, and the two organizations worked closely together to develop those two coordinated standards: the SNIA standard being the overall architecture, while the NVMe standard is an actual individual implementation on NVMe. Finally, the SNIA API supports the NVMe computational storage standard. The API was developed with the NVMe computational storage standard in mind, and the functions that are provided there support that standard.
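Conceptually, the NVMe mapping splits the work across those two command sets: data is staged from the NVM into subsystem local memory, a device-side program operates on that memory, and the host reads back only the results. The sketch below models just that high-level flow; the method names are assumptions for illustration and are not NVMe command names or opcodes.

```python
"""Conceptual model of the NVMe computational storage flow described above."""


class NvmeCsSubsystemModel:
    def __init__(self):
        self.nvm = {0: b"raw,records,to,filter"}   # NVM namespace (LBA -> data)
        self.slm = bytearray(64)                   # subsystem local memory

    # -- roughly the role of the subsystem local memory command set --
    def stage_to_local_memory(self, lba: int, offset: int) -> int:
        data = self.nvm[lba]
        self.slm[offset: offset + len(data)] = data
        return len(data)

    def read_local_memory(self, offset: int, length: int) -> bytes:
        return bytes(self.slm[offset: offset + length])

    # -- roughly the role of the computational programs command set --
    def execute_program(self, offset: int, length: int) -> int:
        """Run a device-side program over a local-memory range; here the
        'program' just uppercases the bytes in place."""
        region = self.slm[offset: offset + length].upper()
        self.slm[offset: offset + length] = region
        return length


subsys = NvmeCsSubsystemModel()
n = subsys.stage_to_local_memory(lba=0, offset=0)   # NVM -> local memory
subsys.execute_program(offset=0, length=n)          # compute near the data
print(subsys.read_local_memory(0, n))               # host reads only results
```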
And with that, I'll turn it back over to Jason to talk about computational storage and SDXI.
Thank you, Bill. So yeah, one of the things that the computational storage TWG is doing is we have a subset of people working in a subgroup with the SDXI technical workgroup that's at SNIA. And if you're not familiar with what SDXI is, it stands for Smart Data Accelerator Interface. And it's essentially a standard for memory to memory data movement and acceleration. It is extensible, forward compatible, independent of I/O interconnect technology. And it provides data transformation features. The version 1.0 was published in November of 2022. And I encourage you to go take a look at the link when you get the slides, and you can read more about it.
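At a very high level, that data movement is expressed as work descriptors that a producer submits and the data mover executes. The sketch below is a simplified conceptual model of that idea; the descriptor fields and ring mechanics are assumptions for illustration, not the SDXI 1.0 descriptor format.

```python
"""Conceptual model of descriptor-driven memory-to-memory movement."""
from dataclasses import dataclass
from typing import List


@dataclass
class CopyDescriptor:
    src: bytearray     # source memory region
    dst: bytearray     # destination memory region
    length: int


class DataMoverModel:
    """Executes queued descriptors in order, like a simple descriptor ring."""

    def __init__(self):
        self.ring: List[CopyDescriptor] = []

    def submit(self, desc: CopyDescriptor) -> None:
        self.ring.append(desc)

    def process(self) -> None:
        while self.ring:
            d = self.ring.pop(0)
            d.dst[: d.length] = d.src[: d.length]


src = bytearray(b"table-of-records")
dst = bytearray(len(src))
mover = DataMoverModel()
mover.submit(CopyDescriptor(src=src, dst=dst, length=len(src)))
mover.process()
print(dst.decode())
```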
You may be asking yourself, well, why would computational storage be interested in a data mover? And I think there are potentially several different answers. One is that you want to perform computation where it is best in any type of computational storage architecture. In other words, if you have a device that has the appropriate compute but does not have the data that you need to operate on, then you may want to move that data over to the device and perform the operation there. And ideally, you want to do that in a peer-to-peer type of operation where you're not burdening the host; after all, the goal of computational storage is to offload the host. So I think that's definitely one very important thing. And without question, the other trend that we're seeing is that memory is everywhere, and because memory is everywhere, the data could actually be placed anywhere; once again, you may need to get it into the right location for computation. SDXI also has some transformations, or some small computational ability, if you will. So if you want to transform the data as you're moving it, that could actually be part of the computation solution that you want to deploy in your computational storage device. And that also makes for a really great pairing with computational storage, because it effectively acts as the computational storage engine for a function that you want to complete at the same time that you're moving the data. This picture, I understand, is kind of busy; it shows all of the different places where it might make sense to have data movement, and where you could have SDXI traffic helping to move that data. Certainly, I'm not going to cover every single arrow on this slide, but the red arrows are where the device initiates the transfer, and the green arrows are where the host initiates the transfer. You can see that many of the paths have both red and green arrows, where either the device or the host could initiate a transfer to move some data into your computational storage device. And part of the goal here from our development is not only to identify these use cases and figure out how they would work, but also to keep in mind all of the work that's gone into the CS API, as well as the libSDXI library that the SDXI TWG is developing, so that they play together in a seamless fashion. So this is very much in its infancy and definitely requires more development. But if this is interesting to you, then by all means, we definitely encourage you to come join us.
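Tying the two ideas together, here is a sketch of that pairing: a device-initiated, peer-to-peer style move that applies a transformation in flight, landing the data next to the CSF that will consume it, with no host copy in the path. The peer devices, the transform, and the CSF are all assumptions for illustration and are not drawn from either specification.

```python
"""Illustrative pairing of an SDXI-style mover with a computational storage CSF."""
from typing import Callable, Dict


class PeerDeviceModel:
    """A device with local memory that can initiate moves to a peer without
    staging the data through host memory."""

    def __init__(self, name: str):
        self.name = name
        self.memory: Dict[str, bytes] = {}

    def move_to_peer(self, key: str, peer: "PeerDeviceModel",
                     transform: Callable[[bytes], bytes] = lambda b: b) -> None:
        # Device-initiated transfer; the transform runs as part of the move,
        # analogous to SDXI's data transformation features.
        peer.memory[key] = transform(self.memory[key])


def sum_records_csf(data: bytes) -> int:
    """Placeholder CSF running on the computational storage device."""
    return sum(data)


memory_expander = PeerDeviceModel("memory-device")
csd = PeerDeviceModel("csd0")

memory_expander.memory["dataset"] = b"\x00\x07\x00\x09\x03"
# Move the data to where the compute lives, stripping zero padding in flight,
# without involving the host in the transfer.
memory_expander.move_to_peer("dataset", csd,
                             transform=lambda b: b.replace(b"\x00", b""))

print(csd.name, sum_records_csf(csd.memory["dataset"]))  # compute next to the data
```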
And that's true for not only the SDXI and computational storage, but in general with everything that we've got going on in either in SNIA or the computational storage TWG. So with that, thank you for joining today. We're glad to have you listen to this and watch this video. On behalf of Bill Martin, I'm Jason Molgaard. Have a great rest of your day.