Hello, this is Anil Godbole, Marketing Workgroup Co-Chair of the CXL Consortium. I'll begin by thanking the folks at MemVerge for giving me this opportunity to share an update on behalf of the CXL Consortium.
Let me begin by saying that Consortium membership continues to grow, which speaks to the growing interest in the CXL protocol: today we are at 280-plus member companies, and we now have a whole ecosystem of vendors, adopters, and potential customers for CXL-related products.
This slide gives you a feel for that ecosystem. As you can see, there are eight different categories of CXL products shown here, and we have companies offering products in every one of them.
Before I go on, I want to share some industry trends that I think you will all agree with. AI adoption continues to grow for all kinds of productivity-enhancement applications; every day someone tells you about a new tool that takes meeting minutes or summarizes your meetings. Data footprints also continue to grow: everyone is doing something on their phone, and so much data gets created every day. All that data is not just stored; it needs to be analyzed, and that is creating demand for more compute and memory resources inside the data center. At the same time, the cost of memory continues to rise. We all know DRAM is not scaling as fast as we would like; the manufacturers are doing their best and continue to offer the best price they can, but memory still ends up being the highest bill-of-materials item in a server BOM today. So enterprises are seeking ways to reduce overall memory costs, either through disaggregation techniques like memory pooling or through reuse of lower-cost memory. All in all, these trends are acting as big demand drivers for everything the CXL protocol has to offer.
With all those demand drivers from the previous slide, various analysts have published estimated TAMs for CXL-related devices; I'm sharing one here from the Yole Group. For 2026 they expect a TAM of about $2.1 billion, which continues to grow to $15.8 billion by 2028. Keep in mind that only now are the major CPU manufacturers, like Intel, finally offering broad availability of CXL-capable hosts. To date, Intel offered CXL only on selected SKUs, but with the launch of their BHS, or Birch Stream, platform, every single CPU on that platform will have CXL capability. The vendor ecosystem is keeping up with that: many vendors are offering CXL 2.0 devices right now, and I'll give you a feel for that on the next slide.
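As a quick sanity check on those figures, the growth rate implied by the two TAM numbers quoted in the talk can be worked out directly (a rough sketch; only the $2.1B/2026 and $15.8B/2028 endpoints come from the slide, the rest is arithmetic):

```python
# Implied compound annual growth rate (CAGR) from the TAM figures
# quoted in the talk: $2.1B in 2026 growing to $15.8B in 2028.
tam_2026 = 2.1   # billions USD (from the talk)
tam_2028 = 15.8  # billions USD (from the talk)
years = 2028 - 2026  # two year-over-year steps

cagr = (tam_2028 / tam_2026) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 174% per year
```

In other words, the forecast assumes the market nearly triples each year over that window, which is consistent with CXL-capable hosts only now reaching broad availability.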
And here is the proof: this is the report from the latest Compliance Workgroup event, which took place in December 2024. The growth of the ecosystem, and the availability of devices, can be seen from this report. I've highlighted the 50-plus devices figure: as of the end of last year, more than 50 products, spanning various CPUs and CXL devices, have passed compliance testing and are now on the CXL Integrators List.
We also continue to update the spec. This slide shows the timeline of how the spec has evolved; the latest revision is version 3.2, released back in December. And keep in mind, as demand for new features shows up, the CXL Consortium is very aware of it and will update the spec accordingly.
And here is a graphic way I like to present the evolution of the CXL spec. Version 1.1 was really only about adding memory at the server node itself; that's shown on the lowest tier there. CXL 2.0 was, you could say, rack scale, or scale-up. Within the rack, with support for memory pooling, you can imagine a blade that is just a bunch of memory with a CXL switch in front; it can take CXL requests from other hosts within the rack and offer memory on loan. That was CXL 2.0. Then the 3.x generation took it a step further. Not only do we now support multiple levels of switches, so that one CXL cluster can accommodate up to 4,096 hosts or endpoints, but we also added the memory-sharing feature. Big data-crunching applications, like MapReduce-style workloads, can now take advantage of this architecture. With memory sharing, you don't need to send data from one host to another over Ethernet or InfiniBand networks; the data is already there in the memory pool, and you simply pass a reference from one host to the other.
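To make the pass-a-reference idea concrete, here is a toy Python analogy (not real CXL APIs; the pool, handle names, and sizes are all made up for illustration): with a shared memory pool, the producing host writes data once, and the consuming host reads it in place, so only a small handle ever crosses between hosts.

```python
# Toy analogy for CXL 3.x memory sharing (not real CXL interfaces).
# The shared pool holds the data once; hosts exchange a small handle
# (reference) instead of copying the payload host-to-host.

shared_pool: dict[str, bytes] = {}  # stands in for a shared CXL memory pool

def producer_writes(handle: str, data: bytes) -> str:
    """Host A places data in the shared pool and returns a handle."""
    shared_pool[handle] = data
    return handle  # only this small reference needs to reach Host B

def consumer_reads(handle: str) -> bytes:
    """Host B reads the same data directly from the pool."""
    return shared_pool[handle]

# Host A produces a large intermediate result once...
ref = producer_writes("shuffle-partition-7", b"x" * 1_000_000)
# ...and Host B consumes it by reference: no 1 MB copy over the network.
payload = consumer_reads(ref)
print(f"bytes exchanged between hosts: {len(ref)} (just the handle)")
```

The design point is the same one the talk makes about MapReduce-style workloads: the expensive shuffle step becomes a handle exchange rather than a bulk data transfer.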
And here is a more detailed slide. For every spec revision we list the major new things that got added. Of course, we don't have time to go through all of it, so I'm including it as a reference. But roughly speaking, the 3.x revisions, like I said, raised the number of nodes CXL can support, added memory sharing, and also added the TEE Security Protocol, which extends confidential computing. Until now, confidential computing was mainly confined to the host and its directly attached memory; with CXL-attached memory devices, it is being extended there as well. You can take a look at all these different features.
And that was my last slide. I'll summarize by saying that in version 3.2, the latest revision, we optimized the features listed in the bullets below. We optimized CXL memory device monitoring and management; the big feature here is what's called the hotness monitoring unit. If a page within the CXL device gets hot, it can now be migrated. Today's CPU cannot keep up with tracking all the different CXL pages, so having the device do that should be a big help to the CPU. We made various other updates as well; take a look. Looking forward, as I said, the Consortium's technical working groups continue to develop the spec to keep up with demand for new features in the marketplace. And let me also put in a plug for the CXL Consortium: join the Consortium. We are already strong, but the more participants, the better. Thanks once again for your attention. That's my update.
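As a rough illustration of what a hot-page monitor does, here is a simplified Python sketch (this is an analogy, not the actual CXL 3.2 hotness monitoring unit interface; the class name and threshold are invented for illustration): the device counts accesses per page and reports pages that cross a threshold, so host software can migrate those pages to faster memory without the CPU having to track every page itself.

```python
# Simplified sketch of device-side hot-page tracking, in the spirit of
# the CXL 3.2 hotness monitoring unit (not its real interface).
from collections import Counter

class HotPageMonitor:
    """Counts accesses per page and flags pages that cross a threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counts: Counter[int] = Counter()

    def record_access(self, page: int) -> None:
        # Device-side counter bumped on each access; the host CPU
        # does not have to track this itself.
        self.counts[page] += 1

    def hot_pages(self) -> list[int]:
        # Pages the host OS might choose to migrate to faster memory.
        return [p for p, n in self.counts.items() if n >= self.threshold]

monitor = HotPageMonitor(threshold=3)
for page in [0x10, 0x20, 0x10, 0x10, 0x30, 0x20]:
    monitor.record_access(page)
print(monitor.hot_pages())  # → [16]  (page 0x10 was accessed 3 times)
```

The point of offloading this to the device, as the talk notes, is that the counting happens where the accesses land, and the host only consumes the short list of hot pages.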