@NicolasHug (Contributor) commented Sep 25, 2025

This is the first of a long series of PRs implementing our own NVCUVID decoder backend via a new BETA CUDA interface.

High-level context

  • NVDEC == hardware decoder present on an Nvidia GPU. It's literally a physical piece of silicon that can decode videos.
  • NVCUVID == the C library that we can use to program NVDEC. Docs are here.

Currently in main, we support CUDA decoding by going through the NVCUVID implementation of FFmpeg. This makes it very easy for us, but we have limited control of the underlying resources. Specifically, caching the underlying CUDecoder can lead to massive performance gains, but relying on FFmpeg's NVCUVID implementation doesn't allow us to control that.

So this PR implements a separate, independent CUDA decoding path (the BETA CUDA interface), where we directly rely on NVCUVID. A lot of our implementation was inspired by DALI's.

Design principles

As much as possible, I tried to make this new decoder require minimal changes to our existing code-base, treating it purely as a new extension by creating a new DeviceInterface.

Crucially, we're still demuxing through FFmpeg. That is, we still rely on FFmpeg to generate the AVPacket from a stream. This is key: it means we can leave the SingleStreamDecoder largely untouched. All of the new decoding logic is encapsulated in a new BetaCudaDeviceInterface which exposes new methods, like:

  • sendPacket(AVPacket), which is the moral equivalent of avcodec_send_packet(AVPacket)
  • receiveFrame(AVFrame), which is the moral equivalent of avcodec_receive_frame(AVFrame)

This wasn't trivial, because NVCUVID isn't designed to work well with the non-blocking send/receive API of FFmpeg (see design alternatives section below).
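
To make the send/receive mirroring concrete, here is a minimal sketch of the moral decode loop. This is not the actual SingleStreamDecoder code: the `Decoder` stand-in and the exact signatures are assumptions, only the overall pattern is the point.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Hypothetical stand-in for the two new DeviceInterface methods.
struct Decoder {
  virtual int sendPacket(AVPacket* packet) = 0;
  virtual int receiveFrame(AVFrame* frame) = 0;
  virtual ~Decoder() = default;
};

void decodeAllFrames(AVFormatContext* fmtCtx, Decoder& decoder, AVPacket* packet, AVFrame* frame) {
  while (true) {
    int status = decoder.receiveFrame(frame);
    if (status == AVERROR(EAGAIN)) {
      // No frame is ready yet: demux the next packet and feed it to the decoder.
      if (av_read_frame(fmtCtx, packet) < 0) {
        break; // end of stream (a real loop would also flush the decoder)
      }
      decoder.sendPacket(packet);
      av_packet_unref(packet);
      continue;
    }
    if (status < 0) {
      break; // EOF or error
    }
    // `frame` now holds a decoded frame, in display order.
  }
}
```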

What is and isn't supported right now

Support is very limited ATM, and definitely buggy (don't worry, everything is private).

  • h264 only
  • seek_mode="exact" only
  • index-based APIs only
  • no backwards seeks

In terms of features, adding more codec support as well as approximate seek_mode will be top priority. Supporting time-based and backwards seeks is lower priority, and may never happen; it will depend on how hard that is.

There are plenty of other things that aren't supported, and a million TODOs. Also, most guards preventing you from doing bad things aren't in place yet: if you look at the new decoder the wrong way it will get angry at you and deadlock (at best). I will be working through all the features, bugs, and guards in follow-ups.

How to review this

  • Ignore the header files in nvcuvid_include. They're the headers from NVCUVID. We have to vendor them because they're not part of the normal CUDA toolkit.
  • Start with the tests to get an idea of the Python API, and of what is currently supported. You'll see that we can request the new BETA interface by passing device="cuda:0:beta". That's not a valid pytorch device string, so we can consider this API to be private; it is subject to change anyway. For now we just need a convenient way to expose this in Python, for testing.
  • Take a look at _video_decoder.py to further explore how the Python API is exposed.
  • Take a brief look at the changes made to the DeviceInterface registration in DeviceInterface.[h, cpp]: previously, we could only register one interface per device type (one interface for CPU, one for CUDA, etc.). Now, we can register multiple interfaces per device type with the "device_variant" key. This is how we can enable both device="cuda" and device="cuda:0:beta" and have both the default and the beta CUDA interfaces (see the registration sketch after this list).
  • Now look at SingleStreamDecoder and pay attention to the new extension points added to the DeviceInterface. You'll see sendPacket, receiveFrame, and a few other things. They mirror the existing FFmpeg APIs. Don't get too scared by the new bit-stream filtering (BSF) logic: it's just something that converts an AVPacket into another AVPacket with a different binary format. It's needed for some codecs. Eventually, we'll move that away from the SingleStreamDecoder and encapsulate it within the interface (this is a TODO).
  • Now you can start looking at the code of BetaCudaDeviceInterface. You're now in a whole new rabbit hole with a lot of new concepts. We're literally writing our own decoder here, and that's not something we've done in TorchCodec before (so far, we've always relied on FFmpeg). There's too much to write to describe how it works, and anything I write now will likely be obsolete within a few days, so let's go over it in our sync.
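
The registration sketch referenced above. This is purely illustrative: registerDeviceInterface, the variant key argument, and the factory shapes are assumptions, the actual signatures in DeviceInterface.h may differ.

```cpp
#include <memory>
#include <torch/types.h>

// Hypothetical: register two interfaces for the same device type, keyed by a
// "device_variant" string, so both device="cuda" and device="cuda:0:beta"
// can resolve to a concrete implementation.
static const bool defaultCudaRegistered = registerDeviceInterface(
    torch::kCUDA, /*deviceVariant=*/"default", [](const torch::Device& device) {
      return std::make_unique<CudaDeviceInterface>(device);
    });

static const bool betaCudaRegistered = registerDeviceInterface(
    torch::kCUDA, /*deviceVariant=*/"beta", [](const torch::Device& device) {
      return std::make_unique<BetaCudaDeviceInterface>(device);
    });
```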

Previous design considerations (rejected)

I discussed a lot of that during meetings already, but just for ref:

  • I thought about registering ourselves as a HWAccel object. The HWAccel registration is what FFmpeg does in its own NVCUVID backend (which is what we use as our current CUDA interface). There would be a major upside to registering ourselves as a HWAccel: we would be able to rely on FFmpeg's native parser and FFmpeg's frame-reordering logic, instead of relying on NVCUVID's parser and having to implement the frame-reordering logic ourselves. See this? This is converting FFmpeg's parser info (right side) to NVCUVID's built-in frame info (left side). Among other things, this frame metadata is needed for the NVDEC hardware decoder to know which frame it should decode first when there are frame dependencies (like B-frames). If we were to implement our own HWAccel we'd have to implement this metadata mapping ourselves, for all codecs. That's obviously far from trivial in terms of required knowledge, but more importantly, it's technically impossible: all of the FFmpeg parser info is private. It's not ABI stable and it relies on private headers anyway. So NVCUVID's claim that users can rely on a third-party parser instead of its own isn't really true for us. No one, except FFmpeg themselves, can use the FFmpeg parser. We have to build our own parser (not happening), or rely on NVCUVID's parser.

  • NVCUVID is designed around callbacks which are triggered while a packet is being parsed. Crucially, there is the pfnDisplayPicture callback, which is triggered when a frame is fully decoded and ready to be displayed, in display order (which is great). Unfortunately, we cannot rely on this callback: it is triggered by the parser within a call to cuvidParseVideoData(packet), which means that the only way to know that a frame is ready is to send a packet. I guess that probably makes sense in streaming applications? But that's incompatible with FFmpeg's non-blocking send/receive APIs, where we should be able to query whether a frame is ready without having to send a packet. So if we were to rely on this callback, we'd likely need massive changes to our SingleStreamDecoder architecture - which is something we don't want to do. So, we can't rely on this callback, and we have to figure out the frame display order ourselves (a rough sketch of the callback wiring is below). EDIT: I might be wrong about this. We may be able to rely on the pfnDisplayPicture callback after all.
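
A rough sketch of that callback wiring, to make the point concrete. The handler names are hypothetical; the types and functions come from the vendored nvcuvid headers.

```cpp
#include "nvcuvid.h" // vendored header

extern "C" {
#include <libavcodec/packet.h>
}

// Hypothetical handler names; the signatures are the ones nvcuvid.h expects.
// Returning 1 tells the parser to keep going.
static int CUDAAPI onSequence(void*, CUVIDEOFORMAT*) { return 1; }        // stream/format change
static int CUDAAPI onDecodePicture(void*, CUVIDPICPARAMS*) { return 1; }  // picture ready to decode
static int CUDAAPI onDisplayPicture(void*, CUVIDPARSERDISPINFO*) { return 1; } // ready, display order

CUvideoparser createParser(void* userData) {
  CUVIDPARSERPARAMS params = {};
  params.CodecType = cudaVideoCodec_H264;
  params.ulMaxNumDecodeSurfaces = 1; // typically updated from the sequence callback
  params.pUserData = userData;
  params.pfnSequenceCallback = onSequence;
  params.pfnDecodePicture = onDecodePicture;
  params.pfnDisplayPicture = onDisplayPicture;
  CUvideoparser parser = nullptr;
  cuvidCreateVideoParser(&parser, &params);
  return parser;
}

void parsePacket(CUvideoparser parser, const AVPacket* avPacket) {
  CUVIDSOURCEDATAPACKET cuvidPacket = {};
  cuvidPacket.payload = avPacket->data;
  cuvidPacket.payload_size = avPacket->size;
  // All three callbacks fire synchronously from inside this call: the only
  // way to learn that a frame is ready is to send another packet.
  cuvidParseVideoData(parser, &cuvidPacket);
}
```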

@NicolasHug changed the title from "BETA CUDA interface: NVCUVID decoder implementation" to "BETA CUDA interface: NVCUVID decoder implementation 1/N" Sep 25, 2025
@facebook-github-bot:
@NicolasHug has imported this pull request. If you are a Meta employee, you can view this in D83333067.

"Provided (width * height / 256): ",
videoFormat->coded_width * videoFormat->coded_height / 256,
" vs supported:",
caps.nMaxMBCount);
Contributor:
Any pointers to docs that talk about macroblocks? I can intuit roughly what this is, but a reference would be great. Particularly if it explains the magic 256.

Contributor Author (@NicolasHug):
I think it refers to this which is a video encoding/decoding concept. My very very rough understanding is that a frame is divided into macroblocks (square patches) during the encoding process (https://github.com/leandromoreira/digital_video_introduction#1st-step---picture-partitioning)

Arguably the error message isn't very user friendly, but I don't think we can make it user-friendly while keeping it informative?

Contributor:
Ah, yeah, I don't think we have any hope of having an informative error message here. I'm thinking more about our own understanding of where 256 came from when reading the code. Is it coming from a limitation in NVDEC, a limitation from the h264 format, a combination of them, or something else entirely?

Contributor Author (@NicolasHug):
From what I understand, it's a codec capability limitation, not necessarily specific to h264. The 256 constant comes from the description of that field in the header:

  unsigned int nMaxMBCount; /**< OUT: Max supported macroblock count
                                      CodedWidth*CodedHeight/256 must be <= nMaxMBCount */
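
If I'm reading this right, the 256 is just the size of a macroblock: an h264 macroblock is 16x16 pixels, i.e. 256 pixels, so coded_width * coded_height / 256 is the number of macroblocks per frame. For example, a 1920x1080 stream is 1920*1080/256 = 8100 macroblocks, and that count must stay below nMaxMBCount.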

unsigned int frameRateNum = videoFormat_.frame_rate.numerator;
unsigned int frameRateDen = videoFormat_.frame_rate.denominator;
int64_t duration = static_cast<int64_t>((frameRateDen * timeBase_.den)) /
(frameRateNum * timeBase_.num);
Contributor:
We should probably pull the time utilities out of SingleStreamDecoder and put them in FFMPEGCommon. It's reasonable to do since they're really time utilities for FFmpeg.

Contributor Author (@NicolasHug):
I don't think we have a similar logic within SingleStreamDecoder where we compute a duration from the frame rate and the timebase. Maybe I misunderstand?

Contributor:
We don't do this same calculation in SingleStreamDecoder? My thinking was that we should try to keep all calculations with AVRationals in convenience functions to make sure we're doing the right thing.

Contributor Author (@NicolasHug):
Hm, the only uses of AVRational I can find in SingleStreamDecoder are

double ptsToSeconds(int64_t pts, const AVRational& timeBase) {
  // To perform the multiplication before the division, av_q2d is not used
  return static_cast<double>(pts) * timeBase.num / timeBase.den;
}

int64_t secondsToClosestPts(double seconds, const AVRational& timeBase) {
  return static_cast<int64_t>(
      std::round(seconds * timeBase.den / timeBase.num));
}

but that's converting ints to and from floats. Here, we're computing a duration while keeping all pts as ints.
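
If we did want to centralize that kind of AVRational math, a convenience helper could look something like this (just a sketch of what could live in FFMPEGCommon, not what this PR does):

```cpp
#include <cstdint>

extern "C" {
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
}

// One frame lasts 1/frameRate seconds; rescale that to time-base units to get
// the per-frame duration as an integer, with av_rescale_q handling the rounding.
int64_t frameDurationInTimeBase(const AVRational& frameRate, const AVRational& timeBase) {
  return av_rescale_q(1, av_inv_q(frameRate), timeBase);
}
```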

Contributor Author (@NicolasHug):
BTW the code above will fail with a zero-division error if videoFormat_.frame_rate is not set, which happens in our AV1 video. I'll fix that ASAP when adding AV1 support.

Comment on lines +304 to +309
// Free the original packet's data which isn't needed anymore, and move the
// fields of the filtered packet into the original packet. The filtered packet
// fields are re-set by av_packet_move_ref, so when it goes out of scope and
// gets destructed, it's not going to affect the original packet.
av_packet_unref(packet.get());
av_packet_move_ref(packet.get(), filteredPacket.get());
Contributor Author (@NicolasHug):
I ... I hate this, I think?

We're modifying the input ReferenceAVPacket& packet's internal buffers in place. It means that when you call interface_->sendPacket(avPacket) (and thus applyBSF(avPacket)), the avPacket is modified after the call.

I think the alternative is for applyBSF() to return the new filtered packet. But that's not trivial either: this means we need to declare the new AutoAVPacket in the right place, to prevent it from being freed before it's used. I think we'd need to do that either:

  • In sendPacket(), and then pass it down to applyBSF() - meh?
  • or at construction of the interface, by storing the AutoAVPacket as a field - meh?

CC @scotts
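
For reference, the ownership semantics this relies on, illustrated with plain AVPackets (this is just standard FFmpeg behavior, not new code in the PR):

```cpp
extern "C" {
#include <libavcodec/packet.h>
}

// dst's old buffers are released, then dst takes over src's buffers.
// av_packet_move_ref resets src to a blank packet, so src's later cleanup
// won't touch the data now owned by dst.
void replacePacket(AVPacket* dst, AVPacket* src) {
  av_packet_unref(dst);
  av_packet_move_ref(dst, src);
}
```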

unsigned height;
cudaVideoChromaFormat chromaFormat;
unsigned int bitDepthLumaMinus8;
unsigned char numDecodeSurfaces;
Contributor:
Let's be explicit about integer size. I know we're getting these values from the CUDA code, but we can be explicit about what we're storing them in. For example, I'm not actually sure what integer size width will be here - I think it will become a uint32_t, but I'm not sure what the C++ rules are.

Contributor Author (@NicolasHug):
That's an oversight on my part!
I defined them as unsigned int now - is that what you wanted, or do you prefer types like uint32_t?
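
For the record, the fixed-width alternative would look like this (field names copied from the snippet above; the struct name is hypothetical, and cudaVideoChromaFormat comes from the vendored cuviddec.h):

```cpp
#include <cstdint>

#include "cuviddec.h" // vendored NVCUVID header, for cudaVideoChromaFormat

// Explicit integer widths, so the storage size doesn't depend on what
// "unsigned" or "unsigned char" happen to be on a given platform.
struct VideoFormatFields {
  uint32_t width;
  uint32_t height;
  cudaVideoChromaFormat chromaFormat;
  uint32_t bitDepthLumaMinus8;
  uint8_t numDecodeSurfaces;
};
```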

@scotts (Contributor) commented Sep 30, 2025
@NicolasHug, forgot to say yesterday: amazing work! I think we're on the right path, this is very promising.
