@jim60105 (Contributor) commented Jul 3, 2025

This is a reissue of #1098 and I mentioned it in the closing comment of PR #1133.
This version of the code is currently released through the https://github.com/jim60105/docker-whisperX project.

jim60105 added 4 commits July 3, 2025 05:48
- Add a .python-version file to pin the project to Python 3.11
- Update README setup to require CUDA toolkit 12.8 instead of 12.4 (Linux and Windows)
- Constrain the project’s Python requirement to ≥3.10, <3.13
- Bump torch dependency from 2.6.0 to 2.7.1
- Switch the PyTorch CUDA wheel index from cu124 to cu128
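Taken together, the commits above could be expressed in pyproject.toml roughly as follows. This is a minimal sketch, not the PR's actual diff; the index name `pytorch-cu128` and the exact uv table layout are illustrative:

```toml
[project]
requires-python = ">=3.10,<3.13"
dependencies = [
    "torch==2.7.1",
]

# Pull torch from the CUDA 12.8 wheel index instead of PyPI
[tool.uv.sources]
torch = { index = "pytorch-cu128" }

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

With `explicit = true`, the index is only used for packages that name it in `[tool.uv.sources]`, so everything else still resolves from PyPI.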

Signed-off-by: CHEN, CHUN <[email protected]>
…n README"

This reverts commit 6fe0a87.

The issue of relying on two different versions of CUDNN in this project has been resolved.

Signed-off-by: CHEN, CHUN <[email protected]>
- Only download torch from the PyTorch index; obtain all other packages from PyPI. There is a chance it can run on Python 3.9.
- Restrict numpy, onnxruntime, and pandas to versions compatible with Python 3.9

Signed-off-by: CHEN, CHUN <[email protected]>
- Add triton version 3.3.0 or newer to the dependencies to support arm64 architecture.

Signed-off-by: CHEN, CHUN <[email protected]>
@jim60105 changed the title from "build: update dependencies and Python version requirements" to "build: bump torch to 2.7.1 and CUDA 12.8 support" on Jul 3, 2025
@fhalde commented Jul 5, 2025

My understanding is that PyTorch wheels come pre-built with the required CUDA runtime libraries bundled. I’m a bit confused, then: who are the official CUDA installation instructions meant for in that case? Thanks!

@jim60105 (Contributor, Author) commented Jul 5, 2025

Docs from PyTorch

https://pytorch.org/get-started/locally/

Docs from NVIDIA

Windows

https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html

Windows WSL

https://docs.nvidia.com/cuda/wsl-user-guide/index.html#nvidia-compute-software-support-on-wsl-2

Linux, macOS

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/

Then on a native Linux workstation with Podman, you probably work with CDI.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html

That's a lot of documentation. I usually have Windows users install NVIDIA's CUDA Toolkit installer, and let Linux users read those documents. (...Linux users probably know what they should do.)
And within the pip scope, fetch torch through the PyTorch index to avoid compatibility issues.
In fact, pip cannot obtain all CUDA dependencies on its own; I have encountered this issue in other projects.
In my experience, these instructions are the most reliable.

@wywwzjj commented Jul 6, 2025

Thanks

@Saicheg commented Jul 16, 2025

Really looking forward to this being merged so I can upgrade my infrastructure!

jim60105 added 2 commits July 17, 2025 15:14
- Add a platform marker to the triton dependency to skip it on Windows, as triton does not support Windows.

Signed-off-by: Jim Chen <[email protected]>
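Combined with the earlier triton bump, the resulting dependency entry might look something like this. A sketch only; the exact environment marker in the PR may differ (a later commit narrows triton further, to Linux only):

```toml
[project]
dependencies = [
    # triton ships no Windows wheels, so skip it there;
    # >=3.3.0 is needed for arm64 wheel support
    "triton>=3.3.0; sys_platform != 'win32'",
]
```

The `sys_platform` condition is a standard PEP 508 environment marker, so both pip and uv will honor it at resolve time.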
@kalvin807

❯ uv add git+https://github.com/m-bain/whisperx --rev 251602f1220dc21d336d46eb1ebe100da74813cf
Using CPython 3.11.11
Creating virtual environment at: .venv
Resolved 123 packages in 754ms
error: Distribution `torch==2.7.1+cu128 @ registry+https://download.pytorch.org/whl/cu128` can't be installed because it doesn't have a source distribution or wheel for the current platform

hint: You're on macOS (`macosx_15_0_arm64`), but `torch` (v2.7.1+cu128) only has wheels for the following platforms: `manylinux_2_28_aarch64`, `manylinux_2_28_x86_64`, `win_amd64`; consider adding your platform to `tool.uv.required-environments` to ensure uv resolves to a version with compatible wheels

Thanks for the patch!

It does not install on macOS via a git install.
I believe it needs an extra setting in uv to tell it to install the CPU build of PyTorch on Mac -> https://docs.astral.sh/uv/guides/integration/pytorch/#configuring-accelerators-with-optional-dependencies
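For reference, the `tool.uv.required-environments` setting from uv's hint looks like this in pyproject.toml. A sketch based on uv's documentation; note it only forces the resolver to pick versions with wheels for the listed platforms, and does not by itself switch torch to a CPU build:

```toml
[tool.uv]
# Require that resolved versions also have wheels for Apple Silicon macOS
required-environments = [
    "sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```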

- macOS uses CPU-only PyTorch from pytorch-cpu index
- Linux and Windows use CUDA 12.8 PyTorch from pytorch index
- triton only installs on Linux with CUDA 12.8 support
- Update lockfile to support multi-platform builds
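The per-platform split described above can be sketched with uv's indexes and markers. The index names here are illustrative and the actual commit may differ:

```toml
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "sys_platform == 'darwin'" },
    { index = "pytorch-cu128", marker = "sys_platform != 'darwin'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

uv picks the source whose marker matches the installing platform, so a single lockfile can serve macOS (CPU wheels) and Linux/Windows (CUDA 12.8 wheels).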
@jim60105 (Contributor, Author)

@kalvin807 Please try using 0e7153b to see if it works on Mac.

@kalvin807

It installed successfully. Thanks!

@humblenginr

Thanks a bunch @jim60105! It works for me with CUDA 12.8, torch 2.7.1, and cuDNN 9.10.

I hope this gets merged soon @m-bain @seset

@levon003

I've started using this branch in a CPU-based Docker image running on Linux, and it's working fine. Thanks for facing down these dependencies!

@lefnire commented Aug 31, 2025

Thanks @jim60105, I spent countless hours down the libcudnn_ops_infer.so.8 rabbit hole. pip-installed your branch and bingo.

@T-Atlas commented Sep 6, 2025

LGTM
