
Conversation

@HelloCard
Contributor

nvcc warning : incompatible redefinition for option 'compiler-bindir', the last value of this option was used
(Worker_TP0 pid=5917) ERROR 10-10 12:22:46 [multiproc_executor.py:671] [4/4] /root/miniconda3/bin/x86_64-conda-linux-gnu-c++ sampling/sampling.cuda.o sampling/renorm.cuda.o sampling/flashinfer_sampling_binding.cuda.o -shared -L/root/miniconda3/lib -L/root/miniconda3/lib64 -L/root/miniconda3/lib64/stubs -lcudart -lcuda -o sampling/sampling.so
(Worker_TP0 pid=5917) ERROR 10-10 12:22:46 [multiproc_executor.py:671] FAILED: [code=1] sampling/sampling.so
(Worker_TP0 pid=5917) ERROR 10-10 12:22:46 [multiproc_executor.py:671] /root/miniconda3/bin/x86_64-conda-linux-gnu-c++ sampling/sampling.cuda.o sampling/renorm.cuda.o sampling/flashinfer_sampling_binding.cuda.o -shared -L/root/miniconda3/lib -L/root/miniconda3/lib64 -L/root/miniconda3/lib64/stubs -lcudart -lcuda -o sampling/sampling.so
(Worker_TP0 pid=5917) ERROR 10-10 12:22:46 [multiproc_executor.py:671] /root/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: cannot find -lcuda: No such file or directory

This is because on WSL2, libcuda.so is located in /usr/lib/wsl/lib:
microsoft/WSL#8587

This change also adds $cuda_home/lib so that libcudart.so, installed to /root/miniconda3/lib by "conda install cuda -c nvidia", can be found.

📌 Description

Add the linker search path for the WSL2 CUDA driver library in cpp_ext.py, as well as the path for the libraries installed by "conda install cuda -c nvidia" in WSL2.
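A minimal sketch of the kind of change described, with the path and variable names assumed for illustration rather than taken from the actual cpp_ext.py:

```python
import os

# Hypothetical sketch, not the actual cpp_ext.py code: extend the linker
# search paths so ld can resolve -lcuda (WSL2 driver library) and
# -lcudart (conda-installed CUDA runtime). The default below is assumed.
cuda_home = os.environ.get("CUDA_HOME", "/root/miniconda3")

ldflags = [
    "-shared",
    f"-L{cuda_home}/lib",        # libcudart.so from `conda install cuda -c nvidia`
    f"-L{cuda_home}/lib64",
    f"-L{cuda_home}/lib64/stubs",
    "-L/usr/lib/wsl/lib",        # libcuda.so location on WSL2
    "-lcudart",
    "-lcuda",
]
print(ldflags)
```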

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

After the change, -lcudart and -lcuda resolve correctly in my environment.

@gemini-code-assist
Contributor

Summary of Changes

Hello @HelloCard, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses common linking errors encountered when building projects that depend on CUDA libraries within a WSL2 environment, particularly when CUDA is managed via Conda. By extending the linker search paths in the build configuration, it ensures that the necessary CUDA runtime and driver libraries are correctly found, resolving compilation failures related to missing -lcuda and -lcudart.

Highlights

  • WSL2 CUDA Library Path: Added /usr/lib/wsl/lib to the linker flags (ldflags) to ensure that the cuda.so library can be correctly located when compiling in a Windows Subsystem for Linux 2 (WSL2) environment.
  • Conda CUDA Library Path: Included $cuda_home/lib in the linker flags to resolve issues with finding libcudart.so when CUDA is installed using the conda install cuda -c nvidia command.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a build issue on WSL2 where the linker couldn't find CUDA libraries by adding the necessary library paths. The fix is correct for WSL2 and conda environments. My review points out that hardcoding the WSL-specific path could be problematic and suggests a more robust, conditional approach. I also noticed a small typo in the pull request title ('problom').


ldflags = [
    "-shared",
    "-L/usr/lib/wsl/lib",
Contributor


Severity: high

Hardcoding the WSL-specific path /usr/lib/wsl/lib may cause issues on non-WSL Linux systems. It would be more robust to add this path conditionally. You could check for WSL and append the flag only when needed. For example:

if sys.platform == 'linux' and 'microsoft' in os.uname().release.lower():
    ldflags.append('-L/usr/lib/wsl/lib')

This would require restructuring the ldflags list initialization slightly.
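For concreteness, the conditional approach suggested above might be restructured like this (a sketch under assumptions; `build_ldflags` is a hypothetical helper, not a function in cpp_ext.py):

```python
import os
import sys

def build_ldflags():
    # Start from the flags common to all platforms.
    ldflags = ["-shared"]
    # Append the WSL-specific driver-library path only when actually
    # running under WSL (the kernel release string contains "microsoft").
    if sys.platform == "linux" and "microsoft" in os.uname().release.lower():
        ldflags.append("-L/usr/lib/wsl/lib")
    return ldflags
```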

Contributor Author


The link command already uses the path /root/miniconda3/lib64, which does not exist on my device, so I think passing search paths that may not exist is already widespread. Adding new detection logic for this case may reduce the reliability of the build logic, so this proposed change needs more discussion.

Collaborator


I would encourage adding some functions like find_cuda_lib() and find_cudart_lib(), where the internal logic could be platform-dependent. @HelloCard wdyt?
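As a rough illustration of what such helpers might look like (a sketch with assumed paths; these functions do not exist in flashinfer):

```python
import os
import platform

def find_cuda_lib():
    """Candidate directories for the driver library libcuda.so."""
    dirs = []
    # WSL2 exposes the Windows driver's libcuda.so here (microsoft/WSL#8587).
    if "microsoft" in platform.uname().release.lower():
        dirs.append("/usr/lib/wsl/lib")
    # Link-time stubs shipped with the toolkit on regular Linux.
    cuda_home = os.environ.get("CUDA_HOME", "/usr/local/cuda")
    dirs.append(os.path.join(cuda_home, "lib64", "stubs"))
    return [d for d in dirs if os.path.isdir(d)]

def find_cudart_lib():
    """Candidate directories for the runtime library libcudart.so."""
    cuda_home = os.environ.get("CUDA_HOME", "/usr/local/cuda")
    candidates = [
        os.path.join(cuda_home, "lib"),    # conda layout
        os.path.join(cuda_home, "lib64"),  # NVIDIA installer layout
    ]
    return [d for d in candidates if os.path.isdir(d)]
```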

Contributor Author


Essentially, cuda_lib comes from the graphics driver, while cudart_lib comes from the CUDA toolkit.
Graphics drivers are usually located at fixed paths and don't benefit from automatic search logic, so retrieving cuda_lib from a fixed path is sufficient.
cudart_lib, however, could benefit from automatic search logic. CUDA can be installed in three ways: conda, pip, and the official NVIDIA installer. CUDA installed via conda is already found automatically via $cuda_home.
So automatic search logic would mainly help resolve cudart_lib lookup issues when CUDA is installed via pip or the official NVIDIA installer.
However, using pip to install CUDA is a bit of a stretch... While packages like vllm depend on a large number of NVIDIA components, it's hard to say how much overlap those components have with CUDA, and whether that would lead users to install the remaining CUDA Toolkit components with pip to save storage space.
On the other hand, for CUDA installed via the official NVIDIA installer, the more common practice is to have users add environment variables to their .bashrc. Saving users those installation steps with automatic search logic might not be appropriate.
Thus, I believe the current fixed-path library search logic is sufficiently reliable; additional automatic search logic would not provide significant benefits, so this design can be postponed. @yzh119

Collaborator


Find*** is a common design in CMake (for CUDA, CMake has a FindCUDA module: https://cmake.org/cmake/help/latest/module/FindCUDA.html, which has some hardcoded logic for different platforms: https://github.com/Kitware/CMake/blob/52d3d4dd388973883bc8d3f9b7eb243c0699e812/Modules/FindCUDA.cmake).

I agree that CUDA installation variants are relatively limited, so using fixed paths is reasonable. However, I have two minor requests:

  • Please add comments indicating that these paths are WSL-specific.
  • Could you verify the CUDA 13.0 packaging structure in WSL? On Linux, certain runtime libraries (e.g., libcuda.so, libnvrtc.so, libcublas.so) are located in $cuda_home/lib64/stubs for CUDA 13. I'm unsure whether WSL follows a similar structure.
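One quick way to check that on a WSL install (the toolkit path below is assumed):

```shell
# Report whether a CUDA 13.0 toolkit (path assumed) keeps driver stubs
# under lib64/stubs, as it does on regular Linux.
stubs=/usr/local/cuda-13.0/lib64/stubs
if [ -d "$stubs" ]; then
  ls "$stubs"
else
  echo "no stubs dir at $stubs"
fi
```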

Contributor Author


It seems that few flashinfer users use WSL. Testing the path layout for CUDA 13.0 on WSL would be helpful, so I'll give it a try...

But the conda CUDA installation instructions show:
conda install nvidia/label/cuda-13.0.0::cuda-nvcc
Solving environment: failed

LibMambaUnsatisfiableError: Encountered problems while solving:

  • nothing provides cuda-version >=13.0,<13.1.0a0 needed by cuda-nvcc-dev_linux-64-13.0.48-0

Could not solve for environment specs
The following package could not be installed
└─ cuda-nvcc is not installable because it requires
└─ cuda-nvcc_linux-64 13.0.48.* , which requires
└─ cuda-nvcc-dev_linux-64 13.0.48.* , which requires
└─ cuda-version >=13.0,<13.1.0a0 , which does not exist (perhaps a missing channel).
This means it requires me to install the graphics card driver for CUDA 13.0 on my Windows system outside of WSL. This is very tedious for me. I use dual 2080Ti graphics cards in SLI, and changing drivers can easily cause various problems, even requiring a complete operating system reinstallation. Therefore, I can't complete this test at the moment.

I also found the NVIDIA installer for CUDA 13.0:
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=runfile_local

I'll try to install it later. @yzh119

Contributor Author

@HelloCard HelloCard Oct 11, 2025


(base) root@DESKTOP-PEPA2G9:/mnt/c/Users/IAdmin# sh cuda_13.0.2_580.95.05_linux.run
===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-13.0/

Please make sure that
 -   PATH includes /usr/local/cuda-13.0/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-13.0/lib64, or, add /usr/local/cuda-13.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-13.0/bin
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 580.00 is required for CUDA 13.0 functionality to work.

Same problem.
But at least...
I added the comment. @yzh119

Contributor Author


One clue: perhaps cuda-pathfinder can be used to locate the paths of the related libraries.
cupy/cupy#8013 (comment)

@HelloCard HelloCard changed the title Fix "cannot find -lcuda & -lcudart" problom in WSL2 Fix "cannot find -lcuda & -lcudart" problem in WSL2 Oct 10, 2025