Can I use QQQ in llm-compressor? #1281

Closed
191220042 opened this issue Mar 25, 2025 · 1 comment

Labels: enhancement (New feature or request), question (Further information is requested)

@191220042 commented Mar 25, 2025

Hello, I found that QQQ is now supported by vLLM. How can I get a QQQ model?

@191220042 added the enhancement (New feature or request) label Mar 25, 2025
@dsikka self-assigned this Mar 25, 2025
@dsikka added the question (Further information is requested) label Mar 25, 2025
@dsikka (Collaborator) commented Mar 25, 2025

Hi @191220042

For int4 compression, llm-compressor currently supports GPTQ quantization, with AWQ support coming soon.

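For anyone landing here, a minimal sketch of int4 (W4A16) GPTQ quantization with llm-compressor might look like the following. The model id, calibration dataset, and sample counts are placeholder choices, and import paths can vary between llm-compressor releases, so treat this as an outline rather than the exact recipe.

```python
from transformers import AutoModelForCausalLM

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Placeholder model id -- swap in the checkpoint you actually want to quantize.
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# GPTQ recipe: int4 weights with 16-bit activations (W4A16), leaving lm_head unquantized.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# One-shot calibration + quantization over a small calibration dataset,
# writing a quantized checkpoint to output_dir.
oneshot(
    model=model,
    dataset="open_platypus",       # example calibration set; any representative text works
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="Meta-Llama-3-8B-Instruct-W4A16-GPTQ",
)
```

The resulting compressed checkpoint can then typically be served with vLLM like any other quantized model.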
@dsikka closed this as completed Mar 25, 2025