No output from the model after fine-tuning llama3-1-8B on 8 x 4090 #241
As shown in the screenshot, I ran the code from this repository end to end.
After fine-tuning finished, I loaded checkpoint-699, but the model produces no output at all when given a prompt.
Could someone kindly explain what is going on here?

Comments
@KMnO4-zx please help 😭
Same question.
Update: after merging the base model with the LoRA weights and saving the result as a new model (merge_and_unload, save_pretrained), deployment and inference work fine with vLLM. The inference method from the tutorial still produces no output. (This only affects llama3.1; Qwen2-7b, bilibili-index, DeepSeek, etc. all work normally.) A sketch of the workaround is below.
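For reference, here is a minimal sketch of the workaround described in the comment above. The checkpoint path and base model ID are assumptions (substitute your own); `merge_and_unload` and `save_pretrained` are the PEFT/Transformers calls the comment names, and the vLLM usage at the end is one way to serve the merged checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumptions: adjust these paths to match your setup.
base_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"           # base model (assumption)
lora_path = "./output/llama3_1_lora/checkpoint-699"           # LoRA checkpoint (assumption)
merged_path = "./llama3_1-8b-merged"                          # where to save the merged model

# Load the base model, attach the LoRA adapter, then fold the adapter
# weights into the base weights and drop the PEFT wrappers.
base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype="auto")
model = PeftModel.from_pretrained(base, lora_path)
merged = model.merge_and_unload()

# Save the merged model and tokenizer as a standalone checkpoint
# that vLLM can load directly.
merged.save_pretrained(merged_path)
AutoTokenizer.from_pretrained(base_path).save_pretrained(merged_path)
```

In practice you would run the merge and the serving as separate steps, so that both copies of the model do not sit in GPU memory at once. Once the merged checkpoint is on disk, vLLM can load it like any regular model:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="./llama3_1-8b-merged")  # path saved above (assumption)
outputs = llm.generate(["Hello, please introduce yourself."],
                       SamplingParams(max_tokens=256))
print(outputs[0].outputs[0].text)
```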
It might be a version issue; this was updated recently.