
LLaMA3_1-8B-Instruct WebDemo deployment: error when opening the web page #235

Open
vistar-terry opened this issue Aug 8, 2024 · 4 comments

Comments

@vistar-terry

vistar-terry commented Aug 8, 2024

(screenshot: 2024-08-09_00-28-40)
How can I fix this?

Full traceback:
Traceback (most recent call last):
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\Vistar\Desktop\llma3.1\chatBot.py", line 30, in <module>
    tokenizer, model = get_model()
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 168, in wrapper
    return cached_func(*args, **kwargs)
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 197, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 224, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 280, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "C:\Users\Vistar\Desktop\llma3.1\chatBot.py", line 25, in get_model
    model = AutoModelForCausalLM.from_pretrained(mode_name_or_path, rope_scaling=rope_scaling, torch_dtype=torch.bfloat16).cuda()
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\transformers\models\auto\auto_factory.py", line 524, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 989, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\transformers\configuration_utils.py", line 772, in from_dict
    config = cls(**config_dict)
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\transformers\models\llama\configuration_llama.py", line 161, in __init__
    self._rope_scaling_validation()
  File "C:\Users\Vistar\.conda\envs\py312cu121\Lib\site-packages\transformers\models\llama\configuration_llama.py", line 182, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}

@KMnO4-zx
Contributor

KMnO4-zx commented Aug 9, 2024

Your transformers version is wrong; please keep it the same as the version used in the tutorial.
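
For context, the `{'rope_type': 'llama3', ...}` format in the Llama 3.1 config is only understood by newer transformers releases; older ones reject it in `_rope_scaling_validation`, which is exactly the ValueError above. A minimal version check, assuming the error comes from an outdated transformers install (the 4.43.0 threshold is an assumption matching the 4.43.1 suggestion later in this thread):

```python
# Sketch: check that the installed transformers is recent enough for
# Llama 3.1's llama3-style rope_scaling config.
import transformers
from packaging import version  # packaging ships as a transformers dependency

installed = version.parse(transformers.__version__)
required = version.parse("4.43.0")  # assumed minimum for rope_type='llama3'
print(f"transformers {installed} (need >= {required})")
if installed < required:
    raise RuntimeError("transformers is too old for Llama 3.1's rope_scaling config")
```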

@vistar-terry
Author

It is the same as the tutorial, though.
(screenshot: 2024-08-09_21-13-54)

@KMnO4-zx
Contributor

KMnO4-zx commented Aug 9, 2024

https://www.codewithgpu.com/i/datawhalechina/self-llm/self-llm-llama3.1

Could you try the image linked above? It might also need transformers 4.43.1.

@zhanghhyyaa

Just upgrading transformers fixes it: pip install --upgrade transformers
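
After the upgrade, the Llama 3.1 config should parse without the `rope_scaling` ValueError. A quick way to confirm (the model path below is a placeholder for the local download directory, not a path from the tutorial):

```python
# Sketch: verify the upgraded transformers accepts the llama3-style rope_scaling.
from transformers import AutoConfig

# "path/to/Meta-Llama-3.1-8B-Instruct" is a placeholder for the local model dir.
config = AutoConfig.from_pretrained("path/to/Meta-Llama-3.1-8B-Instruct")
print(config.rope_scaling)
# Expected something like:
# {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0,
#  'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
```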
