How to use the tokenizer of pretrained models in mindnlp #595
Closed
ZengLei-dev started this conversation in General

I created the tokenizer with tokenizer = RobertaTokenizer.from_pretrained('roberta-base') and then called tokenizer.tokenize('hello world'), which raised an error saying too many arguments were given. How is tokenization with pretrained models supposed to be used in mindnlp? Thanks.

Replies: 1 comment · 2 replies
- This looks like a bug; try calling tokenizer("hello world") directly.
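For reference, a minimal sketch of the suggested workaround. The import path is an assumption (RobertaTokenizer has lived under different mindnlp modules across versions), and the structure of the returned encoding depends on the tokenizer implementation; the reported tokenize() failure is kept as a comment.

```python
# Minimal sketch of the workaround suggested above.
# Assumption: RobertaTokenizer is importable from mindnlp.transformers;
# the exact module path may differ between mindnlp versions.
from mindnlp.transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# Reported in this thread to fail with a "too many arguments" error:
# tokens = tokenizer.tokenize('hello world')

# Suggested workaround: call the tokenizer object directly.
encoded = tokenizer("hello world")
print(encoded)
```

Calling the tokenizer object directly uses its __call__ interface, which is what the reply above suggests instead of tokenize().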