
Commit 8aade7b

Fixed typo at line 340 of README.md
In the Chat Completion section, around the fourth line under the heading (line 340), the sentence read: 'The model will will format the messages into a single prompt using the following order of precedence:'. The word 'will' appeared twice; this update fixes that.
1 parent 7c4aead commit 8aade7b

File tree

1 file changed: +1 -1 lines changed


README.md (+1 -1)
@@ -337,7 +337,7 @@ The high-level API also provides a simple interface for chat completion.
 Chat completion requires that the model knows how to format the messages into a single prompt.
 The `Llama` class does this using pre-registered chat formats (ie. `chatml`, `llama-2`, `gemma`, etc) or by providing a custom chat handler object.
 
-The model will will format the messages into a single prompt using the following order of precedence:
+The model will format the messages into a single prompt using the following order of precedence:
 - Use the `chat_handler` if provided
 - Use the `chat_format` if provided
 - Use the `tokenizer.chat_template` from the `gguf` model's metadata (should work for most new models, older models may not have this)
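The three-step precedence described in the patched README text can be sketched as a small standalone resolver. This is a hypothetical illustration, not llama-cpp-python's actual implementation; the function name and return labels are invented for clarity:

```python
# Hypothetical sketch of the chat-format precedence from the README
# (not the actual llama-cpp-python source): pick which formatter wins.
def resolve_chat_format(chat_handler=None, chat_format=None,
                        gguf_chat_template=None):
    """Return a label naming the formatter chosen, in README order."""
    if chat_handler is not None:
        # 1. An explicit custom chat handler object takes priority.
        return "chat_handler"
    if chat_format is not None:
        # 2. A pre-registered format name (e.g. "chatml", "llama-2", "gemma").
        return "chat_format"
    if gguf_chat_template is not None:
        # 3. The chat template embedded in the GGUF model's metadata.
        return "tokenizer.chat_template"
    # Older models may provide none of the above.
    return "fallback"
```

For example, passing only `chat_format="chatml"` selects the pre-registered format, while supplying a handler object overrides everything else.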
