This repository has been archived by the owner on Jun 22, 2024. It is now read-only.

Commit

Cleaning up a few of the YAMLs to fix the YAML template (#45)
* Update gorilla.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update codellama-7b-instruct.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update gpt4all-j-groovy.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update gpt4all-j.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update gpt4all-l13b-snoozy.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update guanaco.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update hippogriff.yaml

Signed-off-by: lunamidori5 <[email protected]>

* Update hypermantis.yaml

Signed-off-by: lunamidori5 <[email protected]>

---------

Signed-off-by: lunamidori5 <[email protected]>
lunamidori5 committed Nov 7, 2023
1 parent 6bedc93 commit 3d4f59c
Showing 8 changed files with 18 additions and 17 deletions.
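For reference, the entries touched here all share the same rough layout: top-level metadata (license and urls), an embedded config_file block, and the prompt_templates that the template section points at. The sketch below is assembled from the fields visible in the diffs that follow, plus an assumed top-level name field; every value is a placeholder rather than a line from any one file in this commit.

name: "example-model"               # assumed field; placeholder value
license: "N/A"
urls:
- https://example.com/model-card    # placeholder URL
config_file: |
  backend: llama                    # inference backend
  context_size: 1024                # context window in tokens
  parameters:
    model: example-model.bin        # model file the backend loads
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  template:
    completion: example-completion  # must match a prompt_templates name below
    chat: example-chat
prompt_templates:
- name: "example-completion"
  content: |
    {{.Input}}
- name: "example-chat"
  content: |
    ### Prompt:
    {{.Input}}
    ### Response: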
3 changes: 2 additions & 1 deletion codellama-7b-instruct.yaml
@@ -9,12 +9,13 @@ urls:
- https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF

config_file: |
  backend: llama
  context_size: 4096
  parameters:
    model: codellama-7b-instruct.Q4_K_M.gguf
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 4096
  template:
    chat_message: llama2-chat-message
4 changes: 2 additions & 2 deletions gorilla.yaml
@@ -8,13 +8,13 @@ license: "N/A"

config_file: |
  backend: llama
  context_size: 1024
  f16: true
  parameters:
    model: gorilla
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 1024
  f16: true
  template:
    completion: gorilla
    chat: gorilla
4 changes: 2 additions & 2 deletions gpt4all-j-groovy.yaml
@@ -6,12 +6,12 @@ urls:
- https://gpt4all.io
config_file: |
  backend: gpt4all-j
  context_size: 1024
  parameters:
    model: ggml-gpt4all-j-v1.3-groovy.bin
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 1024
  template:
    completion: "gpt4all-completion"
    chat: gpt4all-chat
@@ -33,4 +33,4 @@ prompt_templates:
    The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
    ### Prompt:
    {{.Input}}
    ### Response:
    ### Response:
4 changes: 2 additions & 2 deletions gpt4all-j.yaml
@@ -6,12 +6,12 @@ urls:
- https://gpt4all.io
config_file: |
  backend: gpt4all-j
  context_size: 1024
  parameters:
    model: ggml-gpt4all-j.bin
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 1024
  template:
    completion: "gpt4all-completion"
    chat: gpt4all-chat
@@ -33,4 +33,4 @@ prompt_templates:
    The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
    ### Prompt:
    {{.Input}}
    ### Response:
    ### Response:
4 changes: 2 additions & 2 deletions gpt4all-l13b-snoozy.yaml
@@ -6,12 +6,12 @@ urls:
- https://gpt4all.io
config_file: |
  backend: gpt4all-llama
  context_size: 1024
  parameters:
    model: ggml-gpt4all-j-v1.3-groovy.bin
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 1024
  template:
    completion: "gpt4all-completion"
    chat: gpt4all-chat
@@ -33,4 +33,4 @@ prompt_templates:
    The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
    ### Prompt:
    {{.Input}}
    ### Response:
    ### Response:
4 changes: 2 additions & 2 deletions guanaco.yaml
@@ -8,12 +8,12 @@ urls:

config_file: |
  backend: llama
  context_size: 1024
  parameters:
    model: guanaco
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 1024
  template:
    completion: guanaco-completion
    chat: guanaco-chat
@@ -34,4 +34,4 @@ prompt_templates:
  content: |
    ### Instruction:
    {{.Input}}
    ### Response:
    ### Response:
8 changes: 4 additions & 4 deletions hippogriff.yaml
@@ -7,18 +7,18 @@ license: "N/A"

config_file: |
  backend: llama
  context_size: 1024
  f16: true
  mmap: true
  parameters:
    model: hippogriff
    top_k: 40
    temperature: 0.1
    top_p: 0.95
  context_size: 1024
  roles:
    user: "USER:"
    system: "SYSTEM:"
    assistant: "ASSISTANT:"
  f16: true
  mmap: true
  template:
    completion: hippogriff-completion
    chat: hippogriff-chat
@@ -30,4 +30,4 @@ prompt_templates:
    ASSISTANT:
- name: "hippogriff-completion"
  content: |
    {{.Input}}
    {{.Input}}
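Of the files in this commit, hippogriff.yaml is the only one that also sets role prefixes and the f16/mmap flags. A hedged sketch of how those pieces sit together in a single config_file is below; because the added/removed markers were lost in this view, the placement of context_size, f16, and mmap is an assumption about the cleaned-up ordering, not something the diff above confirms.

config_file: |
  backend: llama
  context_size: 1024                 # context window in tokens (assumed final position)
  f16: true                          # load weights as 16-bit floats (assumed final position)
  mmap: true                         # memory-map the model file (assumed final position)
  parameters:
    model: hippogriff
    top_k: 40
    temperature: 0.1
    top_p: 0.95
  roles:
    user: "USER:"                    # prefix added to user messages
    system: "SYSTEM:"
    assistant: "ASSISTANT:"
  template:
    completion: hippogriff-completion   # names refer to the prompt_templates entries
    chat: hippogriff-chat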
4 changes: 2 additions & 2 deletions hypermantis.yaml
@@ -9,12 +9,12 @@ license: ""

config_file: |
  backend: llama
  context_size: 2048
  parameters:
    model: hypermantis
    top_k: 80
    temperature: 0.2
    top_p: 0.7
  context_size: 2048
  template:
    completion: hypermantis-completion
    chat: hypermantis-chat
@@ -38,4 +38,4 @@ prompt_templates:
    ### Response:
    ASSISTANT:
    ASSISTANT:
