Commit cd9a6b9
Swati Allabadi
Using torch_qaic gradScaler and making lora_dropout=0.05 (#320)
1. When finetuning on QAIC, the torch_qaic GradScaler will be used (a sketch of the selection logic follows the utils diff below).
2. Moving back to lora_dropout = 0.05 at the ML Framework team's request (a sketch of the config follows the configs diff below).
Signed-off-by: Swati Allabadi <[email protected]>
Co-authored-by: Swati Allabadi <[email protected]>
1 parent b7e926e
File tree
2 files changed, +6 -3 lines
QEfficient/finetune
- configs
- utils
QEfficient/finetune/configs (+1 -1): line 22 modified (the lora_dropout value, per the commit message).
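Point 2 restores the LoRA dropout. A minimal sketch of what such a configuration looks like with PEFT's LoraConfig; every field other than lora_dropout is an illustrative placeholder, not a value taken from this commit:

```python
from peft import LoraConfig

# Illustrative LoRA configuration. Only lora_dropout = 0.05 reflects
# this commit; all other fields are placeholder choices.
lora_config = LoraConfig(
    r=8,                                  # placeholder adapter rank
    lora_alpha=32,                        # placeholder scaling factor
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
    lora_dropout=0.05,                    # value restored by this commit
    bias="none",
    task_type="CAUSAL_LM",
)
```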
QEfficient/finetune/utils (+5 -2): line 27 added, line 63 removed, and line 95 replaced by new lines 95-98 (the torch_qaic GradScaler wiring, per the commit message).
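Point 1 makes the gradient scaler device-conditional. A minimal sketch of that selection, assuming torch_qaic ships a GradScaler with the same interface as torch.cuda.amp.GradScaler; the import path and device string are assumptions, not taken from the diff:

```python
import torch

def get_grad_scaler(device: str):
    """Pick the gradient scaler matching the finetuning device."""
    if device.startswith("qaic"):
        # Assumed import path; the actual module layout inside
        # torch_qaic may differ.
        from torch.qaic.amp import GradScaler as QaicGradScaler
        return QaicGradScaler()
    # Default: the standard CUDA mixed-precision scaler.
    return torch.cuda.amp.GradScaler()

# The scaled training step is identical for either scaler:
#   scaler = get_grad_scaler("qaic")
#   scaler.scale(loss).backward()
#   scaler.step(optimizer)
#   scaler.update()
```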