`docs/quick-start.md` (49 additions, 34 deletions)
Choose based on your goals for this tutorial.

For this tutorial, we'll use text from the [OpenWebText](https://skylion007.github.io/OpenWebTextCorpus/) dataset. This dataset is a free approximation of the WebText data OpenAI used for GPT-2, and it's perfect for our test run!

Create a configuration file for the dataset preparation.

Save the following as `./fast-llm-tutorial/prepare-config.yaml`:
=== "Small"
230
231
@@ -242,10 +243,15 @@ Create a configuration file for the dataset preparation. Copy the following cont
242
243
243
244
tokenizer:
244
245
path: fast-llm-tutorial/pretrained-model
246
+
247
+
splits: # (3)!
248
+
training: 0.9
249
+
validation: 0.1
245
250
```
246
251
247
252
1. Processing speed scales linearly with the number of CPUs.
248
253
2. This small dataset restricts to the first 10K records of the OpenWebText dataset to speed up the process. If you want to use the full dataset, replace with `openwebtext`.
254
+
3. 90% train, 10% validation. These settings need to be adjusted based on the size of your dataset.
249
255
250
256
=== "Big"
251
257
@@ -263,11 +269,14 @@ Create a configuration file for the dataset preparation. Copy the following cont
263
269
264
270
tokenizer:
265
271
path: fast-llm-tutorial/pretrained-model
272
+
273
+
splits: # (2)!
274
+
training: 0.99
275
+
validation: 0.01
266
276
```
267
277
268
278
1. Processing speed scales linearly with the number of CPUs.
269
-
270
-
Save it as `./fast-llm-tutorial/prepare-config.yaml`.
279
+
2. 99% train, 1% validation. These settings need to be adjusted based on the size of your dataset.
271
280
272
281
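For context, the complete file for the small option might look roughly like the sketch below. Only `tokenizer` and `splits` come from the excerpt above; the other field names and the dataset path are assumptions and should be checked against the Fast-LLM `prepare` documentation.

```yaml
# Hypothetical sketch of fast-llm-tutorial/prepare-config.yaml (small option).
# Only `tokenizer` and `splits` are taken from the excerpt above; the rest is assumed.
output_path: fast-llm-tutorial/dataset     # where the prepared dataset and its config files are written
dataset:
  path: stas/openwebtext-10k               # assumed id for the 10K-record subset; replace with `openwebtext` for the full dataset
tokenizer:
  path: fast-llm-tutorial/pretrained-model
splits:
  training: 0.9
  validation: 0.1
```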
Fast-LLM ships with a `prepare` command that will download and preprocess the dataset for you.
Save the following as `fast-llm-tutorial/train-config.yaml`:

1. For the small run, we'll stop after 100 iterations.
2. The trained model will be saved in `Transformers` Llama format to `fast-llm-tutorial/experiment/export/llama/100` at the end of the small run. You can also save as a `Fast-LLM` checkpoint by setting the `format` to `fast_llm`.
3. Entirely optional, but it's a good idea to track your training progress with Weights & Biases. Replace `null` with your own W&B entity name. If you don't want to use W&B, just ignore this section.
4. Adjust the number of sequences per GPU based on GPU memory. For SmolLM2-135M at a 1024-token sequence length and an 80GB GPU, a `micro_batch_size` of 60 should work well.
5. Must be divisible by the number of GPUs and the `micro_batch_size`. At 1024 tokens per sequence, 480 corresponds to about 500,000 tokens per batch.
6. Location of the dataset metadata files generated in Step 4.
7. Format of the pretrained model. Since SmolLM is a Llama model, we set this to `llama`.
8. We'll train SmolLM2-135M from scratch. You can set this to `yes` to continue training from a checkpoint (if you put one in the model directory).
9. By default, Fast-LLM uses FlashAttention for faster training. If you're using Volta GPUs, set this to `no`.
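To make the arithmetic in notes 4 and 5 concrete, the relevant batch settings would look roughly like this sketch; the field names should be checked against the full `train-config.yaml`, which this excerpt doesn't show:

```yaml
batch:
  sequence_length: 1024   # tokens per sequence for the small run
  micro_batch_size: 60    # sequences processed per GPU at a time (note 4)
  batch_size: 480         # global batch: 480 sequences x 1024 tokens = 491,520, roughly 0.5M tokens (note 5)
```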
Save the following as `fast-llm-tutorial/train-config.yaml`:

4. Adjust the number of sequences per GPU based on GPU memory. Considering a 4k token sequence length and 80GB GPUs, a `micro_batch_size` of 1 should work well.
5. Must be divisible by the number of GPUs and the `micro_batch_size`. At 4k tokens per sequence, 512 corresponds to about 2.1 million tokens per batch.
6. Location of the dataset metadata file generated in Step 4.
7. These are good default optimizer settings for training models.
8. We are using a cosine decay schedule with linear warmup. After reaching the peak learning rate `base` at `warmup_iterations`, the learning rate will decay to `minimum` at `decay_iterations`, following a cosine curve. The minimum learning rate should be 1/10th of the base learning rate per Chinchilla.
9. Format of the pretrained model. Since it's a Llama model, we set this to `llama`.
10. We want to continue training Llama-3.1-8B from a checkpoint. If you're training from scratch, set this to `no`.
11. By default, Fast-LLM uses FlashAttention for faster training. If you're using Volta GPUs, set this to `no`.
12. Configure Fast-LLM to use the fused cross-entropy loss implementation rather than the default Triton implementation for models with a large vocabulary size such as Llama-3.1-8B. This avoids issues with block size limitations in our current Triton code.
13. We are using ZeRO stage 2 for this tutorial. You can set this to `1`, `2`, or `3` for ZeRO-1, ZeRO-2, or ZeRO-3, respectively.
14. `bf16` (bfloat16, or Brain Floating Point 16) is supported on Ampere GPUs and higher. On Volta GPUs, use `fp16` (half-precision floating point) for training instead of `bf16`.
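As a reference for notes 8, 13, and 14, the corresponding pieces of the config would look roughly like the sketch below. The values are illustrative and the exact key names are assumptions to be checked against the full `train-config.yaml`, which this excerpt doesn't show:

```yaml
optimizer:
  learning_rate:
    base: 1.0e-04            # illustrative peak learning rate (note 8)
    minimum: 1.0e-05         # roughly 1/10th of base, per Chinchilla (note 8)
    decay_style: cosine      # cosine decay after a linear warmup (assumed key name)
    warmup_iterations: 100   # illustrative
    decay_iterations: 10000  # illustrative
model:
  multi_stage:
    zero_stage: 2            # ZeRO stage 2 (note 13)
  distributed:
    training_dtype: bf16     # use fp16 on Volta GPUs instead (note 14)
```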
## 🔑 (Optional) Step 6: Add Your Weights & Biases API Key
In this section we show how to configure datasets through a series of examples.

We already saw an example dataset configuration in the [quick-start guide](../quick-start.md), where we prepared a simple dataset, split it into training and validation sub-datasets, and used these to train a small model. This was done by:

1. Defining a dataset preparation configuration.
2. Running `fast-llm prepare` with said configuration. This generated some binary files along with two fast-llm configuration files, `fast-llm-tutorial/dataset/fast_llm_config_training.yaml` and `fast-llm-tutorial/dataset/fast_llm_config_validation.yaml`.
3. Defining a fast-llm data configuration that uses those datasets (see the sketch after this list).
4. Running `fast-llm training` with said configuration.
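The data configuration referred to in step 3 isn't shown in this excerpt. Given the files produced in step 2, it presumably looks something like this sketch, rather than being the verbatim config from the quick-start guide:

```yaml
data:
  datasets:
    Training:
      type: file
      path: fast-llm-tutorial/dataset/fast_llm_config_training.yaml
    Validation:
      type: file
      path: fast-llm-tutorial/dataset/fast_llm_config_validation.yaml
```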
In this section we are interested in generalizing step 3. For more details on steps 1 and 2, please refer to the quick-start guide or [this example](data-configuration.md).
## Example 1: Blending multiple datasets
In this example, we have three datasets and want to sample from each of them during training with probabilities 0.70, 0.25 and 0.05. For this, we use the `blended` type, which takes other datasets as arguments:

```yaml
data:
  datasets:
    Training:
      type: blended
      datasets:
        - type: file
          path: path/to/dataset_0.yaml
        - type: file
          path: path/to/dataset_1.yaml
        - type: file
          path: path/to/dataset_2.yaml
      weights: [0.70, 0.25, 0.05]
```

!!! note "Dataset wrappers"

    The `blended` dataset wrapper is one example of the many dataset wrappers available in fast-llm. Such wrappers may be nested (almost) arbitrarily to generate the dataset scheme that fits your needs. Fast-LLM will use the `type` argument to dynamically select the appropriate configuration class(es). With some effort you can even create your own wrapper!
## Example 2: Configure shuffling
In this example, we have a large dataset that comes pre-shuffled, so shuffling is unnecessary for the first epoch.

```yaml
data:
  datasets:
    Training:
      type: file
      path: path/to/dataset.yaml
  sampling:
    shuffle: skip_first_epoch
```
## Example 3: Disable shuffling for validation
In this example, we want to disable shuffling entirely, but only for the validation dataset. We can do this with the `sampled` dataset wrapper:

```yaml
data:
  datasets:
    Training:
      type: file
      path: path/to/training_dataset.yaml
    Validation:
      type: sampled
      dataset:
        type: file
        path: path/to/validation_dataset.yaml
      sampling:
        shuffle: disabled
```

!!! note "More about sampling configuration"

    Sampling parameters may be defined globally through the data configuration (example 2), through dataset wrapper(s) (examples 3, 4), or both (example 5). When a dataset's sampling is configured by both methods (or by multiple nested wrappers), the (innermost) wrapper overrides the data configuration (or the next-to-innermost wrapper) for the explicitly defined fields, and only those.
## Example 4: Set sampling seed for individual datasets
In this example, we have a blend of datasets as in example 1, but we wish to set the seed for each dataset individually for reproducibility reasons. For this, we use the `seed` field of the `sampled` wrapper's `sampling` section:

```yaml
data:
  datasets:
    Training:
      type: blended
      datasets:
        - type: sampled
          dataset:
            type: file
            path: path/to/dataset_0.yaml
          sampling:
            seed: 1234
        - type: sampled
          dataset:
            type: file
            path: path/to/dataset_1.yaml
          sampling:
            seed: 2345
        - type: sampled
          dataset:
            type: file
            path: path/to/dataset_2.yaml
          sampling:
            seed: 3456
      weights: [0.70, 0.25, 0.05]
```

!!! note "Default seed"

    In the absence of an explicit seed, Fast-LLM uses a default seed (`data.sampling`'s default) instead, and uses seed shifts to ensure different seeds for each phase and for the various blended datasets.
## Example 5: Advanced scenario
In this example, we combine everything we learned so far to create a complex scenario, where:

* The training dataset is a blend of two datasets, one of them being itself a blend of three datasets.
* All datasets except for one come pre-shuffled, so we can skip shuffling for the first epoch.
* We want to set the seed explicitly for the validation and innermost blended datasets, but keep the default seed for the others.
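The configuration that implements this scenario isn't included in this excerpt. Assembled from the wrappers shown in examples 1 to 4, it might look roughly like the following sketch; the paths, weights, seeds, and the name of the full-shuffle option are illustrative assumptions:

```yaml
data:
  datasets:
    Training:
      type: blended
      datasets:
        # First component: the one dataset that is NOT pre-shuffled, so we
        # override the global shuffling setting for it alone.
        - type: sampled
          dataset:
            type: file
            path: path/to/unshuffled_dataset.yaml
          sampling:
            shuffle: full   # assumed name of the "always shuffle" option
        # Second component: itself a blend of three pre-shuffled datasets,
        # with an explicit seed on this innermost blend.
        - type: sampled
          dataset:
            type: blended
            datasets:
              - type: file
                path: path/to/dataset_0.yaml
              - type: file
                path: path/to/dataset_1.yaml
              - type: file
                path: path/to/dataset_2.yaml
            weights: [0.5, 0.3, 0.2]
          sampling:
            seed: 1234
      weights: [0.6, 0.4]
    Validation:
      type: sampled
      dataset:
        type: file
        path: path/to/validation_dataset.yaml
      sampling:
        seed: 2345
  sampling:
    shuffle: skip_first_epoch   # global default: the other datasets come pre-shuffled
```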
If a dataset configuration is especially complex and makes the overall configuration excessively big, or is reused across many experiments, you may want to save it to a separate YAML file and refer to it in the config using a `file` dataset. This reduces the present example to:

```yaml
data:
  datasets:
    Training:
      type: file
      path: path/to/training_dataset_config.yaml
    Validation:
      type: file
      path: path/to/validation_dataset_config.yaml
  sampling:
    shuffle: skip_first_epoch
```

In fact, all the datasets we've been loading from file so far are of this format: they consist of more elementary `memmap` datasets, optionally wrapped with `blended` and/or `slice` wrappers.
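For illustration, one of the generated files from the quick-start (say, `fast_llm_config_training.yaml`) plausibly contains something like the sketch below; the exact fields of the `memmap` and `slice` wrappers are assumptions here, not verified output:

```yaml
# Hypothetical contents of a generated fast_llm_config_training.yaml:
# a slice of a memmap dataset covering the training portion of the split.
type: slice
dataset:
  type: memmap
  path: fast-llm-tutorial/dataset/shard_0_0   # illustrative shard path
begin: 0.0
end: 0.9
```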