Commit 1fedce5

Document Sync by Tina
1 parent 83fca9d commit 1fedce5

4 files changed: +12 -12 lines changed


docs/stable/cli/cli_api.md

+7 -7

````diff
@@ -62,16 +62,16 @@ sllm-cli deploy [OPTIONS]
 - `--backend <backend_name>`
   - Overwrite the backend in the default configuration.
 
-- `--num_gpus <number>`
+- `--num-gpus <number>`
   - Overwrite the number of GPUs in the default configuration.
 
 - `--target <number>`
   - Overwrite the target concurrency in the default configuration.
 
-- `--min_instances <number>`
+- `--min-instances <number>`
   - Overwrite the minimum instances in the default configuration.
 
-- `--max_instances <number>`
+- `--max-instances <number>`
   - Overwrite the maximum instances in the default configuration.
 
 ##### Examples
@@ -92,7 +92,7 @@ sllm-cli deploy --model facebook/opt-1.3b --backend transformers
 
 Deploy using a model name and overwrite multiple configurations:
 ```bash
-sllm-cli deploy --model facebook/opt-1.3b --num_gpus 2 --target 5 --min_instances 1 --max_instances 5
+sllm-cli deploy --model facebook/opt-1.3b --num-gpus 2 --target 5 --min-instances 1 --max-instances 5
 ```
 
 ##### Example Configuration File (`config.json`)
@@ -275,15 +275,15 @@ sllm-cli fine-tuning [OPTIONS]
 ```
 
 ##### Options
-- `--base_model <model_name>`
+- `--base-model <model_name>`
   - Base model name to be fine-tuned
 - `--config <config_path>`
   - Path to the JSON configuration file.
 
 ##### Example
 ```bash
-sllm-cli fine-tuning --base_model <model_name>
-sllm-cli fine-tuning --base_model <model_name> --config <path_to_ft_config_file>
+sllm-cli fine-tuning --base-model <model_name>
+sllm-cli fine-tuning --base-model <model_name> --config <path_to_ft_config_file>
 ```
 
 ##### Example Configuration File (`ft_config.json`)
````
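A side note on this rename: switching from snake_case to kebab-case breaks any existing script that still passes `--num_gpus` or `--min_instances`. If the CLI is built on Python's argparse (an assumption; this diff does not show the parser code), both spellings can be kept as aliases of a single option during a deprecation window, as in this minimal sketch:

```python
import argparse

# Hypothetical sketch, not the actual sllm-cli parser: register the new
# kebab-case flag alongside the old snake_case spelling so both resolve
# to the same destination attribute.
parser = argparse.ArgumentParser(prog="sllm-cli-deploy-sketch")
parser.add_argument("--num-gpus", "--num_gpus", dest="num_gpus", type=int, default=1)
parser.add_argument("--min-instances", "--min_instances", dest="min_instances", type=int, default=0)
parser.add_argument("--max-instances", "--max_instances", dest="max_instances", type=int, default=1)

# Old and new spellings can even be mixed; both land on the same attributes.
args = parser.parse_args(["--num-gpus", "2", "--min_instances", "1"])
print(args.num_gpus, args.min_instances, args.max_instances)  # → 2 1 1
```

Dropping the old alias later then only requires deleting the extra option string, with no other change to the parser.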

docs/stable/serve/storage_aware_scheduling.md

+1 -1

````diff
@@ -33,7 +33,7 @@ Replace `/path/to/your/models` with the actual path where you want to store the
 
 ### Step 3: Enable Storage Aware Scheduling in Docker Compose
 
-The Docker Compose configuration is already located in the `examples/storage_aware_scheduling` directory. To activate storage-aware scheduling, ensure the `docker-compose.yml` file includes the necessary configurations(`sllm_head` service should include the `--enable_storage_aware` command).
+The Docker Compose configuration is already located in the `examples/storage_aware_scheduling` directory. To activate storage-aware scheduling, ensure the `docker-compose.yml` file includes the necessary configurations(`sllm_head` service should include the `--enable-storage-aware` command).
 
 :::tip
 Recommend to adjust the number of GPUs and `mem_pool_size` based on the resources available on your machine.
````
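For orientation only, a sketch of how such a flag typically appears in a compose service definition. Everything below is a placeholder, not taken from the actual `examples/storage_aware_scheduling/docker-compose.yml`:

```yaml
# Hypothetical fragment; consult examples/storage_aware_scheduling/docker-compose.yml
# for the real service definition.
services:
  sllm_head:
    # image, ports, and resource settings elided
    command: ["--enable-storage-aware"]   # appended to the image entrypoint
```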

docs/stable/store/quickstart.md

+2 -2

````diff
@@ -134,11 +134,11 @@ Thus, for fist-time users, you have to load the model from other backends and th
 
 1. Download the model from HuggingFace and save it in the ServerlessLLM format:
 ``` bash
-python3 examples/sllm_store/save_vllm_model.py --model_name facebook/opt-1.3b --storage_path $PWD/models --tensor_parallel_size 1
+python3 examples/sllm_store/save_vllm_model.py --model-name facebook/opt-1.3b --storage-path $PWD/models --tensor-parallel-size 1
 
 ```
 
-You can also transfer the model from the local path compared to download it from network by passing the `--local_model_path` argument.
+You can also transfer the model from the local path compared to download it from network by passing the `--local-model-path` argument.
 
 After downloading the model, you can launch the checkpoint store server and load the model in vLLM through `sllm` load format.
````
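Worth noting for script authors: with Python's argparse (assumed here; the diff does not show the parser inside `save_vllm_model.py`), internal dashes in a long option are converted to underscores when forming the attribute name, so switching the command line to `--model-name` needs no rename inside the script itself:

```python
import argparse

# Hypothetical parser, not the real save_vllm_model.py code: argparse maps
# "--model-name" to the attribute "model_name" automatically.
parser = argparse.ArgumentParser()
parser.add_argument("--model-name", type=str, required=True)
parser.add_argument("--storage-path", type=str, default="./models")
parser.add_argument("--tensor-parallel-size", type=int, default=1)

args = parser.parse_args(
    ["--model-name", "facebook/opt-1.3b", "--storage-path", "/tmp/models"]
)
print(args.model_name, args.storage_path, args.tensor_parallel_size)
# → facebook/opt-1.3b /tmp/models 1
```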

docs/stable/store/rocm_quickstart.md

+2 -2

````diff
@@ -65,8 +65,8 @@ docker exec -it sllm_store_server /bin/bash
 Try to save and load a transformer model:
 
 ``` bash
-python3 examples/save_transformers_model.py --model_name "facebook/opt-1.3b" --storage_path "/models"
-python3 examples/load_transformers_model.py --model_name "facebook/opt-1.3b" --storage_path "/models"
+python3 examples/save_transformers_model.py --model-name "facebook/opt-1.3b" --storage-path "/models"
+python3 examples/load_transformers_model.py --model-name "facebook/opt-1.3b" --storage-path "/models"
 ```
 Expected output:
````