Commit d183441

revert: proto -> yaml markup
1 parent 0659dc9 commit d183441

File tree

1 file changed: 9 additions & 9 deletions

README.md

Lines changed: 9 additions & 9 deletions
@@ -156,7 +156,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 The section of model config file specifying this parameter will look like:
 
-```yaml
+```proto
 parameters: {
 key: "DISABLE_OPTIMIZED_EXECUTION"
 value: { string_value: "true" }
@@ -175,7 +175,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable inference mode, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INFERENCE_MODE"
 value: { string_value: "true" }
@@ -195,7 +195,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To disable cuDNN, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "DISABLE_CUDNN"
 value: { string_value: "true" }
@@ -210,7 +210,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable weight sharing, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "ENABLE_WEIGHT_SHARING"
 value: { string_value: "true" }
@@ -228,7 +228,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To enable cleaning of the CUDA cache after every execution, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "ENABLE_CACHE_CLEANING"
 value: { string_value: "true" }
@@ -251,7 +251,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To set the inter-op thread count, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INTER_OP_THREAD_COUNT"
 value: { string_value: "1" }
@@ -277,7 +277,7 @@ Triton exposes some flags to control the execution mode of the TorchScript model
 
 To set the intra-op thread count, use the configuration example below:
 
-```yaml
+```proto
 parameters: {
 key: "INTRA_OP_THREAD_COUNT"
 value: { string_value: "1" }
@@ -321,7 +321,7 @@ where the input tensors are placed as follows:
 
 To set the model instance group, use the configuration example below:
 
-```yaml
+```proto
 instance_group {
 count: 2
 kind: KIND_GPU
@@ -356,7 +356,7 @@ The following PyTorch settings may be customized by setting parameters on the
 
 For example:
 
-```yaml
+```proto
 parameters: {
 key: "NUM_THREADS"
 value: { string_value: "4" }
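
Every hunk above makes the same change: the fence language of the Triton PyTorch backend's model-config snippets is reverted from `yaml` to `proto`, since config.pbtxt files use protobuf text format, not YAML. As a hedged sketch of how several of these snippets combine in one file (the model name `my_torchscript_model` and the particular combination of parameters are illustrative, not taken from the commit; the keys themselves appear in the diff):

```proto
name: "my_torchscript_model"
backend: "pytorch"

# Two GPU instances of the model, as in the instance_group hunk.
instance_group {
  count: 2
  kind: KIND_GPU
}

# Backend flags are passed as string-valued parameters.
parameters: {
  key: "INFERENCE_MODE"
  value: { string_value: "true" }
}
parameters: {
  key: "INTER_OP_THREAD_COUNT"
  value: { string_value: "1" }
}
```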

0 commit comments