Client library - updated API documentation (openvinotoolkit#1010)

mzegla committed Nov 18, 2021
1 parent ded54a5 commit 07073e6
Showing 22 changed files with 479 additions and 553 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -17,3 +17,5 @@ __pycache__
*user_config*
*.swp
dist/
lib/
!client/python/lib
32 changes: 9 additions & 23 deletions client/python/lib/README.md
@@ -60,21 +60,14 @@ Simply run `docker build` with the Dockerfile of your choice to get the minimal
```python
import ovmsclient

config = {
"address": "localhost",
"port": 9000
}

client = ovmsclient.make_grpc_client(config=config)
client = ovmsclient.make_grpc_client("localhost:9000")
```
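
The library also provides an HTTP/REST client alongside the gRPC one (see `make_http_client` in the API documentation). A minimal sketch, assuming `make_http_client` accepts the same `"<address>:<port>"` string and returns a client exposing the same methods:

```python
import ovmsclient

# Assumption: the HTTP client is created from an "<address>:<port>" string,
# analogously to make_grpc_client, and exposes get_model_status,
# get_model_metadata and predict.
http_client = ovmsclient.make_http_client("localhost:8000")
```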

**Create and send model status request:**
```python
status_request = ovmsclient.make_grpc_status_request(model_name="model")
status_response = client.get_model_status(status_request)
status_response.to_dict()
model_status = client.get_model_status(model_name="model")

# Exemplary status_response.to_dict() output:
# Exemplary status_response:
#
# {
# "1": {
@@ -88,11 +81,9 @@ status_response.to_dict()

**Create and send model metadata request:**
```python
metadata_request = ovmsclient.make_grpc_metadata_request(model_name="model")
metadata_response = client.get_model_metadata(metadata_request)
metadata_response.to_dict()
model_metadata = client.get_model_metadata(model_name="model")

# Exemplary metadata_response.to_dict() output. Values for model:
# Exemplary metadata_response. Values for model:
# https://docs.openvinotoolkit.org/latest/omz_models_model_resnet_50_tf.html
#
#{
@@ -121,17 +112,12 @@ metadata_response.to_dict()

with open(<path_to_img>, 'rb') as f:
    img = f.read()
predict_request = ovmsclient.make_grpc_predict_request(
{ "map/TensorArrayStack/TensorArrayGatherV3": img },
model_name="model")
predict_response = client.predict(predict_request)
predict_response.to_dict()
inputs = {"map/TensorArrayStack/TensorArrayGatherV3": img}
results = client.predict(inputs=inputs, model_name="model")

# Exemplary predict_response.to_dict() output:
# Exemplary results:
#
#{
# "softmax_tensor": [[0.01, 0.03, 0.91, ... , 0.00021]]
#}
# [[0.01, 0.03, 0.91, ... , 0.00021]]
#
```
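
Since this model exposes a single output, `results` is returned directly as an array of softmax scores, so the top prediction can be read off with a one-liner. A minimal post-processing sketch (using numpy here is an assumption of this example, not a requirement of the client):

```python
import numpy as np

# results has shape (batch_size, num_classes); take the top-1 class for the first image
predicted_class = int(np.argmax(results[0]))
print("Predicted class index:", predicted_class, "score:", results[0][predicted_class])
```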

17 changes: 2 additions & 15 deletions client/python/lib/docs/README.md
@@ -4,19 +4,10 @@

`TensorProto` processing:
- [make_tensor_proto](make_tensor_proto.md)
- [make_ndarray](make_ndarray.md)

Creating server requests:
- [make_grpc_predict_request](make_grpc_predict_request.md)
- [make_grpc_metadata_request](make_grpc_metadata_request.md)
- [make_grpc_status_request](make_grpc_status_request.md)

Creating clients:
- [make_grpc_client](make_grpc_client.md)


*Note*: Above functions are also aliased in the following namespaces:
- [grpcclient](grpcclient.md)
- [make_http_client](make_http_client.md)


---
@@ -25,8 +16,4 @@ Creating clients:

Client classes:
- [GrpcClient](grpc_client.md)

Server response classes:
- [GrpcPredictResponse](grpc_predict_response.md)
- [GrpcModelMetadataResponse](grpc_model_metadata_response.md)
- [GrpcModelStatusResponse](grpc_model_status_response.md)
- [HttpClient](http_client.md)
179 changes: 128 additions & 51 deletions client/python/lib/docs/grpc_client.md
@@ -1,5 +1,5 @@

<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L26"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>
<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L37"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>

## <kbd>class</kbd> `GrpcClient`

@@ -10,126 +10,203 @@
### <kbd>method</kbd> `get_model_metadata`

```python
get_model_metadata(request)
get_model_metadata(model_name, model_version, timeout)
```

Send [`GrpcModelMetadataRequest`](https://github.com/openvinotoolkit/model_server/blob/develop/client/python/lib/ovmsclient/tfs_compat/grpc/requests.py#L31) to the server and return response.
Request model metadata.


**Args:**

- <b>`request`</b>: `GrpcModelMetadataRequest` object.
- <b>`model_name`</b>: name of the requested model. Accepted types: `string`.
- <b>`model_version`</b> <i>(optional)</i>: version of the requested model. Accepted types: `positive integer`. Value 0 is special and means the latest served version will be chosen <i>(OVMS only; TFS requires a specific version number)</i>. Default value: 0.
- <b>`timeout`</b> <i>(optional)</i>: time in seconds to wait for the response from the server. If exceeded, TimeoutError is raised. Accepted types: `positive integer`, `positive float`. Value 0 is not accepted. Default value: 10.0.


**Returns:**
`GrpcModelMetadataResponse` object
Dictionary with model metadata in the form:

``` python

{
    "model_version": <version_number>,
    "inputs": {
        <input_name>: {
            "shape": <input_shape>,
            "dtype": <input_dtype>,
        },
        ...
    },
    "outputs": {
        <output_name>: {
            "shape": <output_shape>,
            "dtype": <output_dtype>,
        },
        ...
    }
}

```


**Raises:**

- <b>`TypeError`</b>: if request argument is of wrong type.
- <b>`ValueError`</b>: if request argument has invalid contents.
- <b>`ConnectionError`</b>: if there was an error while sending request to the server.
- <b>`TypeError`</b>: if provided argument is of wrong type.
- <b>`ValueError`</b>: if provided argument has unsupported value.
- <b>`ConnectionError`</b>: if there is an issue with server connection.
- <b>`TimeoutError`</b>: if request handling duration exceeded timeout.
- <b>`ModelNotFound`</b>: if model with specified name and version does not exist in the model server.
- <b>`BadResponseError`</b>: if server response is malformed and cannot be parsed.


**Examples:**

```python

config = {
"address": "localhost",
"port": 9000
}
client = make_grpc_client(config)
request = make_model_metadata_request("model")
response = client.get_model_metadata(request)
import ovmsclient
client = ovmsclient.make_grpc_client("localhost:9000")
# request metadata of the specific model version, with timeout set to 2.5 seconds
model_metadata = client.get_model_metadata(model_name="model", model_version=1, timeout=2.5)
# request metadata of the latest model version
model_metadata = client.get_model_metadata(model_name="model")

```
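
The returned dictionary can then be used to discover the expected input layout before sending a prediction. A minimal sketch, assuming a single-input model and the metadata structure shown above:

```python
# Pick the first (and only) input entry from the metadata dictionary
input_name, input_info = next(iter(model_metadata["inputs"].items()))
print("Input name:", input_name)
print("Expected shape:", input_info["shape"])
print("Expected dtype:", input_info["dtype"])
```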

---

<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L89"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>
<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L78"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>

### <kbd>method</kbd> `get_model_status`

```python
get_model_status(request)
get_model_status(model_name, model_version, timeout)
```

Send [`GrpcModelStatusRequest`](https://github.com/openvinotoolkit/model_server/blob/develop/client/python/lib/ovmsclient/tfs_compat/grpc/requests.py#L37) to the server and return response.
Request model status.


**Args:**

- <b>`request`</b>: `GrpcModelStatusRequest` object.
- <b>`model_name`</b>: name of the requested model. Accepted types: `string`.
- <b>`model_version`</b> <i>(optional)</i>: version of the requested model. Accepted types: `positive integer`. Value 0 means that status of all model versions will be returned. Default value: 0.
- <b>`timeout`</b> <i>(optional)</i>: time in seconds to wait for the response from the server. If exceeded, TimeoutError is raised. Accepted types: `positive integer`, `positive float`. Value 0 is not accepted. Default value: 10.0.


**Returns:**
`GrpcModelStatusResponse` object
Dictionary with model status in the form:

``` python

{
    ...
    <version_number>: {
        "state": <model_version_state>,
        "error_code": <error_code>,
        "error_message": <error_message>
    },
    ...
}
```


**Raises:**

- <b>`TypeError`</b>: if request argument is of wrong type.
- <b>`ValueError`</b>: if request argument has invalid contents.
- <b>`ConnectionError`</b>: if there was an error while sending request to the server.
- <b>`TypeError`</b>: if provided argument is of wrong type.
- <b>`ValueError`</b>: if provided argument has unsupported value.
- <b>`ConnectionError`</b>: if there is an issue with server connection.
- <b>`TimeoutError`</b>: if request handling duration exceeded timeout.
- <b>`ModelNotFound`</b>: if model with specified name and version does not exist in the model server.
- <b>`BadResponseError`</b>: if server response is malformed and cannot be parsed.


**Examples:**

```python

config = {
"address": "localhost",
"port": 9000
}
client = make_grpc_client(config)
request = make_model_status_request("model")
response = client.get_model_status(request)
import ovmsclient
client = ovmsclient.make_grpc_client("localhost:9000")
# request status of the specific model version, with timeout set to 2.5 seconds
model_status = client.get_model_status(model_name="model", model_version=1, timeout=2.5)
# request status of all model versions
model_status = client.get_model_status(model_name="model")

```
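
Since the dictionary is keyed by version number, the state of every served version can be inspected with a simple loop. A minimal sketch based on the structure shown above:

```python
# Print the state reported for each served version
for version, status in model_status.items():
    print(f"Version {version}: {status['state']} (error code: {status['error_code']})")
```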


---

<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L33"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>
<a href="../../../../client/python/lib/ovmsclient/tfs_compat/grpc/serving_client.py#L44"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a>

### <kbd>method</kbd> `predict`

```python
predict(request)
predict(inputs, model_name, model_version, timeout)
```

Send [`GrpcPredictRequest`](https://github.com/openvinotoolkit/model_server/blob/develop/client/python/lib/ovmsclient/tfs_compat/grpc/requests.py#L25) to the server and return response.
Request prediction on provided inputs.


**Args:**

- <b>`request`</b>: `GrpcPredictRequest` object.
- <b>`inputs`</b>: dictionary in the form:
```python
{
    ...
    <input_name>: <input_data>,
    ...
}
```
The following types are accepted:

| Key | Value type |
|---|---|
| input_name | string |
| input_data | python scalar, python list, numpy scalar, numpy array, TensorProto |

If the provided **input_data** is not a TensorProto, the `make_tensor_proto` function with default parameters will be called on it internally (see the sketch after the examples below).

- <b>`model_name`</b>: name of the requested model. Accepted types: `string`.
- <b>`model_version`</b> <i>(optional)</i>: version of the requested model. Accepted types: `positive integer`. Value 0 is special and means the latest served version will be chosen <i>(OVMS only; TFS requires a specific version number)</i>. Default value: 0.
- <b>`timeout`</b> <i>(optional)</i>: time in seconds to wait for the response from the server. If exceeded, TimeoutError is raised. Accepted types: `positive integer`, `positive float`. Value 0 is not accepted. Default value: 10.0.


**Returns:**
`GrpcPredictResponse` object
- if the model has a single output: `numpy ndarray` with prediction results
- if the model has multiple outputs: `dictionary` in the form:
```python
{
    ...
    <output_name>: <prediction_result>,
    ...
}
```
where `output_name` is a `string` and `prediction_result` is a `numpy ndarray`.


**Raises:**

- <b>`TypeError`</b>: if request argument is of wrong type.
- <b>`ValueError`</b>: if request argument has invalid contents.
- <b>`ConnectionError`</b>: if there was an error while sending request to the server.

- <b>`TypeError`</b>: if provided argument is of wrong type.
- <b>`ValueError`</b>: if provided argument has unsupported value.
- <b>`ConnectionError`</b>: if there is an issue with server connection.
- <b>`TimeoutError`</b>: if request handling duration exceeded timeout.
- <b>`ModelNotFound`</b>: if model with specified name and version does not exist in the model server.
- <b>`BadResponseError`</b>: if server response is malformed and cannot be parsed.
- <b>`InvalidInputError`</b>: if provided inputs do not match the model's inputs.


**Examples:**

```python

config = {
"address": "localhost",
"port": 9000
}
client = make_grpc_client(config)
request = make_predict_request({"input": [1, 2, 3]}, "model")
response = client.predict(request)
import ovmsclient
client = ovmsclient.make_grpc_client("localhost:9000")
inputs = {"input": [1, 2, 3]}
# request prediction on specific model version, with timeout set to 2.5 seconds
results = client.predict(inputs=inputs, model_name="model", model_version=1, timeout=2.5)
# request prediction on the latest model version
results = client.predict(inputs=inputs, model_name="model")

```
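
As noted in the `inputs` description, data that is not already a TensorProto is converted internally with `make_tensor_proto` and its default parameters. When the conversion should happen explicitly, the tensor can be built up front and passed as the input value. A minimal sketch; the input name and shape are illustrative, not taken from a real model:

```python
import numpy as np
import ovmsclient

client = ovmsclient.make_grpc_client("localhost:9000")

# Illustrative input data; the real shape and input name depend on the served model
data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Convert explicitly instead of relying on the implicit conversion inside predict()
tensor_proto = ovmsclient.make_tensor_proto(data)
results = client.predict(inputs={"input": tensor_proto}, model_name="model")
```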

40 changes: 0 additions & 40 deletions client/python/lib/docs/grpc_model_metadata_response.md

This file was deleted.
