This requires a discussion on how models are going to be loaded in the future.

Presently a single model/workflow is loaded on startup (`predict_wrapper` in `endpoints.py`) via `config.yaml`. This happens even before the active configs are fetched from the builder backend via an API, which doesn't align with fetching configs from the builder and then creating an inference function for each of those models. Therefore we need a separate `PREDICT_API` for each of the `CLIENT_CONFIGS`, and should start rewriting in that direction.
Continuing from point 1, this means we need to load models only after receiving/creating the configs from the builder backend. In other words, no models will be loaded on startup (before collecting the active/deployed configs from the builder). This might be a breaking change for this repository.
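A sketch of what that deferred startup could look like. `fetch_active_configs`, the registry layout, and the config shape are assumptions about how the builder-backend API might be consumed; the backend call is stubbed with static data:

```python
from typing import Any, Callable, Dict, List

# Empty at import time: nothing is loaded before configs arrive.
PREDICT_APIS: Dict[str, Callable[[str], Dict[str, Any]]] = {}


def fetch_active_configs() -> List[Dict[str, Any]]:
    # Placeholder for an HTTP call to the builder backend returning the
    # active/deployed client configs; stubbed here for illustration.
    return [{"client_id": "demo", "model_path": "models/demo"}]


def on_startup() -> None:
    """Populate the registry only after configs are fetched, never before."""
    for cfg in fetch_active_configs():
        # Bind cfg via a default argument to avoid Python's late-binding
        # closure pitfall inside the loop.
        def predict(utterance: str, _cfg: Dict[str, Any] = cfg) -> Dict[str, Any]:
            # Real code would run inference with the model loaded for _cfg.
            return {"client_id": _cfg["client_id"], "utterance": utterance}

        PREDICT_APIS[cfg["client_id"]] = predict
```

Any request arriving before `on_startup()` completes would need to be rejected or queued, which is part of why this is a breaking change.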
Can we break/fork out the repo, since we are serving two masters (the dialogy template and core SLU) at the same time? With every move forward we have to think about backward compatibility.