
restructure how models are loaded for core-slu-service. #26

Open
greed2411 opened this issue Jun 9, 2021 · 0 comments
Labels: help wanted (Extra attention is needed)
@greed2411 (Member)

This requires discussion on how models are going to be loaded in the future, especially:

  1. Presently a single model/workflow is loaded (`predict_wrapper` in `endpoints.py`) on startup via `config.yaml`, even before the active configs are fetched from the builder backend via an API. This doesn't align with fetching configs from the builder and then creating inference functions for each of those models. Therefore we need a separate PREDICT_API for each of the CLIENT_CONFIGS, and we should start rewriting toward that.
  2. Continuing from 1, this means we need to load models only after receiving/creating the config from builder-backend. In other words, no models would be loaded on startup (before collecting the active/deployed configs from the builder). This might be a breaking change for this repository.
  3. Can we break/fork out the repo, since we are serving two masters (dialogy template & core slu) at the same time? With every move forward we have to think about backward compatibility.
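To make point 1 concrete, here is a minimal sketch of what "one PREDICT_API per client config" could look like. All names here (`fetch_active_configs`, `load_model`, the config shape, `PREDICT_APIS`) are assumptions for illustration, not the repo's actual API:

```python
from typing import Callable, Dict, List

# Registry of one predict function per client, keyed by client_id.
# Hypothetical name; not part of core-slu-service today.
PREDICT_APIS: Dict[str, Callable[[str], str]] = {}


def fetch_active_configs() -> List[dict]:
    """Stand-in for the builder-backend API call that returns the
    active/deployed client configs. Real implementation would be HTTP."""
    return [
        {"client_id": "client_a", "model_path": "models/a"},
        {"client_id": "client_b", "model_path": "models/b"},
    ]


def load_model(model_path: str) -> Callable[[str], str]:
    """Stub loader; in the real service this would deserialize a
    dialogy workflow from disk or object storage."""
    return lambda text: f"{model_path}:{text}"


def build_predict_fn(config: dict) -> Callable[[str], str]:
    """Create one predict closure per client config, holding its model."""
    model = load_model(config["model_path"])

    def predict(text: str) -> str:
        return model(text)

    return predict


def load_models_from_configs() -> None:
    """Called only after configs arrive from builder-backend —
    nothing is loaded at process startup (point 2 above)."""
    for config in fetch_active_configs():
        PREDICT_APIS[config["client_id"]] = build_predict_fn(config)


load_models_from_configs()
```

The key design choice this sketches is deferring all model loading until the config-fetch step completes, so routing a request becomes a dictionary lookup (`PREDICT_APIS[client_id]`) rather than a single global workflow.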
@greed2411 greed2411 self-assigned this Jun 9, 2021
@ltbringer ltbringer added the help wanted Extra attention is needed label Jun 9, 2021