How to instantiate one single model instance shared by all threads #4

@Emerald01

Description

Thank you for this insightful example of hosting an ML model with Celery and FastAPI. In your blog post you mention that "There will be a PredictTask object for each worker process. Therefore if a worker has four threads then the model will be loaded four times (each is then stored in memory as well)." This is exactly my concern: each thread would instantiate its own model instance, but if the model is expensive to load, or if some state needs to be shared across the calling threads, we want a single model instance shared by all threads. How can we do that properly? Any suggestions?
