
Is there a way to use a local embedding model? #508

Open
@firdausai

Description


From my testing, embedding and indexing new papers seem to be what is eating my rate limit and quota, so I am looking to use a locally hosted embedding model. The docs say that paper-qa accepts any embedding model name supported by litellm, but I don't see a section on using a locally hosted embedding model.
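
For reference, here is a minimal sketch of what this could look like, assuming a local Ollama server on its default port with an embedding model already pulled. The `ollama/` prefix, the `nomic-embed-text` model name, and the endpoint are assumptions for illustration, not anything taken from the paper-qa docs:

```python
# Sketch only: route paper-qa's litellm-backed embeddings to a local Ollama server.
# Assumes `ollama pull nomic-embed-text` has been run and Ollama is listening on
# its default endpoint (http://localhost:11434); adjust names to your setup.
from paperqa import Docs, Settings

settings = Settings(
    # litellm-style "provider/model" name; the "ollama/" prefix tells litellm
    # to call the local Ollama endpoint instead of a hosted embedding API
    embedding="ollama/nomic-embed-text",
)

docs = Docs()
# Indexing the paper is where the embedding calls happen, so this step should
# now hit the local model rather than consuming hosted-API quota
docs.add("example_paper.pdf", settings=settings)

session = docs.query("What problem does this paper address?", settings=settings)
print(session.formatted_answer)
```

Note that the answering/summary LLM is configured separately (e.g. `Settings(llm=...)`), so the query step may still call a hosted model unless that is also pointed at a local server. If your local server speaks the OpenAI embeddings API rather than Ollama's, litellm's `openai/<model>` naming with a custom `api_base` should also work, though where exactly to pass `api_base` may differ across paper-qa versions.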

Labels: documentation (Improvements or additions to documentation)