slide app readme faq and optional persistence between runs (#6914)
GitOrigin-RevId: 6fb0b0c8d390b1b23753aa86a3c902129ca907f4
berkecanrizai authored and Manul from Pathway committed Jul 10, 2024
1 parent 0f0a74f commit 18452aa
Showing 2 changed files with 25 additions and 1 deletion.
25 changes: 24 additions & 1 deletion examples/pipelines/slides_ai_search/README.md
@@ -126,11 +126,19 @@ This folder contains several components necessary for setting up and running the

## How to run the project

First, clone the Pathway LLM App repository:

```bash
git clone https://github.com/pathwaycom/llm-app.git
```

Make sure you are in the right directory:
```bash
cd examples/pipelines/slides_ai_search
```

> Note: If your OpenAI API usage is throttled, you may want to change the `run_mode` in the `SlideParser` to `run_mode="sequential"` instead of `"parallel"`.

### Locally
Running the whole demo without Docker is not recommended, as it consists of three separate components.

@@ -238,3 +246,18 @@ conn.pw_ai_answer("introduction slide")
## Not sure how to get started?

Let's discuss how we can help you build a powerful, customized RAG application. [Reach us here to talk or request a demo!](https://pathway.com/solutions/enterprise-generative-ai?modal=requestdemo)


## FAQ

> I am getting OpenAI API rate limit errors.
- You can change `run_mode` in `SlideParser` to `run_mode="sequential"`, as sketched below. This will parse images one by one; however, it will significantly slow down the parsing stage.
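
A minimal sketch of that change, assuming the parser is constructed roughly as below; keep the import path and the remaining constructor arguments exactly as they appear in your pipeline code:

```python
# Sketch only: only run_mode changes here; keep the other arguments
# from the original pipeline as they are.
from pathway.xpacks.llm.parsers import SlideParser

parser = SlideParser(
    run_mode="sequential",  # parse slides one by one instead of in parallel
    # ... remaining arguments unchanged from the original pipeline
)
```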

> The UI shows that my file is being indexed, but that file is not in my inputs.
- The app mounts the `storage` folder from the Docker container to the local file system so that the file server can serve the content. This folder is not cleaned between runs, so files from previous runs remain there. You can remove the folder after closing the app to get rid of them.

> Can I use other vision LMs or LLMs?
- Yes, you can configure `OpenAIChat` to reach local LLMs, or swap it with another LLM wrapper (such as `LiteLLMChat`) to use other models, as sketched below. Make sure your model of choice supports vision inputs.
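
As an illustration, a minimal sketch of swapping in `LiteLLMChat`; the model name is a placeholder, and any vision-capable model available through LiteLLM should work:

```python
# Sketch only: the model name below is an illustrative placeholder.
from pathway.xpacks.llm.llms import LiteLLMChat

chat = LiteLLMChat(model="gemini/gemini-1.5-flash")
# Pass `chat` wherever the pipeline currently constructs OpenAIChat.
```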

> Can I persist the cache between runs?
- Yes, you can uncomment the `- ./Cache:/app/Cache` line under the `volumes:` section of the `app` service in `docker-compose.yml` to enable caching between runs. Requests that were already answered will then not be repeated on subsequent runs.
1 change: 1 addition & 0 deletions examples/pipelines/slides_ai_search/docker-compose.yml
@@ -14,6 +14,7 @@ services:
- ./data:/app/data
- ./storage/pw_dump_files:/app/storage/pw_dump_files
- ./storage/pw_dump_images:/app/storage/pw_dump_images
# - ./Cache:/app/Cache

nginx:
container_name: file_server
