
Conversation

Member

@ardatan ardatan commented Sep 10, 2025

The following option controls query plan caching. Caching is enabled by default; setting enabled: false disables it.

query_planner:
  cache:
    enabled: true | false

@ardatan ardatan requested a review from dotansimha September 10, 2025 12:40
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @ardatan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request provides a new configuration option that allows operators to disable the query plan cache within the router. This enhancement offers greater flexibility for performance tuning, debugging, or specific deployment scenarios where caching query plans might not be desirable, giving users more granular control over the router's behavior.

Highlights

  • Configuration: Introduces a new "no_plan_cache" boolean field within the "query_planner" configuration, allowing users to disable query plan caching. This field defaults to "false", meaning caching remains enabled by default.
  • Query Plan Execution: Modifies the "plan_operation_with_cache" function to conditionally bypass the existing query plan cache. If "no_plan_cache" is set to "true" in the router configuration, the function will directly plan the operation without attempting to retrieve it from or store it in the cache.
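The bypass described in the highlights can be sketched in a few lines of simplified Rust. Everything here (the type names, `plan_operation`, the bare `HashMap` used as a cache) is an illustrative stand-in, not the router's actual code:

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for the router's real types.
#[derive(Clone, Debug, PartialEq)]
struct QueryPlan(String);

struct QueryPlannerConfig {
    no_plan_cache: bool, // #[serde(default)] => false, so caching stays on by default
}

fn plan_operation(op: &str) -> QueryPlan {
    // Stand-in for the actual (expensive) planning step.
    QueryPlan(format!("plan for {op}"))
}

// Mirrors the described control flow: when `no_plan_cache` is true, plan the
// operation directly and never read from or write to the cache; otherwise,
// get-or-insert through the cache.
fn plan_operation_with_cache(
    config: &QueryPlannerConfig,
    cache: &mut HashMap<String, QueryPlan>,
    op: &str,
) -> QueryPlan {
    if config.no_plan_cache {
        return plan_operation(op);
    }
    cache
        .entry(op.to_string())
        .or_insert_with(|| plan_operation(op))
        .clone()
}

fn main() {
    let mut cache = HashMap::new();

    let cached = QueryPlannerConfig { no_plan_cache: false };
    plan_operation_with_cache(&cached, &mut cache, "{ me { id } }");
    assert_eq!(cache.len(), 1); // plan was stored

    let uncached = QueryPlannerConfig { no_plan_cache: true };
    let mut empty = HashMap::new();
    plan_operation_with_cache(&uncached, &mut empty, "{ me { id } }");
    assert!(empty.is_empty()); // bypassed: nothing stored
    println!("ok");
}
```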
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new configuration flag, no_plan_cache, to disable query plan caching. The implementation is mostly correct, but I've found a couple of issues.

In bin/router/src/pipeline/query_plan.rs, the new logic for when caching is disabled misses a special case for handling introspection queries, which will cause them to fail. I've suggested a fix that also points out code duplication that could be addressed with a larger refactoring.

In lib/router-config/src/query_planner.rs, the new configuration field is missing a doc comment. I've suggested adding one for consistency and clarity.

Overall, this is a useful feature, and with these changes, it should be ready to merge.


github-actions bot commented Sep 10, 2025

k6-benchmark results

     ✓ response code was 200
     ✓ no graphql errors
     ✓ valid response structure

     █ setup

     checks.........................: 100.00% ✓ 232173      ✗ 0    
     data_received..................: 6.8 GB  226 MB/s
     data_sent......................: 91 MB   3.0 MB/s
     http_req_blocked...............: avg=4.54µs  min=711ns  med=1.82µs  max=14.13ms  p(90)=2.57µs  p(95)=2.93µs  
     http_req_connecting............: avg=1.44µs  min=0s     med=0s      max=11.12ms  p(90)=0s      p(95)=0s      
     http_req_duration..............: avg=18.89ms min=1.84ms med=17.98ms max=80.97ms  p(90)=26.11ms p(95)=29.14ms 
       { expected_response:true }...: avg=18.89ms min=1.84ms med=17.98ms max=80.97ms  p(90)=26.11ms p(95)=29.14ms 
     http_req_failed................: 0.00%   ✓ 0           ✗ 77411
     http_req_receiving.............: avg=129.4µs min=25.8µs med=40.49µs max=26.85ms  p(90)=93.02µs p(95)=370.72µs
     http_req_sending...............: avg=25.25µs min=5.36µs med=10.74µs max=20.68ms  p(90)=16.33µs p(95)=29.39µs 
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s       p(90)=0s      p(95)=0s      
     http_req_waiting...............: avg=18.74ms min=1.78ms med=17.85ms max=79.41ms  p(90)=25.88ms p(95)=28.84ms 
     http_reqs......................: 77411   2575.293736/s
     iteration_duration.............: avg=19.37ms min=4.33ms med=18.34ms max=228.17ms p(90)=26.55ms p(95)=29.67ms 
     iterations.....................: 77391   2574.62838/s
     vus............................: 50      min=50        max=50 
     vus_max........................: 50      min=50        max=50 


github-actions bot commented Sep 10, 2025

🐋 This PR was built and pushed to the following Docker images:

Image Names: ghcr.io/graphql-hive/router

Platforms: linux/amd64,linux/arm64

Image Tags: ghcr.io/graphql-hive/router:pr-414 ghcr.io/graphql-hive/router:sha-a6f1871

Docker metadata
{
"buildx.build.ref": "builder-156eebee-815c-42bc-a508-e89015a3b3fa/builder-156eebee-815c-42bc-a508-e89015a3b3fa0/gpn4wu4ay8g3t2rqef6n1efnt",
"containerimage.descriptor": {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:42a9204b94232c445d01af32e8f6090b70a1f3cb36034998e826d2af03a251fe",
  "size": 1609
},
"containerimage.digest": "sha256:42a9204b94232c445d01af32e8f6090b70a1f3cb36034998e826d2af03a251fe",
"image.name": "ghcr.io/graphql-hive/router:pr-414,ghcr.io/graphql-hive/router:sha-a6f1871"
}

#[serde(default)]
pub allow_expose: bool,
#[serde(default)]
pub no_plan_cache: bool,
Member


I think no_plan_cache: true makes you flip the condition too many times.

Why not just cache: false (and then the default is true)?

Contributor


Let's not do cache: boolean; let's be explicit.
What if we add a Redis cache of plans? What if we have an in-memory cache and Redis at the same time and want to disable only one of them?

IMO, we should have an option to disable caching entirely with a config option like { cache: { enabled: boolean } } or { cache: { in_memory: { enabled: boolean } } }.

Contributor


This way we can have cache options that apply to in-memory, Redis, disk-based storage, or whatever else, plus room for configs specific to each backend.
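As a sketch of the layered shape being proposed here (every field name below is hypothetical, not the router's actual config schema):

```yaml
# Hypothetical sketch only; none of these field names are the router's real schema.
query_planner:
  cache:
    enabled: true        # umbrella switch for plan caching
    in_memory:
      enabled: true
      size: 1000         # illustrative backend-specific option
    redis:
      enabled: false
      key_prefix: "query_plan:"
```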

Member Author


So are we going to have separate cache storage options for each feature instead of one single cache storage option for all features?
I think one cache storage can be shared by multiple features, so caching can be a boolean flag for query planning.

cache:
  redis: ...
query_plan:
  cache: false
response_cache:
  ...

vs

query_plan:
  cache:
    redis: ...
response_cache:
  cache:
    redis: ...

Contributor

@kamilkisiela kamilkisiela Sep 23, 2025


query_plan:
  cache:
    redis: ...
response_cache:
  cache:
    redis: ...

That does not mean you need to pass connection params twice; it could be done at a higher level:

redis: ...
query_plan:
  cache:
    redis: 
      size: 1200
      key_prefix: ...
response_cache:
  cache:
    redis: 
      size: 300
      key_prefix: ...

You see my point?

Member Author

@ardatan ardatan Sep 23, 2025


I've never seen this type of nested storage-specific settings before, but I don't have a strong opinion against it, so I'll move it under cache for now and we can tackle the storage-specific options later on.

Member Author


It is cache.enabled now. Does this look good to you?

let filtered_operation_for_plan = &normalized_operation.operation_for_plan;

if app_state.router_config.query_planner.no_plan_cache {
    if filtered_operation_for_plan.selection_set.is_empty()
Contributor


Let's share the logic of producing the plan whether or not the cache is enabled.
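One way this suggestion could look (a sketch with illustrative names, not the actual refactor from the PR): make the planning step a single closure and let the cache merely wrap it, so both branches run the same planning code.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the router's query plan type.
#[derive(Clone, Debug, PartialEq)]
struct QueryPlan(String);

// A single planning path shared by both branches; `get_or_plan` and these
// types are hypothetical, not the router's actual API.
fn get_or_plan(
    cache: Option<&mut HashMap<String, QueryPlan>>,
    key: &str,
    plan: impl FnOnce() -> QueryPlan,
) -> QueryPlan {
    match cache {
        // Cache enabled: look up by key, planning only on a miss.
        Some(map) => map.entry(key.to_string()).or_insert_with(plan).clone(),
        // Cache disabled: the same planning code runs, nothing is stored.
        None => plan(),
    }
}

fn main() {
    let mut map = HashMap::new();
    let plan = get_or_plan(Some(&mut map), "q1", || QueryPlan("p1".into()));
    assert_eq!(plan, QueryPlan("p1".into()));
    assert_eq!(map.len(), 1); // stored on the cached path

    let direct = get_or_plan(None, "q1", || QueryPlan("p1".into()));
    assert_eq!(direct, QueryPlan("p1".into())); // same result, no storage
    println!("ok");
}
```

Because the planning closure is the only place the plan is produced, special cases (such as introspection handling) only need to be written once.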

Member Author


Do you think it looks better now?
