
Add Level 2 Caching #2619

Open · wants to merge 8 commits into base: main
Conversation

@jodydonetti jodydonetti commented Mar 14, 2025

Why make this change?

Work for #2543 (Add Level 2 Caching).

What is this change?

First code pushed on this branch.

Summary of the changes:

  • updated the package reference for FusionCache from 1.0.0 to 2.1.0
  • added package references to ZiggyCreatures.FusionCache.Serialization.SystemTextJson, Microsoft.Extensions.Caching.StackExchangeRedis and ZiggyCreatures.FusionCache.Backplane.StackExchangeRedis to support L2 + backplane on Redis
  • added support for L2 + backplane in the initial config (Program.cs)
  • added RuntimeCacheOptions on top of EntityCacheOptions: EntityCacheOptions was presumably reused to avoid having 2 similar classes while they had the same shape, but that is not true anymore
  • added an initial version of RuntimeCacheLevel2Options to contain the related config values
  • aligned some tests that broke because they were directly instantiating and passing EntityCacheOptions instead of the new RuntimeCacheOptions
  • added new support classes (eg: RuntimeCacheOptionsConverterFactory)
  • added a new CACHE_ENTRY_TOO_LARGE message to replace the previously used CACHE_KEY_TOO_LARGE one, which was being logged when the entry was too large (not the key)
  • renamed the generic type parameter in DabCacheService.GetOrSetAsync from JsonElement (same name as, but a totally different thing than, the JsonElement class) to TResult (which was already used in the other GetOrSetAsync overload), to avoid confusion
  • fixed some typos (eg: "ommitted" instead of "omitted", etc)
  • added some temporary // TODO comments to get back to later
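
For reference, the L2 + backplane wiring in Program.cs looks roughly like this (a hedged sketch based on the FusionCache builder API, not the exact code in the PR; the connection strings are illustrative, and in DAB these values come from the runtime config):

```csharp
using Microsoft.Extensions.Caching.StackExchangeRedis;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;
using ZiggyCreatures.Caching.Fusion;
using ZiggyCreatures.Caching.Fusion.Backplane.StackExchangeRedis;
using ZiggyCreatures.Caching.Fusion.Serialization.SystemTextJson;

var services = new ServiceCollection();

services.AddFusionCache()
    // a serializer is required for FusionCache to save entries to a distributed (L2) cache
    .WithSerializer(new FusionCacheSystemTextJsonSerializer())
    // Redis as the L2 (distributed) cache
    .WithDistributedCache(new RedisCache(Options.Create(new RedisCacheOptions
    {
        Configuration = "localhost:6379" // illustrative connection string
    })))
    // Redis backplane so multiple nodes are notified of cache changes
    .WithBackplane(new RedisBackplane(Options.Create(new RedisBackplaneOptions
    {
        Configuration = "localhost:6379"
    })));
```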

Still missing:

  • although I already worked on the config generation side of things (eg: not just reading the config, but also generating it via the CLI), I don't think I have finished it yet
  • there are also some points where I'm not sure the convention I extrapolated is fully correct (eg: should a config sub-object be generated only if it's not null, or should we compare it with the inner default values to see if we can skip it as a whole?)
  • since DAB is using FusionCache, it's also using Auto-Recovery: because of this, I'd like to tweak some of the default settings (eg: allow background distributed operations to make things faster, etc). But we'd have to first check whether we want to expose some options to drive this or go full-auto-mode

Not (necessarily) part of this effort, but already mentioned and to keep in mind:

  • avoid calculating each cache entry's size, since SizeLimit is not currently used, nor is it possible to enable it via config (so, for now, the calculation is useless)
  • start thinking about enabling some resiliency features like Fail-Safe, Soft Timeouts, etc
  • in the future it may be nice to introduce Tagging, to automatically evict all the cached entries containing data about an entity after that entity is updated

How was this tested?

Tests run locally:

  • Unit Tests

Sample Request(s)

N/A

@jodydonetti jodydonetti changed the title Add L2+backplane support + tests + fix typos Add Level 2 Caching Mar 14, 2025
@jodydonetti (Collaborator, Author)

@microsoft-github-policy-service agree

@jodydonetti (Collaborator, Author)

I'll add that both L2 and the backplane are working, but I'd like to see the backplane in action on a real multi-instance DAB test: any advice on the best way to do this?

Also, I have to add the specific config part for the backplane, but I'm still thinking if it's really necessary or not: the "it just works" way can be to auto-use the backplane if the underlying L2 provider supports it (in our case: "redis"). But I'm pondering between ease of use of the default experience and total control...

@Aniruddh25 (Contributor)

/azp run


Azure Pipelines successfully started running 6 pipeline(s).

@jodydonetti (Collaborator, Author)

/azp run


Commenter does not have sufficient privileges for PR 2619 in repo Azure/data-api-builder

@jodydonetti (Collaborator, Author)

jodydonetti commented Mar 18, 2025

After a couple of minor commits I think this is ready for a review.

Tests

If someone with the needed privileges (maybe @Aniruddh25 ?) can do an /azp run it should now pass all the tests.

Backplane Experience: It Just Works

About the backplane: my opinion is that in the future we may add the ability to specify a different provider for the backplane than for L2 (maybe with different options, like connection string, etc), but the default experience should be as much as possible an "it just works" one, based on the specific provider.

The only supported provider currently, Redis, has both abilities (L2 and backplane): because of this, I think we can be good like this for the first release, without any additional explicit option (and btw, I'm already sharing the same IConnectionMultiplexer instance between L2 and backplane, to use less connections).
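
Sharing the multiplexer can be sketched like this (illustrative only: it assumes the ConnectionMultiplexerFactory hooks on RedisCacheOptions and RedisBackplaneOptions, which is not necessarily the exact wiring in the PR):

```csharp
using Microsoft.Extensions.Caching.StackExchangeRedis;
using StackExchange.Redis;
using ZiggyCreatures.Caching.Fusion.Backplane.StackExchangeRedis;

// Create a single multiplexer up-front and hand the same instance
// to both the L2 cache and the backplane via their factory hooks,
// so only one set of Redis connections is opened.
IConnectionMultiplexer mux =
    await ConnectionMultiplexer.ConnectAsync("localhost:6379"); // illustrative

var cacheOptions = new RedisCacheOptions
{
    ConnectionMultiplexerFactory = () => Task.FromResult(mux)
};

var backplaneOptions = new RedisBackplaneOptions
{
    ConnectionMultiplexerFactory = () => Task.FromResult(mux)
};
```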

Some Notes

A couple of additional notes:

  • although I already worked on the config generation side of things (eg: not just reading the config, but also generating it via the CLI), I'm not 100% sure it is already completely correct. Are there best practices for this?
  • there are also some points where I'm not sure the convention I extrapolated is fully correct (eg: should a config sub-object be generated only if it's not null, or should we compare it with the inner default values to see if we can skip it as a whole?). Are there hints on this?

Pending

Still pending (but not related to this PR):

  • we should discuss not calculating each cache entry's size, since SizeLimit is not currently used, nor is it possible to enable it via config (so, for now, the calculation is useless). Any opinions?

Please let me know, thanks!

@Aniruddh25 (Contributor)

/azp run


Azure Pipelines successfully started running 6 pipeline(s).

@jodydonetti (Collaborator, Author)

PS: although with this PR there would be full support for L2 + backplane, as mentioned elsewhere it would be good to have some sort of "app name" or "app id" to use as a prefix (or similar) to support multiple DAB apps running over the same Redis instance. If it's complicated to come up with such a new global config/concept, an alternative would be a new config in the top-level cache config node, something like "prefix" here:

{
	"$schema": "https://github.com/Azure/data-api-builder/releases/download/v0.10.23/dab.draft.schema.json",
	"data-source": {
		// ...
	},
	"runtime": {
		"rest": {
			// ...
		},
		"graphql": {
			// ...
		},
		"cache": {
			"enabled": true,
			"ttl-seconds": 10,
			"prefix": "blahblah", // HERE
			"level-2": {
				// ...
			}
		},
		// ...
	}
}

This would be cache-specific, and quite easy to add.
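
If such a "prefix" option were added, FusionCache already exposes a hook that could back it; a minimal sketch (the prefix value and where it's read from are hypothetical):

```csharp
using ZiggyCreatures.Caching.Fusion;

// Hypothetical: prefix value read from the runtime.cache.prefix config node
string prefix = "blahblah";

var options = new FusionCacheOptions
{
    // FusionCache automatically prepends this to every cache key,
    // isolating multiple DAB apps that share the same Redis instance
    CacheKeyPrefix = $"{prefix}:"
};

var cache = new FusionCache(options);
```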

After the merge of this first PR (if all goes well) I can add this pretty fast.

@@ -33,6 +33,7 @@ public record EntityCacheOptions
[JsonConstructor]
public EntityCacheOptions(bool? Enabled = null, int? TtlSeconds = null)
Contributor

Are we still using EntityCacheOptions as a part of the request flow, seems its been usurped by RuntimeCacheOptions, do we still need this class?

Collaborator Author

To me yes, because the shapes of the 2 configs are different.
For example, the runtime one has the provider, which is not in the entity one.
My 2 cents.

Contributor

Ya, makes sense. These 2 classes share a lot of code though, so maybe we can split the common functionality out into a base record that they both inherit?
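
As a sketch of that suggestion (hypothetical names, not code from the PR; the real records also carry [JsonConstructor] and converter attributes), the shared shape could live in a base record:

```csharp
// Hypothetical base record holding the properties the two option
// classes share; the shape mirrors the existing EntityCacheOptions.
public abstract record CacheOptionsBase(bool? Enabled = null, int? TtlSeconds = null);

// Per-entity cache options: just the shared shape.
public record EntityCacheOptions(bool? Enabled = null, int? TtlSeconds = null)
    : CacheOptionsBase(Enabled, TtlSeconds);

// Runtime-level cache options: shared shape plus runtime-only
// settings such as the L2 provider.
public record RuntimeCacheOptions(
    bool? Enabled = null, int? TtlSeconds = null, string? Provider = null)
    : CacheOptionsBase(Enabled, TtlSeconds);
```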

private class RuntimeCacheOptionsConverter : JsonConverter<RuntimeCacheOptions>
{
/// <summary>
/// Defines how DAB reads an entity's cache options and defines which values are
Contributor

reads an entity's cache options or reads the runtime cache options? maybe a copy paste error.

Collaborator Author

Yeah sorry, copy/paste error in the comments.

}

/// <summary>
/// When writing the EntityCacheOptions back to a JSON file, only write the ttl-seconds
Contributor

Similar as above, this should be RuntimeCacheOptions right?

Collaborator Author

Yes

@@ -27,8 +27,9 @@ public class DabCacheService

// Log Messages
private const string CACHE_KEY_EMPTY = "The cache key should not be empty.";
private const string CACHE_KEY_TOO_LARGE = "The cache key is too large.";
//private const string CACHE_KEY_TOO_LARGE = "The cache key is too large.";
Contributor

nit: remove commented code

Collaborator Author

Agree, I left it there for now because I was unsure whether there is also a need for a "key too large" check (currently missing).
I can remove it if that's not the case.

@aaronburtle (Contributor)

/azp run


Azure Pipelines successfully started running 6 pipeline(s).
