Add Level 2 Caching #2619
base: main
Conversation
@microsoft-github-policy-service agree
I'll add that both L2 and the backplane are working, but I'd like to see the backplane in action on a real multi-instance DAB test: any advice on the best way to do this? Also, I have to add the specific config part for the backplane, but I'm still thinking about whether it's really necessary or not: the "it just works" way would be to auto-use the backplane if the underlying L2 provider supports it (in our case: Redis).
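For reference, a minimal sketch of what the Redis L2 + backplane wiring can look like with FusionCache when both share the same connection string; the class, method, and parameter names here are illustrative, not necessarily what this PR does:

```csharp
using Microsoft.Extensions.Caching.StackExchangeRedis;
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;
using ZiggyCreatures.Caching.Fusion.Backplane.StackExchangeRedis;
using ZiggyCreatures.Caching.Fusion.Serialization.SystemTextJson;

public static class Level2CacheSetupSketch
{
    // Registers FusionCache with a Redis distributed (L2) cache and the Redis
    // backplane, reusing the same connection string for both.
    public static void AddLevel2CacheWithBackplane(IServiceCollection services, string redisConnectionString)
    {
        services.AddFusionCache()
            // A serializer is required as soon as a distributed level is used.
            .WithSerializer(new FusionCacheSystemTextJsonSerializer())
            // L2: any IDistributedCache implementation; here Redis.
            .WithDistributedCache(new RedisCache(new RedisCacheOptions
            {
                Configuration = redisConnectionString
            }))
            // Backplane: notifies the other instances when an entry changes.
            .WithBackplane(new RedisBackplane(new RedisBackplaneOptions
            {
                Configuration = redisConnectionString
            }));
    }
}
```

As for exercising it: running two or more DAB instances pointed at the same Redis and updating an entry through one of them should propagate the change to the others via the backplane, which is one way to observe the multi-instance behavior mentioned above.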
/azp run
Azure Pipelines successfully started running 6 pipeline(s).
/azp run
Commenter does not have sufficient privileges for PR 2619 in repo Azure/data-api-builder
After a couple of minor commits I think this is ready for a review.

Tests

If someone with the needed privileges (maybe @Aniruddh25 ?) can do an `/azp run`, that would be appreciated.

Backplane Experience: It Just Works

About the backplane: my opinion is that in the future we may add the ability to specify a different provider for the backplane than for L2 (maybe with different options, like connection string, etc.), but the default experience should be as much as possible an "it just works" one, based on the specific provider. The only supported provider currently, Redis, has both abilities (L2 and backplane): because of this, I think we can be good like this for the first release, without any additional explicit option (and btw, I'm already sharing the same …).

Some Notes

A couple of additional notes:
Pending

Still pending (but not related to this PR):

Please let me know, thanks!
/azp run
Azure Pipelines successfully started running 6 pipeline(s).
PS: although with this PR there would be full support for L2+backplane, as mentioned elsewhere it would be good to have some sort of "app name" or "app id", to use as a prefix or similar, to support multiple DAB apps running over the same Redis instance. If it's complicated to come up with such a new global config/concept, an alternative would be to have a new config in the top-level cache section, something like this:

```json
{
"$schema": "https://github.com/Azure/data-api-builder/releases/download/v0.10.23/dab.draft.schema.json",
"data-source": {
// ...
},
"runtime": {
"rest": {
// ...
},
"graphql": {
// ...
},
"cache": {
"enabled": true,
"ttl-seconds": 10,
"prefix": "blahblah", // HERE
"level-2": {
// ...
}
},
// ...
}
}
```

This would be cache-specific and quite easy to add. After the merge of this first PR (if all goes well) I can add this pretty fast.
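To make the idea concrete, here is a rough sketch (not part of this PR) of how such a prefix could be honored at the cache level via FusionCache's `CacheKeyPrefix` option; the `configuredPrefix` parameter stands in for the hypothetical `runtime.cache.prefix` value:

```csharp
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

public static class CachePrefixSketch
{
    // Applies a per-app prefix to every cache key, so multiple DAB apps can
    // share the same Redis instance without colliding.
    public static void AddPrefixedCache(IServiceCollection services, string? configuredPrefix)
    {
        services.AddFusionCache()
            .WithOptions(options =>
            {
                // "configuredPrefix" would come from the hypothetical runtime.cache.prefix setting.
                options.CacheKeyPrefix = configuredPrefix;
            });
    }
}
```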
@@ -33,6 +33,7 @@ public record EntityCacheOptions
[JsonConstructor]
public EntityCacheOptions(bool? Enabled = null, int? TtlSeconds = null)
Are we still using EntityCacheOptions as part of the request flow? It seems it has been usurped by RuntimeCacheOptions; do we still need this class?
To me, yes, because the shapes of the 2 configs are different.
For example, in the runtime one there's the provider, which is not in the entity one.
My 2 cents.
Yeah, makes sense. These 2 classes share a lot of code though, so maybe we can split the common functionality out into a base record that they both inherit?
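A minimal sketch of what that split could look like; the level-2 property names below are placeholders, not the actual DAB config shape:

```csharp
// Shared shape for both cache option records (names are illustrative).
public abstract record CacheOptionsBase
{
    public bool? Enabled { get; init; }
    public int? TtlSeconds { get; init; }
}

// Per-entity options only need the shared shape.
public record EntityCacheOptions : CacheOptionsBase;

// Runtime-level options add what entities don't have, e.g. the level-2 config.
public record RuntimeCacheOptions : CacheOptionsBase
{
    public RuntimeCacheLevel2Options? Level2 { get; init; }
}

// Placeholder properties: the real RuntimeCacheLevel2Options shape may differ.
public record RuntimeCacheLevel2Options
{
    public bool? Enabled { get; init; }
    public string? Provider { get; init; }
    public string? ConnectionString { get; init; }
}
```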
private class RuntimeCacheOptionsConverter : JsonConverter<RuntimeCacheOptions>
{
/// <summary>
/// Defines how DAB reads an entity's cache options and defines which values are
"Reads an entity's cache options" or "reads the runtime cache options"? Maybe a copy-paste error.
Yeah sorry, copy/paste error in the comments.
}

/// <summary>
/// When writing the EntityCacheOptions back to a JSON file, only write the ttl-seconds
Similar to the above: this should be RuntimeCacheOptions, right?
Yes
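For context, the write-side behavior described in that comment boils down to something like the sketch below. It is illustrative only, with a minimal stand-in for `RuntimeCacheOptions` rather than the real class:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Minimal stand-in for the real record, just so the sketch compiles.
public record RuntimeCacheOptions(bool? Enabled = null, int? TtlSeconds = null);

internal class RuntimeCacheOptionsWriteSketch : JsonConverter<RuntimeCacheOptions>
{
    public override RuntimeCacheOptions? Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => throw new NotImplementedException(); // reading is out of scope for this sketch

    public override void Write(Utf8JsonWriter writer, RuntimeCacheOptions value, JsonSerializerOptions options)
    {
        writer.WriteStartObject();

        // Only ttl-seconds is written back; everything else keeps its defaults on read.
        if (value.TtlSeconds is int ttl)
        {
            writer.WriteNumber("ttl-seconds", ttl);
        }

        writer.WriteEndObject();
    }
}
```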
@@ -27,8 +27,9 @@ public class DabCacheService

// Log Messages
private const string CACHE_KEY_EMPTY = "The cache key should not be empty.";
private const string CACHE_KEY_TOO_LARGE = "The cache key is too large.";
//private const string CACHE_KEY_TOO_LARGE = "The cache key is too large.";
nit: remove commented code
Agree, I left it there for now because I was unsure if there was also a need for a "key too large" check (currently missing).
I can remove it if that is not the case.
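If such a check does turn out to be needed, a purely hypothetical version could look like this; the 1024-character limit is an assumption, not something DAB or Redis mandates:

```csharp
using System;

public class CacheKeyValidationSketch
{
    // Mirrors the existing constant in DabCacheService.
    private const string CACHE_KEY_EMPTY = "The cache key should not be empty.";

    // Same message as the currently commented-out constant; the limit is assumed.
    private const string CACHE_KEY_TOO_LARGE = "The cache key is too large.";
    private const int MAX_CACHE_KEY_LENGTH = 1024;

    public static void EnsureCacheKeyIsValid(string cacheKey)
    {
        if (string.IsNullOrWhiteSpace(cacheKey))
        {
            throw new ArgumentException(CACHE_KEY_EMPTY);
        }

        if (cacheKey.Length > MAX_CACHE_KEY_LENGTH)
        {
            throw new ArgumentException(CACHE_KEY_TOO_LARGE);
        }
    }
}
```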
/azp run
Azure Pipelines successfully started running 6 pipeline(s).
Why make this change?
Work for #2543 (Add Level 2 Caching).
What is this change?
First code pushed on this branch.
Summary of the changes:
- Updated FusionCache from `1.0.0` to `2.1.0`
- Added `ZiggyCreatures.FusionCache.Serialization.SystemTextJson`, `Microsoft.Extensions.Caching.StackExchangeRedis` and `ZiggyCreatures.FusionCache.Backplane.StackExchangeRedis` to support L2+backplane on Redis
- Related setup/registration changes (in `Program.cs`)
- Added a new `RuntimeCacheOptions` on top of `EntityCacheOptions`: `EntityCacheOptions` was used to, I suppose, avoid having 2 similar classes since they had the same shape, but now that is not true anymore
- Added a new `RuntimeCacheLevel2Options` to contain the related config values
- Fixed places that were using `EntityCacheOptions` instead of the new `RuntimeCacheOptions`
- Added a new converter factory (`RuntimeCacheOptionsConverterFactory`)
- Added a `CACHE_ENTRY_TOO_LARGE` message instead of the previously used `CACHE_KEY_TOO_LARGE` one, which was used when the entry was too large (not the key)
- Renamed the generic type parameter `T` in `DabCacheService.GetOrSetAsync` from `JsonElement` (same name but a totally different thing than the `JsonElement` class) to `TResult` (which was already used in the other `GetOrSetAsync` overload), to avoid confusion
- Added some `// TODO` comments to get back to them later

Still missing:

- Decide how to serialize the options back to JSON (skip when `null`, or should we do a comparison with the inner default values to see if we should skip it as a whole?)

Not (necessarily) part of this effort, but already mentioned and to keep in mind:
How was this tested?
Tests run locally:
Sample Request(s)
N/A