📝 Walkthrough

Adds optional notification type filtering and a helper to list distinct notification types for a user; expands preloads to include user_settings; exposes the types filter and available types in GraphQL and updates the resolver to return available types alongside paginated notifications.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Resolver as GraphQL Resolver
    participant AppNotif as AppNotifications
    participant DB as Database
    Client->>Resolver: get_current_user_notifications(types: ["watchlist","insight"])
    Resolver->>Resolver: build_opts(types)
    Resolver->>AppNotif: list_notifications_for_user(user_id, opts)
    AppNotif->>AppNotif: maybe_filter_by_types (apply types filter)
    AppNotif->>DB: query notifications (with preloads: roles, user_settings)
    DB-->>AppNotif: notifications
    AppNotif-->>Resolver: paginated notifications
    Resolver->>AppNotif: list_available_notification_types_for_user(user_id)
    AppNotif->>DB: query distinct notification.type for user
    DB-->>AppNotif: available types
    AppNotif-->>Resolver: available types
    Resolver-->>Client: {notifications: [...], available_notification_types: [...]}
```
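To make the walkthrough concrete, here is a minimal Elixir sketch of the two pieces it describes: the optional types filter and the distinct-types helper. Only the function names (`maybe_filter_by_types`, `list_available_notification_types_for_user`) come from the walkthrough and diagram; the schema module, exact signatures, and the soft-delete column are assumptions.

```elixir
# Sketch only: schema module, field names, and the soft-delete column are assumed.
import Ecto.Query

# Apply the optional `types` filter only when a non-empty list is given.
defp maybe_filter_by_types(query, %{types: types}) when is_list(types) and types != [] do
  from(n in query, where: n.type in ^types)
end

defp maybe_filter_by_types(query, _opts), do: query

# Return the distinct notification types that exist for the given user,
# excluding soft-deleted notifications.
def list_available_notification_types_for_user(user_id) do
  from(n in AppNotification,
    where: n.user_id == ^user_id and is_nil(n.deleted_at),
    distinct: true,
    select: n.type
  )
  |> Repo.all()
end
```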
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Force-pushed from 9f26dfb to 093509e
🧹 Nitpick comments (3)
lib/sanbase_web/graphql/schema/types/app_notification_types.ex (1)
29-29: Consider `non_null(list_of(non_null(:string)))` for a stricter, accurate GraphQL contract.

The resolver always assigns this field to a list (possibly empty), so it will never be null. Declaring it as nullable (`list_of(:string)`) allows clients to incorrectly assume the field may be absent and requires extra null-checks on the frontend.

♻️ Proposed change

```diff
- field(:available_notification_types, list_of(:string))
+ field(:available_notification_types, non_null(list_of(non_null(:string))))
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/schema/types/app_notification_types.ex` at line 29, The field declaration field(:available_notification_types, list_of(:string)) is too permissive given the resolver always returns a list; change it to a non-null list of non-null strings by using non_null(list_of(non_null(:string))) for the GraphQL type on the :available_notification_types field so clients can rely on a present, non-null list of strings (update the field declaration in the same module where :available_notification_types is defined).

test/sanbase/app_notifications/app_notifications_test.exs (1)
169-224: Good coverage for `list_available_notification_types_for_user/1` including the deleted-notification exclusion case.

One minor observation: in the "returns distinct types" test (line 183), `entity_type: "watchlist"` is used for the `"publish_insight"` notification. This is semantically incorrect but has no effect on what's being tested here. Worth aligning with the actual domain type (`"insight"`) to avoid misleading future readers.

♻️ Proposed nit

```diff
-      for {type, entity_id} <- [
-            {"create_watchlist", 1},
-            {"publish_insight", 2},
-            {"create_watchlist", 3}
-          ] do
-        {:ok, notification} =
-          AppNotifications.create_notification(%{
-            type: type,
-            user_id: author.id,
-            entity_type: "watchlist",
-            entity_id: entity_id
-          })
+      for {type, entity_type, entity_id} <- [
+            {"create_watchlist", "watchlist", 1},
+            {"publish_insight", "insight", 2},
+            {"create_watchlist", "watchlist", 3}
+          ] do
+        {:ok, notification} =
+          AppNotifications.create_notification(%{
+            type: type,
+            user_id: author.id,
+            entity_type: entity_type,
+            entity_id: entity_id
+          })
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/sanbase/app_notifications/app_notifications_test.exs` around lines 169 - 224, The test "returns distinct notification types for the user" uses entity_type: "watchlist" for a "publish_insight" notification which is semantically incorrect; update the notification creation in that test (the loop creating notifications via AppNotifications.create_notification/1) so that when type == "publish_insight" it sets entity_type: "insight" (or simply change the tuple for that case to use entity_type "insight") to match the domain model while keeping the rest of the test unchanged and still asserting against AppNotifications.list_available_notification_types_for_user/1.

lib/sanbase_web/graphql/resolvers/app_notification_resolver.ex (1)
10-16: `list_available_notification_types_for_user` runs unconditionally on every request.

The extra DISTINCT query is executed for every `get_current_user_notifications` call regardless of whether the client selects `availableNotificationTypes`. Moving it to a dedicated field resolver on `:app_notifications_paginated` would make it lazy — only executed when explicitly requested — and avoids the always-on overhead.

♻️ Proposed approach

In `app_notification_types.ex`, add a field resolver:

```diff
 object :app_notifications_paginated do
   field(:notifications, list_of(:app_notification))
   field(:cursor, :cursor)
-  field(:available_notification_types, list_of(:string))
+  field :available_notification_types, non_null(list_of(non_null(:string))) do
+    resolve(fn %{user_id: user_id}, _args, _resolution ->
+      {:ok, AppNotifications.list_available_notification_types_for_user(user_id)}
+    end)
+  end
 end
```

In the resolver, pass `user_id` into the map instead of calling the function eagerly:

```diff
-    available_types = AppNotifications.list_available_notification_types_for_user(user.id)
-    {:ok, Map.put(paginated, :available_notification_types, available_types)}
+    {:ok, Map.put(paginated, :user_id, user.id)}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/resolvers/app_notification_resolver.ex` around lines 10 - 16, The code eagerly calls AppNotifications.list_available_notification_types_for_user(user.id) inside the main resolver (where you call AppNotifications.list_notifications_for_user and AppNotifications.wrap_with_cursor), causing an unconditional DISTINCT query; instead, modify the main resolver (the block using list_notifications_for_user |> wrap_with_cursor) to include the user id in the returned paginated map (e.g., add :user_id or :current_user_id) and remove the direct call to list_available_notification_types_for_user, then implement a dedicated field resolver for the :app_notifications_paginated type in app_notification_types.ex that, when the available_notification_types field is requested, calls AppNotifications.list_available_notification_types_for_user/1 using the user id from the paginated map; this makes the DISTINCT query lazy and only runs when the client requests availableNotificationTypes.
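For context, a rough sketch of what the resolver shape might look like after this proposed change. The function head and the auth-context pattern are assumptions based on common Absinthe resolvers; only `build_opts`, `list_notifications_for_user`, `wrap_with_cursor`, and the `:user_id` key come from the diagram and the comment above.

```elixir
# Sketch only: the resolver head and auth-context shape are assumed.
def get_current_user_notifications(_root, args, %{context: %{auth: %{current_user: user}}}) do
  opts = build_opts(args)

  paginated =
    user.id
    |> AppNotifications.list_notifications_for_user(opts)
    |> AppNotifications.wrap_with_cursor()

  # Carry the user id so the :available_notification_types field resolver
  # can run the DISTINCT query lazily, only when that field is selected.
  {:ok, Map.put(paginated, :user_id, user.id)}
end
```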
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@lib/sanbase_web/graphql/resolvers/app_notification_resolver.ex`:
- Around line 10-16: The code eagerly calls
AppNotifications.list_available_notification_types_for_user(user.id) inside the
main resolver (where you call AppNotifications.list_notifications_for_user and
AppNotifications.wrap_with_cursor), causing an unconditional DISTINCT query;
instead, modify the main resolver (the block using list_notifications_for_user
|> wrap_with_cursor) to include the user id in the returned paginated map (e.g.,
add :user_id or :current_user_id) and remove the direct call to
list_available_notification_types_for_user, then implement a dedicated field
resolver for the :app_notifications_paginated type in app_notification_types.ex
that, when the available_notification_types field is requested, calls
AppNotifications.list_available_notification_types_for_user/1 using the user id
from the paginated map; this makes the DISTINCT query lazy and only runs when
the client requests availableNotificationTypes.
In `@lib/sanbase_web/graphql/schema/types/app_notification_types.ex`:
- Line 29: The field declaration field(:available_notification_types,
list_of(:string)) is too permissive given the resolver always returns a list;
change it to a non-null list of non-null strings by using
non_null(list_of(non_null(:string))) for the GraphQL type on the
:available_notification_types field so clients can rely on a present, non-null
list of strings (update the field declaration in the same module where
:available_notification_types is defined).
In `@test/sanbase/app_notifications/app_notifications_test.exs`:
- Around line 169-224: The test "returns distinct notification types for the
user" uses entity_type: "watchlist" for a "publish_insight" notification which
is semantically incorrect; update the notification creation in that test (the
loop creating notifications via AppNotifications.create_notification/1) so that
when type == "publish_insight" it sets entity_type: "insight" (or simply change
the tuple for that case to use entity_type "insight") to match the domain model
while keeping the rest of the test unchanged and still asserting against
AppNotifications.list_available_notification_types_for_user/1.
Actionable comments posted: 1
🧹 Nitpick comments (3)
test/sanbase_web/api_call_limit/api_call_limit_api_test.exs (1)
653-653: Consider asserting API call status in the setup loop.
`for _ <- 1..30, do: make_api_call(context.apikey_conn, [])` discards each response. If the calls are unexpectedly blocked (e.g., test isolation issue), `quota2` may not differ from `quota`, causing the assertion at line 657 to fail with a confusing message rather than at the API-call level. Asserting `status == 200` here would make failures easier to diagnose.

♻️ Proposed change

```diff
- for _ <- 1..30, do: make_api_call(context.apikey_conn, [])
+ for _ <- 1..30 do
+   assert make_api_call(context.apikey_conn, []).status == 200
+ end
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/sanbase_web/api_call_limit/api_call_limit_api_test.exs` at line 653, The setup loop calls make_api_call(context.apikey_conn, []) 30 times but discards responses; change it to capture and assert each call returned HTTP 200 to fail fast and surface blocked calls — e.g., for each invocation of make_api_call use pattern matching or an assertion on the response status (reference make_api_call and context.apikey_conn) so every iteration checks status == 200 before proceeding to compute quota2/quota.

lib/sanbase/api_call_limit/api_call_limit_ets.ex (1)
165-175: Cold-start ETS update logic is sound; minor consistency gap on return assertion.

The new path correctly avoids a DB overwrite by reading the authoritative quota first, then applying usage only in ETS. The mutex ensures no race between `get_quota_from_db_and_update_ets` and `do_upate_ets_usage`.

Line 170 silently discards the return value of `do_upate_ets_usage`, while line 213 asserts `true = do_upate_ets_usage(...)`. Since `do_upate_ets_usage` terminates with `true = :ets.update_element(...)` (raising on failure anyway), this is not a runtime bug — but the inconsistency reduces defensive clarity.

♻️ Proposed consistency fix

```diff
   {:ok, %{quota: quota} = metadata} ->
-    do_upate_ets_usage(entity_key, quota, count, result_byte_size, metadata)
+    true = do_upate_ets_usage(entity_key, quota, count, result_byte_size, metadata)
     :ok
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lib/sanbase/api_call_limit/api_call_limit_ets.ex` around lines 165 - 175, The call to do_upate_ets_usage in the get_quota_from_db_and_update_ets branch currently discards its return value, creating an inconsistency with other callers that assert true = do_upate_ets_usage(...); update that branch so it pattern-matches the result (e.g., true = do_upate_ets_usage(entity_key, quota, count, result_byte_size, metadata)) to keep behavior and defensive clarity consistent with the existing assertion in the module.

lib/sanbase/api_call_limit/api_call_limit.ex (1)
138-151: ETS-clear-on-success logic looks correct.

Clearing ETS only on `{:ok, _acl}` is the right call — it avoids clearing a valid ETS entry when `create` fails after the DB record was deleted, preventing a window where a concurrent request would find no ETS entry and try to DB-create a record that `reset/1` is simultaneously inserting.

The public function `reset/1` is missing `@spec` and `@doc` annotations, which is a coding guideline requirement for all public functions. As per coding guidelines, "Add typespecs (`@spec`) to all public functions" and "Add `@doc` documentation to all public functions":

📝 Proposed addition

```diff
+  @doc ~s"""
+  Resets the API call limit record for the given user by deleting the existing
+  record and creating a fresh one. Also clears the in-memory ETS cache on success.
+  """
+  @spec reset(User.t()) :: {:ok, %__MODULE__{}} | {:error, String.t()}
   def reset(%User{} = user) do
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lib/sanbase/api_call_limit/api_call_limit.ex` around lines 138 - 151, Add missing `@doc` and `@spec` for the public function reset/1: document that reset/1 deletes any existing ApiCallLimit record for the given %User{} then calls create(:user, user) and clears ETS via __MODULE__.ETS.clear_data(:user, user) only on {:ok, _} results; add a typespec such as `@spec` reset(User.t()) :: {:ok, struct()} | {:error, any()} (or narrower types if you can reference the concrete return types from create/2, e.g. {:ok, ApiCallLimit.t()} | {:error, Ecto.Changeset.t()}) so callers and dialyzer have clarity.
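Putting this comment together with the prompt above, a rough sketch of how the body of `reset/1` might read. The deletion query is an assumption; `create/2`, `__MODULE__.ETS.clear_data/2`, and clearing only on `{:ok, _}` results come from the discussion itself.

```elixir
# Sketch only: the delete step is assumed; create/2, ETS.clear_data/2, and the
# clear-only-on-success gating come from the review discussion above.
import Ecto.Query, only: [from: 2]

def reset(%User{} = user) do
  # Assumed deletion step: drop any existing record for this user.
  Repo.delete_all(from(acl in __MODULE__, where: acl.user_id == ^user.id))

  case create(:user, user) do
    {:ok, _acl} = ok ->
      # Clear the cached ETS entry only after the DB record has been recreated.
      __MODULE__.ETS.clear_data(:user, user)
      ok

    {:error, _} = error ->
      error
  end
end
```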
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/sanbase_web/api_call_limit/api_call_limit_api_test.exs`:
- Around line 653-668: The post-reset 50ms sleep is too short and causes
intermittent CI failures because the async BillingEventSubscriber path (which
hits do_update_usage's [] branch and performs a DB read before touching ETS) may
not finish before the following get_quota_db(:user, context.user) check; update
the test around self_reset_api_calls to either poll/assert_eventually for the
expected DB state (using a small loop with timeout) or increase the post-reset
wait to a safer 100–150 ms to ensure the asynchronous event handling completes
before calling Sanbase.ApiCallLimit.get_quota_db, referencing the
self_reset_api_calls helper, do_update_usage, and BillingEventSubscriber
behavior when making the change.
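A minimal sketch of the polling alternative suggested above, assuming an ExUnit test module; the helper name `assert_eventually/3`, the timeouts, and the example condition are illustrative, not taken from the codebase.

```elixir
# Illustrative polling helper for an ExUnit test; names and timeouts are placeholders.
defp assert_eventually(fun, timeout_ms \\ 150, interval_ms \\ 10) do
  deadline = System.monotonic_time(:millisecond) + timeout_ms
  poll_until(fun, deadline, interval_ms)
end

defp poll_until(fun, deadline, interval_ms) do
  cond do
    fun.() ->
      :ok

    System.monotonic_time(:millisecond) < deadline ->
      Process.sleep(interval_ms)
      poll_until(fun, deadline, interval_ms)

    true ->
      flunk("expected condition was not met within the timeout")
  end
end
```

The test could then call, for example, `assert_eventually(fn -> match?({:ok, _}, Sanbase.ApiCallLimit.get_quota_db(:user, context.user)) end)` in place of the fixed sleep, with the anonymous function checking whatever reset state the assertion actually expects.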
---
Nitpick comments:
In `@lib/sanbase/api_call_limit/api_call_limit_ets.ex`:
- Around line 165-175: The call to do_upate_ets_usage in the
get_quota_from_db_and_update_ets branch currently discards its return value,
creating an inconsistency with other callers that assert true =
do_upate_ets_usage(...); update that branch so it pattern-matches the result
(e.g., true = do_upate_ets_usage(entity_key, quota, count, result_byte_size,
metadata)) to keep behavior and defensive clarity consistent with the existing
assertion in the module.
In `@lib/sanbase/api_call_limit/api_call_limit.ex`:
- Around line 138-151: Add missing `@doc` and `@spec` for the public function
reset/1: document that reset/1 deletes any existing ApiCallLimit record for the
given %User{} then calls create(:user, user) and clears ETS via
__MODULE__.ETS.clear_data(:user, user) only on {:ok, _} results; add a typespec
such as `@spec` reset(User.t()) :: {:ok, struct()} | {:error, any()} (or narrower
types if you can reference the concrete return types from create/2, e.g. {:ok,
ApiCallLimit.t()} | {:error, Ecto.Changeset.t()}) so callers and dialyzer have
clarity.
In `@test/sanbase_web/api_call_limit/api_call_limit_api_test.exs`:
- Line 653: The setup loop calls make_api_call(context.apikey_conn, []) 30 times
but discards responses; change it to capture and assert each call returned HTTP
200 to fail fast and surface blocked calls — e.g., for each invocation of
make_api_call use pattern matching or an assertion on the response status
(reference make_api_call and context.apikey_conn) so every iteration checks
status == 200 before proceeding to compute quota2/quota.
Changes
```graphql
{
  getCurrentUserNotifications(types: ["publish_insight", "update_watchlist"], limit: 2) {
    availableNotificationTypes
    notifications {
      type
      user {
        avatarUrl
        username
        id
        email
      }
    }
  }
}
```

```json
{
  "data": {
    "getCurrentUserNotifications": {
      "availableNotificationTypes": [
        "create_watchlist",
        "publish_insight",
        "update_watchlist"
      ],
      "notifications": [
        {
          "type": "update_watchlist",
          "user": {
            "avatarUrl": "https://production-sanbase-images.s3.amazonaws.com/uploads/c978261c467028715af203a83bff99497372f6dbfc164badf5ac71a643cd6fd8_1582111820068_1582111820480.jpeg",
            "email": "<email hidden>",
            "id": "4522",
            "username": "Santrends"
          }
        },
        {
          "type": "publish_insight",
          "user": {
            "avatarUrl": "https://production-sanbase-images.s3.amazonaws.com/uploads/08487a460702b62087e47ecde2dcdc22b63016db38afff987886d144b9b65f26_1765004371308_1765004369780.jpeg",
            "email": "<email hidden>",
            "id": "3031",
            "username": "SanSights"
          }
        }
      ]
    }
  }
}
```

Ticket
Checklist:
Summary by CodeRabbit
Release Notes