
Check for custom callouts on app builds #35628

Open · wants to merge 7 commits into master
Conversation

@Charl1996 (Contributor) commented Jan 16, 2025

Technical Summary

Ticket

This PR adds more insight into how users use the app dependencies feature in an application (or rather, how they don't use it).

The broader intent is to find out what level of adoption the app dependencies feature has, and whether any projects make use of app callouts but don't use the app dependencies feature. This will enable us to start asking "why not".

Notes

Celery task

I think the new additional method should be relatively cheap to process, but I have considered handing the analyse_app_build method off to celery for async processing, since reporting analytics shouldn't be a blocking operation. Any input/thoughts here would be welcome; a rough sketch of what the handoff could look like follows.
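(For illustration only: a minimal sketch of the celery handoff, assuming corehq's task decorator and dbaccessors; the task name, queue, and entry point are hypothetical and not part of this PR.)

    from corehq.apps.celery import task
    from corehq.apps.app_manager.dbaccessors import get_app

    @task(queue='background_queue', ignore_result=True)
    def analyse_new_app_build(domain, build_id):
        # Re-fetch inside the worker so it sees the saved build rather than
        # a stale in-memory document.
        build = get_app(domain, build_id)
        app = get_app(domain, build.copy_of)
        # Hypothetical entry point: the method this PR adds.
        app.analyse_app_build(build)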

Datadog custom metric

Prereading

Reporting the app ID and domain will count as additional custom metrics on Datadog (for which app_id would be the biggest contributor to increased metric cardinality), but

  1. assuming hitting the "Make new version" button is a relatively low-frequency action, and
  2. of all new app builds, only those that have custom app callouts will be reported to Datadog

...I think the added cardinality is OK given the low frequency of metric reporting.
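(A back-of-envelope illustration of that claim; the numbers below are assumptions, not measurements.)

    # Each distinct (domain, app_id) tag combination becomes its own custom
    # timeseries in Datadog; cost scales with that count, not with how often
    # the counter fires.
    domains_with_callouts = 20   # assumption, not measured
    apps_per_domain = 3          # assumption, not measured
    distinct_timeseries = domains_with_callouts * apps_per_domain  # ~60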

Safety Assurance

Safety story

Tested locally. Staging testing to commence.

Automated test coverage

No automated testing

QA Plan

Going through QA

Rollback instructions

  • This PR can be reverted after deploy with no further considerations

Labels & Review

  • Risk label is set correctly
  • The set of people pinged as reviewers is appropriate for the level of risk of the change

@Charl1996 added the product/invisible (Change has no end-user visible impact) label Jan 16, 2025
@orangejenny (Contributor)
> I have considered handing the analyse_app_build method off to celery for async processing

Yeah, I think this should go in a celery task. My recollection is that wrapped_xform parses the form XML, and that can be expensive for apps with a lot of large forms. If you don't move the new logic to celery, it's worth gathering some evidence that it won't be a performance hit for large apps (possibly using the app build timings page); one rough way to do that is sketched below.
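(An illustrative timing harness with hypothetical names; this is not code from the PR.)

    import logging
    import time

    logger = logging.getLogger(__name__)

    def timed_callout_check(app, new_build):
        # Time the new check in isolation to see whether it is cheap enough
        # to stay on the request path for large apps.
        start = time.monotonic()
        result = app.check_for_custom_callouts(new_build)
        logger.info("check_for_custom_callouts took %.3fs",
                    time.monotonic() - start)
        return result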

> assuming hitting the "Make new version" button is a relatively low-frequency action

I agree the new metric isn't likely to incur significant cost, but just for the exercise, you could use analytics to confirm the actual frequency of this action. We should have a metric in GA for how often this button is clicked.


        return copy

    def analyse_app_build(self, new_build):
Contributor

+1 on doing this async. Gathering metrics for our own use should not delay user interaction; that seems unreasonable.

        if app_has_custom_intents():
            metrics_counter(
                'commcare.app_build.custom_app_callout',
                tags={'domain': new_build.domain, 'app_id': new_build.copy_of},
            )
Contributor

Just checking on this, same as the other PR: does adding tags affect cost on Datadog, and how? I assume each new custom metric is a new cost of its own.

Contributor

I see there is context about this in the description.

So we are saying that since making a new version happens infrequently, and we only report when it's applicable, this should be okay. Seems fair.

I see Jenny already noticed this PR, but just tagging @gherceg as well for visibility that a new custom tag is being added. (Let me know if it's not necessary to notify you or someone every time a new tag is added.)

    def check_for_custom_callouts(self, new_build):
        from corehq.apps.app_manager.util import app_callout_templates

        templates = next(app_callout_templates)
Contributor

Should this be list instead of next?

Contributor Author

The app_callout_templates variable holds a reference to a function which yields, so maybe next is a better fit here? (A generic illustration of the difference is below.)
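(Generic Python illustration of next vs. list on a generator, independent of how app_callout_templates is actually defined in corehq; the function below is a hypothetical stand-in.)

    def yields_templates():
        # Hypothetical stand-in for a generator-based template loader.
        yield ['template_a', 'template_b']

    first = next(yields_templates())       # pulls only the first yielded value
    everything = list(yields_templates())  # exhausts the generator into a list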

@mkangia (Contributor) left a comment

Looks safe enough to me, though I would vote to get QA on this since we are touching a critical process used by all users. QA could then check it against different kinds of forms/apps, so we can be more certain that this won't break anything.

@Charl1996 added the awaiting QA (QA in progress. Do not merge) label Jan 17, 2025
@Charl1996 (Contributor Author) commented Jan 17, 2025

I moved the analyse_new_app_build task call to after the new build has been saved, to avoid any race conditions. As a result I also had to update the function that checks for app dependencies added/removed on the previous build, since get_latest_build now refers to the actual newest app build rather than the one before it. A sketch of the reordering is below.
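(A minimal sketch of the reordering described above; save_copy, make_build, and the task signature are hypothetical stand-ins for the real flow.)

    def save_copy(app):
        # Hypothetical wrapper around the real build-creation flow.
        copy = app.make_build()
        copy.save()  # persist first so the worker sees the saved build
        # Fire the analysis only after the save; kicking it off earlier could
        # race the save and let get_latest_build load a stale build.
        analyse_new_app_build.delay(copy.domain, copy._id)
        return copy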
