Inject a script into every page that records userland performance metrics #2
base: master
Conversation
It's an interesting approach, and one that’s worth experimenting with… A few initial thoughts:
  });
});

observer.observe({ type: "longtask", buffered: true });
The list of types looks OK to me. From a synthetic PoV I can't see any more from this list that I'd want to add: https://www.w3.org/TR/timing-entrytypes-registry/
One challenge with this approach is that (I think) it'll only give limited visibility into what's occurring in other frames / workers.
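To make that concrete, here is a rough sketch (not part of this PR; `worker.js` and `__pm_workerEntries` are made-up names) of why the visibility is limited: a PerformanceObserver only sees entries from its own realm, so a worker would need its own observer and would have to post its entries back to the page.

```js
// worker.js -- hypothetical worker-side collector. Workers have their own
// performance timeline; the page's observer never sees these entries.
const workerObserver = new PerformanceObserver((list) => {
  // Resource Timing is one of the few entry types available inside workers.
  self.postMessage(list.getEntries().map((entry) => entry.toJSON()));
});
workerObserver.observe({ type: "resource", buffered: true });

// page.js -- merge the worker's entries into a page-global collection.
const worker = new Worker("worker.js");
worker.onmessage = (event) => {
  (window.__pm_workerEntries = window.__pm_workerEntries || []).push(...event.data);
};
```

Cross-origin iframes would have the same problem, except there's no postMessage channel unless the embedded page cooperates.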
observer.observe({ type: "element", buffered: true }); | ||
observer.observe({ type: "paint", buffered: true }); | ||
|
||
// Disabled layout shifts for now, since the resulting entries are potentially |
How large is the layout shift data? CLS is one of the things I think people struggle with most.
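One way to keep the layout-shift data small (a sketch under my own assumptions, not something this PR does) would be to fold the entries into a single running total instead of storing each one. Note this is the simple cumulative sum, not the newer session-window CLS definition.

```js
let cumulativeLayoutShift = 0;

const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts that happen shortly after user input are excluded from CLS.
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
  }
});

clsObserver.observe({ type: "layout-shift", buffered: true });
```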
@@ -241,6 +243,10 @@ def start_recording(self):
        self.send_command('Debugger.enable', {})
        self.send_command('ServiceWorker.enable', {})
        self.enable_target()
        performance_metrics_script = os.path.join(self.script_dir, 'performance_metrics.js')
Do we want to hard-code this script into the agent, or should we use the same approach as custom metrics, where it's passed from the server as a base64-encoded string?
I guess the question is how much work it is to deploy the agents.
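If it went the custom-metrics route, the agent-side change could stay small. A rough Python sketch (the `encoded_script` parameter and the plumbing around it are assumptions for illustration, not existing wptagent code):

```python
import base64


def inject_script_from_server(send_command, encoded_script):
    """Decode a base64-encoded script sent by the server and inject it.

    `send_command` is assumed to behave like the agent's DevTools command
    helper; `encoded_script` would arrive with the test job, the same way
    custom metrics are passed today.
    """
    if not encoded_script:
        return
    source = base64.b64decode(encoded_script).decode('utf-8')
    send_command('Page.addScriptToEvaluateOnNewDocument', {'source': source})
```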
        performance_metrics_script = os.path.join(self.script_dir, 'performance_metrics.js')
        if os.path.isfile(performance_metrics_script):
            with io.open(performance_metrics_script, 'r', encoding='utf-8') as script_file:
                self.send_command('Page.addScriptToEvaluateOnNewDocument', {'source': script_file.read()})
Reading the docs for Page.addScriptToEvaluateOnNewDocument, it appears the script will get evaluated in every frame that gets created?
https://chromedevtools.github.io/devtools-protocol/tot/Page/#method-addScriptToEvaluateOnNewDocument
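If that per-frame evaluation isn't wanted, the injected script could guard itself. A minimal sketch (an assumption about how this might be handled, not what the PR currently does):

```js
(function () {
  // Page.addScriptToEvaluateOnNewDocument runs this in every frame, so bail
  // out early unless we're in the top-level document.
  if (window.self !== window.top) {
    return;
  }
  // ...set up the PerformanceObserver and metric collection here...
})();
```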
I distrust wptagent's method of extracting performance metrics via DevTools events. They're a bit messy and we've had bugs in the past where the events are attributed to the wrong frame. This PR is my attempt at switching to a more "standard" way of gathering performance metrics, which is to use the userland JavaScript APIs.
What this PR does is inject a script into every page that creates a PerformanceObserver and collects a bunch of metrics into a global variable (a rough sketch of the idea is included at the end of this description).

Upsides

Downsides
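As a rough illustration of the injected script's shape (a sketch only; the global name `__performanceMetrics` and the exact entry types are assumptions, and the real performance_metrics.js in this PR may differ):

```js
(function () {
  // Stash entries on a global so the agent can read them back later,
  // e.g. via Runtime.evaluate at the end of the test.
  const entries = (window.__performanceMetrics = []);

  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      entries.push(entry.toJSON());
    }
  });

  for (const type of ["paint", "element", "longtask", "largest-contentful-paint"]) {
    try {
      observer.observe({ type, buffered: true });
    } catch (e) {
      // Older browsers may throw for unknown entry types; just skip them.
    }
  }
})();
```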