
Conversation


@paulgear commented Aug 3, 2025

Second cut at the metrics server; this one now includes the documentation update.

This was done with a great deal of AI assistance, although I've also hand-edited a lot of the results. I've included the base-level prompts I used to kickstart the AI in the plans/ directory. Hopefully they describe fairly clearly what I was aiming for (although after seeing the results I've changed those requirements somewhat). I've squashed the commits, but if you want to see the ugly details I can push an unsquashed branch instead.

I'm not entirely happy with the tests, but it's ready enough to try. I don't really have any great way of load testing it to see the performance effects of enabling metrics, but it is surviving on my pool host without blowing up: https://www.ntppool.org/scores/2001:44b8:2100:3f00::7b:502

I'd be interested in your opinion, @mlichvar, on what the best histogram buckets for packet size might be. My feeling is that the <48, <56, <128, 128+ divide is a little oversimplified, but I'm not sure what the other options should be, since I'm really only seeing generic requests from IPv6-capable hosts on my pool server.
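For what it's worth, the current divide maps directly onto cumulative bucket bounds, so trying alternative splits should only mean changing one list. Here is a minimal sketch, assuming the Rust `prometheus` crate; the metric name and the surrounding setup are illustrative, not what this branch actually registers:

```rust
use prometheus::{Histogram, HistogramOpts, Registry};

fn main() {
    // Bucket upper bounds in bytes. Prometheus buckets are cumulative `le`
    // bounds and a +Inf bucket is added automatically, so 48/56/128 gives
    // roughly the <48, <56, <128, 128+ divide; a finer split is just more
    // values in this list.
    let opts = HistogramOpts::new(
        "ntp_packet_size_bytes", // illustrative name only
        "Size of received NTP packets in bytes",
    )
    .buckets(vec![48.0, 56.0, 128.0]);

    let packet_sizes = Histogram::with_opts(opts).expect("valid histogram options");

    let registry = Registry::new();
    registry
        .register(Box::new(packet_sizes.clone()))
        .expect("metric registered once");

    // Per-request path: record the received UDP payload length.
    let received_len = 48usize; // placeholder for the actual packet length
    packet_sizes.observe(received_len as f64);
}
```

Since the buckets are cumulative, the main cost of adding extra boundaries is a few more time series per metric, not extra work on the packet path.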

I've put release binaries on my fork of the repo if anyone wants to test them. The binary is 1.9M on my system, as opposed to about 670K for the master branch build. The memory requirements for enabling the client cache are significant, but on my system it appears to have no perceptible effect on CPU usage.

