This repository was archived by the owner on May 15, 2024. It is now read-only.
It is crucial to have a reliable, objective way to measure node performance, because the scalability of the network depends heavily on node hardware, yet we currently have to rely on reps' self-reports and stress tests to get even a basic idea of that metric. This method is neither reliable nor quantitative. As a result, it is easy to game and hard to factor into the Ninja Score, which is what quickly informs average users of their rep's quality.
Given that vote latency analysis is a promising proxy for node performance, we as a community should investigate how much we can improve on the current implementation so it can be production-ready. As things stand, many reps that struggled during the last 40 CPS stress tests still have high Ninja Scores and are continuously gaining weight as a result.
People have suggested setting up nodes in different regions of the world dedicated to measuring the vote latency of peers, to improve reliability and reduce bias from Internet latency. There should also be ways to aggregate data from existing peers and model out the confounding factors. It sounds like a fairly standard data science problem to me, but I could be wrong.
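To make the idea concrete, here is a minimal sketch of how multi-vantage-point measurements could be combined. All names and numbers are hypothetical: it assumes each vantage point records a ping-style RTT to the rep plus the observed block-to-vote latency, subtracts an estimated one-way network delay (RTT / 2), and takes the median across vantage points so no single distant or slow observer dominates. A real implementation would need far more careful modeling of confounders.

```python
from statistics import median

# Hypothetical observations: (vantage_point, rep, network_rtt_ms, observed_vote_latency_ms).
# observed_vote_latency_ms = time from block broadcast until the rep's vote
# is seen at that vantage point.
observations = [
    ("vp-eu",   "rep_a",  40.0,  95.0),
    ("vp-us",   "rep_a", 120.0, 140.0),
    ("vp-asia", "rep_a", 200.0, 185.0),
    ("vp-eu",   "rep_b",  35.0, 320.0),
    ("vp-us",   "rep_b", 110.0, 365.0),
    ("vp-asia", "rep_b", 190.0, 410.0),
]

def processing_latency_estimates(obs):
    """Estimate each rep's node-side processing latency.

    Subtracts RTT / 2 (a crude one-way network delay estimate) from each
    observed vote latency, then takes the median over vantage points so a
    single outlier observer cannot skew the result."""
    per_rep = {}
    for _vp, rep, rtt, latency in obs:
        per_rep.setdefault(rep, []).append(latency - rtt / 2.0)
    return {rep: median(vals) for rep, vals in per_rep.items()}

print(processing_latency_estimates(observations))
# → {'rep_a': 80.0, 'rep_b': 310.0}
```

In this toy data, rep_b's high latency persists after correcting for network distance, which is exactly the signal a network-bias-corrected score should surface.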
Unfortunately I'm neither a dev nor a data scientist, so I cannot offer much in the way of implementation details. But I believe we need to get this ready before the next waves of newcomers arrive, to avoid a large group of people choosing the wrong reps. I therefore invite the devs and data analysts to join this discussion and help solve this glaring issue in the ecosystem right now.