
Conversation

@PastaPastaPasta
Member

Issue being fixed or feature implemented

Currently in CBLSBatchVerifier (used by InstantSend manager/validation, the LLMQ Signing Manager, and the LLMQ Signing Shares Manager), public keys are aggregated iteratively. This is sub-optimal, since an exposed method already exists for aggregating a whole vector of public keys. I figured the batch approach may be more performant. Benchmark below:

| Size | Iterative (ops/s) | Batch (ops/s) | Improvement |
|----------|---------|---------|--------|
| 5 keys   | 340,140 | 377,262 | +11.0% |
| 25 keys  | 54,089  | 58,459  | +8.1%  |
| 50 keys  | 25,863  | 28,034  | +8.4%  |
| 100 keys | 12,788  | 13,918  | +8.8%  |
| 200 keys | 6,367   | 6,931   | +8.9%  |

How Has This Been Tested?

Breaking Changes

Checklist:

Go over all the following points, and put an x in all the boxes that apply.

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation
  • I have assigned this pull request to a milestone (for repository code-owners and collaborators only)

@github-actions

github-actions bot commented Oct 21, 2025

✅ No Merge Conflicts Detected

This PR currently has no conflicts with other open PRs.

@coderabbitai

coderabbitai bot commented Oct 21, 2025

Walkthrough

Adds a new benchmark source bench/bls_pubkey_agg.cpp and includes it in the bench build list. The benchmark implements two BLS public-key aggregation strategies (iterative vs. batch) across keyset sizes 5, 25, 50, 100, and 200 and registers them. Separately, refactors batch verification in src/bls/bls_batchverifier.h: removes incremental in-loop aggregation and instead collects per-message-hash vectors of signatures and public keys, then calls AggregateInsecure once per hash and verifies using the aggregated signature(s).

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor BenchRunner
participant Benchmark as bench/bls_pubkey_agg.cpp
participant RNG
participant BLSLib
Note over Benchmark: For each input size and strategy
BenchRunner->>Benchmark: run(iterative|batch, N)
Benchmark->>RNG: generate N secret keys
RNG-->>Benchmark: N secret keys
Benchmark->>BLSLib: derive N public keys
BLSLib-->>Benchmark: N public keys
alt iterative (old)
Benchmark->>BLSLib: for i: agg = AggregateInsecure(agg, pubKey_i)
BLSLib-->>Benchmark: aggPubKey_iterative
else batch (new)
Benchmark->>BLSLib: aggPubKey = AggregateInsecure(all pubKeys)
BLSLib-->>Benchmark: aggPubKey_batch
end
Benchmark->>BLSLib: Validate(aggPubKey)
BLSLib-->>Benchmark: validation result
BenchRunner-->>Benchmark: record timing
```

```mermaid
sequenceDiagram
autonumber
participant Caller
participant BatchVerifier_old as "VerifyBatch (old)"
participant BatchVerifier_new as "VerifyBatch (new)"
participant BLSOps
Note over BatchVerifier_old,BatchVerifier_new: High-level control-flow comparison
Caller->>BatchVerifier_old: verify(batch_items)
BatchVerifier_old->>BatchVerifier_old: for each hash bucket\n for each sig: aggSig = Aggregate(aggSig, sig) (incremental)
BatchVerifier_old->>BLSOps: Verify(aggPubKey, aggSig, msgHashes)
BLSOps-->>Caller: result

Caller->>BatchVerifier_new: verify(batch_items)
BatchVerifier_new->>BatchVerifier_new: for each hash bucket\n  collect sigsToAggregate[], pubKeysToAggregate[]  (accumulate lists)
BatchVerifier_new->>BLSOps: aggSig = Aggregate(sigsToAggregate)\naggPubKey = Aggregate(pubKeysToAggregate)
BatchVerifier_new->>BLSOps: Verify(aggPubKey, aggSig, msgHashes)
BLSOps-->>Caller: result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Files/areas to pay extra attention to:

  • src/bls/bls_batchverifier.h — correctness of aggregation ordering, handling of empty buckets, and preservation of verification semantics between insecure and secure paths.
  • src/bench/bls_pubkey_agg.cpp — RNG/key generation determinism in benchmarks and that ASSERT/validation prevents compiler elimination.
  • src/Makefile.bench.include — ensure new source is correctly added to bench build list and build integration.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 58.33%, below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title Check | ✅ Passed | The pull request title "perf: optimize public key aggregation in bls_batchverifier.h" directly and accurately describes the main change: it names the file being modified and the optimization being implemented, as confirmed by the code summary showing iterative aggregation replaced with batch aggregation in that file. It is concise and specific enough for someone scanning the commit history. |
| Description Check | ✅ Passed | The description provides meaningful context: it explains the motivation (iterative aggregation is sub-optimal), identifies the solution (batch aggregation), and includes concrete benchmark data demonstrating 8-11% improvements across key sizes. While sections like "How Has This Been Tested?" are empty, the core description adequately conveys the purpose and benefits of the change. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
src/bls/bls_batchverifier.h (2)

142-144: Right-size reserves for fewer allocations.

Reserve by the number of unique message hashes, not total messages.

```diff
-        msgHashes.reserve(messages.size());
-        pubKeys.reserve(messages.size());
+        msgHashes.reserve(byMessageHash.size());
+        pubKeys.reserve(byMessageHash.size());
```

145-166: Optional micro-opts: reuse buffer and fast-path single key.

Reduce churn by reusing the aggregation buffer and avoid an extra call when only one key is present.

```diff
-        for (const auto& [msgHash, vec_message_it] : byMessageHash) {
-            std::vector<CBLSPublicKey> pubKeysToAggregate;
-            pubKeysToAggregate.reserve(vec_message_it.size());
+        std::vector<CBLSPublicKey> pubKeysToAggregate;
+        for (const auto& [msgHash, vec_message_it] : byMessageHash) {
+            pubKeysToAggregate.clear();
+            pubKeysToAggregate.reserve(vec_message_it.size());

             for (const auto& msgIt : vec_message_it) {
                 const auto& msg = msgIt->second;
                 ...
-                pubKeysToAggregate.push_back(msg.pubKey);
+                pubKeysToAggregate.emplace_back(msg.pubKey);
             }

-            CBLSPublicKey aggPubKey = CBLSPublicKey::AggregateInsecure(pubKeysToAggregate);
+            CBLSPublicKey aggPubKey;
+            if (pubKeysToAggregate.size() == 1) {
+                aggPubKey = pubKeysToAggregate[0];
+            } else {
+                aggPubKey = CBLSPublicKey::AggregateInsecure(pubKeysToAggregate);
+            }
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e6e8f5 and f7544dc.

📒 Files selected for processing (3)
  • src/Makefile.bench.include (1 hunks)
  • src/bench/bls_pubkey_agg.cpp (1 hunks)
  • src/bls/bls_batchverifier.h (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/bls/bls_batchverifier.h
  • src/bench/bls_pubkey_agg.cpp
src/bench/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

Performance benchmarks should be placed in src/bench/ and use nanobench

Files:

  • src/bench/bls_pubkey_agg.cpp
🪛 GitHub Actions: Clang Diff Format Check
src/bench/bls_pubkey_agg.cpp

[error] 1-1: Clang format differences found by clang-format-diff.py. Run the diff script to format: git diff -U0 origin/develop -- $(git ls-files -- $(cat test/util/data/non-backported.txt)) | ./contrib/devtools/clang-format-diff.py -p1 > diff_output.txt

🔇 Additional comments (3)
src/Makefile.bench.include (1)

26-27: Bench source registration looks good.

File is correctly added to bench_bench_dash_SOURCES alongside other BLS benches.

src/bls/bls_batchverifier.h (1)

145-166: Semantics preserved; nicer separation of concerns.

Batch-aggregating pubkeys per msgHash via AggregateInsecure(pubKeysToAggregate) is clear and reduces in-loop state. LGTM.

Please confirm AggregateInsecure([]) returns invalid, matching the “only duplicates” early-continue behavior.

src/bench/bls_pubkey_agg.cpp (1)

1-131: Verify formatting compliance with CI.

Unable to verify formatting in sandbox environment (clang-format and Python unavailable). The provided verification script requires Python, which is not available. Manually run clang-format locally on src/bench/bls_pubkey_agg.cpp and src/bls/bls_batchverifier.h to confirm CI formatting requirements are satisfied, or check the CI output directly for specific formatting violations.

Comment on lines +5 to +8
```cpp
#include <bench/bench.h>
#include <bls/bls.h>
#include <random.h>
```

⚠️ Potential issue | 🟡 Minor

Add `<cassert>`; drop unused include.

Explicitly include the assert header and remove unused random.h.

```diff
 #include <bench/bench.h>
 #include <bls/bls.h>
-#include <random.h>
+#include <cassert>
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 #include <bench/bench.h>
 #include <bls/bls.h>
-#include <random.h>
+#include <cassert>
```
🤖 Prompt for AI Agents
In src/bench/bls_pubkey_agg.cpp around lines 5 to 8, the file should explicitly
include the C assert header and remove the unused random.h include; replace the
unused #include <random.h> with #include <cassert> (or add #include <cassert>
and delete the random.h line) so assertions compile correctly and the redundant
header is dropped.

```cpp
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include <bench/bench.h>
```
Collaborator

Use bls.cpp instead? There's no need to create a new file for each new benchmark test.

@PastaPastaPasta force-pushed the perf/optimized-pubkey-agg branch from f7544dc to cefca52 on October 21, 2025 at 14:15
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
src/bench/bls_pubkey_agg.cpp (2)

1-129: Consider consolidating with existing bench/bls.cpp.

A previous reviewer suggested adding these benchmarks to the existing bench/bls.cpp file rather than creating a new file. This would reduce file proliferation in the benchmark directory.

Based on past review comments from knst. Would you prefer to consolidate this into bench/bls.cpp? If that file is already large or covers different BLS operations, keeping them separate may be justified.


5-8: Fix includes: remove unused random.h and add <cassert>.

The file uses assert() on lines 36 and 58 but doesn't include <cassert>, which can cause compilation issues. Additionally, random.h is included but never used.

Apply this diff:

```diff
 #include <bench/bench.h>
 #include <bls/bls.h>
-#include <random.h>
+#include <cassert>
```
🧹 Nitpick comments (2)
src/bench/bls_pubkey_agg.cpp (2)

11-60: Consider extracting key generation into a helper function.

The key generation logic (lines 17-21 and 48-52) is duplicated between the two benchmark functions. Extracting this into a helper would improve maintainability.

Example refactor:

```cpp
static std::vector<CBLSPublicKey> GenerateTestPublicKeys(size_t count)
{
    std::vector<CBLSPublicKey> pubKeys;
    pubKeys.reserve(count);

    for (size_t i = 0; i < count; i++) {
        CBLSSecretKey secKey;
        secKey.MakeNewKey();
        pubKeys.push_back(secKey.GetPublicKey());
    }

    return pubKeys;
}
```

Then use it in both functions:

```cpp
static void BLS_PubKeyAggregate_Iterative(size_t count, benchmark::Bench& bench)
{
    auto pubKeys = GenerateTestPublicKeys(count);
    // ... rest of benchmark
}
```

65-129: Consider using a macro to reduce wrapper boilerplate.

The wrapper functions for different key counts follow an identical pattern. A macro could reduce this repetition, though verify that the BENCHMARK registration system supports this approach.

Example using a macro:

```cpp
#define BENCHMARK_BLS_PUBKEY_AGG(COUNT, DESCRIPTION) \
    static void BLS_PubKeyAggregate_Iterative_##COUNT(benchmark::Bench& bench) \
    { \
        BLS_PubKeyAggregate_Iterative(COUNT, bench); \
    } \
    static void BLS_PubKeyAggregate_Batch_##COUNT(benchmark::Bench& bench) \
    { \
        BLS_PubKeyAggregate_Batch(COUNT, bench); \
    } \
    BENCHMARK(BLS_PubKeyAggregate_Iterative_##COUNT) \
    BENCHMARK(BLS_PubKeyAggregate_Batch_##COUNT)

// Usage
BENCHMARK_BLS_PUBKEY_AGG(5, "Small aggregation - typical for small quorum subsets")
BENCHMARK_BLS_PUBKEY_AGG(25, "Medium aggregation - typical for medium quorum subsets")
// ... etc
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f7544dc and cefca52.

📒 Files selected for processing (2)
  • src/Makefile.bench.include (1 hunks)
  • src/bench/bls_pubkey_agg.cpp (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/Makefile.bench.include
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/bench/bls_pubkey_agg.cpp
src/bench/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

Performance benchmarks should be placed in src/bench/ and use nanobench

Files:

  • src/bench/bls_pubkey_agg.cpp
🪛 GitHub Actions: Clang Diff Format Check
src/bench/bls_pubkey_agg.cpp

[error] 1-1: Clang format differences found. The diff indicates formatting changes. The step 'git diff -U0 origin/develop -- $(git ls-files -- $(cat test/util/data/non-backported.txt)) | ./contrib/devtools/clang-format-diff.py -p1 > diff_output.txt' reported differences and exited with code 1.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: arm-linux-build / Build source
  • GitHub Check: mac-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: x86_64-pc-linux-gnu / Build depends

@UdjinM6

UdjinM6 commented Oct 22, 2025

Interesting. Can we apply the same optimization to signature aggregation? I see a similar pattern here in VerifyBatchInsecure() and in VerifyBatchSecureStep() below.

@PastaPastaPasta
Member Author

Yes; I intended to get back to this and implement that as well, but forgot to. Will look at it again.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/bls/bls_batchverifier.h (1)

146-170: Fix: prevent aggSig/pubKey mismatch when aggPubKey is invalid or per-hash set is empty

Signatures are pushed into sigsToAggregate (Line 157) before we know if we’ll actually keep the corresponding (msgHash, aggPubKey) pair (Lines 161–166). If AggregateInsecure(pubKeysToAggregate) yields an invalid key (e.g., sums to infinity) or the bucket is effectively empty, those signatures remain in aggSig but there is no matching entry in pubKeys/msgHashes, causing VerifyInsecureAggregated to fail incorrectly.

Collect per-hash signatures locally and append only after validating the aggregated pubkey. Also short-circuit the empty-bucket case and treat an invalid aggregated pubkey as a failure so the caller can fall back.

Apply this diff:

```diff
@@
-        msgHashes.reserve(messages.size());
-        pubKeys.reserve(messages.size());
-        sigsToAggregate.reserve(messages.size());
+        // Tighter reserves to avoid over-allocation (especially in per-source fallback)
+        const size_t bucketCount = byMessageHash.size();
+        size_t totalMsgs = 0;
+        for (const auto& kv : byMessageHash) totalMsgs += kv.second.size();
+        msgHashes.reserve(bucketCount);
+        pubKeys.reserve(bucketCount);
+        sigsToAggregate.reserve(totalMsgs);
@@
-        for (const auto& [msgHash, vec_message_it] : byMessageHash) {
-            std::vector<CBLSPublicKey> pubKeysToAggregate;
-            pubKeysToAggregate.reserve(vec_message_it.size());
+        for (const auto& [msgHash, vec_message_it] : byMessageHash) {
+            std::vector<CBLSPublicKey> pubKeysToAggregate;
+            std::vector<CBLSSignature>  sigsForMsgHash;
+            pubKeysToAggregate.reserve(vec_message_it.size());
+            sigsForMsgHash.reserve(vec_message_it.size());
@@
-            for (const auto& msgIt : vec_message_it) {
+            for (const auto& msgIt : vec_message_it) {
                 const auto& msg = msgIt->second;
                 if (!dups.emplace(msg.msgId).second) {
                     continue;
                 }
-                sigsToAggregate.push_back(msg.sig);
+                sigsForMsgHash.push_back(msg.sig);
                 pubKeysToAggregate.push_back(msg.pubKey);
             }

-            CBLSPublicKey aggPubKey = CBLSPublicKey::AggregateInsecure(pubKeysToAggregate);
-            if (!aggPubKey.IsValid()) {
-                // only duplicates for this msgHash
-                continue;
-            }
+            // Nothing new collected for this msgHash
+            if (pubKeysToAggregate.empty()) {
+                continue;
+            }
+            CBLSPublicKey aggPubKey = CBLSPublicKey::AggregateInsecure(pubKeysToAggregate);
+            if (!aggPubKey.IsValid()) {
+                // Aggregated key at infinity or invalid: trigger fallback to narrower verification
+                return false;
+            }

             msgHashes.emplace_back(msgHash);
             pubKeys.emplace_back(aggPubKey);
+            // Only now that we know the pair will be used, append the corresponding signatures
+            sigsToAggregate.insert(sigsToAggregate.end(),
+                                   sigsForMsgHash.begin(), sigsForMsgHash.end());
         }
@@
-        CBLSSignature aggSig = CBLSSignature::AggregateInsecure(sigsToAggregate);
+        CBLSSignature aggSig = CBLSSignature::AggregateInsecure(sigsToAggregate);
         return aggSig.VerifyInsecureAggregated(pubKeys, msgHashes);
```

Also applies to: 176-178

🧹 Nitpick comments (2)
src/bls/bls_batchverifier.h (2)

201-204: Tighten reserve sizes in secure step to actual work per iteration

Using messages.size() can grossly over-allocate in per-source or late iterations. Reserve to byMessageHash.size() for all three vectors.

Apply this diff:

```diff
-        msgHashes.reserve(messages.size());
-        pubKeys.reserve(messages.size());
-        sigsToAggregate.reserve(messages.size());
+        const size_t bucketCount = byMessageHash.size();
+        msgHashes.reserve(bucketCount);
+        pubKeys.reserve(bucketCount);
+        sigsToAggregate.reserve(bucketCount);
```

8-12: Include `<set>` explicitly (don't rely on transitive headers)

This header uses std::set (badSources, badMessages, dups) but does not include `<set>`. Add it here for include hygiene and independent compilability.

Apply this diff:

```diff
 #include <bls/bls.h>

 #include <map>
 #include <vector>
+#include <set>
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cefca52 and fb00f52.

📒 Files selected for processing (1)
  • src/bls/bls_batchverifier.h (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/bls/bls_batchverifier.h
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: mac-build / Build source
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: arm-linux-build / Build source
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64-build / Build source
  • GitHub Check: linux64_sqlite-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: Lint / Run linters

@PastaPastaPasta
Member Author

@knst @UdjinM6 please review latest

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
src/bench/bls_pubkey_agg.cpp (1)

7-7: Still missing: Replace unused random.h with <cassert>.

The assert() calls on lines 36 and 58 require <cassert>, while random.h is unused (key generation uses CBLSSecretKey::MakeNewKey()). This issue was previously flagged but remains unresolved.

Apply this diff:

```diff
-#include <random.h>
+#include <cassert>
```
🧹 Nitpick comments (1)
src/bench/bls_pubkey_agg.cpp (1)

13-21: Consider extracting duplicate key generation logic.

Both benchmark functions contain identical key generation code. Extracting this into a helper function would improve maintainability.

Example refactor:

```cpp
static std::vector<CBLSPublicKey> GenerateTestKeys(size_t count)
{
    std::vector<CBLSPublicKey> pubKeys;
    pubKeys.reserve(count);

    for (size_t i = 0; i < count; i++) {
        CBLSSecretKey secKey;
        secKey.MakeNewKey();
        pubKeys.push_back(secKey.GetPublicKey());
    }

    return pubKeys;
}
```

Then use `auto pubKeys = GenerateTestKeys(count);` in both benchmark functions.

Also applies to: 44-52

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb00f52 and 96dc666.

📒 Files selected for processing (1)
  • src/bench/bls_pubkey_agg.cpp (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/bench/bls_pubkey_agg.cpp
src/bench/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit inference engine (CLAUDE.md)

Performance benchmarks should be placed in src/bench/ and use nanobench

Files:

  • src/bench/bls_pubkey_agg.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build container / Build container
🔇 Additional comments (3)
src/bench/bls_pubkey_agg.cpp (3)

62-87: LGTM: Wrapper functions are well-organized.

The wrapper functions clearly separate benchmarks by key count with descriptive comments explaining the use cases.


89-99: LGTM: Benchmark registrations are complete.

All benchmark variants are properly registered, with iterative and batch approaches paired by size for easy performance comparison.


1-99: Consider consolidating benchmarks per reviewer feedback.

Reviewer knst suggested adding these benchmarks to an existing bls.cpp file instead of creating a new file. While the current implementation is functionally correct, consolidating related benchmarks can reduce file proliferation.


@UdjinM6 UdjinM6 left a comment

Please check the AI comments about tweaking the includes and adding a helper function, and see below.

@UdjinM6 UdjinM6 left a comment

utACK b7f886a

